@Stevek I can chip in a little on this one, along with what @Hooverdan has cleared up thus far:
Re:
I think we are in need of the long-overdue replication docs section improvements:
The last time we tended to the replication system (where each replication job concerns itself with only a single share) was in the following GitHub Pull request, which also includes a proof of function after the changes presented there (now merged). Earlier in the current testing phase we had updated some dependencies, and that broke the function we had before; this PR restored the same function under our newer Python etc.:
Re:
That is the stage at which the weirdly named snapshot:
is turned into a proper share, and the strangely named ‘share’ disappears from the Web-UI.
The missing docs have again caused some confusion. Folks expect the first replication to have copied over a share. It does, but in the form of a snapshot-type ‘share’ on the receiving end. Only after 3 replication events does the target share appear; we do this to have some more redundancy. The btrfs send/receive system that our replication wraps sends the difference between a share and a snapshot. Hence we create snapshots of the source share and send the difference: this results in a strangely named first send, and only after 3 send (replication) events do things smooth out in the Web-UI.

That first visibility of the strangely named snapshot/share thing is actually a bug. A confusing one, but it does show some of the initial function. When previously working on replication I attempted to hide that weirdly named share, but the fact that snapshots/shares/subvolumes are all simply btrfs subvolumes caused some confusion in the code as well!
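To make the underlying mechanism a little more concrete, here is a rough sketch of the kind of btrfs-level sequence our replication wraps. This is not Rockstor’s actual code: the share/snapshot names, paths, and the ssh transport are all made up for illustration, and the real code manages all of this itself.

```python
import subprocess


def replicate(source_share, snap_dir, remote, dest_path, new_snap, parent_snap=None):
    """Illustrative only: one replication event as a btrfs send/receive cycle.

    All arguments (paths, host, snapshot names) are hypothetical.
    """
    snap_path = f"{snap_dir}/{new_snap}"
    # 1. Take a read-only snapshot of the source share
    #    (read-only is required for btrfs send).
    subprocess.run(
        ["btrfs", "subvolume", "snapshot", "-r", source_share, snap_path],
        check=True,
    )
    # 2. Send it. With no parent this is a full send (the oddly named first
    #    'share' you see on the target); with "-p <parent>" only the
    #    difference since the previous snapshot is sent.
    send_cmd = ["btrfs", "send"]
    if parent_snap:
        send_cmd += ["-p", f"{snap_dir}/{parent_snap}"]
    send_cmd.append(snap_path)
    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    # 3. Apply the stream on the receiving end with btrfs receive;
    #    piped over ssh here purely for illustration.
    subprocess.run(
        ["ssh", remote, "btrfs", "receive", dest_path],
        stdin=send.stdout,
        check=True,
    )
    send.wait()
```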
I would suggest that your ‘oldest Snapshot to Share.’ event could have been caused by making changes while the process was still settling.
Do as you did before and create a test share with identifiable small content. Delete all existing send/receive tasks and start out fresh by sending only this small share. Set the replication task to run every 5 or 10 minutes (if there are only very small amounts of data) and watch what happens. After 5 replication events it settles into a stable state, where each replication shuffles along the 3 snapshots held at each end (send/receive); see the rough sketch after this paragraph.

It’s confusing without docs, especially when unfamiliar with the other things we do here, i.e. the double mount etc. Btrfs has a lot of apparent ‘magic’ and sometimes it’s tricky to get to grips with what’s going on. The Rockstor Web-UI attempts to present what is complex in a simple manner: the basic answer here is that it fails to do this in some areas. Plus we have some bugs that push us in the wrong direction. However, overall we are progressing, and the replication code/procedure is in need of more attention, more folks familiar with it, and better docs and explanations.
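As a rough picture of that stable state (snapshot names again hypothetical, and purely illustrative rather than how the code is actually written): each end keeps a rolling window of 3 snapshots, and every new replication event appends the newest and drops the oldest once the window is full.

```python
from collections import deque

MAX_SNAPSHOTS = 3          # snapshots retained at each end (send/receive)
held = deque(maxlen=MAX_SNAPSHOTS)

for event in range(1, 6):
    new_snap = f"test_share_snap_{event}"   # hypothetical naming scheme
    held.append(new_snap)                   # oldest is dropped automatically
    print(f"after replication {event}: {list(held)}")

# after replication 1: ['test_share_snap_1']
# ...
# after replication 5: ['test_share_snap_3', 'test_share_snap_4', 'test_share_snap_5']
```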
Hope that helps, at least with some context.