Migrating old 3.9 shares to 3.9.2-58

Continuing the discussion from Built on openSUSE testing channel live (early-adopters/developers only):

I had created most of the shares at the time I initially set up my rockstor install, so they should all have vaguely the same config. From what I can tell it’s exactly the same as the home share on the root pool, too. The only difference is that the subvol path is wrong now. Instead of going to /@/ it goes to the (old?) snapshot dir, as already mentioned.

While I have no problem with deleting the shares, I’d like to keep the data that is already there (and also keep to a minimum the hassle of having to rejiggle the shares everywhere, but I fear it’s too late for that already). Your answer only provides guidance on how to remove the faulty shares, but does not mention how to make new shares that will keep the data around. Or is it just as simple as choosing the same share name and it’ll just use what’s already on disk?

What I am most worried about is that my rock-ons services share is affected by this. Will the rock-on service realize that its share is gone and ask me to set it again once I start it?

I have tested simply deleting a share and then creating a new one with the same name. Sadly it seems this deletes the original share data rather than keeping it. I guess the way to go then is to back up the share data and then move it back into the share once the new share has been created?

@freaktechnik Hello again.
Re:

This brings up the important question of whether our current delete-share dialog is doing its job properly:

share-delete-dialog

If you interpreted this as not deleting all of the share’s data then it’s highly likely others will interpret it similarly! In which case, what would your suggestion be to remove the ambiguity, i.e. to make clear that removing a share (btrfs subvol) also removes its associated data? The template for this dialog is here:

And it’s trivial to change as it’s just html with some handlebars helpers included.

If you fancy implementing your suggestion in a pull request that would be great. I had thought this reasonably clear, but if one disassociates a share from a subvol (which I now see is not obviously the case) then maybe we need some capitals and an extra warning here, in red or the like, or more explanation. All these little touches can really help with usability, so do have a think here, as your report has brought to light that we are maybe not as far along usability-wise as I had hoped we were.

Hope that helps. I’ll address one of your other issues in a separate post, to make back-referencing easier, once I have some example commands to address it of course.

It is doing its job properly. Deleting the shares is not what destroys the existing data, since the shares that weren’t migrated properly did not point to any contents and were completely empty. Creating a new share with the same name in the pool led to the destruction of data, however.

Since posting I have manually migrated all my shares and begrudgingly re-created all my rock-on configuration, since it seemed risky to me to migrate the rock-ons service share’s contents.

Of note is that you cannot delete the rock-on service share from the Web-UI due to the btrfs sub-shares that have to be deleted first. Both normal and force delete modes lead to errors, though not identical ones. After manually deleting the btrfs sub-shares I was able to delete the share.

But to end, let me re-iterate: the issue here was that after the update the shares had a seemingly outdated mount point, which led to them losing all their contents from the perspective of any share consumer. If it were possible to somehow fix the mount point of the shares this would have been very easy to fix, though I’m not sure if btrfs even supports that. So it may have been better to note in the update notes that any additional shares on the ROOT pool may have to be re-created due to invalid share parameters?
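For anyone else hitting this, one way to check whether a share is affected is to look at which subvol path its mount points at. A minimal sketch, using a sample /proc/mounts line as a stand-in (the share name sys-share is purely illustrative; on a live system you would grep /proc/mounts itself):

```shell
# Illustrative stand-in for a line from /proc/mounts on an affected
# system; on a live box: grep "sys-share" /proc/mounts
LINE='/dev/vda3 /mnt2/.snapshots/156/snapshot/sys-share btrfs rw,relatime,space_cache,subvolid=547,subvol=/@/.snapshots/156/snapshot/sys-share 0 0'

# Pull out the subvol= path from the mount options.
SUBVOL=$(printf '%s\n' "$LINE" | grep -o 'subvol=[^ ,]*' | cut -d= -f2)
echo "$SUBVOL"

# A path under /@/.snapshots/ means the share still lives inside the
# boot-to-snap snapshot; directly under /@/ means the new location.
case "$SUBVOL" in
  /@/.snapshots/*) echo "old location: subvol needs moving" ;;
  /@/*)            echo "new location: nothing to do" ;;
esac
```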

@freaktechnik Re:

A backup is, as always, advised. Raid is not a backup. OK, having gotten that out of the way: I’m not entirely sure of your situation, but I can address what a pre-3.9.2-58 system share looks like after updating to 3.9.2-58 and take it from there.

In the following, sys-share & another-sys-share were created on the ROOT pool under our prior ‘wrong’ treatment of the system pool when on a ‘Built on openSUSE’ boot-to-snap config (the default when the sys drive is >17.5 GB, or when installing using the yet-to-be-publicly-released new Rockstor installer).

2-58-transient-sys-pool-shares-post-update

As the Web-UI indicates, these were created within the snapshot that is the current boot-to-snap root (it starts out as 1, but this system had been rolled back prior to making these shares):

tumbleweed:~ # cat /proc/mounts | grep " / "
/dev/vda3 / btrfs rw,relatime,space_cache,subvolid=440,subvol=/@/.snapshots/156/snapshot 0 0

And the new (3.9.2-58+) Rockstor system pool (ROOT) mount now specifies, in a boot-to-snap config, that we have a level up from this mount at “subvol=/@” thus:

new-ROOT-mount-in-mnt2

This was important, as without it we can’t do a share rollback. This was an oversight on my part, and closely linked to us having had, for the last 2 years now, to work with both CentOS’s and openSUSE’s default system pools, which are as different as can be. The main casualty was our inability to maintain a system pool share within the Web-UI: once one was created on the system pool it would, upon a page refresh, disappear (which is another confusion on my part as to how you were using these shares, but I’ll try not to digress :slight_smile: ). The GitHub issue we had for this lack of feature parity with our CentOS offering was:

Anyway, the linked pull request in that issue formed part of 3.9.2-58. During the experiments of our options re system pool shares in a boot-to-snap config it became apparent that the best user experience, and the only way I could get all our usual stuff to work, was to create our system pool shares at the same level as the default home lives. Otherwise when folks did a boot-to-snap rollback their shares would again be lost to the Rockstor Web-UI with no easy way to re-gain them. This was obviously a bad user experience. So creating them outside (read above) all of that business, ended up being the way to go.

So back to my example:

We have transient (to the Web-UI) Rockstor shares, created in the pre-3.9.2-57 and earlier ‘Built on openSUSE’ testing channel variant, now showing as ‘all wrong’ due to our update and reboot with its consequent ‘more correct’ system pool mount:

cat /proc/mounts | grep "sys-share"
/dev/vda3 /mnt2/.snapshots/156/snapshot/another-sys-share btrfs rw,relatime,space_cache,subvolid=548,subvol=/@/.snapshots/156/snapshot/another-sys-share 0 0
/dev/vda3 /mnt2/.snapshots/156/snapshot/sys-share btrfs rw,relatime,space_cache,subvolid=547,subvol=/@/.snapshots/156/snapshot/sys-share 0 0

But now we have a higher level system pool mount in /mnt2/ROOT of:

tumbleweed:~ # cat /proc/mounts | grep "ROOT"
/dev/vda3 /mnt2/ROOT btrfs rw,relatime,space_cache,subvolid=257,subvol=/@ 0 0

We can hopefully move these prior transient subvols (shares), within the same pool, to live where they would be had Rockstor created them afresh:

2-58

tumbleweed:~ # cat /proc/mounts | grep "a-fresh"
/dev/vda3 /mnt2/a-fresh-share-post-3.9.2-58 btrfs rw,relatime,space_cache,subvolid=549,subvol=/@/a-fresh-share-post-3.9.2-58 0 0

And given our /mnt2/ROOT is mounted with subvol="/@", we can, after first stopping the Rockstor services thus:

systemctl stop rockstor rockstor-pre rockstor-bootstrap

to prevent Rockstor from concurrently meddling (i.e. remounting these shares) while we meddle, unmount and move them (or rather their associated subvols) thus:

umount /mnt2/.snapshots/156/snapshot/sys-share
mv /mnt2/ROOT/.snapshots/156/snapshot/sys-share /mnt2/ROOT/

and then restart all Rockstor services:

systemctl start rockstor

(the rockstor service will invoke the other two, as pre-requisite services prior to starting itself)
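To confirm the move took, a quick check of the share’s new mount can’t hurt. A sketch, again with a sample line standing in for /proc/mounts (share name illustrative); note the subvolid is unchanged by the move, only the subvol path differs:

```shell
# Post-move the share should mount via subvol=/@/<name>, a sibling of
# /@/home. Live check: grep "sys-share" /proc/mounts
LINE='/dev/vda3 /mnt2/sys-share btrfs rw,relatime,space_cache,subvolid=547,subvol=/@/sys-share 0 0'
SUBVOL=$(printf '%s\n' "$LINE" | grep -o 'subvol=[^ ,]*' | cut -d= -f2)
[ "$SUBVOL" = "/@/sys-share" ] && echo "moved: now a sibling of /@/home"
```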

And a browser page refresh or two later and we have:

one-old-transient-share-moved

As can be seen, our moved btrfs subvol “sys-share” (a ‘share’ in Rockstor parlance), or more exactly “subvolid=547”, now sits, pool-hierarchy wise, alongside ‘home’ (fstab mounted) and our “a-fresh-share-post-3.9.2-58”.
And doing the same for our other transient pre-3.9.2-58 created share:

systemctl stop rockstor rockstor-pre rockstor-bootstrap
umount /mnt2/.snapshots/156/snapshot/another-sys-share
mv /mnt2/ROOT/.snapshots/156/snapshot/another-sys-share /mnt2/ROOT/

we have:

both-old-shares-moved

Now in the case of a Rock-ons root share: http://rockstor.com/docs/docker-based-rock-ons/overview.html#the-rock-ons-root

You would first have to disable the docker service within Rockstor’s Web-UI and then, post move, re-configure that service with the new share prior to re-enabling it. However, given all of the above, I still have no idea how you were ‘using’ shares within Rockstor’s Web-UI that only appear / are known to the Web-UI for a few seconds, before the next pool/share/snapshot refresh. But I still thought it might be useful to point out a method for moving a prior share, maybe created by Rockstor on the ROOT pool and then configured manually outside of Rockstor, to the new ‘proper’ location so that its use might be continued from within the Web-UI also.

So in short, one can move a subvol (within its parent pool only, and given an adequate mount of that pool) using ‘mv’, and thus re-position it to the new norm of Rockstor’s expected ROOT share location, alongside ‘home’. This way they survive boot-to-snap rollback events, as might be reasonably expected by folks; at least that was my guess.

Hope that helps. And keep in mind the “(early-adopters/developers only)” caveat for the ‘Built on openSUSE’ testing channel currently. During alpha/beta trials, and for anything that appears in the testing channel really, it is to be expected that anything, all the way up to a re-install, is likely to be required from time to time.

I’ll address your 3.9.2-58 testing contribution in its relevant thread to try and keep this one on topic.

Cheers.

OK, it looks like I took long enough working through your questions that we have cross-posted. Sorry to have taken so long here.


Let me expand on the data destruction a bit more, to make sure it is very clear:

  1. The share points to /@/.snapshots/1/snapshot/<share name>; the actual data is still in /@/<share name>.
  2. Delete the share /@/.snapshots/1/snapshot/<share name> (not deleting it would actually lead to a new share with the same name pointing to this mount point, even though they were treated as separate shares).
  3. Create a share with name <share name>. At this point the data in /@/<share name> is erased.
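The safe order, then, is to copy the data out before step 2 and copy it back after step 3. A minimal sketch of that round trip, using temp directories as stand-ins for the real share and backup paths (which depend on your pool layout):

```shell
# Stand-ins for the real locations; adjust for an actual system.
SRC=$(mktemp -d)   # stands in for /@/<share name> on the pool
DST=$(mktemp -d)   # stands in for the backup location (other disk)
echo "important" > "$SRC/file.txt"

# 1. Back up the share contents before touching the share.
cp -a "$SRC/." "$DST/"
# 2. + 3. Delete and re-create the share in the Web-UI; the new share
#    starts out empty (simulated here by clearing the directory).
rm -rf "$SRC"/*
# 4. Restore the data into the freshly created share.
cp -a "$DST/." "$SRC/"
cat "$SRC/file.txt"   # → important
```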

Luckily no, since the service would no longer start due to the share it tried to use not having any contents.

I do, though I will reserve the right to act surprised when things happen that weren’t mentioned in the update notes :slight_smile:

P.S.: Yes, I made backups of the data, thus I was able to avoid losing any share contents; I just decided not to use the backup for the rock-ons service.

@freaktechnik Just a quick one re:

There is no share migration from pre-3.9.2-58. We are in alpha here. The shares were in the wrong place before, hence my prior detailed post to assist.

Hope that helps.

Best to never put ‘real’ data on the system drive. It’s wrong, but we all want to do it, including me. It’s really handy for such ‘throw away’ stuff as the Rock-ons-root. Plus it’s nice to make more use of the system drive. But it’s still wrong. And I had thought very seriously about not allowing any user share creation or existing share access on the system pool. But it’s so handy.

We are to add some severe warnings against using the system pool for this very reason.

That’s what that tick is for; see my previous post re the delete dialog. But of course yours was not in the now-new norm location, and so you were outside of what the Web-UI generally knows about; see: no migration. We do however support import of data shares, and that remains unchanged, even between our now-legacy CentOS and new-norm ‘Built on openSUSE’ offerings.

Yes, Rockstor’s ‘knowledge’ of this strange and unusual share arrangement is most likely rather confused. Hence my original suggestion to delete them all.

Noted, and when I write them I’ll pop that in. I really need to get that Web-UI warning in place to prevent folks from using the system pool for their own shares unless they understand the additional risks. But in this case we are in alpha/beta (testing channel), so we may make such changes from time to time.

I think that makes sense for the Rock-ons root share, and that makes it a nice fit for the system pool. It’s what I use the system pool for, actually, and I think it’s a popular choice. It’s also one of the main reasons I didn’t simply disable user shares on the system drive, which would actually have removed a ton of complexity from the code. But given it’s taken all this time to have this nice-to-have, I thought it best to preserve it.

Incidentally, you have yet to explain how you used shares that only lasted a few seconds within the Web-UI after creation. That is my main puzzle here, but I just wanted to explain some of the workings for folks who might be interested, and you expressed surprise at your share data being non-visible. In all the instances of boot-to-snap I have encountered, pre-3.9.2-57 and before, all user-created shares simply vanished from the Web-UI post creation and page refresh. I suspect there was another element going on here. But frankly we are now at 3.9.2-58, and we can take it from there.

Glad you’ve got everything sorted. btrfs is really quite different from virtually all other filesystems, and it can be quite confusing at times. Take a closer look at that new mount point for the pool in 3.9.2-58 to get more of a gist of what’s going on. And keep in mind that Rockstor definitely does not, and likely never will, represent all that btrfs is capable of. We sort of dumb it down to an appliance nature and try to deal with those bits as well as we can. So we pretty much stick to a set structure re subvol layout etc. And pre-3.9.2-58 we had an inappropriate ROOT pool mount that pointed to the boot-to-snap subvol rather than the /@ one. Hence old shares created there were lost (sort of) to the then arrangement, as everything was shifted down a level, which is actually pretty tricky to work out.

Thanks again for your input on this. I’ll recommend, in the release notes to be, that the less technical do a re-install if they were using, by some means, shares on the system pool prior to 3.9.2-58. Easier all around I think. Otherwise they can move them as per my larger post earlier in this thread. I’d better get on with the pending work on the next ISO installer now.


I never saw such behavior and the shares worked perfectly fine. Their data was previously available from the system root (so at the absolute path /<share name>). It appears with the new setup this is no longer the case, though I assume that’s due to the changed pool root (from what I understand). Either way, the shares on the ROOT pool were stable for me prior to this update.

Which are exactly the qualities I chose rockstor for :slight_smile:

@freaktechnik Re:

Correct. We mount, and have pretty much always mounted, each share at its own mount point at:

/mnt2/share-name-here

In a boot-to-snap setup where the shares are within the “/”-mounted snap, they will no longer be within “/” once that is rolled back to a prior snap. Hence popping all new shares one level up, which is what we did on the CentOS system pool and for all data pools. But the boot-to-snap wrinkle, and maintaining a dual-function code base, basically caught me out, and I did it wrong before 3.9.2-58. From now on we should be good, stability-wise, on system pool user share creation. We can always tell, from now on, if they don’t line up with the home share. So best we keep an eye on that :slight_smile:.

Of course one can access a share via a pool path, but this way the Web-UI knows where to find them regardless, and we can do custom mount options and the like. It’s another complexity of btrfs that one can access subvols via their parent’s root mount or via their own. Subvols are ‘almost’ filesystems in their own right; but not quite.
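To illustrate those two access routes, here is a sketch using sample /proc/mounts lines (share name illustrative): the same subvolume, subvolid=547, is reachable both through its own dedicated mount and through the pool’s top-level /@ mount.

```shell
# Its own dedicated mount point:
OWN='/dev/vda3 /mnt2/sys-share btrfs rw,subvolid=547,subvol=/@/sys-share 0 0'
# The pool's top-level mount (subvol=/@):
POOL='/dev/vda3 /mnt2/ROOT btrfs rw,subvolid=257,subvol=/@ 0 0'

# Route 1: the subvol's own mount point (field 2 of the mounts line).
printf '%s\n' "$OWN" | awk '{print $2}'                 # /mnt2/sys-share
# Route 2: the parent pool mount plus the subvol's path under /@.
printf '%s\n' "$POOL" | awk '{print $2 "/sys-share"}'   # /mnt2/ROOT/sys-share
```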

I think we are on the same page here. Usability is hard and an appliance has to have it high on the list of aims. Otherwise one might as well just use the main upstream distros. We have to bring something, i.e. ease of use with little knowledge, to the table. But easier said than done of course. Especially while keeping things as simple as possible.

Constructive discussion. It has informed my beta release notes to be.

Cheers.