Documenting importing my BTRFS pool from another server - Stable kernel, btrfsprogs, and more

I am migrating a 10-disk pool from my prior DIY storage server to my new Rockstor server.

The old pool is RAID6 with metadata stored as RAID1C4, which requires Linux 5.5+ to mount, newer than the kernel shipped with even Leap 15.3, so I went down the rabbit hole of installing a backported kernel on Leap 15.2. Here’s my documentation.

Per https://doc.opensuse.org/documentation/leap/reference/html/book-reference/cha-tuning-multikernel.html#sec-tuning-multikernel-latest, I ensured that /etc/zypp/zypp.conf contains the following:

multiversion = provides:multiversion(kernel)
multiversion.kernels = latest,latest-1,latest-2,latest-3,running

latest-2 and -3 might be overkill, but for now I’d much rather have a bootable server than save a little disk space. I intend to research this more and see if I can somehow specify the latest from Leap_15_2 alongside the latest from kernel-stable-backport-repo.
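
A quick sanity check that the settings took is just to grep them back out:

$ grep -E '^multiversion' /etc/zypp/zypp.conf
multiversion = provides:multiversion(kernel)
multiversion.kernels = latest,latest-1,latest-2,latest-3,running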

I had to find a kernel to install, preferably from the Open Build Service. I chose https://build.opensuse.org/project/show/Kernel:stable:Backport. I also searched for a newer version of btrfsprogs and found it available in https://build.opensuse.org/project/show/filesystems

To add the repos, I logged in over ssh and did the following:

$ zypper addrepo \
    https://download.opensuse.org/repositories/Kernel:/stable:/Backport/standard \
    kernel-stable-backport-repo
$ zypper addrepo \
    https://download.opensuse.org/repositories/filesystems/openSUSE_Leap_15.2/ \
    filesystems-repo
$ zypper refresh
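
To double-check the new repos registered, and optionally enable auto-refresh on them so routine refreshes pick up their updates (the aliases are the ones chosen above):

$ zypper repos -u
$ zypper modifyrepo --refresh kernel-stable-backport-repo filesystems-repo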

I then searched the repos for kernel-default and btrfsprogs at more modern versions. I’ve truncated the output tables here to show only the relevant rows.

$ zypper search -s kernel-default
S  | Name                           | Type       | Version                        | Arch   | Repository
---+--------------------------------+------------+--------------------------------+--------+----------------------------
v  | kernel-default                 | package    | 5.13.12-lp153.6.1.g33df9c6     | x86_64 | kernel-stable-backport-repo
i+ | kernel-default                 | package    | 5.3.18-lp152.87.1              | x86_64 | Leap_15_2_Updates
i+ | kernel-default                 | package    | 5.3.18-lp152.19.2              | x86_64 | Leap_15_2

$ zypper search -s btrfsprogs
S  | Name                        | Type       | Version            | Arch   | Repository
---+-----------------------------+------------+--------------------+--------+------------------
v  | btrfsprogs                  | package    | 5.13.1-lp152.347.2 | x86_64 | filesystems-repo
i+ | btrfsprogs                  | package    | 4.19.1-lp152.6.3.1 | x86_64 | Leap_15_2_Updates
v  | btrfsprogs-udev-rules       | package    | 5.13.1-lp152.347.2 | noarch | filesystems-repo
i  | btrfsprogs-udev-rules       | package    | 4.19.1-lp152.6.3.1 | noarch | Leap_15_2_Updates
$ zypper install kernel-default-5.13.12-lp153.6.1.g33df9c6 btrfsprogs-5.13.1 btrfsprogs-udev-rules-5.13.1

This also pulled in kernel-firmware-all, crda, and wireless-regdb. I don’t have Wi-Fi on the server, but I don’t know zypper or openSUSE well enough to be comfortable excluding these.

After a reboot I was able to import the pool. A little manual cleanup of errant files and paths floating around in the btrfs root subvolume, and everything seems great.
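
For anyone following along, a quick way to confirm the new kernel and btrfsprogs are actually in use before attempting the import (nothing Rockstor-specific, just stock tooling):

$ uname -r
$ btrfs --version
$ btrfs filesystem show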

Rockstor doesn’t know about RAID1C3 yet, but that’s not a huge problem. I might have to tweak a few things over time to keep this working right, but I’m happy.

I’d love some feedback on all this. I didn’t deal with migrating Rock-ons; I just recreated them instead.


@kageurufu Nice. And thanks for sharing your findings on this one.

You could also custom-edit our kiwi-ng installer definition, in your chosen profile, so that these updated packages get baked into a custom installer. Take care though: newer kernels / btrfs occasionally make syntactic / data changes to output that we parse. Do report any you find, as then we can see what’s coming. We used to build for Tumbleweed for this reason, but our own technical debt (still Python 2) caught up with us on that front. Note also that we currently build rpms for Leap 15.3 and have installer profiles for this as well.

Linking for context to prior forum threads on issues / plans for the newer raid profiles:

and more specifically:

These are big and deep changes, but ultimately entirely doable. Just with care.

Note that Rockstor will enforce its ‘way’ and will likely return raid profiles to be within its currently more limited knowledge, so watch out for that. Also, the disk-count fences / warnings (minimums etc.) will be inaccurate / misleading, so you will be on your own on that front. E.g. btrfs raid1c3 requires a minimum of 3 disks, but Rockstor only knows about btrfs raid1 (c2 default), so it will likely only enforce / warn about a minimum of 2 disks for a raid1c* arrangement.

As to your import: Rockstor has a necessary and by-design narrow understanding of subvolume arrangements. It looks like, from your import, you were able to arrange your prior subvolumes to fit this. The expected arrangement can be seen by creating native Rockstor subvols (shares in Rockstor speak); anything outside it will likely be either ignored or a source of confusion.

In short, we don’t support the import of btrfs volumes that were not created by a prior Rockstor install, but only because of the subvol arrangement. If that is compatible then all is good. We should have a technical doc on exactly what this is. Oh well, much to do. Bit by bit.

Thanks again for sharing your findings on this one. Maybe a pull request in our DIY installer maker:

with the additional repos added, plus a script addition to edit (i.e. sed) any system files (the zypp.conf bit) to adapt the resulting system to these kernels, would be good. All remarked out / disabled would be best: then folks wishing to run these cutting-edge kernels could un-remark these within the installer config and be off to the races. In case you end up playing with creating an installer addition, that is. It would also be handy for emergency re-install type arrangements once you have this system set up.

Do keep us informed of the Web-UI shortcomings, i.e. pics here with the suggested missing / wrong bits, wherever we currently fail to represent this for-now unsupported raid profile. These findings could then form the basis for a new feature issue in the rockstor-core repo.

Cheers. I’m looking forward to extending our raid profiles actually but again, bit by bit.


I can take a look. I haven’t messed with editing .kiwi files before, but they really don’t look that complicated. Might be useful to have these as alternate profiles instead of commented blocks? I’m not sure if you want to risk making people think that’s “supported” though.

Ah, I wish I had found this earlier, it would have helped :slight_smile:. Hopefully the title I set will help make this more discoverable?

I’m fine doing my BTRFS editing on the command line. Is there some debug mode where I can see what commands Rockstor would run, without actually running them?

I have enough spare drives, so I’ll set up another pool (RAID6 data, RAID1C3 metadata) to experiment on. Is there a good guide to setting up Rockstor for development? Extra points if I can have development and stable Rockstor installations side by side!

EDIT: Done. Rockstor doesn’t seem to care about the metadata RAID level; balances and scrubs work just fine. Changing the raid level to RAID5 did change the system and metadata levels to RAID5 as well. I was able to btrfs balance start -mconvert=raid1c4 /mnt2/danger it right back, but on a pool with lots of data this would be a serious issue. I think the real thing to focus future development on would be adding a second dropdown for metadata raid to every RAID level selection, plus detection of supported RAID levels; checking for the existence of /sys/fs/btrfs/features/raid1c34 works for this (see the sketch after the df output below).

RadiantGarden:~ # btrfs filesystem df /mnt2/danger
Data, RAID6: total=2.00GiB, used=768.00KiB
System, RAID1C4: total=64.00MiB, used=16.00KiB
Metadata, RAID1C4: total=1.00GiB, used=128.00KiB
GlobalReserve, single: total=3.25MiB, used=0.00B
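
For reference, the detection I mean is just a test for that sysfs entry; a minimal sketch:

$ test -e /sys/fs/btrfs/features/raid1c34 \
    && echo "raid1c3/raid1c4 supported by the running kernel" \
    || echo "raid1c3/raid1c4 not supported"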

My layout was already basically just a single layer of subvolumes on the root of the filesystem, /mnt/[pool]/[subvolume], so I only had to clean up a few “work” folders where I’d been storing dedupe data.

I have noticed this issue a few times with Rockstor picking up other system subvolumes, /var/lib/machines and /var/lib/libvirt, which both ended up “weird” in the UI since they have the “illegal” slashes in their pathnames. I’m at fault for the libvirt one, but I don’t think I enabled systemd-machined myself. Might just be a Leap thing?

@kageurufu Hello again.
Re:

I think I’d rather have them as commented blocks actually, as we already have quite a few profiles and there is quite a lot of ‘stuff’ that goes with an entire profile. Plus, if it’s a profile, as you say, it’s a little too easy to just ‘select’ it on the command line and then not realise quite how far out-of-scope that profile has taken you. Take a look at the definition and I think you will see what I mean.

Any command that runs outside of btrfs limitations will likely return a non-zero rc anyway, so the Web-UI should indicate the command that threw the error. I.e. if the Web-UI doesn’t catch that you want to remove a disk below the minimum count, for example, it would normally show a friendly pop-up / form error explaining this. That bit will be broken. However, the resulting nonsense command should then throw an error at the command level and you will get the less friendly btrfs “I can’t remove a disk from this btrfs level” type thing. Which it sounds like you will be OK with anyway.

Re:

Not quite a see-before-run, but it does increase the log level, so some commands that are otherwise silent will show up explicitly in the logs:

/opt/rockstor/bin/debug-mode 
currently debug flag is False
Usage: /opt/rockstor/bin/debug-mode [-h] [ON|OFF]

A dry-run facility is going to be tricky as many of our ‘single’ Web-UI actions actually encompass quite a few sequential btrfs commands. You can take a look by refreshing the ‘Shares’ page, for example, while debug mode is on.
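
Roughly, the workflow would be as follows. The rockstor.log path here is from memory, so treat it as an assumption and adjust if your install logs elsewhere:

$ /opt/rockstor/bin/debug-mode ON
$ tail -f /opt/rockstor/var/log/rockstor.log
    # refresh the 'Shares' page in the Web-UI and watch the underlying commands appear
$ /opt/rockstor/bin/debug-mode OFF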

Yes, hopefully:
Doc entry first: “Developers” subsection of “Contributing to Rockstor - Overview”
https://rockstor.com/docs/contribute/contribute.html#developers
and our wiki guide (in need of a little attention): “Built on openSUSE dev notes and status”

But those are really only relevant if you are contemplating contributing (that would be nice).
N.B. all source builds consider any rpm version an update and will necessarily wipe all settings.
And there is no update mechanism from one source build to another, other than via the db wipe, and re-establishing itself as an rpm install. A source install is an unknown, so we don’t support any update from any version to any other; it will show up as UNKNOWN VERSION or the like.

OK, we definitely fail on that front. The Django config we use has a single Postgres db backend, so there is currently no way for two instances to co-exist, at least not without eating each other’s db. It could be done but would need a lot of work to differentiate each db. Also, the following db bug / inelegance would need to be addressed first:

https://github.com/rockstor/rockstor-core/issues/2076

That way, once we add db names to our ‘versions’, a reset in a testing instance would only wipe the testing-related database, with all that instance’s settings in it. The problem will likely also reach into contention on devices etc., but that could be partly approached with expectation-setting, i.e. a “… we don’t support concurrent versions running …” type thing. So mainly a db thing, I think. No plans for this however, and we have quite a lot on our plate, so it’s not likely to appear any time soon, although pull requests are welcome. Keep in mind though that we use a now-unmaintained build system that is also Python 2 only, so we have much larger fish to fry, and configs within this build system to approach this could well end up being deleted by those larger fish.

I’d keep an eye on this stuff though. We concentrate on data raid levels currently and mostly (but not entirely) go with the metadata default. Good to have the feedback here though.

Agreed, and thanks for the input. However, I think the initial step before this would be to surface the metadata level within the Web-UI, alongside our current data level. That way we have user-visible feedback before we start adding fences etc. Also note that a partial balance can leave some parts at the old raid level; such situations may well throw spanners in the works. I.e.:

Best we break it down into steps as manageable as possible. It took us quite some time to reach our current ‘fence/constrain’ level, and more variables are likely to throw up way more permutations. Let’s earmark this feature (awareness / surfacing of data & metadata raid levels) for our next testing channel, shall we. If we are lucky it should slot in nicely, but given we are about to release our long-awaited next Stable it’s just not appropriate yet.

Nice. Also note that we have some other basic restrictions, like no two subvols may have the same name (system-wide, actually). We have yet more ‘deep’ improvements to do on that front, i.e. bringing our pool uuid tracking/management up to a state where it can replace our ‘by name’ approach. We are a little further along on the subvol side of that, but it’s still in need of some improvement. At least we mount by subvol id now :). It used to be by name!!

We should probably discuss this in its own forum thread actually, as that sounds like something we can just set to be ignored. Take a look at the following pull request:

https://github.com/rockstor/rockstor-core/pull/2270
and issue:
https://github.com/rockstor/rockstor-core/issues/2223
and initiating forum thread:

We already have a system-pool-specific ‘filter’ for subvols we don’t want to surface. You may well just be able to edit one or the other to have these weird subvols ‘go away’ from the Web-UI. Let us know what works for you; it may well be something we can add if others are likely to run into this. Just editing the Python file with those exclusions in place and restarting the system should do it, but we can work through this in a dedicated thread if you fancy. Should be an easy fix, if still at the simple code-edit stage.
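
As a rough sketch only (the file path is from memory of our source layout and the grep is just to locate the relevant list, so treat both as assumptions):

$ grep -n -i "exclude" /opt/rockstor/src/rockstor/fs/btrfs.py
    # add the unwanted subvol paths to the exclusion list found there, then:
$ systemctl restart rockstor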

Thanks for the engagement, and keep us informed of your adventures here. New testing channel/branch to start shortly so we can begin throwing stuff at that soon.

Hope that helps.


@kageurufu Hello again.

Just a note to say that I’ve now merged a remarked-out repo section within our rockstor-installer:

It uses the new location for the filesystems repo, which was only populated today.
It also has the stable backport kernel you used in this guide.

For details see the following pull request:
https://github.com/rockstor/rockstor-installer/pull/88
which was in turn against the following issue in the same repository:
https://github.com/rockstor/rockstor-installer/issues/86

You might want to take a look at the changed location of the filesystems repo that you used. They renamed it and, as a result, its contents were all re-created; but there was a config issue, so there was no btrfsprogs available in the new one until today, as it goes. This may not affect the 15.2 version though.

So you may want to remove your filesystems repo and add the newly named one if you are now on a 15.3 base anyway: they removed the ‘openSUSE’ bit in the name, presumably as part of the Jump project and due to the greater number of shared repositories between SLE and openSUSE Leap 15.3+.
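
If you do make that switch, the swap would be along these lines (the new path is my reading of the rename, so do check it resolves for your base before relying on it):

$ zypper removerepo filesystems-repo
$ zypper addrepo \
    https://download.opensuse.org/repositories/filesystems/15.3/ \
    filesystems-repo
$ zypper refresh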

From your original post you may still be using a Leap 15.2 base. I’m about to release a new update that brings our 15.3 functionality in line with our best capabilities within 15.2, so you may want to do a zypper dup at some point. Or build a custom installer using the 15.3 profile, with the appropriate config lines for the Stable backport kernel and filesystems repo un-remarked.

Also a little over 3 weeks ago we added the following doc section which you might like:
“Installing the Stable Kernel Backport”
https://rockstor.com/docs/howtos/stable_kernel_backport.html
Leap 15.3 only as 15.2 is now EOL.
This again was influenced by your post here. So thanks again for sharing your adventures to date.

On the zypper dup front I have the following pending pull request:
https://github.com/rockstor/rockstor-doc/pull/354
Its publication is awaiting the imminent release of 4.1.0-0, which includes an important fix for improved behaviour during a zypper dup. Details within the pull request.

Hope that helps.


Awesome, thanks for the advice there!

I was able to upgrade to 15.3 today. I had to lock btrfsprogs and btrfsprogs-udev-rules to do a zypper update first, then unlock them for the zypper dup, but otherwise it worked perfectly.
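
Roughly, for anyone following along (the repo switch to the 15.3 equivalents happens in between, and exact steps will vary with your setup):

$ zypper addlock btrfsprogs btrfsprogs-udev-rules
$ zypper update
    # switch the repos over to their 15.3 equivalents here, then:
$ zypper removelock btrfsprogs btrfsprogs-udev-rules
$ zypper dist-upgrade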

Any reason you’re not recommending the use of $releasever in the repo URLs? I went that route just for ease of future upgrades.
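
Concretely, what I did (the single quotes stop the shell expanding the variable, and zypper substitutes it at run time, so a future release bump can be driven with zypper --releasever=<version> dup, assuming the repo keeps publishing matching subdirectories):

$ zypper addrepo \
    'https://download.opensuse.org/repositories/filesystems/$releasever/' \
    filesystems-repo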