Regular config backups:
With the installer it now takes only a few minutes to install anyway, and given it includes all pending updates at time of build, that is another thing that is not required, further speeding up deployment. But in your case of a custom install it's a bit more tricky and down to generic means.
This would be an extremely risky approach. Btrfs has a known issue where it will corrupt if it sees pools of the same UUID. I would definitely not take this approach under any circumstances. The issue is that if you have an online device member that is a clone of another, via its btrfs pool UUID, btrfs can't tell them apart properly and can inadvertently write to the wrong one, rapidly messing things up. The 'shadow' does have to be mounted I believe, but just don't take this approach is my advice. Plus we also have the following issue that will likely trip you up if you are not aware of it:
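For anyone wanting to check whether a cloned pool member is visible on their system, here's a minimal sketch that scans `blkid`-style output for btrfs filesystem UUIDs appearing on more than one device. The device names and UUIDs below are made up for illustration, and this only parses canned text; in real use you would feed it the actual output of `blkid`:

```python
import re
from collections import defaultdict

def cloned_btrfs_members(blkid_output):
    """Map each btrfs filesystem UUID seen on more than one device to
    the devices carrying it. A block-level clone of a pool member
    shows the same filesystem UUID twice, which is exactly the
    situation btrfs cannot safely tell apart."""
    seen = defaultdict(list)
    for line in blkid_output.splitlines():
        m = re.match(r'(/dev/\S+):.*\bUUID="([0-9a-f-]+)".*\bTYPE="btrfs"', line)
        if m:
            seen[m.group(2)].append(m.group(1))
    # Keep only UUIDs present on two or more devices.
    return {uuid: devs for uuid, devs in seen.items() if len(devs) > 1}

# Made-up blkid output, with /dev/sdb3 as a raw clone of /dev/sda3:
sample = (
    '/dev/sda3: LABEL="ROOT" UUID="aaaa-bbbb" TYPE="btrfs"\n'
    '/dev/sdb3: LABEL="ROOT" UUID="aaaa-bbbb" TYPE="btrfs"\n'
    '/dev/sdc1: UUID="cccc-dddd" TYPE="ext4"\n'
)
print(cloned_btrfs_members(sample))  # → {'aaaa-bbbb': ['/dev/sda3', '/dev/sdb3']}
```

If a duplicate does turn up, the safe moves are to keep the clone detached, or (I believe, while the clone is unmounted) regenerate its filesystem UUID with `btrfstune -u` before it is ever seen alongside the original.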
These off-topic discussions are likely of less use to folks under the heading we have within this thread, so it's probably better for folks to chip in with subject-specific threads. Tricky, I know, and I'm always digressing myself within a thread, so there's that :).
Btrfs has such a lot of magic in it already that I'd really try to keep your bare metal recovery to a bare minimum, complexity/magic wise. Better to look to our options re multi-disk boot within the kiwi-ng / btrfs realm. And if you store no state on the system drive, a re-install, pool import, and config import is trivial anyway. Others will likely have other ideas on this, but take great care with clones of devices/pool members within the realm of btrfs: there be dragons.
For this you could use a generic backup program, or even one of the backup options in our rock-ons, and simply back up the relevant config file. Then, on restore of the rock-on and its config, you would have its backup 'payload' there on the redundant pool, ready to restore via the now pre-configured rock-on to the system drive.
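As a rough sketch of the "grab the relevant config file" step, something like the following would do: it bundles a list of config files into a timestamped tarball on the redundant pool. All paths here are hypothetical; a real setup would point at whatever files your custom install actually uses:

```python
import tarfile
import time
from pathlib import Path

def backup_configs(config_paths, dest_dir):
    """Archive the given config files into a timestamped .tar.gz
    under dest_dir (e.g. a share on the redundant data pool), and
    return the archive's path. Missing files are skipped silently."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"config-backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for p in map(Path, config_paths):
            if p.exists():
                # Store by file name only, to keep restores simple.
                tar.add(p, arcname=p.name)
    return archive
```

Run from cron (or from a scheduled task inside a backup rock-on), that leaves a dated payload on the pool, ready to unpack back onto a freshly re-installed system drive.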
Our approach / recommendation is to have as little as possible stored on the system drive. If you make sure not to use the system drive for any shares, you will make your life a lot simpler with regard to restore. Just a thought. We actually put quite a lot of work/time into preserving the capability to use the system drive, as it's so useful sometimes, i.e. for rock-ons-root. It's still not advisable, but that doesn't make it not useful, and we wanted to preserve feature parity with our CentOS offering. Oh, and flexibility is almost always useful; but again, that doesn't make it a good idea. Anyway, I expect you get the idea.
I wouldn't recommend it. Mdraid under btrfs is both redundant and a liability. It can undermine btrfs, as mdraid will invisibly replace one copy with another without knowing which is the correct one. Best to manage redundancy via higher-level means until we have a better option available to us. The following old doc was our appeasement to this oft-requested feature in the old CentOS variant:
No longer relevant and mdraid for the system drive in the new/current ‘Built on openSUSE’ variant is completely untested. Plus there’s that undermining thing again.
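To illustrate that undermining point: btrfs keeps a checksum per block in its metadata, so on read it knows which mirror copy is good; mdraid has no content checksum, so a resync can just as easily copy the bad side over the good one. A toy sketch of the difference, using CRC32 as a stand-in for btrfs's real checksums:

```python
import zlib

def btrfs_style_read(copies, stored_checksum):
    """Return the first copy whose checksum matches the one stored in
    metadata, i.e. the verified-good copy. Returns None if no copy
    matches (unrecoverable)."""
    for data in copies:
        if zlib.crc32(data) == stored_checksum:
            return data
    return None

def mdraid_style_resync(copies):
    """mdraid has no content checksum: a resync simply takes one side
    (here, arbitrarily, the first) and overwrites the others with it,
    whether or not that side is the corrupted one."""
    return [copies[0]] * len(copies)

good = b"important data"
bad = b"important dat\x00"  # silent corruption on one mirror
csum = zlib.crc32(good)

print(btrfs_style_read([bad, good], csum) == good)  # → True: bad copy rejected
print(mdraid_style_resync([bad, good]))  # bad copy propagated to both sides
```

So with mdraid sitting underneath, btrfs only ever sees whichever copy mdraid hands it, and loses its chance to pick the good one.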
Definitely a tricky one, but we should keep an eye on these. My personal preference is to work only on the btrfs multi-device system pool, and we will, within our own code, have to make quite a few changes for that to work. But it's doable, and much more so since we enabled the btrfs-in-partition capability. That work has yet to make it to our treatment of the system disk, though, and given our backlog of technical debt and the 4 release itself, this capability is not likely to emerge for another major release or two. But it is planned, as it would greatly simplify our code.
Hope that helps.
P.S. I vote for one of the backup rock-ons, configured to grab your custom configs; and use the native Rockstor config save/restore, after pool import, for the rest.