During install of the Rockstor-3.9.1.iso I selected manual partitioning and then selected to have the initial partitions created for me. The installer created LVM instead of BTRFS and then blamed me for it.
Detailed step-by-step instructions to reproduce the problem:
1. Start install on a blank disk.
2. Select “Installation Destination”.
3. Select “I will configure partitioning”.
4. Select “Done”.
5. Note that “New mount points will use the following partition scheme” is set to “LVM”.
6. Select “Click here to create them automatically”.
Regarding point 5: why does it even offer LVM, let alone default to it, if only BTRFS will do?
Your exclamation was mirrored by mine when I first encountered Rockstor (3 something, way back). I only later discovered that it inherits and re-badges the generic CentOS installer, but fails to fully constrain options that the rest of the Rockstor code and Web-UI then cannot make sense of. Now we have a far more ‘sane’ installer, per our very tightly configured kiwi-ng configuration here:
And that same repo contains the current instructions on building your own installer for what is our current effort: Rockstor 4 “Built on openSUSE”. No download as of yet, but soon. However it will be built with those exact instructions, and a plus is that all pending updates are pre-installed at the time the installer is built.
We are keen for this DIY installer method to be widely understood, as then folks will not have to wait for our next ISO download to get all the latest upstream updates. We used to have instructions on building our installer, but they received almost no community input. This latest attempt has been far more successful in that regard: we now have several contributors and have even had a profile added by a hardware innovator, i.e. the Leap15.2.ARM64EFI profile for ARM servers such as the Ten64 platform. Plus we have a Pi4 profile in case that is of interest.
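To give a flavour of what’s involved, the build boils down to cloning that repo and running a single kiwi-ng command. The following is only a sketch (the profile name, type, and paths here are assumptions on my part), so defer to the repo readme for the canonical invocation:

```bash
# Sketch only; see the rockstor-installer readme for the canonical commands.
git clone https://github.com/rockstor/rockstor-installer.git
cd rockstor-installer
# Build the installer image for a chosen profile (name assumed here):
kiwi-ng --profile=Leap15.2.x86_64 --type oem \
    system build --description ./ --target-dir /home/kiwi-images/
```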
Hopefully this helps to answer at least part of your question. And yes, I at least plan to add this to our main web page shortly, hence the following issues in our newly opened GitHub repo for our main website:
I’m afraid you have joined us at a transition period where our CentOS install is now way old (read: dinosaur) and our “Built on openSUSE” installer is in pre-release. And the newest Rockstor version/package is at Release Candidate 8 (4.0.7-0):
This is significantly newer than our last release on the CentOS variant which was:
Released 14 months ago.
And if you build your own Rockstor 4 installer it will be up-to-date from upstream at least up to and including the day you built it, whereas our dinosaur legacy installer on the now-legacy CentOS base was released nearly 4 years ago:
If you do give the new DIY installer a go, which is fitting for a DIY NAS software solution, do report back on your findings: many forum members have now built their own, and so far all feedback has been fed back into improving the documentation/process. It’s not trivial, but not far from it for those who also have the skills to DIY a NAS. A download would be easier, yes, but also static and less empowering overall. In time we will have frequent automated builds of our installer, but that time is not now. Soon, hopefully.
Cheers. And I share/shared your frustration on this one. All in good time is the hope.
Thanks for the well-thought-through answer and great detail.
I admit the need to build my own image did stop me from trying out version 4; I wanted to be sure it was going to be worthwhile first.
It’s a toss-up between this and TrueNAS SCALE for my QNAP TS-453A.
I hope this wins; partly because BTRFS seems to have lower RAM requirements and my box maxes out at 16GB.
I’ve just coaxed the old CentOS installer (I suppose openSUSE is certain, and Rocky Linux, as spiritual successor to CentOS, won’t be used) into letting me add an extra encrypted BTRFS mount point to the installation disk. By means of it I hope to limit rockstor_rockstor to 128G and have a new filesystem for everything else I want, with growing and adding mirroring when I insert another disk.
It’s a consequence of the NAS having a 512MB DOM (which seems to get ignored) and then 4x SATA disks, and I don’t really want to devote 2TB to BTRFS just to hold the system on a non-resizable filesystem.
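For what it’s worth, the growing and mirroring I have in mind would just be stock btrfs, something like this sketch (device names made up):

```bash
# Sketch only, device names made up: grow the extra btrfs filesystem,
# then mirror it once a second disk is inserted.
btrfs filesystem resize max /data        # claim any newly freed space
btrfs device add /dev/sdc /data          # add the newly inserted disk
btrfs balance start -dconvert=raid1 -mconvert=raid1 /data   # convert to mirrored
```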
But now I suspect that the new version 4 installer won’t permit that.
(And although the extra mount point /data is mounted, it doesn’t appear in any admin interfaces like disks, pools, or shares. Maybe I would have had more luck moving /home.)
With v4, any chance of being able to store the LUKS passphrase as a file on the internal DOM (or USB)? Maybe have it encrypted with some hardware info (e.g. from dmidecode) so it won’t be usable in isolation?
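Something like this sketch is what I mean; the paths and names are made up, and I realise the DMI value only obfuscates the key rather than truly securing it:

```bash
# Sketch, not a Rockstor feature: wrap a LUKS passphrase with a key
# derived from this machine's DMI data; paths and names are made up.
WRAP=$(dmidecode -s system-uuid | sha256sum | cut -d' ' -f1)
echo -n 'the-real-luks-passphrase' |
  openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$WRAP" -out /mnt/dom/luks.pass.enc

# At boot, reverse it and feed the passphrase to cryptsetup on stdin:
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$WRAP" -in /mnt/dom/luks.pass.enc |
  cryptsetup open /dev/sda3 data_crypt --key-file -
```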
Yes, the openSUSE move is almost done and has been ongoing for around 2 years. Rocky Linux is now only a few days old, I believe, and has yet to prove its ‘support’ capabilities in maintaining what it ‘does’. Plus we now have the closer spiritual succession of a binary-compatible Leap 15.3 / SLES pair on our horizon, which is nice; the enterprise ‘agent’ SUSE (SLES) helps directly to fund the openSUSE Leap endeavour. Not so with the Rocky Linux effort. Plus our openSUSE move was influenced by CentOS being too out-of-date and dropping even their technical-preview status for btrfs. openSUSE / SLES do default installs with the system as btrfs, and SUSE employs a few btrfs developers. This bodes far better than an upstream that does not support btrfs and has actively removed it from their offering. Horses for courses in this case.
Yes, we just don’t, at least yet, recognise more than a single btrfs partition per device, and we far prefer, on the data disks, to not use partitions at all; for the system drive we have some far older code that is really in need of improvement, and is consequently far less flexible in partition use.
We really try to separate system and data, and thus treat the system pool (btrfs vol) differently. Your only Rockstor-‘native’ option is to use the excess space on the system drive as shares (btrfs subvols), but these are then of course restricted to the raid level of the system pool (single). This may all become more flexible in time; however there is also pressure to enforce more separation and deny all ‘data’ access to the system drive entirely. That would actually have made our move from CentOS to openSUSE far quicker, as the system-drive subvol differences were massively non-trivial. But we did it anyway to preserve functional parity with our Rockstor 3 offering. Plus it’s really handy to put stuff like the Rock-ons-root on the system drive, especially in smaller home setups, and in fact in your scenario, where otherwise a massive amount of space would go to waste.
So in short: yes, the Rockstor 4 installer absolutely enforces a known partition setup that is fully understood by the resulting install, addressing your initial difficulty. But no, the capability to have an additional btrfs partition on the system disk managed, or even understood, by Rockstor does not exist. It will probably be quite a while before it does, as we have to migrate our existing, overly fragile system-disk interpretation code over to the far newer and simpler btrfs-in-partition code added a way back to support partitions on the data drives. Our preference, default, and recommendation, though, is to not employ partitions at all on the data drives, as it’s yet another layer of complexity that is redundant given btrfs can handle the raw device directly.
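To illustrate that last point, with made-up device names, btrfs is happy to consume the raw device with no partition table at all:

```bash
# Our preferred data-disk arrangement: btrfs directly on the raw device.
mkfs.btrfs /dev/sdb
# The partitioned alternative we also understand on data disks, but which
# just adds a redundant layer:
mkfs.btrfs /dev/sdb1
```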
Is there a possibility of using a fast, wear-levelled USB device as the system disk? This has worked very well for me and many here on the forum. Does that hardware have a decent USB port to support this? Btrfs is OK over USB if the pool is only a single device, such as our system disk; otherwise it is prone to failure, given USB’s notorious instability in comparison to SATA etc., i.e. spurious resets.
Potentially. But not for a bit, and definitely not at launch. There does look to be a possibility of kiwi-ng now being able to do LUKS-encrypted system disks, so that may be an option in our future. And I’d like to be able to have an alternative path specified within the LUKS setup screen. But again, this will take some time for the core team to do. However we are open source and welcome well-tested pull requests, so if the fancy takes you then take a look at:
A detailed understanding of the LUKS config and how Rockstor implements it would be useful here, as one may have to account for multiple expected LUKS config arrangements in the case of pending LUKS-encrypted system disks etc. But doable, I imagine.
Or of course if the system doesn’t power up one day and the dmi info is no longer available.
We do currently support password on power up, i.e. no key in file, for the LUKS stuff. Maybe that will do for you for now.
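In /etc/crypttab terms that amounts to the following sketch (UUID made up), where the ‘none’ key field is what forces the boot-time passphrase prompt:

```bash
# Sketch of our current 'passphrase on power up' LUKS arrangement, as an
# /etc/crypttab line (UUID made up). The third field being 'none' means no
# key file; systemd prompts for the passphrase on the console at boot.
grep luks /etc/crypttab
# luks-1234abcd  UUID=1234abcd-56ef-78ab-90cd-ef1234567890  none
```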
“Or of course if the system doesn’t power up one day and the dmi info is no longer available.”
That’s the trade-off. You write down your key, but at least you don’t have to find a way to remotely re-enter it every time there is power loss.
Your answers are helpful.
It seems that in the short to medium term, to have full use of the front drive bays, I would have to replace my small 512MB DOM with a USB header cable and add a larger drive (probably an SSD) internally, in order not to sacrifice a front bay for the system; which sadly means there will also be no FS duplication for the system.
In the meantime, as I have a pair of 2TB and a pair of 10TB drives, I’ll probably use the 2TB pair for the system and most user home directories, and the pair of 10TB for something else; I imagine that is possible.
Maybe EFI variables could store the initial passphrase for the system disk, encrypted with the TPM (if available) or dmidecode values otherwise.
Then the initramfs will unlock the system disk as it boots, which can then unlock the rest.
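I gather the TPM half of this already exists upstream as clevis; as a sketch, assuming the clevis and clevis-dracut packages, a TPM2 chip, and a made-up device name:

```bash
# Sketch: bind an existing LUKS volume to the local TPM2 via clevis, sealed
# against PCR 7 (secure-boot state), so the disk only unlocks on this machine.
clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'
# With clevis-dracut installed, rebuild the initramfs so the volume is
# auto-unlocked during early boot, no typed passphrase needed.
dracut -f
```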
If there is a better place for such discussion, let me know.
Yes, but we don’t yet support this anyway on the system disk. Plus only more recent grub efforts enable raid on boot; it’s kind of a long-standing upstream limitation, but I believe it’s on its way out now. We would, though, have to update our btrfs-in-partition code for the system disk again to recognise this anyway.
This element of LUKS ‘support’ would have to be in the upstream first, i.e. openSUSE Leap and the kiwi-ng installer, for us to adopt it. But as mentioned, I believe there has been some recent movement in kiwi-ng to enable LUKS on the system disk during install.
That would be within the kiwi-ng project, as they are the system we use to create our installers, and the installer is what creates the initial partition and LUKS config in the case of the system disk. We may then have to adapt our existing LUKS support (data disks only) to enable whatever they end up doing.
Kiwi-ng as referenced in our rockstor-installer repo readme:
However, what you may want is not necessarily what we are prepared, by motive or capability, to support in Rockstor. But hey, one has to start somewhere. And in the recent work on LUKS encryption of the resulting install (our system disk in Rockstor terms) there was some discussion re the key and its source. See the following issue:
Pull request (now merged):
And to see how Rockstor ‘copes’ with this, you can always add the required new LUKS configuration elements to our installer config (the rockstor.kiwi) before you build your Rockstor 4 DIY installer. You never know, it may be just fine. I did, when developing the LUKS stuff, try it out with the default LUKS system-disk setup in the CentOS install (at least I think I did), but it was a long time ago now :).
Let us know how you get on if you end up experimenting in that direction. But before you experiment, make sure you are first able to build and install your selected profile before making any modifications to the config. That way you rule out something adrift in what we already have at the time you try the recipe.
So… gotta try the v4 installer, which I think for now needs an openSUSE rootfs to work in, but that could be a chroot (probably with /dev, /sys, and /proc bind-mounted in it).
It’s a pretty sophisticated program; you would take quite a considerable time replicating what it expects. Plus, as mentioned in the Readme, they have a ‘boxes’ type thing that does their own VM setup for running on non-native hosts. Or just establish a quick server install of Leap 15.2 and do the business in there. We do plan, as mentioned in the readme, to make use of their boxes/VM abstraction, and I think it’s only another parameter to the kiwi-ng command line, but I’ve just not gotten around to trying it out yet. But if you do, and it works, then do report back here, as it would be nice to adjust the docs to suggest this as an easier alternative. I’m planning to try it out myself when I get a moment, but managed to pop a reference to it in the docs at least.
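From memory, and do check the kiwi-ng docs as I haven’t actually run this yet, the boxed route looks roughly like the following sketch (the box name, profile, and paths are my assumptions):

```bash
# Sketch, unverified: kiwi's boxed-build plugin spins up its own build VM,
# so the host need not be openSUSE. Box name, profile, and paths assumed.
kiwi-ng --profile=Leap15.2.x86_64 --type oem \
    system boxbuild --box leap -- \
    --description ./rockstor-installer/ --target-dir /home/kiwi-images/
```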