LUKS Full Disk Encryption on Rockstor 4 System Drive?

With the current latest Rockstor 3.9.1-16 I was able to check ‘Encrypt my data’ in the CentOS Anaconda installation wizard. My goal is to have full disk encryption on all of my drives, including the system drive, and the current version allows for this.

When trying out the Rockstor 4 beta there didn’t seem to be an option to enable LUKS on the system drive during installation. Is it possible to enable LUKS on the system drive after installation? Will a future release of Rockstor 4 allow me to encrypt the system drive during installation?

Besides the system disk, I have one Btrfs raid1 pool on top of three data disks with LUKS encryption enabled. This was very easy to set up using the Rockstor Web UI :slightly_smiling_face: These disks are set to auto-unlock with a keyfile (stored on the system drive).
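
Under the hood, that keyfile auto-unlock amounts to an /etc/crypttab entry along these lines (the UUID and keyfile path below are placeholders of mine; Rockstor’s Web UI manages the real entries for you):

```
# /etc/crypttab -- unlock a LUKS data disk at boot with a keyfile
# <mapped name>          <source device>          <keyfile>            <options>
luks-<DATA-DISK-UUID>    UUID=<DATA-DISK-UUID>    /root/keyfile-data1  luks
```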

For now I’m fine with manually unlocking the system drive upon boot. The system will always be on, backed by a UPS. But in the future I hope to remotely unlock using dracut-sshd.
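
From what I’ve read of the dracut-sshd README (github.com/gsauthof/dracut-sshd), remote unlock roughly boils down to the following. This is an untested sketch on my part; the key path and the DHCP assumption are illustrative only:

```
# Assumes the dracut-sshd module is already installed on the system.
# 1. Provide an authorized public key for the early-boot sshd:
mkdir -p /etc/dracut-sshd
cp /root/.ssh/id_ed25519.pub /etc/dracut-sshd/authorized_keys   # your own key

# 2. Pull the module into the initrd and rebuild it:
echo 'add_dracutmodules+=" sshd "' > /etc/dracut.conf.d/90-sshd.conf
dracut -f

# 3. Add rd.neednet=1 ip=dhcp (or a static ip= spec) to the kernel command line,
#    reboot, ssh into the initramfs, and answer the LUKS prompt with:
systemd-tty-ask-password-agent
```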

I’m really looking forward to replacing my Synology with Rockstor! Rockstor + LUKS on all disks + Portainer would be my ideal NAS :smiley:

@Jip-Hop Hello again.

Not as far as I’m aware. I think this may be a limitation of our use of kiwi-ng. Needs more research to find out.

If it’s supported by kiwi-ng.

Great, I really tried to make it as clear as possible. Always a challenge and especially so in such contexts.

Thanks for the feedback.

You could look into the encrypted system disk option via the upstream project we use to build our installer:

Our DIY installer recipe is just a config for that system with some minor scripts. It’s the exact same system openSUSE / SUSE SLES uses to build some of their installers.
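
From a quick look, kiwi-ng does appear to document a `luks` attribute on the `<type>` element of the `.kiwi` image description; whether that plays well with the rest of our recipe is untested on our side. A minimal, illustrative fragment only (the passphrase is a placeholder):

```xml
<!-- Hypothetical fragment of a .kiwi image description; untested with the Rockstor recipe -->
<preferences>
  <type image="oem" filesystem="btrfs" luks="change-me-passphrase"/>
</preferences>
```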

Hope that helps.

Thanks a lot for replying! Very helpful :slight_smile:

Would it be an option to do a vanilla openSUSE 15.2 installation, build Rockstor from source, and then migrate to one of the official release channels? The openSUSE installer allows me to enable LUKS on the system partition (currently waiting for the installer to finish).

I hope kiwi can support it… I’ll try to find something on it.

@Jip-Hop Re:

That doesn’t really work for production as it’s intended for development and there is no upgrade capability. But it’s not required anyway, as you can always add the repos. See the following for a guide on how to modify a generic Leap install to what our installer sets up:

It in turn links to a forum thread on testing the release rpms via their repos. But there are a number of things we have to change, such as AppArmor etc., so it can be quite a fuss.
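
Once you have the repo details from that guide, the remainder on a generic Leap looks roughly like this. A sketch only: the repo URL below is a placeholder, so take the real one from the guide, and follow the guide for the AppArmor/IPv6 tweaks:

```
# Placeholder URL -- use the real Rockstor repo URL from the linked guide:
zypper addrepo --refresh http://updates.example.com/rockstor-placeholder/ rockstor
zypper refresh
zypper install rockstor
# The guide then covers enabling Rockstor's own systemd units
# (rockstor-pre, rockstor, rockstor-bootstrap) and the remaining tweaks.
```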

Yes, me too.

Let me know of any details you find. I’ve just not had the time or opportunity to look into it, really.

And remember that you can always look at our kiwi-ng installer config to see all that is done. Some of it is a little subtle, such as the systemd services rpm, but it’s all there somewhere.

Cheers. And do remember to disable IPv6. We aren’t there yet, and again we have a limited contributor count to make that change just yet. But all in good time.
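
One generic way to disable IPv6 on Leap, for reference (a sketch; adapt to your own network setup):

```
# Disable IPv6 system-wide via sysctl; persists across reboots:
cat > /etc/sysctl.d/70-disable-ipv6.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl --system   # apply immediately
```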

@Jip-Hop Re:

Just a quick note on this one: 3.9.1-16, which became 3.9.2-0 stable, was released in Nov 2017:

The Stable branch of the CentOS variant then went on until 3.9.2-57 (April 2020):

And our latest v4 offering, ‘Built on openSUSE’, was last released in March 2021:

and has the accompanying forum thread:

I appreciate you’ve likely already found this tangle of releases, but I just wanted to pop this in for others nipping in on this thread.

We have also now removed the confusing and now-inaccurate ‘You are running the latest version’ wording. That obviously aged poorly :).

We have a 4.0.4 rpm version placeholder for stable in the ‘Built on openSUSE’ variant, as a bug we found required a repo to be there, but we are getting closer to the next stable release in this non-legacy variant.

Hope that helps.

Thanks again! I plan on migrating from my Synology to a Rockstor NAS once the v4 release comes out of beta (and if I don’t run into any show-stoppers during my testing).

But so far I’m happy with the available features (and support)!

Also, I successfully installed Rockstor on top of a base openSUSE 15.2 with a LUKS-encrypted system partition! :smiley:

Great to know this will also be possible with the future v4 version :slight_smile: The installation instructions you linked to were easy to follow.

@Jip-Hop Re:

Nice.

Thanks for the pic. It does look like we have some work to do there, as it’s not identifying the pool as a system pool, but as long as you take care not to accidentally delete it!

We have some Web-UI protections for the system pool, like not allowing delete and the like, so take care and you should be OK.

Nice to see that it’s not too confused. And do remember that we, and openSUSE, only support a single device in this system pool. In time this will change, but as yet it’s a single-device-pool-only arrangement.

Thanks for the update and feedback. And keep an eye out for failures and report them as you find them, as some may be due to missed steps in the custom install (ours in the docs or yours in action). Or they may just be bugs that would be good to know about.

Cheers.

Thanks for pointing out that the system pool can only span one disk. Redundancy of the OS is something I take for granted on my Synology.

What measures could I already take to ensure I don’t have to reinstall and reconfigure in case my OS disk dies? Perhaps cloning the OS drive once with Clonezilla, keeping the clone connected, auto-unlocking it during boot, and keeping it in sync with the OS drive via btrfs send/receive? Not sure if that’s even possible, and it sounds like I may also run into issues when booting from the clone with regard to disk IDs…

I know there’s the config export backup option, but that won’t include everything. I’ll probably also do some custom setup outside of the Rockstor GUI which would be inconvenient to redo.

Or could I install on top of mdraid? I came across this (rather old) guide: https://weblog.sourcy.io/2017/02/full-disk-encryption-raid-luks-lvm-btrfs-suse.html

@Jip-Hop Re:

Regular config backups:
http://rockstor.com/docs/config_backup/config_backup.html
With the installer it now only takes a few minutes to install anyway, and given it includes all pending updates at time of build, that is another thing that is not required, further speeding up deployment. But in your case of a custom install it’s a bit more tricky and down to generic means.

This would be an extremely risky approach. Btrfs has a known issue where it will corrupt if it has pools of the same UUID. I would definitely not take this approach under any circumstances. The issue is that if you have an online device member that is a clone of another, via its btrfs pool UUID, it can’t tell them apart properly and can inadvertently write to the wrong one, rapidly messing things up. The ‘shadow’ does have to be mounted, I believe, but just don’t take this approach is my advice. Plus we also have the following issue that will likely trip you up if you are not aware of it:

These off-topic discussions are likely of less use to folks when under the heading we have within this thread, so it’s probably better for folks to chip in with subject-specific threads. Tricky, I know, and I’m always digressing myself within a thread, so there’s that :).

Btrfs already has such a lot of magic in it that I’d really try to keep your bare-metal recovery to a bare minimum, complexity/magic wise. Better to look to our options re multi-disk boot within the kiwi-ng / btrfs realm. And if you store no state on the system drive, a re-install, pool import, and config import is trivial anyway. Others will likely have other ideas on this, but take great care with clones of devices/pool members within the realm of btrfs: there be dragons.
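
If you ever do end up with a suspect clone attached, these read-only commands will at least show whether two devices report the same btrfs filesystem UUID (a generic sketch, nothing Rockstor-specific):

```
# List every btrfs filesystem the kernel knows about, with UUIDs and member devices:
btrfs filesystem show
# Cross-check the UUIDs as blkid reports them for btrfs-formatted block devices:
blkid -t TYPE=btrfs
```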

For this you could use a generic backup program, or even one of the backup options in our rock-ons, and simply back up the relevant config files. Then on restore, and restore of the rock-on and its config, you would have its backup ‘payload’ there on the redundant pool, ready to restore via the now pre-configured rock-on to the system drive.

Our approach/recommendation is to have as little as possible stored on the system drive. If you make sure not to use the system drive for any shares, you will make your life a lot simpler with regard to restore. Just a thought. We actually put quite a lot of work/time into preserving the capability to use the system drive, as it’s so useful sometimes, e.g. for rock-ons-root. It’s still not advisable, but that doesn’t make it not useful, and we wanted to preserve feature parity with our CentOS offering. Oh, and flexibility is almost always useful. But again, this doesn’t make it a good idea. Anyway, I expect you get the idea.

I wouldn’t recommend it. Mdraid under btrfs is both redundant and a mistake: it can undermine btrfs, as it will invisibly replace one copy with another without knowing which is the correct one. Best to manage this via higher-level means until we have a better option available to us. The following old doc was our appeasement to this oft-requested feature in the old CentOS variant:

http://rockstor.com/docs/mdraid-mirror/boot_drive_howto.html

It’s no longer relevant, and mdraid for the system drive in the new/current ‘Built on openSUSE’ variant is completely untested. Plus there’s that undermining thing again.

Definitely a tricky one, but we should keep an eye on these. My personal preference is to only work on the btrfs multi-device system pool. And we will, within our own code, have to make quite a few changes for that to work. But it’s doable, and much more so since we enabled the btrfs-in-partition capability; however, that work has yet to make it to our treatment of the system disk, and given our backlog of technical debt and the v4 release, this capability is not likely to emerge for another major release or two. But it is planned, as it would greatly simplify our code.

Hope that helps.

P.S. I vote for one of the backup rock-ons, configured to grab your custom configs, and the native Rockstor config save/restore after pool import for the rest.
