LUKS and Encryption

Good Afternoon.

I saw that in the last version, LUKS was rolled out. I am new to LUKS and have a few questions.

  1. Is it possible to encrypt my existing RAID 10 array using LUKS?

  2. Will my NAS be able to reboot headlessly and still mount my array? Or will I need to SSH in after a reboot to mount the array with some kind of password/encryption key?

@kupan787 Hello again.
I can have a go at this one.

Essentially LUKS is an encrypted disk format that, once unlocked, is expressed in Linux as a virtual block device. So when locked (not unlocked) the virtual decrypted block device / disk is essentially non-existent, or in Rockstor terms disconnected (assuming it has ever been unlocked). When a whole disk is LUKS formatted it becomes a “Full disk LUKS Container” and as such is excluded from being a pool member; it is only the resulting unlocked Open LUKS Volume that can, from then on, be used as a pool member.
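To make the container / volume distinction more concrete, here is a rough command-line sketch using generic cryptsetup tooling; the device name is a placeholder and Rockstor’s Web-UI drives all of this for you, so treat it as illustration only:

```
# Placeholder device; substitute an un-assigned disk of your own.
DISK=/dev/sdX

# Turn the whole disk into a "Full disk LUKS Container" (destroys any existing data).
cryptsetup luksFormat "$DISK"

# Unlock it: this creates the virtual block device /dev/mapper/luks-<uuid>,
# i.e. the "Open LUKS Volume" that can then act as a btrfs pool member.
NAME=luks-$(cryptsetup luksUUID "$DISK")
cryptsetup open "$DISK" "$NAME"

# Lock it again: the decrypted block device vanishes ("disconnected" in Rockstor terms).
cryptsetup close "$NAME"
```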

Given LUKS is a disk format, the initial answer to this is no, because the disks are already formatted as btrfs directly. However, given btrfs can re-write block pointers (i.e. add and remove devices from a live pool and re-balance the data), it is possible, disk by disk, to live migrate a pool (volume) over to disks that have first been LUKS formatted. The procedure for this migration might take the following form.

We are assuming a minimum disk count of 4 for btrfs raid10, but given it is not advised to make major modifications to pools that have only the minimum drive count, this example starts by adding an additional disk.

  1. Start with 4 disks as members of the raid10 pool, all natively formatted as btrfs.
  2. Connect an additional disk.
  3. LUKS format this currently un-assigned disk.
  4. Set the LUKS Boot up Configuration to the default of Auto unlock via keyfile (Recommended).
  5. In time you should then see the resulting Open LUKS Container / virtual disk appear, named dm-name-luks-uuid (where uuid is that of the LUKS container).
  6. You can then add this Open LUKS Volume to your existing raid10 pool via the GUI pool resize function (see the command-line sketch after this list).
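For orientation only, the rough command-line equivalent of steps 3 to 6 is sketched below; the Web-UI does all of this for you, the device name is a placeholder, and the pool is assumed to be mounted at /mnt2/your_pool:

```
NEW=/dev/sde                 # the freshly connected, un-assigned disk (placeholder)
POOL=/mnt2/your_pool         # assumed mount point of the existing raid10 pool

# Step 3: LUKS format the new disk (prompts for the master passphrase).
cryptsetup luksFormat "$NEW"

# Steps 4/5: unlock it so the Open LUKS Volume appears under /dev/mapper.
NAME=luks-$(cryptsetup luksUUID "$NEW")
cryptsetup open "$NEW" "$NAME"

# Step 6: add the Open LUKS Volume (not the raw disk) to the existing pool.
btrfs device add /dev/mapper/"$NAME" "$POOL"
```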

At this point you then have, in this example, 4 un-encrypted (btrfs native) disks and one LUKS encrypted disk (via its Open LUKS Volume counterpart) as part of your raid10 pool. But this does not give full pool encryption, as most of the data (roughly 4/5, by disk surface available) is still un-encrypted.

  7. You can now remove one of the native btrfs formatted disks from the pool via the GUI pool resize function.
  8. Once step 7 has finished you again have an un-formatted and un-assigned disk that can in turn be treated identically to the fresh disk connected in step 2, i.e. once fully removed from its prior pool it can be LUKS formatted and its consequent Open LUKS Volume added back to your raid10 pool (i.e. steps 3-6), again via the resize function; a command-line sketch of these two steps follows below.
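And, purely as an illustration of what the pool resize function is doing underneath for steps 7 and 8 (placeholders as before):

```
OLD=/dev/sda                 # one of the remaining natively formatted pool members (placeholder)
POOL=/mnt2/your_pool

# Step 7: remove the native btrfs member; btrfs migrates its data onto the
# remaining members before the command returns, which can take a long time.
btrfs device delete "$OLD" "$POOL"

# Step 8: the disk is now un-assigned again, so repeat steps 3-6 on it
# (cryptsetup luksFormat, cryptsetup open, then btrfs device add of the
# resulting /dev/mapper/luks-<uuid> back into the pool).
```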

At no point in the above procedure is it required that you physically disconnect any device.

So given the above procedural ‘work around’ the no answer becomes a yes, sort of: via a disk by disk migration from natively formatted btrfs devices to LUKS formatted devices, with the resulting virtual ‘Open LUKS Volumes’ used, progressively, in place of the existing natively formatted arrangement.

I was hoping to put that more succinctly, but I think it’s worth spending some time on the mechanisms at play, as the process is non-trivial yet entirely doable as long as you take great care at each step. There are some safeguards in place, but the wipe and format commands are pretty brutal: once you have confirmed that is what you want, the disk’s data is entirely and irretrievably lost. Hence make absolutely sure that a disk is definitely no longer a member of any pool prior to attempting a re-commission / re-format.

This mechanism of migrating an existing pool by successively adding LUKS formatted devices in place of native btrfs formatted devices was envisioned during the LUKS development work, and if you take a look at the testing done prior to its addition to released Rockstor versions you will see that such a migration was enacted, only based on a raid1 array that started out at minimum disk count + 1, i.e. 3 natively formatted disk members:

i.e. the bit that starts:

“In addition a real life scenario involving purposefully slow and constrained hardware (2G RAM single core dual thread P4) with 35 GB of data on an existing 3 disk raid 1 pool totalling 225 GB was taken through the following semi typical scenario for moving an existing pool from non LUKS formatted devices to LUKS formatted devices and their consequent mapped counterpart Open LUKS Volumes.”

Note however that in those tests, although successful, a Web-UI bug was noted and documented in the following still open issue:


It’s worth reading that issue in case your system is similarly affected by it. Essentially harmless but quite worrying nevertheless.

So in short: no, not directly, but yes if you migrate a disk at a time, while maintaining more than the minimum disk count for the relevant btrfs raid level.

To your second question:

Please see the Rockstor docs on this: LUKS Full Disk Encryption.

Essentially yes, but only if:

  1. Your root partition remains non LUKS encrypted (this is something set at install time, so presumably you don’t have an encrypted root currently).
    and
  2. You go with the default LUKS Boot up Configuration of Auto unlock via keyfile.

Point 1 above matters because an encrypted root can’t unlock itself upon boot without a manually entered passphrase. Rockstor stores its pool keyfiles, when this option is requested, in /root, which is inside /; so yes, your pool should auto mount upon boot as root will have the keyfiles needed to do so accessible. This is a compromise in security: it provides disk level rather than system level protection. If the whole system is taken then the data will obviously be compromised, but given the keyfiles for each data disk are separate from the data disks themselves, you could safely discard / re-commission / return a previously LUKS formatted data disk so long as it is not also accompanied by your system disk (as that holds the keys, and they aren’t themselves encrypted).
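If it helps to picture the plumbing, the auto unlock arrangement boils down to one /etc/crypttab entry per data disk pointing at a plain keyfile under /root; a sketch, re-using the example uuid from the docs quote below:

```
# /etc/crypttab:  <mapped name>  <source device>  <key file>  <options>
luks-fd168e30-5386-43b2-9f15-353b9ecff803  UUID=fd168e30-5386-43b2-9f15-353b9ecff803  /root/keyfile-fd168e30-5386-43b2-9f15-353b9ecff803  luks
```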

On the Boot up Configuration screen (and in the docs) it is addressed thus:

“Auto unlock via keyfile (Recommended) Unlock on every boot by using a keyfile on the system drive. Unless Rockstor was installed using the “Encrypt my data” option the system drive will not be encrypted and so all keyfiles will also not be encrypted. This still protects against data exposure if a drive is returned to a supplier or for end-of-life scenarios; so long as it is not accompanied by the system drive. Rockstor generated keyfile example: “/root/keyfile-fd168e30-5386-43b2-9f15-353b9ecff803”. The characters after ‘-‘ are the uuid of the LUKS container and the key is 2048 bytes sourced from /dev/urandom (2^14 bit equivalent).”
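For the curious, a keyfile of that sort can be generated and registered by hand roughly as follows; this is a hedged sketch using generic tools rather than a description of Rockstor’s exact internals:

```
DISK=/dev/sdX                        # placeholder for the LUKS formatted disk
UUID=$(cryptsetup luksUUID "$DISK")

# 2048 random bytes from /dev/urandom, readable by root only.
dd if=/dev/urandom of=/root/keyfile-"$UUID" bs=2048 count=1
chmod 600 /root/keyfile-"$UUID"

# Register the keyfile in a free LUKS key slot (the existing passphrase is
# requested once to authorise this).
cryptsetup luksAddKey "$DISK" /root/keyfile-"$UUID"
```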

also note from the same doc entry:

“Note that all members of a pool must share the same Boot up Configuration. Otherwise only some members will be unlocked and the pool will fail to mount.”

It is also common practice to use the same master passphrase during each disk’s LUKS format stage across a single system.

Given the indirect nature of this process I would recommend you first perform the operation within a ‘mock-up’ virtual machine with a similar arrangement to your existing ‘real’ setup, to satisfy yourself of what is required and what happens at each step.

Hope that helps.

N.B. also from the Rockstor docs LUKS page re the master passphrase:
“Rockstor does not remember or record the associated passphrase. If this passphrase is forgotten and you haven’t completed Step 2: Boot up Configuration, using the recommended keyfile option, it will no longer be possible to unlock your container and all data therein will be lost.”
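Given that warning, it is probably worth confirming that each container really can be opened by its keyfile before you come to rely on it; a quick check, again assuming generic cryptsetup (device name is a placeholder):

```
DISK=/dev/sdX                        # placeholder for a LUKS formatted data disk
UUID=$(cryptsetup luksUUID "$DISK")

# Exits 0 if the keyfile opens a key slot; no device mapping is created.
cryptsetup open --test-passphrase --key-file /root/keyfile-"$UUID" "$DISK" && echo "keyfile OK"

# Shows which key slots are in use (passphrase and/or keyfile).
cryptsetup luksDump "$DISK"
```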


Thanks for the detailed response! I like learning about this stuff, so this provides a good starting point for me to learn more.

I might give the “live migration” a try in a VM first, just to make sure I fully get the procedure.

@kupan787 Glad you found it useful. And thanks for the feedback.

Let us know how your ‘live migration’ attempts go. No harm in familiarising yourself in a VM with a similar setup. Also make sure to keep an eye out for the indicated bug; it might not affect you in the VM, but it would be good to know if it does.

Also, it’s best to take a backup of your main machine before this rather large, many-moving-parts procedure is carried out ‘for real’; there’s no point in risking any data. And make sure to reboot once you have your desired configuration in place, just to confirm the correct ‘Boot up configuration’ is in play to open the relevant LUKS containers.
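After that reboot, a quick way to satisfy yourself that everything came up as intended (assuming generic tools and a pool mounted under /mnt2; the path is a placeholder):

```
# Each LUKS data disk should show a child "crypt" device, i.e. its Open LUKS Volume.
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

# The pool should list only /dev/mapper/luks-* devices as members and be mounted.
btrfs filesystem show
findmnt /mnt2/your_pool
```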

Cheers.