I saw that in the last version, LUKS was rolled out. I am new to LUKS and have a few questions.
Is it possible to encrypt my existing RAID 10 array using LUKS?
Will my NAS be able to reboot headlessly and still mount my array? Or will I need to SSH in after a reboot to mount the array with some kind of password/encryption key?
@kupan787 Hello again.
I can have a go at this one.
Essentially it is an encrypted disk format that, once unlocked, is presented in Linux as a virtual block device. So when locked (not unlocked) the decrypted virtual block device / disk is essentially non existent, or in Rockstor terms disconnected (assuming it has ever been unlocked). When a whole disk is LUKS formatted it is turned into a 'Full disk LUKS Container' and as such is excluded from being a pool member, as it is only the resulting unlocked Open LUKS Volume that can from then on be used as a pool member.
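If it helps to picture that locked/unlocked distinction, the raw cryptsetup view of it looks roughly like the following; this is orientation only (Rockstor drives all of this from its Web-UI) and /dev/sdX plus the mapper name are placeholders:

```
cryptsetup luksFormat /dev/sdX                                 # whole disk becomes a 'Full disk LUKS Container' (destroys its contents)
cryptsetup open /dev/sdX luks-$(cryptsetup luksUUID /dev/sdX)  # unlocking creates the virtual block device under /dev/mapper/
lsblk -f                                                       # the Open LUKS Volume now shows as a crypt device beneath sdX
```

Until that open step is performed the decrypted device simply isn't there, which is why only the Open LUKS Volume, and never the container itself, can be a pool member.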
Given LUKS is a disk format, the initial answer to this is no, because your disks are already formatted directly as btrfs. However, given btrfs has block pointer re-write capability, it is possible, disk by disk, to live migrate a pool (volume) over to disks that have first been LUKS formatted. The procedure for this migration might take the following form.
We are assuming a minimum disk count of 4 for btrfs raid10, but given it's not advised to make major modifications to pools that have only the minimum drive count, this example will start by adding an additional disk. A rough command-line sketch of the underlying operations follows the list.
1. 4 disks as members of a raid10 pool, with all disk members natively formatted as btrfs.
2. Connect an additional fresh (blank) disk to the system.
3. LUKS format this fresh disk so that it becomes a 'Full disk LUKS Container'.
4. Set the LUKS Boot up Configuration to the default of Auto unlock via keyfile (Recommended).
5. In time you should then see the resulting Open LUKS Container / virtual disk appear, named dm-name-luks-uuid.
6. You can then add this Open LUKS Volume to your existing raid10 pool via the GUI pool resize function. At this point you then have, in this example, 4 unencrypted (btrfs native) disks and one LUKS encrypted disk (via its Open LUKS Volume counterpart) as part of your raid10 pool. But this does not give full pool encryption, as 3/4 of the data (by disk surface required) is still unencrypted.
7. You can now remove one of the native btrfs formatted disks from the pool via the GUI pool resize function.
8. Once step 7 is finished you again have an unformatted and unassigned disk that can in turn be treated identically to the fresh disk connected in step 2, i.e. once fully removed from its prior pool it can be LUKS formatted and its consequent Open LUKS Volume added back to your raid10 pool (i.e. steps 3-6), again via the resize function.
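For orientation, one cycle of the above corresponds very roughly to the following at the command line; Rockstor's Web-UI performs the equivalent for you, so treat this purely as a sketch, with all device names and the mount point being placeholders:

```
new=/dev/sde            # the freshly connected disk from step 2 (placeholder)
old=/dev/sdb            # an existing native btrfs member (placeholder)
pool=/mnt2/mypool       # placeholder mount point for the raid10 pool

cryptsetup luksFormat "$new"                        # step 3: destroys anything on $new, sets the master passphrase
uuid=$(cryptsetup luksUUID "$new")
cryptsetup open "$new" "luks-$uuid"                 # the Open LUKS Volume appears as /dev/mapper/luks-$uuid
btrfs device add "/dev/mapper/luks-$uuid" "$pool"   # step 6: grow the pool with the encrypted member
btrfs device delete "$old" "$pool"                  # step 7: btrfs migrates $old's data off before releasing it
```

The `btrfs device delete` stage is the slow part, as that is where the data is actually migrated off the outgoing disk.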
At no point in the above procedure is it required that you physically disconnect any device.
So given the above procedural 'work around', the no answer becomes yes, sort of: via a disk by disk migration from the existing natively formatted btrfs devices to LUKS formatted ones, with the resulting virtual 'Open LUKS Volumes' used, progressively, in place of the existing natively formatted arrangement.
I was hoping to put that more succinctly, but I think it's worth spending some time on the mechanisms at play, as the process is non trivial yet entirely doable as long as you take great care at each step. There are some safeguards in place, but the wipe and format commands are pretty brutal: once you have confirmed that is what you want, the outcome is pretty definite, with the disk's data entirely lost. Hence make absolutely sure that a disk is definitely no longer a member of any pool prior to attempting a re-commission / re-format.
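As a belt-and-braces check before any such wipe, something along these lines (the device name is a placeholder) is worth the few seconds it takes:

```
btrfs filesystem show      # the outgoing disk should no longer be listed under any pool
wipefs -a /dev/sdb         # destructive: only once you are certain the disk is unassigned
```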
This mechanism of migrating an existing pool by successively adding LUKS formatted devices in place of native btrfs formatted devices was envisioned during the LUKS development work, and if you take a look at the testing done prior to its addition to released Rockstor versions you will see that such a migration was enacted, only based on a raid1 array that started out at minimum disk count + 1, i.e. 3 natively formatted disk members:
ie the bit that starts:
"In addition a real life scenario involving purposefully slow and constrained hardware (2G RAM single core dual thread P4) with 35 GB of data on an existing 3 disk raid 1 pool totalling 225 GB was taken through the following semi typical scenario for moving an existing pool from non LUKS formatted devices to LUKS formatted devices and their consequent mapped counterpart Open LUKS Volumes."
Note however that in those tests, although successful, a Web-UI bug was noted and documented in the following still open issue:
It's worth reading that issue in case your system is similarly affected by it. Essentially harmless, but quite worrying nevertheless.
So in short: no, not directly, but yes if you migrate a disk at a time, while maintaining > min disk count for the relevant btrfs raid level.
The headless reboot side of things works because, while an encrypted root can't unlock itself without a keyfile / manually entered passphrase upon boot, Rockstor stores its pool keyfiles, if this option is requested, in /root, which is inside / on the (normally unencrypted) system drive. So yes, your pool should auto mount upon boot, as root will have the keyfiles to do so accessible. This is a compromise in security and facilitates disk level, not system level, security. If the whole system is taken then the data will obviously be compromised, but given the keyfiles for each data disk are separate from the data disks themselves, you could safely discard / re-commission / return a previously LUKS formatted data disk so long as it was not also accompanied by your system disk (as it holds the keys and they aren't themselves encrypted). From the Rockstor docs LUKS page entry on this option:
"Auto unlock via keyfile (Recommended): Unlock on every boot by using a keyfile on the system drive. Unless Rockstor was installed using the 'Encrypt my data' option the system drive will not be encrypted and so all keyfiles will also not be encrypted. This still protects against data exposure if a drive is returned to a supplier or for end-of-life scenarios; so long as it is not accompanied by the system drive. Rockstor generated keyfile example: '/root/keyfile-fd168e30-5386-43b2-9f15-353b9ecff803'. The characters after '-' are the uuid of the LUKS container and the key is 2048 bytes sourced from /dev/urandom (2^14 bit equivalent)."
Also note, from the same doc entry:
"Note that all members of a pool must share the same Boot up Configuration. Otherwise only some members will be unlocked and the pool will fail to mount."
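Under the hood, a keyfile based boot-time unlock of this sort is conventionally wired up via /etc/crypttab. An illustrative entry, reusing the uuid from the docs example above, might look like the following; do check your own system rather than taking this as exactly what Rockstor writes:

```
# /etc/crypttab format: <mapper name>  <LUKS container>  <keyfile>  <options>
luks-fd168e30-5386-43b2-9f15-353b9ecff803  UUID=fd168e30-5386-43b2-9f15-353b9ecff803  /root/keyfile-fd168e30-5386-43b2-9f15-353b9ecff803  luks
```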
It is also common practice to use the same master passphrase during each disk's LUKS format stage across a single system.
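Note that the master passphrase and the Rockstor generated keyfile are not either/or: each LUKS header has multiple key slots, so the same container can be opened by either one. You can inspect this yourself; the device and keyfile names below are placeholders:

```
cryptsetup luksDump /dev/sdX                          # lists the populated key slots on the container
cryptsetup luksAddKey /dev/sdX /root/keyfile-<uuid>   # adds a keyfile alongside an existing passphrase
```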
Given the indirect nature of this process I would recommend you first perform the operation within a 'mock-up' virtual machine with a similar arrangement to your existing 'real' setup, to satisfy yourself of what is required and what happens at each step.
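If that mock-up VM is QEMU/KVM based (an assumption on my part; other hypervisors have their own equivalents), a handful of small throw-away virtual disks is all you need to mirror the walk-through above:

```
for i in 1 2 3 4 5; do
  qemu-img create -f qcow2 "test-disk-$i.qcow2" 5G   # 4 pool members plus the spare disk added in step 2
done
```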
Hope that helps.
N.B. also from the Rockstor docs LUKS page re the master passphrase: "Rockstor does not remember or record the associated passphrase. If this passphrase is forgotten and you haven't completed Step 2: Boot up Configuration, using the recommended keyfile option, it will no longer be possible to unlock your container and all data therein will be lost."
@kupan787 Glad you found it useful. And thanks for the feedback.
Let us know how your 'live migration' attempts go. No harm in familiarising yourself in a VM with a similar setup. Also make sure to keep an eye out for the indicated bug. It might not affect you in the VM but it would be good to know if it does.
Also, best you do a backup of your main machine before this rather large, many-moving-parts procedure is carried out 'for real', as there is no point in risking any data. And make sure to reboot once you have your desired configuration in place, just to make sure you have the correct 'Boot up configuration' in play to open the relevant LUKS containers.
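After that reboot a quick sanity check from the command line, with the names below as placeholders, should show everything unlocked and mounted:

```
lsblk -f                        # each data disk should show a crypt child carrying btrfs
cryptsetup status luks-<uuid>   # reports the container as active / in use once unlocked
df -h /mnt2/<pool-name>         # confirm the pool itself is mounted
```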