Best practice for recovering LUKS RAID-1?


This situation isn’t covered in the manual (that I found).

What is the best practice to replace a failing/failed drive in a RAID-1 configuration that is LUKS encrypted?


@smanley This situation is not specific to Rockstor and Rockstor itself holds no info on the LUKS arrangement other than what it finds in the /etc/crypttab (which it also writes and edits). The following command will show you the contents of that file:

cat /etc/crypttab

It also reads drive signatures to identify the LUKS format in the first place, of course. It's all very bottom-up on this one: although there are db references, they are reconstructed from the disk scans and the mentioned file and its references (i.e. the keyfiles in /root).

Note, however, that the structure of that file must remain the same. By that I mean that Rockstor expects additions to be formatted exactly as you see them, even though the file format itself supports a number of variations in how devices and keyfiles are referenced. Also note that if you have chosen the default “Boot up configuration” of “Auto unlock via keyfile”, those keyfiles will reside directly in /root, as indicated on that setup page (and in that file): see Step 2: Boot up Configuration.
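As an illustration of the one-entry-per-device layout Rockstor expects, a crypttab line pairs a mapper name with the container's UUID and the keyfile that unlocks it. The names below are placeholders, not output from any actual system:

```
# /etc/crypttab — illustrative layout only; <uuid> stands in for a real device UUID
luks-<uuid>  UUID=<uuid>  /root/keyfile-<uuid>  luks
```

If your entries deviate from the layout Rockstor itself writes, it may fail to match them on its disk scans.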

Essentially it is the same as for a non-LUKS-encrypted drive, only you would first set up the replacement drive with LUKS as per the LUKS Full Disk Encryption Rockstor howto (as you have presumably done previously), and then treat its ‘Open LUKS Volume’ counterpart as you would a bare drive, since that is pretty much what it is. That is, a bare drive, once LUKS formatted, becomes a LUKS container that is in turn useless unless it is opened. Once opened, it provides a virtual block device that is the un-encrypted counterpart of the bare drive.
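For reference, the replacement procedure above might look roughly like the following from the command line. This is a sketch only: device names, mapper names, and the pool mount point are hypothetical and must be adjusted to your system (and Rockstor's own UI/howto is the supported route):

```shell
# 1. LUKS-format the replacement drive (destroys its contents!)
#    /dev/sdX is a placeholder for the new drive.
cryptsetup luksFormat /dev/sdX

# 2. Open the container, creating its 'Open LUKS Volume' counterpart
#    at /dev/mapper/luks-new (mapper name is illustrative).
cryptsetup luksOpen /dev/sdX luks-new

# 3. Treat the opened volume as a bare replacement drive, e.g. via
#    btrfs replace against the failed member (use its devid if the
#    old device is already missing):
btrfs replace start /dev/mapper/luks-old /dev/mapper/luks-new /mnt2/pool-name
```

The key point is step 3: btrfs only ever sees the opened `/dev/mapper/...` block devices, never the raw encrypted drives.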

Note, however, that there is nothing stopping you from having a pool of two LUKS-formatted drives and one plain one. You won't have an entirely encrypted pool, but it might be easier to think of if the pressure is on. That is, each disk is individually encrypted at the block level, underneath the btrfs filesystem. And if space allows, you could always remove that un-encrypted drive from the pool, LUKS format it, and return it (via its consequent Open LUKS Volume counterpart) to the same pool later (though obviously care should be taken with such a heavy-handed approach).
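The remove-encrypt-return sequence above could be sketched as follows. Again, device names and the mount point are placeholders, the pool must have enough free space to absorb the removal, and each step can take a long time on large drives:

```shell
# Hypothetical: pool mounted at /mnt2/mypool, plain drive is /dev/sdY.

# 1. Remove the plain drive (btrfs migrates its data to the other members)
btrfs device remove /dev/sdY /mnt2/mypool

# 2. LUKS-format the now-free drive and open it
cryptsetup luksFormat /dev/sdY
cryptsetup luksOpen /dev/sdY luks-sdY

# 3. Add the opened counterpart back and rebalance across all members
btrfs device add /dev/mapper/luks-sdY /mnt2/mypool
btrfs balance start /mnt2/mypool
```

Note that step 1 temporarily reduces redundancy, so this is best done with a current backup in hand.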

Hope that helps.


This is what I thought should happen: the disk is first added and set up with LUKS, and then its opened (un-encrypted) counterpart is added to the pool.

My application requires that the boot-up passphrase technique be used, but that’s a minor difference.