TLDR: is it possible to import LUKS-encrypted partitions? The message in the UI is:
LUKS in partition is only supported for the system drive
The story:
I’m looking to migrate my DIY btrfs NAS functionality to rockstor (rockstor didn’t exist back when I set it up…)
The general idea is: NAS drives are on a separate SATA controller, so I’m passing that controller directly to a VM, in which rockstor runs.
The drives already form a btrfs pool (data raid1, metadata raid1c3), with each drive carrying a single LUKS-encrypted partition.
I read the Disks — Rockstor documentation, and it seems importing the pool itself should be possible, but I can’t luksOpen the partitions from the webui.
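For reference, by hand it’s the usual dance, roughly like this (device and mapper names are just examples from my setup):

```bash
# Unlock the LUKS container on each member's partition.
cryptsetup luksOpen /dev/sdb1 nas-luks-1
cryptsetup luksOpen /dev/sdc1 nas-luks-2
cryptsetup luksOpen /dev/sdd1 nas-luks-3
# btrfs assembles the pool from the mapped devices; mounting any one member mounts the pool.
mount /dev/mapper/nas-luks-1 /mnt/nas
```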
@serafean, welcome to the Rockstor community. I’ll let @phillxnet chime in, since he implemented the first go-around of LUKS in Rockstor and has mentioned before that it’s still at the beginning of the functionality exposed in the WebUI. With that said (and you alluded to it already), the missing pieces are in the WebUI and not in the underlying LUKS functionality.
I suspect you have to do some manual finagling at this time. I would imagine you need to move your key file to the place where Rockstor would expect it (as if you had set it up initially through the WebUI), which would be in the root directory of the OS disk, as well as manually maintain the /etc/crypttab in the way Rockstor will expect (as if you had set it up initially). There might be more, but I am not enough of an expert to know what that would be.
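Purely as an illustration of the kind of thing I mean (I have not verified the exact keyfile naming Rockstor expects, so treat these paths as placeholders you would confirm against a WebUI-created device), the /etc/crypttab entry itself is just the standard format:

```
# /etc/crypttab  (illustrative only; confirm naming against a WebUI-created setup)
# <mapper name>        <source device>        <keyfile>                       <options>
luks-<container-uuid>  UUID=<container-uuid>  /root/keyfile-<container-uuid>  luks
```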
Since you already have a LUKS’d pool this won’t matter for your case (I think, at least), but Rockstor has not yet moved to LUKS2 when you run the LUKS encryption setup on devices. Since the LUKS packages/functionality are standard from upstream under the hood, it shouldn’t matter what your setup is…
Re: the message, it is indeed the case that we cannot cope with LUKS in partition. Our very strong preference is for unpartitioned drives. That way we do away with a ton of complications, and in fact several whole layers of them, e.g. the various different partition table types. So we are raw drives all the way, ideally.
And the LUKS compatibility, to date and likely in the future, will adhere to this. It just brings way too much complexity, and LUKS is complex enough as an add-on for us as it is.
That’s basically it. You will have to command-line mount and move the data over to a Rockstor-constructed pool. We have some limitations on what the Web-UI can handle, so constructing a new pool and shares (subvols) within the Web-UI would be best, as you are then guaranteed its structure is compatible. Then copy your by-hand mounted data over from your old pool with its LUKS-encrypted partitions.
Not likely actually, given we know nothing of LUKS outside of a very specific arrangement on the system drive, which likely doesn’t mean much now as it was based on our CentOS days. And as above, we just don’t have the logic in the Web-UI to understand LUKS partitions on the data side.
But if you can mount manually you have access, and you can transfer the data to another pool created under Rockstor’s Web-UI. Bit of a pain, and it needs more drives, but as you likely know you can start with a single drive and build the pool up in drive count and redundancy profiles.
But as @Hooverdan said, bar confusing the Web-UI (possibly a lot), you can do what you like within the limits of our openSUSE base OS while you are moving the data over. I.e. create what you want in Rockstor with a new drive/drives. Then stop the Rockstor services, mount your old pool, and copy the data over to the native pool/pools. Shut down, then remove your old drives and prove all the data is there; then there is the option of bringing your old drives back in after wiping them.
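A rough sketch of that by-hand middle step (device, path, and share names here are examples to adapt, not something generated for you):

```bash
# Stop the Rockstor services so the Web-UI doesn't interfere with manual mounts.
systemctl stop rockstor rockstor-bootstrap

# Unlock and mount the old LUKS-in-partition pool (example names).
cryptsetup luksOpen /dev/sdb1 oldluks1
mkdir -p /mnt/oldpool
mount /dev/mapper/oldluks1 /mnt/oldpool

# Copy into a share created beforehand via the Web-UI;
# Rockstor mounts its pools and shares under /mnt2/.
rsync -aHAX --info=progress2 /mnt/oldpool/data/ /mnt2/new-pool/data-share/
```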
During the addition of this LUKS capability I did prove moving a multi-drive pool over to LUKS one disk at a time. But I suspect we have some LUKS bugs of late as it’s not seen much attention on our part for a bit. But it is due some. That will likely be after our current testing phase however.
Hope that helps, and it would be good to have someone else who is familiar with these technologies take a look at our current state. @Hooverdan is the current go-to on that front actually.
Thanks for the detailed answer.
That’s too bad, the pool is quite big (going on 20T), and I hoped to avoid copying everything.
Especially as I need to preserve the remote UUIDs of the subvolumes, because the pool also serves as a backup of another system.
Oh well, now I know. I’ll see what hurts less.
OK, so maybe the better option for you is to transition what you have, by hand, to what Rockstor understands. I.e. rather than create a new native Pool and copy the original Pool’s data over, which as you say would break your uuids, you transition your existing pool, by hand, into a form Rockstor understands.
Our requirements are no partitions, and only one layer deep of subvols. We also can’t cope with non-unique pool or subvolume names: all Pools and subvols must have unique names system-wide, so the same name can’t be used more than once.
The main problem there is likely the subvol depth: our Shares are subvols in the top-level volume (Pool), and anything deeper is ignored. We also have some expectations regarding snapshot location, but some of this can be masked from the Web-UI via ignore mechanisms involving trivial code changes.
For the system (ROOT) Pool we have:
And for all data Pools we have a Bees exclusion, as it was found that that program created ‘confusing’ subvols for @kupan787 here on the forum, who reported this.
So there is wiggle room.
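To make the subvol-depth point above concrete, a layout along these lines (names purely illustrative) is what imports cleanly, with anything nested a level deeper being the part the Web-UI will ignore:

```bash
# Shares Rockstor understands: subvols directly under the pool's top level.
btrfs subvolume create /mnt/pool/backups
btrfs subvolume create /mnt/pool/media

# Nested subvols like this are ignored by the Web-UI:
btrfs subvolume create /mnt/pool/backups/machine-a
```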
I.e. you could alternatively, by hand, transition your existing pool into a form that is acceptable: either non-LUKS in a partition (not ideal on two counts) or whole-disk LUKS. Tricky, I know, but doable. It will almost inevitably involve additional disks, which could at least be prepared in Rockstor; then you transition your existing Pool, by hand, to initially include this/these new members, and progressively drop your existing LUKS-in-partition members.
Hopefully I’ve not fumbled that too much :).
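In btrfs terms that progressive member swap would look roughly like this (device names are examples; each new member first prepared, whole-disk or whole-disk LUKS, so that Rockstor is happy with it):

```bash
# Add a Rockstor-compatible member to the existing (mounted) pool.
btrfs device add /dev/mapper/luks-newdisk /mnt/oldpool

# Remove an old LUKS-in-partition member; btrfs relocates its data
# onto the remaining members, honouring the raid1/raid1c3 profiles.
btrfs device remove /dev/mapper/oldluks1 /mnt/oldpool

# Repeat per old member; optionally confirm the target profiles afterwards:
btrfs balance start -dconvert=raid1,soft -mconvert=raid1c3,soft /mnt/oldpool
```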
So in short, you could preserve the uuid and your existing pool by progressively adding Rockstor-compatible drive members and removing the incompatible members. Likely by hand, but that is how you have been working all this time, so you should be good on that front. But again, our LUKS handling needs some attention, i.e. check the following, now outdated, issue:
Where a linked, suspected systemd issue breaks our LUKS arrangement that was working at the time on 15.3 and TW. I’ve not tested on a Leap 15.5 base yet, but we have been publishing testing rpms for 15.5 since its beta days, and it is our next stable release target. And as stated, all our testing rpms are fresh-install tested before release; the current 5.0.6-0 issues relate only to updating. And 2 of the 3 identified issues here on the forum have now been fixed in the testing branch (or are pending merge), ready for the next rpm. Working on the last later today as it goes.
Funny, just the mention of “copy the pool” made me completely forget the possibility of modifying the pool’s disk layout by adding and removing devices.
That indeed sounds like a workable procedure.