@Jorma_Tuomainen Hello again.
This is entirely possible within Rockstor but do NOT use the btrfs replace route:
Rockstor is as yet relatively untested re btrfs replace, and has no ‘native’ support (read: recognition) for the states of a pool during a replace. But as of 3.9.2-49/50 disk removal is much improved. I would thus strongly suggest you take the disk add then disk removal route. I believe the btrfs code for this is also more mature, so hopefully safer on that front too. I know that replace is quicker, but from a Rockstor point of view I would definitely make sure you are on at least 3.9.2-50 and then add/remove, add/remove, add/remove: waiting for each in turn to finish, of course. Much safer, and as stated, Rockstor should then know what’s happening and also, via the associated Pool details page, be able to give you a rough idea of the progress. Note though that, as with all such procedures, your machine will be very much less responsive during this task.

I would also suggest disabling quotas via the Web-UI before this operation, as that will massively speed up proceedings and lessen the possibility of the Web-UI timing out due to low-level system load during such big operations. You can always turn quotas back on again via the Web-UI afterwards. Note that there were known issues re enabling quotas, such as having to execute certain related commands twice for them to fully take effect!
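For reference only, the underlying btrfs operations that the Web-UI drives look roughly like the following. The pool mount point and device names here are assumptions for illustration, and the script just prints the commands rather than running them: the Web-UI should remain the way you actually perform the operation.

```shell
#!/bin/sh
# Illustrative sketch only: prints (does NOT run) the btrfs commands
# behind the add/remove cycle. Pool path and device names are made up.
POOL=/mnt2/mypool

# 1. Disable quotas first: greatly speeds up device add/remove.
echo "btrfs quota disable $POOL"

# 2. For each disk in turn: add the new device, then remove the old one,
#    waiting for the resulting data relocation to finish each time.
for i in 1 2 3; do
    echo "btrfs device add /dev/mapper/new-luks-dev$i $POOL"
    echo "btrfs device remove /dev/old-disk$i $POOL"
done

# 3. Re-enable quotas afterwards if you use them.
echo "btrfs quota enable $POOL"
```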
So to the more specific element of LUKS: when Rockstor’s LUKS support was developed, it was actually part of its remit that it facilitate moving an entire pool, disk by disk, from non-LUKS to full LUKS. Definitely go with the default of whole disk though, i.e. no partitions on the data disks.
The pull request that initially introduced LUKS was:
And it’s definitely worth a read of the comments there, as one of the test scenarios reads as follows:
“… an existing 3 disk raid 1 pool totalling 225 GB was taken through the following semi typical scenario for moving an existing pool from non LUKS formatted devices to LUKS formatted devices and their consequent mapped counterpart Open LUKS Volumes.”
So that looks to fit your remit. Although the data involved was ‘play’ size, it was enough to prove the concept in practice on real (and very constrained) hardware. But note that at the time we had far less capable disk/pool reporting, and so we then acknowledged the caveat of:
“N.B. pending existing code issue to open here re no UI feedback during this removal.”
Which was duly opened:
and was closed / fixed as of Stable Channel 3.9.2-49/50 via:
Remember to give each disk in the pool set the same LUKS passphrase. This is not essential, but way less hassle, as otherwise you will have to remember which passphrase goes with which disk, and you may be asked for them (during boot, say, if that is your choice) in varying orders. This will not result in the same encryption on each disk, as there is still a random salt involved at the LUKS level.
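To see why identical passphrases still give different on-disk encryption, here is a toy illustration of salting. This is NOT the actual LUKS key derivation (LUKS uses a proper KDF such as PBKDF2 over its per-device random salt); it just shows that the same passphrase combined with two different salts yields two unrelated derived values:

```shell
#!/bin/sh
# Toy illustration of salting (NOT the real LUKS KDF): the same
# passphrase hashed with two different salts gives two unrelated
# digests, just as two LUKS headers with random salts end up
# encrypting their master keys differently.
PASS="same-passphrase"
HASH1=$(printf '%s%s' "salt-disk1" "$PASS" | sha256sum | cut -d' ' -f1)
HASH2=$(printf '%s%s' "salt-disk2" "$PASS" | sha256sum | cut -d' ' -f1)
[ "$HASH1" != "$HASH2" ] && echo "different salts -> different derived keys"
```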
I would also advise reading our official Rockstor docs LUKS HowTo: LUKS Full Disk Encryption, which takes you through how Rockstor ‘views’/presents LUKS and its associated ‘special’/mapped devices.
An important point here: underneath, Rockstor obviously uses vanilla LUKS, but it makes some assumptions, mainly for ease of use and automation purposes, about such things as how the mapped devices are named. As a consequence, it is required that you set up LUKS via the Rockstor Web-UI. This does not mean one cannot then use this configuration elsewhere, of course, as it is simply one valid configuration ‘style’; just that any random way of naming/configuring LUKS and its associated devices is not going to work with Rockstor. It has to be this way if we are to keep our code base sustainable. There is method in how we have done it, and hopefully that will become clear upon going through the process.
As a final recommendation, I would advise that you set up a parallel arrangement within a virtual machine, with data disks of at least 5-10 GB, and go through the entire process there first. Make sure you understand the consequences of all the actions you take, e.g. the “Boot up configuration” option. And once all is done, re-assure yourself via a few reboots etc.
Now as this requires at least the very latest Stable release you could, via:
Change your existing subscription’s Appliance ID over to that of the test virtual machine install. Then, once you are satisfied with how the procedure is to unfold, you can move your Appliance ID in Appman back to that of your real machine. The effect should be immediate, so no worries on waiting. Your existing activation code will remain unaffected, which means you need not do anything to the ‘real’ machine: it will just experience a ‘lapse’ in Stable Channel subscription authentication while its Appliance ID is temporarily (for the duration of the test) invalidated, until you come to re-assert its Appliance ID thereafter.
Hope that helps and good luck. This is undoubtedly a major operation with many moving parts, literally as well as logically, so I strongly suggest you do the suggested reading and the suggested testing of the method on a virtual machine mock-up. And run the very latest version of Rockstor, of course. Plus make sure this is not your only copy of the data, as this is also a very strenuous exercise for the computer as a whole: specifically the disks, and therefore also a common failure point, the PSU. Memory is also well tested, with your entire data passing through it many times.
P.S. As this is such a big operation, also ensure that you are actually running the latest stable release via:
yum info rockstor
as when moving from the testing to the stable channel it can sometimes only ‘show’ the latest version without actually running it. This is OK if you went straight to stable though: I’m thinking of the VM setup, which may otherwise end up tripping you up.