Setting up parity for OS disk(s)

I finally got the time to migrate to Rockstor 4, but I can’t seem to find any information on how to set up parity for the OS disk.

So I was wondering if someone could point me in the right direction?

@HBDK Hello again and well done on moving over to the v4 “Built on openSUSE” build.

If by this you mean having raid on the OS disk, we unfortunately don’t yet support this. Our system pool, now labelled “ROOT” by our JeOS base defaults, is a little tricky to have as btrfs raid, and our prior partial compatibility for mdraid on the system disk is untested in v4, mainly as we would prefer to focus only on btrfs for our drive management. And there are, as yet, still some issues with multi-device btrfs root, mainly, I believe, in the area of GRUB.

Sorry to be the bearer of potentially bad news on that front. Some forum members may well be able to provide a workaround, but as-is we just don’t have that option within our installer. And before we can properly support multi-disk system pool function we have to backport some ‘sanity’ simplifications regarding btrfs-in-partition that have long been standing in our data pool management. Once that is done we should be able to far more easily accommodate a multi-disk system pool, assuming upstream GRUB etc. is happy with this. But currently we have a few too many special treatments within our btrfs-in-partition awareness that pertain only to the system pool, as it was at first a hack of sorts; we later encompassed proper awareness of this, but only in the data pools.

All in good time, however. For the time being it would have to be done by laying mdraid under the root pool partitions, and again, this is just not a tested or supported config for us. It would also undermine the data integrity of the system pool, given mdraid can make non-data-aware changes underneath the btrfs filesystem: mdraid is not able to know which copy of a block is correct, whereas btrfs is. But that’s not helpful here, I’m afraid, given we can currently offer no redundancy beyond the default dup (duplicate) metadata of the btrfs-single raid of the system pool. In other words, there are already two copies (on the single drive) of the metadata within the system pool.
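As an aside, the “two copies of the metadata” point above can be confirmed with `btrfs filesystem df /` (as root), which reports the profile of each block group. As a small sketch, here is a Python helper that parses that command’s output; the sample output below is illustrative of a stock single-disk install, not taken from any particular machine:

```python
def parse_btrfs_df(output):
    """Map each block-group type (Data/System/Metadata) to its profile,
    from `btrfs filesystem df` lines like:
    'Metadata, DUP: total=1.00GiB, used=512.50MiB'."""
    profiles = {}
    for line in output.splitlines():
        head, sep, _ = line.partition(":")
        if sep and "," in head:
            kind, profile = (part.strip() for part in head.split(",", 1))
            profiles[kind] = profile
    return profiles

# Illustrative output only; run `btrfs filesystem df /` for real values.
sample = """\
Data, single: total=10.01GiB, used=4.21GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.00GiB, used=512.50MiB
GlobalReserve, single: total=25.17MiB, used=0.00B
"""
print(parse_btrfs_df(sample)["Metadata"])  # prints "DUP"
```

So on a default single-disk system pool you would expect `Metadata, DUP` alongside `Data, single`.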

Hope that helps, and yes, this is planned, but only as things progress and after we get our existing larger projects sorted.


Dang, well I guess I have to live with that for now.

Do you by any chance know of a way to schedule config backups?

(Edit: Oh, and by the way, thanks for the fast and thorough answer! :smiley: )


@HBDK you’re welcome.

No, but that sounds like a nice feature/target to add to our existing Scheduled Task options:
where our above doc section is still behind our Web-UI offering as it goes, with the following outstanding doc issue to cover the current shortfall:

Could you create an issue in our rockstor-core GitHub repo to cover your request and indicate how you think it might be implemented? It may well be useful to first do a dedicated forum post with your idea on how this might look, with reference to our existing Scheduled Task options, i.e. what would be presented to the user, options-wise. The developed idea could then be presented in GitHub, in more detail, if it looks to have been well received here on the forum.

The internal implementation is likely going to be along the lines of what we already have with the other tasks, where a cron job reaches into an API and does the deed, with associated db indicators recording what has been done for future Web-UI reference. I’ve not looked at the Task APIs for a bit, so I’m a bit vague on their exact function. I had myself liked the idea of an S3-type target for such things also, but that is over-complicating things: the same could be achieved by using a subvol as the target save point and a Rock-on to back that subvol up to whatever target fits the user’s requirements, or the existing replication service for that matter. But first things first: we need a scheduled extension to what we already have regarding the config backup.

Hope that helps.