I’ve become very interested in Rockstor now that there’s a recent update towards fixing the major issues in the parity RAID levels in btrfs. I currently have an Ubuntu box with a large mdadm RAID6, and was wondering if I could reload my server with Rockstor, mount the mdadm array, and slowly transfer my data to a btrfs (non-parity) pool while gradually shrinking my mdadm array and adding those disks to the btrfs pool. My question is: does Rockstor support mounting and changing an mdadm array with no ill effects?
Ok, let me refine the question, as it honestly sounded a little dumb. I know I can run mdadm on CentOS, and I know that Rockstor won’t care what disks are installed, but I don’t know if Rockstor will have issues with mdadm being used on disks that aren’t added to its pools.
I’m new to this project myself and am answering from my own limited experience, so I invite people with more information to correct me on any of the below.
I don’t believe Rockstor (the web application running on CentOS) will utilize the MD managed disks at all.
You won’t be able to attach those partitions to shares or use them in Rock-ons; Rockstor will simply ignore them.
This also means Rockstor won’t natively provide a GUI for sharing the existing MD array via SMB or NFS, and I’m unsure how the web interface would respond to you adding your own exports.
The disks will show in the physical disks list, but be unavailable for addition to a pool.
As long as you’re comfortable managing mdadm, there should be no problem with installing it manually for the data migration: drop individual disks out of the MD array, zero out the superblock, add the freed disk to the BTRFS pool, then copy a chunk of data over, roughly as in the sketch below. I’m not entirely sure how MD will handle having disks repeatedly dropped, though.
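For illustration only, a minimal sketch of one migration round, assuming the array is /dev/md0 mounted at /mnt/md, the disk being moved is /dev/sdX, and the btrfs pool is mounted at /mnt2/pool (all of these names are placeholders; Rockstor mounts its pools under /mnt2/). Note the MD array runs degraded from the first step onwards:

```bash
# Fail and remove one member disk (RAID6 tolerates losing up to two):
mdadm /dev/md0 --fail /dev/sdX
mdadm /dev/md0 --remove /dev/sdX

# Wipe the MD superblock so nothing tries to re-assemble this disk later:
mdadm --zero-superblock /dev/sdX

# Hand the freed disk to the btrfs pool and spread data onto it:
btrfs device add /dev/sdX /mnt2/pool
btrfs balance start /mnt2/pool

# Copy a chunk of data off the shrinking array:
rsync -aHAX /mnt/md/some-directory /mnt2/pool/
```

Bear in mind the array has reduced redundancy while degraded, so I’d copy in small chunks and verify each one before dropping the next disk.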
It is worth noting, however, that while much progress has been made very recently with BTRFS and the parity RAID levels, they are still considered not ready for production. Also note that while the parity issues previously commonplace with BTRFS are (theoretically) resolved, the write-hole issue is still present. If you’re going to use the parity levels, I’d suggest having a good UPS, and good backups.
RAID 1/10 on BTRFS are, however, considered to be quite stable. You’ll lose some capacity, but likely gain some speed.
Typically, the main advantages of BTRFS are pool management (the ability to easily change RAID levels, add or remove disks, and resize pools to match) and the ability to run on commodity hardware, with no need to match drive sizes.
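To illustrate that flexibility, a few representative btrfs commands (the mount point /mnt2/pool and device names are again placeholders):

```bash
# Add a disk of any size to an existing pool:
btrfs device add /dev/sdY /mnt2/pool

# Convert the pool's data and metadata profiles in place,
# e.g. from single to RAID1, rebalancing existing data:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/pool

# Remove a disk; btrfs migrates its data to the remaining devices first:
btrfs device delete /dev/sdZ /mnt2/pool

# See how space is allocated across devices and profiles:
btrfs filesystem usage /mnt2/pool
```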
Thank you for the valuable input, I appreciate it. It shouldn’t be a problem for me to reduce the MD array, as I’ve done it quite a few times with both this machine and others I’ve worked on. As for the RAID 1/10 configs, that’s the next thing I’m unsure of. What’s the difference in their deployment with regards to btrfs? I ask because I’m used to RAID 1/10 in mdadm and on hardware RAID cards, and am wondering if the difference is due to btrfs implementing RAID at the filesystem level instead of at the device level like other RAID deployments, or am I way off?