Multi-device BTRFS filesystem with disks of different sizes

Hello,
Reading the BTRFS documentation https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Current_status I understand that this is possible, but I got a little confused :slight_smile:

Can anyone who has used this scenario confirm whether different disk sizes will work just fine with BTRFS/Rockstor?

I have 15 HGST 8TB HDDs in BTRFS RAID6 and I want to add another 15 HGST 14TB HDDs to the existing BTRFS RAID6 pool. What would be the best approach for doing this? Is it safe?

Thanks

@shocker Hello again.

Yes, btrfs can mix and match drive sizes, but its efficiency from a drive space usage point of view may vary depending on the btrfs raid level. It works by doing raid at the chunk level: a chunk of drive space, usually 1GB for data, is paired with a chunk on another drive in, for example, raid1. Because the raid is done per chunk, devices of different sizes can take part in a raid1 pool as long as they have unallocated space (so new chunks can be created as needed) or existing raid1 chunks (with free space) active in the pool.
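
If you want to see this chunk-level allocation on a live pool, the following commands show it (using /mnt2/your-pool as a stand-in for wherever your pool is mounted; Rockstor typically mounts pools under /mnt2):

```bash
# Per-profile (Data/Metadata/System) allocation, plus how much space
# is still unallocated on each member device:
btrfs filesystem usage /mnt2/your-pool

# How much of each member device is currently consumed by chunks:
btrfs filesystem show /mnt2/your-pool
```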

I can also chip in with a potentially relevant, but now fixed in stable, Rockstor bug that may affect systems with 27 or more drives. See the referenced issue, originally reported by @kingwavy here on the forum:

Issue:
https://github.com/rockstor/rockstor-core/issues/1925

and the pull request that fixed it:
https://github.com/rockstor/rockstor-core/pull/1946

And from the pull request we have the following summary of the issue:

“On systems with 27 or more disks named sd* where the system/root is also installed on sda ie sda3 and where the 27th and subsequent disks are named sdaa, sdab etc and are also btrfs formatted; a regex based serial propagation bug (from base device to partition) in scan_disks() resulted in the first listed (by lsblk) sda[a-z] device receiving a serial=none attribution and the second and all subsequent listed sda[a-z] devices receiving an erroneous fake-serial attribution.”

The fix landed in stable channel updates version 3.9.2-31 around July 2018.

As to whether this is safe: I would say not really, as it's raid6. As long as you are using a non-parity raid (i.e. not 5 or 6) you should be good; raid1 or raid10 would be a better choice. We do have quite a few reports of ‘working’ parity raid systems, but I'd make sure to have a UPS arrangement, and one that has been tested to shut the system down in the event of a power loss. Even that, of course, does not protect the pool from a kernel issue. You will also probably find that you have to disable quotas for reasonable performance; again, this is currently a stable channel only feature.

If you get this setup done some pics might be nice :slight_smile:

Hope that helps.

Thank you for this feedback! :slight_smile:
Let me add some clarity on my existing setup:
I've been using Rockstor for the last 4 years without any issues, with 15 HGST HDDs of 8TB each in raid6 mode. I already have a UPS, generator, etc., and I haven't had any issues so far :slight_smile:
As I'm currently running out of disk space, I'm planning to add extra HDDs; my system handles up to 60 devices. For now I was planning to add an extra 15 HDDs, but 14TB ones, as they are a better deal on price/TB than 8TB HDDs. My question is: if I expand my existing raid6 with the extra HDDs I want to buy, will it be stable, considering that the new ones are 14TB while the existing ones are 8TB? Or would it be safer to buy the same 8TB hard drives, just to be on the safe side? :slight_smile: For this system I have no backup nor DRP in place, just trusting that the Rockstor environment will do the job :slight_smile:

I didn't have time to finalize the upgrade yet, but I'm planning to next month. I'm on the stable channel; is disabling quotas a GUI feature, or do I need to change it in fstab?

Regarding my scenario above: will it work to add 14TB HDDs alongside the 8TB ones in the same raid6 pool? From the initial feedback it seems that it will, but I just wanted to be sure I've understood right :slight_smile:

I’m not 100% sure about RAID6, but with RAID1 it works as long as the biggest disk is no more than twice the size of the smallest one.
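
Though thinking about it some more, I believe the precise rule (my understanding only, so please double-check) is that no space is wasted as long as the biggest disk is no bigger than all the others combined:

```
usable_raid1 = total / 2           if largest disk <= sum of the others
             = sum of the others   otherwise

e.g. my 2+2+1: largest is 2 <= 2+1, so usable = 5/2 = 2.5TB
```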

Hm, on second thought, plugging the numbers into the almighty space allocator, it seems you might be wasting space that way with RAID6. You’d still end up with more space than with a RAID1 array. Unless that tool is missing something important, of course.

Might be better to stick with 8TB drives then, unless it's an option to create two separate arrays.

Initially I just wanted to have everything in one folder and simply increase the space, but I think it would be better to start a new array with the 14TB drives and create another folder for the new mount.
That will always be a challenge, as technology keeps evolving; for sure in the coming years I will make another array with 20TB HDDs :wink:

Thanks for your feedback! I really appreciate it!

Hold on, I may be telling a lie. I actually went and plugged all the numbers into the calculator, and this is what it came up with:

Look at the bottom: Unusable: 0. That would indicate that all of the space is actually usable. I took the dark bars as an indicator that Space Was Being Lost, but it seems I might be wrong here. Let's hope one of the forum users with bigger setups than mine can chip in - my experience is limited to a RAID1 of 2+2+1 (+ considering a 4TB), so not exactly in the same range. :slight_smile:
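
For what it's worth, here's my back-of-the-envelope reasoning for why Unusable comes out as 0, assuming the allocator stripes RAID6 chunks across every device that still has unallocated space (which is my understanding of how it works):

```
15 x 8TB + 15 x 14TB = 120TB + 210TB = 330TB raw

First 8TB of all 30 drives -> stripes 30 wide:
  240TB raw x (30-2)/30 = 224TB of data

Last 6TB of the 15 bigger drives -> stripes 15 wide:
   90TB raw x (15-2)/15 =  78TB of data

Total: 302TB of data, nothing left over -> Unusable: 0
```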

Also, as @phillxnet already mentioned: do post pics, this sounds like a nice setup you have.

Indeed I will share the outcome once it's done :slight_smile: but now I'm more confused :stuck_out_tongue:
I'm not sure whether it will be easier to mix HDDs in the same array or to create a new one (I would like to mix). It would be interesting if someone else has tried that and can share their experience :slight_smile:

Hi @shocker,

I simply wanted to chip in on your question related to quotas:

Yes, it is a GUI feature that you can find on the Pools list page, as well as on each pool's detail page. You can turn quotas ON or OFF directly from there. See the screenshots provided by @phillxnet in the link below:
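
And to answer the fstab part: quotas are a property of the btrfs filesystem itself, not a mount option, so no fstab change is involved. If you ever want the command-line equivalent, it should be along these lines (with /mnt2/your-pool standing in for wherever your pool is mounted):

```bash
# Turn quotas off on the mounted pool:
btrfs quota disable /mnt2/your-pool

# And back on again later, if needed:
btrfs quota enable /mnt2/your-pool
```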

Hope this helps,

Thank you for this! Just disabled quotas! I haven't had any performance issues so far, but if this gives a boost, why not? :slight_smile: Cheers!

Just upgraded my system with shiny new 14TB drives. I have added them to my existing pool of 8TB drives and everything is working just fine. There is no capacity loss. The filesystem is balancing as of now; once it's done I'll share the performance results.
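
For anyone following along: I did all this through the Rockstor Web-UI, but my understanding is that the underlying btrfs equivalent is roughly the following (device names and the /mnt2/main-pool mount point are just illustrative placeholders):

```bash
# Add the new drives to the mounted pool (multiple devices in one go):
btrfs device add /dev/sdq /dev/sdr /mnt2/main-pool

# Restripe existing data and parity across all member devices:
btrfs balance start /mnt2/main-pool

# Watch progress from another shell:
btrfs balance status /mnt2/main-pool
```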

Cheers


Balancing was crazy slow and the filesystem was freezing from time to time.
Even though quotas are off, I paused the balance and restarted the server. It seems the balance status was reset, and I need to start over. Even with the balance off I had delays listing folders (from time to time a basic “ls” was taking 10 seconds to list a dir). Checking the system, I found that I forgot to set the new drives to no spin-down and APM 254. I will start over with the balance now :slight_smile: Read speed is now ~560 MB/s; before the upgrade it was at ~900-1000 MB/s, but this might be due to my unbalanced pool.
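
For reference, in case someone else hits the same issues, the drive settings and balance controls can also be done from a shell, roughly like this (device name and mount point are placeholders; I actually changed the drive settings via the Web-UI):

```bash
# Disable spin-down (-S 0) and set APM to 254 (max performance):
hdparm -S 0 -B 254 /dev/sdq

# Pause and later resume a running balance instead of rebooting:
btrfs balance pause /mnt2/main-pool
btrfs balance resume /mnt2/main-pool
```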

As of now, allocation on the 8TB/14TB drives looks like this:

    size 7.28TiB used 6.12TiB path /dev/sdv
    size 12.73TiB used 2.88TiB path /dev/sdb

The system speed recovered and everything is all right :wink:


So where are those pictures then?

Great to hear it all worked out!
