Hi, I am considering options to Freenas atm, and came across Rockstor so I am thinking of giving it a try.
One thing I liked about Freenas and ZFS is that I could combine several vdevs into one storage pool, is this possible in Rockstor/btrfs?
Welcome to the Rockstor community, @karakas!
I'd say btrfs is even better because it has no concept of vdevs at all. You can combine drives of different sizes into one storage pool, then add or remove drives, change raid profiles, etc. at a later time. I think the user experience is a lot simpler.
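For the curious, that whole lifecycle can be driven with the stock btrfs tools. Here's a rough sketch that calls them from Python; the device names and mount point are made up for illustration, and of course you'd normally just type these commands in a shell:

```python
#!/usr/bin/env python3
"""Sketch of the pool lifecycle described above, driving the stock
btrfs CLI. /dev/sdb..sdd and /mnt/pool are hypothetical."""

import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a pool from two (possibly differently sized) drives.
run("mkfs.btrfs", "-d", "raid1", "-m", "raid1", "/dev/sdb", "/dev/sdc")
run("mount", "/dev/sdb", "/mnt/pool")  # assumes /mnt/pool exists

# Grow the pool later by adding a third drive...
run("btrfs", "device", "add", "/dev/sdd", "/mnt/pool")

# ...and rewrite existing data onto a new raid profile.
run("btrfs", "balance", "start",
    "-dconvert=raid5", "-mconvert=raid1", "/mnt/pool")
```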
Thank you, and thanks for getting back to me. But doesn't that get very risky once you reach a certain number of drives with raid6?
Can you explain more?
Well, say I have 30 drives I want to add to my server. With this setup I would need one raid6 of 30 drives, and if I lose 3 drives the data is lost. But if one were able to split it up into multiple "raids" like with ZFS, say 3 "raids" of 10 drives each, then potentially up to 6 drives could die without losing all the data, as long as no more than 2 drives die in each "raid".
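To make that concrete, here's a quick back-of-envelope calculation (pure combinatorics, using the drive counts from my example):

```python
#!/usr/bin/env python3
"""With 30 drives and 3 simultaneous failures, one big raid6 always
loses data; three 10-drive double-parity groups only lose data if
all 3 failures land in the same group."""

from math import comb

DRIVES, GROUPS, FAILURES = 30, 3, 3
per_group = DRIVES // GROUPS  # 10 drives per group

# One raid6 across all 30 drives: any 3rd failure is fatal.
p_loss_single = 1.0

# Split layout: fatal only if all 3 failed drives are in one group.
p_loss_split = GROUPS * comb(per_group, FAILURES) / comb(DRIVES, FAILURES)

print(f"one 30-drive raid6:    P(loss | 3 failures) = {p_loss_single:.0%}")
print(f"three 10-drive groups: P(loss | 3 failures) = {p_loss_split:.1%}")
# → roughly 8.9% for the split layout, versus a guaranteed loss.
```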
And with such big raids in btrfs, resilvering can take a very long time, can't it?
That said, I like that one can grow raids in btrfs, so I will probably end up giving Rockstor a go.
Thanks for explaining the problem @karakas. I guessed that is what you’d say, but it’s good to have more detail in the post for everyone’s sake.
I am not a btrfs dev, though I closely watch the happenings. The points you make have come up before on the btrfs mailing list (thanks @roweryan for some helpful links), and I believe there is even a patch set for N-way parity for raid5/6, but it hasn't made it into the kernel. I think the consensus is that these features will get more attention once raid5/6 is widely adopted as production ready.
I think hot spare support would be nice ASAP, and there may be a way to accomplish it in userspace if it comes to that. Currently the focus of our project is at the higher levels of userspace, as you can see. As the Rockstor community grows, we can definitely play an active role in the btrfs community as well. Some of you may already be doing so, which is great!
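Just to illustrate what a userspace approach might look like, here's a rough, untested sketch that polls `btrfs device stats` and kicks off a `btrfs replace` once a device starts accumulating I/O errors. The mount point, spare device, and threshold are all hypothetical; this is an idea, not production code:

```python
#!/usr/bin/env python3
"""Toy userspace "hot spare" watcher. Assumes a pool mounted at
/mnt/pool and an idle spare at /dev/sde (both hypothetical)."""

import re
import subprocess
import time

MOUNT = "/mnt/pool"    # assumed mount point
SPARE = "/dev/sde"     # assumed idle spare device
THRESHOLD = 10         # arbitrary error count; tune to taste

def error_counts(mount):
    """Parse `btrfs device stats` into {device: total error count}."""
    out = subprocess.run(["btrfs", "device", "stats", mount],
                         capture_output=True, text=True, check=True).stdout
    totals = {}
    for line in out.splitlines():
        m = re.match(r"\[(?P<dev>[^\]]+)\]\.\w+_errs\s+(?P<n>\d+)", line)
        if m:
            dev = m.group("dev")
            totals[dev] = totals.get(dev, 0) + int(m.group("n"))
    return totals

while True:
    for dev, errs in error_counts(MOUNT).items():
        if errs >= THRESHOLD:
            # `btrfs replace start` migrates data from the failing
            # device onto the spare in the background.
            subprocess.run(["btrfs", "replace", "start", dev, SPARE, MOUNT])
            raise SystemExit(f"replacing {dev} with {SPARE}")
    time.sleep(60)
```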
@karakas sadly, such a feature as in ZFS is still missing in btrfs, as you can't reuse existing pools as building blocks for new pools "on top".
@suman N-way parity is not exactly what he asked for, though raidz3-like behaviour would be nice.
What you want is essentially raid60: striping across raid6s (or whatever raid type).
For the Samba shares, something like Greyhole would do pooling across multiple backend storage pools.
(Greyhole uses symlinks to make multiple storage backends appear as one share.)
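For illustration, here's a toy Python version of that symlink trick; the backend pool and share paths are made up:

```python
#!/usr/bin/env python3
"""Toy Greyhole-style pooling: files live on separate backend pools,
and one share directory holds symlinks so clients see a single
namespace. /mnt/pool1, /mnt/pool2 and /srv/share are hypothetical."""

from pathlib import Path

BACKENDS = [Path("/mnt/pool1"), Path("/mnt/pool2")]  # backend pools
SHARE = Path("/srv/share")                           # what Samba exports

SHARE.mkdir(parents=True, exist_ok=True)
for backend in BACKENDS:
    for f in backend.rglob("*"):
        if f.is_file():
            link = SHARE / f.relative_to(backend)
            link.parent.mkdir(parents=True, exist_ok=True)
            if not link.exists():
                link.symlink_to(f)  # share entry points at backend copy
```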
@roweryan and @suman topically enough, someone at Oracle has just submitted a fairly significant patch set that adds hot spare support to btrfs, so that's good. There's an ongoing discussion about it on the linux-btrfs mailing list: Hot spare and Auto replace
I can’t be bothered to dig up the links right now, but the BTRFS maintainers are not currently interested in RAID 50/60. They’ve rejected pull requests which implement it. There’s an article written by someone who tried to add the feature and several pages on the mailing list about them rejecting it. It’s a shame.
That is a shame, even if it were done with something like Greyhole or aufs.
I want to have 32 drives, but I'm not trusting that amount of data to 2-way parity.
If it were two 16-drive arrays presented as one shared space, that would be perfect.