Failed to make Storage Share. RAID6 on btrfs - Pool creates perfectly but Disk Share won't

@roberto0610 Hello again.

This advice, given to you by me in the following forum thread:

was, as you can see from the reference given at the time, from the btrfs mailing list. And Zygo Blaxell, as far as I know, is not (at least yet) on this forum. The source is sound, not hearsay, and not something held only by those on this forum. It's science :). Hardware raid doesn't know which copy is correct; that might be important!
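To illustrate that point with a minimal sketch (assuming a pool mounted at /mnt2/mypool, a made-up path for the example): btrfs checksums both data and metadata, so a scrub can tell which copy is the corrupt one and, on a redundant profile, repair it from the good copy:

```
# Start a scrub: btrfs verifies every checksum and, where a good
# redundant copy exists, rewrites the bad copy from it.
btrfs scrub start /mnt2/mypool

# Check progress and how many errors were found and corrected.
btrfs scrub status /mnt2/mypool
```

A hardware raid controller, by contrast, has no checksums to consult, so on a mismatch it can only guess.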

Your performance knowledge re hardware raid is not directly relevant to btrfs's chunk-based, rather than drive-based, raid. Take a look, for example, at:

for comparisons of the raid performance levels in btrfs. It's not necessarily what you might think. That article is a little old, but I'm pretty sure there was a newer one from the same source. I'll post it when I get the time to find it, but others can do likewise, as it would be good to build up some references here. As I explained in my support email, btrfs raid borrows from hardware raid concepts but is not the same; one can't always draw direct parallels. Putting hardware raid under btrfs voids btrfs's ability to correct data, and essentially also voids its guarantee of returning exactly what was asked to be saved - which is why it was invented.
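By way of illustration only (device names are hypothetical, and Rockstor normally does all of this for you from the Web-UI): handing btrfs the raw drives, rather than a single device presented by a hardware raid controller, is what gives it a second, checksummed copy to repair from:

```
# Whole-drive btrfs raid10 for data and metadata: btrfs allocates its
# chunks across the four drives and keeps checksummed, redundant copies.
mkfs.btrfs -f -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

Put btrfs on top of a single hardware raid device instead and it sees only one copy: its checksums can still detect corruption, but there is nothing known-good left to repair from.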

There is no parity in btrfs raid 10. Only btrfs raid 5 & 6 use parity.
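You can see which profiles a pool is actually using from the command line (pool path assumed for illustration):

```
# Shows the allocation profile per block group type, e.g. "Data, RAID10"
# and "Metadata, RAID10" - no parity is involved in those profiles.
btrfs filesystem df /mnt2/mypool
```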

The documentation links I gave in my support email to you (which you requested of me) will give you the canonical references for this info.

For this you want lots of RAM, irrespective of the underlying file system. Or, even better, a bcache writethrough cache (so it only caches reads, which is much safer) using one or more SSD / NVMe devices. See, for instance, the following technical wiki entry on how we implemented it in our older CentOS base:

It would be nice to have this document updated, as the version of bcache in our new OS base may well need only the udev rule to work.
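For reference, the raw bcache setup is roughly as follows - a sketch only, with hypothetical device names, and the wiki entry above remains the Rockstor-specific guide:

```
# Format the SSD/NVMe as the cache device and the spinning disk as the
# backing device (bcache-tools package).
make-bcache -C /dev/nvme0n1
make-bcache -B /dev/sdb

# Attach the backing device to the cache set; the cset UUID comes from
# 'bcache-super-show /dev/nvme0n1'.
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Writethrough (the default) acknowledges writes only once they are safely
# on the backing device, so a cache failure cannot lose data; reads are cached.
echo writethrough > /sys/block/bcache0/bcache/cache_mode
```

The btrfs pool member is then the resulting /dev/bcache0 device rather than /dev/sdb itself.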

There are a whole host of technologies to assist in performance, but very few that do what btrfs does. That is because it’s a difficult task that has been attempted, and failed, many times. Hence the caution taken with the btrfs parity raid levels. I’m keen to follow upstream on this as I’m not the btrfs expert; they are.

Let's take care to keep an open mind on the knowledge and diligence of others here on the forum. We are all trying to help one another out here. You do find yourself in a tricky spot, as you presumably require a single large pool spanning many drives. That's a sticky point when btrfs can currently offer only single drive failure tolerance. But with fast local backups the risk can be reduced. Or you could opt for the approach I referenced earlier of employing a newer stable kernel. All food for thought.

Apologies if this communication has an unintended tone. I'm just trying to advise as best I can from my own limited knowledge of all relevant things. And I'm not feeling:

this tone to be appropriate. This was an upstream move from the experts at the time. Neither you nor I are btrfs experts so we kind of have to go with the flow if we are to take advantage of what is passed down to us by way of our only COW file system option that can be included within the kernel and has the block pointer re-write capability.

Hope that helps, and let us know how your adventures go. You might also want to follow up on some of the software vs hardware raid info out there on the internet, so that you are informed on the balance of risk. It may be that you can find a way to mitigate the risk of a second drive failure during repair. I.e. if a single disk fails and you are, say, using btrfs raid 10 (4 disk minimum) with 6 disks, then the failed drive can be removed from the pool via Resize/ReRaid fairly quickly while you are still using the pool. The resulting pool will still have the minimum +1 drives left. Another drive can then be added online without the data going offline, assuming your hardware supports hot plugging.
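Under the hood, that Resize/ReRaid flow is roughly the following (hypothetical device names; this assumes the ailing drive is still present and the pool stays mounted throughout):

```
# Drop the suspect drive from the pool while it remains mounted and in use;
# btrfs migrates its chunks onto the remaining drives.
btrfs device remove /dev/sdd /mnt2/mypool

# Once the replacement is physically in place, add it online and rebalance
# so data is spread back across the full set of drives.
btrfs device add /dev/sde /mnt2/mypool
btrfs balance start /mnt2/mypool
```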

Anyway, all food for thought and many things to consider.

Thanks again for your long term support and interest, by the way. You may well be interested in our GlusterFS support, which I hope to get to once we have our technical debt sorted in the next year. Another option for providing fail-over when things go down in a bad way :). And again, it's entirely doable via the command line today, on top of what Rockstor currently enables by way of magic made easier.
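If you fancy experimenting with that ahead of time, a bare-bones two-node replicated GlusterFS volume goes along these lines (hostnames and brick paths are made up for the example):

```
# From node1: add the second node to the trusted pool.
gluster peer probe node2

# Create and start a volume that keeps a full replica on each node.
gluster volume create gvol replica 2 node1:/export/brick1 node2:/export/brick1
gluster volume start gvol

# Clients then mount the volume via the native FUSE client.
mount -t glusterfs node1:/gvol /mnt/gvol
```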
