Failed to make Storage Share. RAID6 on btrfs - Pool creates perfectly but Disk Share won't

Can anybody point me to how to create the RAID6 Share from the command line?

Since the Web-UI isn't working. I get this error when trying to create the Share… But the error does not replicate when creating the Storage Share from a single RAID0 pool using the same drives. It just doesn't work with RAID6.

I get this error when trying to create a Storage Share from the previously created Storage Pool.

Error running a command. cmd = /usr/bin/chattr -i /mnt2/64TB-Storage. rc = 1. stdout = ['']. stderr = ['/usr/bin/chattr: Read-only file system while setting flags on /mnt2/64TB-Storage', '']
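
For reference, a Rockstor Share is essentially a btrfs subvolume, so the rough command-line equivalent would be (a sketch only; the mount point comes from the error above and the share name is a placeholder):

sudo btrfs subvolume create /mnt2/64TB-Storage/my-share    # my-share is a hypothetical name

On this pool it fails in the same way, since, as the chattr error above shows, the filesystem is only mounted read-only.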

@roberto0610 Hello again.

So I finally got around to confirming your findings here. And following on from my suspicion indicated in our support email chat: sure enough, our upstream Leap 15.3 default kernel/btrfs-progs has now disabled read/write on the less mature btrfs parity raid levels of 5 & 6. Fancy that!
Via a

journalctl -f

And then during a Web-UI creation of a 3 disk btrfs raid 6 we get:

Dec 12 18:57:55 rleap15-3 kernel: BTRFS: device label test-pool devid 1 transid 5 /dev/sdb
Dec 12 18:57:55 rleap15-3 kernel: BTRFS: device label test-pool devid 2 transid 5 /dev/sdc
Dec 12 18:57:55 rleap15-3 kernel: BTRFS: device label test-pool devid 3 transid 5 /dev/disk/by-id/ata-QEMU_HARDDISK_QM00009
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): use no compression, level 0
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): disk space caching is enabled
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): has skinny extents
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): flagging fs with big metadata feature
Dec 12 18:57:55 rleap15-3 kernel: btrfs: RAID56 is supported read-only, load module with allow_unsupported=1
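
For those determined to experiment anyway (and accepting the maturity warnings that prompted this upstream change), a minimal sketch of acting on that last log line. The parameter name comes from the message itself; whether a modprobe.d option or a kernel command line entry suits best depends on your setup, since btrfs is typically loaded early for the root filesystem:

# Persistent module option; needs the initrd rebuilt and a reboot to take effect:
echo "options btrfs allow_unsupported=1" | sudo tee /etc/modprobe.d/99-btrfs-raid56.conf
sudo dracut --force
# Alternatively, append btrfs.allow_unsupported=1 to the kernel command line via your bootloader config.

Neither approach changes the underlying maturity of the parity raid code, of course; it only removes the guard rail.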

Thanks to @Superfish1000 for trying their best to inform me of this upstream situation in the following forum thread:

My apologies to @Superfish1000 for not being on top of this one. We mostly ignore the parity raids as they are not production ready, and we state as much, but it was still a surprise to find this probably sane move from upstream. So I am considering myself as having received a welcome told-you-so; it has led to me chasing this one up a little, which is good.

For any folks needing to import a prior 5/6 pool we have a recently updated section in the docs:

Import unwell Pool: https://rockstor.com/docs/interface/storage/disks.html#import-unwell-pool

which addresses the need, which the btrfs parity raid levels now have, of using custom mount options (in this case ro) prior to import. This at least allows the import, with read-only access, so that backups can be refreshed and a rebuild to raid1 or raid10 enacted.
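
As a command-line illustration of the same idea (a hedged sketch; the pool label and mount point are taken from the error earlier in this thread, and the backup target is a placeholder):

sudo mkdir -p /mnt2/64TB-Storage
sudo mount -o ro /dev/disk/by-label/64TB-Storage /mnt2/64TB-Storage
# Refresh backups from the read-only mount, e.g.:
rsync -aHAX /mnt2/64TB-Storage/ /path/to/backup/

A rebuild to raid1 or raid10 (btrfs balance start -dconvert=... -mconvert=...) then needs a read/write mount, so it is done either on a fresh pool or after one of the workarounds discussed elsewhere in this thread.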

So @roberto0610 to address the issue of:

This is now normal for a parity raid pool. And as discussed in our support email chat, a stable backport kernel looks like a possible way to go, but I've not tested that yet. But again, for those interested, we have @kageurufu's excellent post on their current adventure in this direction, in order to import their raid6 data, raid1c4 metadata pool:
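
For anyone wanting to explore that route, one possible (and, as said above, untested here) way to get a newer stable kernel on our Leap 15.3 base is the openSUSE Kernel:stable repository. The repo URL below follows the standard OBS layout, so treat it as an assumption and verify it against your install:

sudo zypper addrepo https://download.opensuse.org/repositories/Kernel:/stable/standard/ kernel-stable
sudo zypper refresh
sudo zypper install --from kernel-stable kernel-default

Note that a mainline-based kernel may not carry the same read-only restriction, but it also drops the Leap-specific patches, so weigh that accordingly.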

@Flox We should have a chat about what, if anything, we need to do about this. I'm super keen to follow our upstream exactly on the kernel and btrfs side of things, as they are the experts, but we might want to add a doc entry for this. We already have the Web-UI tool-tip advice that 5/6 are not production ready, but this is a step up in the protection against their use from what we have had.

Again, thanks to all who chip in on the forum; it's great to have such input from so many directions. And again, apologies for being behind the times (read: a little slow) on this one.

Hope that helps.


I guess my only way to get going now (if I want to write any data to my drives…) is by going back to hardware RAID6 and then treating it as a single drive in Rockstor/BTRFS… Nice that there is no support for RAID5 or 6 currently… Let's use hardware raid even if people around this forum don't recommend it…

I mean I really get it. I do totally get it.
-Having RAID0 is good for performance.
-RAID10 is really good too, parity and striping, "Super".
-But raid5 or raid6: not great performance, not super fast writes, and a long, long time to rebuild when a drive fails… But for certain scenarios it is really good, like in multimedia creation applications, where most of the demand is reading the same video data or photos thousands of times. That data gets written to the drives only once but is read many, many times.

All that I need to change is my HBA passthrough controller, for one that actually has hardware RAID0, 1, 5, 6, 10, 50 and 60.

Thank you for helping me out: @phillxnet

@roberto0610 Hello again.

This advice, given to you by me in the following forum thread:

was, as you can see since it was referenced at the time, from the btrfs mailing list. And Zygo Blaxell, as far as I know, is not, at least yet, on this forum. The source is good, and not hearsay or held only by those on this forum. It's science :). Hardware raid doesn't know which copy is correct; that might be important!

Your performance knowledge regarding hardware raid is not relevant to btrfs's chunk-based, rather than drive-based, raid. Take a look at, for example:

for comparisons of the raid performance levels in btrfs. It's not necessarily what you might think. That article is a little old, but I'm pretty sure there was a newer one from the same source. I'll post it when I get the time to find it, but others can do likewise, as it would actually be good to build up some references. As I explained in my support email, btrfs raid borrows from hardware raid concepts but is not the same; one can't always draw direct parallels. Putting hardware raid under btrfs will void btrfs's ability to correct data, and basically also void the guarantee of returning what was requested to be saved, which is why it was invented.
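
To make that data-correction point concrete, a quick hedged illustration (the mount point is a placeholder): btrfs checksums data and metadata, and a scrub verifies those checksums and repairs from a good copy where the btrfs raid level provides one.

sudo btrfs scrub start /mnt2/pool-name
sudo btrfs scrub status /mnt2/pool-name    # shows progress and any corrected / uncorrectable errors

With hardware raid underneath presenting a single logical device, btrfs can still detect corruption via those checksums, but it has no second copy of its own to repair from.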

There is no parity in btrfs raid 10. Only btrfs 5 & 6 use parity.

If you follow the documentation links I gave in my support email to you, which you requested of me, then you will have the canonical references for this info.

For this you want lots of RAM, irrespective of the underlying file system. Or, even better, a bcache writethrough cache (so it only caches reads, which is much safer) using one or more SSD/NVMe devices. See, for example, the following technical wiki entry on how we implemented it in our older CentOS base:

It would be nice to have this document updated as the version of bcache in our new OS base may well need only the udev role to work.
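
For the curious, a minimal sketch of what that setup involves at the command line (assumes bcache-tools is installed; /dev/sdX as the backing HDD and /dev/nvme0n1 as the cache device are placeholders, and both will be wiped):

sudo make-bcache -B /dev/sdX          # register the backing device (appears as /dev/bcache0)
sudo make-bcache -C /dev/nvme0n1      # register the cache device; note the cache set UUID it prints
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
echo writethrough | sudo tee /sys/block/bcache0/bcache/cache_mode    # writes go straight through; the cache mainly accelerates reads

The btrfs pool is then built on /dev/bcache0 rather than on the raw drive.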

There are a whole host of technologies to assist with performance, but very few that do what btrfs does. That is because it's a difficult task that has been attempted, and failed at, many times. Hence the caution taken with the btrfs parity raid levels. I'm keen to follow upstream on this, as I'm not the btrfs expert; they are.

Let's take care to keep an open mind on the knowledge and diligence of others here on the forum. We are all trying to help one another out here. You do find yourself in a tricky spot, as you presumably require a single large pool that needs many drives. That's a sticking point when btrfs can currently only tolerate a single drive failure. But with fast local backups the risk can be reduced. Or you could opt for the approach I referenced earlier of employing a newer stable kernel. All food for thought.

Apologies if this communication has an unintended tone. I'm just trying to advise as best I see things, from my own limited knowledge of all relevant matters. And I'm not feeling:

this tone to be appropriate. This was an upstream move from the experts at the time. Neither you nor I are btrfs experts so we kind of have to go with the flow if we are to take advantage of what is passed down to us by way of our only COW file system option that can be included within the kernel and has the block pointer re-write capability.

Hope that helps, and let us know how your adventures go. And you might want to follow up on some of the software vs hardware raid info out there on the internet so that you are informed on the balance of risk. It may be that you can find a way to mitigate the risk of a second drive failure during repair. I.e. if a single disk fails and you are, say, using btrfs raid10 (4 disk minimum) and you have 6 disks, then the failed drive can be removed from the pool via Resize/ReRaid fairly quickly while you are still using the pool. The resulting pool will still have the minimum + 1 drives left. Another drive can then be added on-line without the data going off-line, assuming your hardware supports hot plugging.
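
A hedged command-line sketch of that Resize/ReRaid sequence, for those comfortable outside the Web-UI (mount point and device names are placeholders):

sudo btrfs device remove /dev/sdX /mnt2/pool-name    # drop the failing drive; data is relocated while the pool stays online
sudo btrfs device add /dev/sdY /mnt2/pool-name       # add the replacement drive
sudo btrfs balance start /mnt2/pool-name             # spread existing chunks across the new layout

The Web-UI Resize/ReRaid feature wraps the same kind of operations, so for most pools there is no need to drop to the command line at all.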

Anyway, all food for thought and many things to consider.

Thanks again for your long-term support and interest, by the way. You may well be interested in our GlusterFS support, which I hope to get to once we have all our technical debt sorted in the next year. It's another option for providing fail-over when things go down in a bad way :). And again, it's entirely doable via the command line today, on top of what Rockstor currently enables by way of magic made easier.


@roberto0610 This may also be of interest:

This is another reference I've sent out to another performance-oriented Rockstor user:

It's an apples and oranges comparison regarding the raid technologies, but interesting for the comparison between the btrfs levels themselves, I thought. Again, this is now an older kernel, so if folks have newer performance metrics out there then all good.

Also for another angle on differences between hardware raid and the different btrfs raid levels there is of course available disk space:

https://carfax.org.uk/btrfs-usage/

That may be of use to some in working out their viable raid options given a fixed set of hardware, for example. We can't always go bottom-up, and often have to fit the available hardware into the best arrangement we can.
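
As a quick worked example, with six equal 8 TB drives and matching data/metadata profiles: btrfs raid10 (or raid1) yields roughly (6 × 8 TB) / 2 = 24 TB usable, while btrfs raid6 yields roughly (6 − 2) × 8 TB = 32 TB. Mixed drive sizes change those figures, which is exactly where the calculator above earns its keep.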

@Flox We should probably pop a link to this calculator into the docs as it’s now been around for some time and looks to be well maintained.

Hope that helps.
