@roberto0610 Hello again.
So I finally got around to confirming your findings here. And following on from my suspicion in our support email chat: sure enough, our upstream Leap 15.3 default kernel/btrfs-progs has now disabled read/write on the less mature btrfs parity raid levels of 5 & 6. Fancy that!
Via a
journalctl -f
And then, during a Web-UI creation of a 3-disk btrfs raid6 pool, we get:
Dec 12 18:57:55 rleap15-3 kernel: BTRFS: device label test-pool devid 1 transid 5 /dev/sdb
Dec 12 18:57:55 rleap15-3 kernel: BTRFS: device label test-pool devid 2 transid 5 /dev/sdc
Dec 12 18:57:55 rleap15-3 kernel: BTRFS: device label test-pool devid 3 transid 5 /dev/disk/by-id/ata-QEMU_HARDDISK_QM00009
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): use no compression, level 0
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): disk space caching is enabled
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): has skinny extents
Dec 12 18:57:55 rleap15-3 kernel: BTRFS info (device sdb): flagging fs with big metadata feature
Dec 12 18:57:55 rleap15-3 kernel: btrfs: RAID56 is supported read-only, load module with allow_unsupported=1
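For anyone experimenting regardless (and accepting the unsupported status), the "allow_unsupported=1" in that last kernel message is a btrfs module parameter in the SUSE kernels. A minimal sketch of setting it persistently via the standard modprobe.d mechanism follows; note this is untested by me, and re-enables read/write RAID56 entirely at your own risk:
# /etc/modprobe.d/99-btrfs-allow-unsupported.conf
# Re-enable read/write on btrfs RAID56 (unsupported; use at your own risk).
options btrfs allow_unsupported=1
If the module is already loaded, the usual runtime module-parameter toggle should be the equivalent (assuming the parameter is writable on these kernels):
echo 1 > /sys/module/btrfs/parameters/allow_unsupported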
Thanks to @Superfish1000 for trying their best to inform me of this upstream situation in the following forum thread:
My apologies to @Superfish1000 for not being on top of this one. We mostly ignore the parity raids as they are not production-ready, and we state as much, but it was still a surprise to find this probably sane move from upstream. So I am considering myself as having received a welcome told-you-so; it has led to me chasing this one up a little, which is good.
For any folks needing to import a prior 5/6 pool we have a recently updated section in the docs:
Import unwell Pool: Disks — Rockstor documentation
which addresses the need, which the btrfs parity raid levels now have, of using custom mount options (in this case ro) prior to import. This at least allows for the import, and read-only access, so that backups can be refreshed and a rebuild to raid1 or raid10 can be enacted.
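As a concrete sketch of that ro import route at the command line (the pool label is borrowed from the example logs above; the mount point and backup target are just placeholders), with the Web-UI equivalent being ro entered as a custom mount option prior to import:
# Mount the parity raid pool read-only by its label:
mount -o ro LABEL=test-pool /mnt/import
# Refresh backups from the read-only mount, e.g.:
rsync -a /mnt/import/ /path/to/backup/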
So @roberto0610 to address the issue of:
This is now normal for a parity raid pool. And as discussed in our support email chat, a stable backport kernel looks like a possible way to go, but I've not tested that yet. But again, for those interested, we have @kageurufu's excellent post on their current adventure in this direction, in order to import their data raid6, raid1c4 pool:
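For reference, the rough shape of that backport kernel route on Leap 15.3 would be something like the following; again untested by me, and the repo project/URL here is my assumption of the usual openSUSE Kernel:stable:Backport project, so verify before use:
# Add the openSUSE stable backport kernel repo (URL assumed; verify first):
zypper addrepo https://download.opensuse.org/repositories/Kernel:/stable:/Backport/standard/ kernel-backport
zypper refresh
# Install the newer kernel from that repo, then reboot into it:
zypper install --from kernel-backport kernel-default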
@Flox We should have a chat about what, if anything, we need to do about this. I'm super keen to follow our upstream exactly on the kernel and btrfs side of things, as they are the experts, but we might want to add a doc entry for this. We already have the Web-UI tool-tip advice that 5/6 are not production-ready, but this is a step up from what we have had in the protection against their use.
Again, thanks to all who chip in on the forum; it's great to have such input from so many directions. And again, apologies for being behind the times (read: a little slow) on this one.
Hope that helps.