Naive (maybe) question: RAID 5/6

Hi,

I’ve been using ReadyNAS for around 10 years (OS6 on a 314 for 6 years), and they’ve been trouble-free (touch wood).

They’ve been using btrfs for at least 6 years, albeit with their own RAID implementation (X-RAID), which superficially looks a bit like RAID 5/6, but I’m thinking they must be doing something to mitigate the known problems with RAID 5/6.

Basically, with 4 drives you get almost 3x the capacity of the smallest drive, and when you replace a drive it auto-detects the new drive and rebuilds it from the other 3. It’s quite nifty, but takes a looooooooong time (when I upgraded from 4TB drives to 8TB, each drive swap took at least 24 hrs).

So somehow each drive seems to hold some combination of stripe and mirror, yet unlike RAID 10 there is no separate layer of striping over mirrors. As I said, Netgear seem to have confidence in it (it’s their reputation on the line).

So, here is the (possibly) naive question.
I’m wondering: is there some idea here that could be borrowed? Or are they just mad? I’ve read something about mirroring metadata (including checksums?) across all drives, but striping the data.

Cheers. (Bought my year’s subscription, btw. Still on a VM while I try it out, but that’s a story for a separate topic.)


Hi again @HarryHUK,

Before some Btrfs experts provide more information here, I can chip in a little bit, especially with regard to the following:

You might be referring to the recently-added support in Btrfs for the raid1c3 and raid1c4 levels, combined with the ability to use different raid levels for data and metadata, which is particularly interesting for those wanting to run raid5/6 on their data. Indeed, it would allow you to have better redundancy for your metadata while keeping your data in raid5/6. We have had some quick discussion on implementing this in the forum (see the post linked below, for instance):

and @phillxnet’s response:

As you can see, this is something that will most likely be implemented in Rockstor once it is made fully available in our base distribution, which will probably coincide with when the openSUSE folks deem the feature ready and stable enough. As far as I know, it hasn’t been backported to Leap 15.2 yet, so I don’t think it is quite ready yet.
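Just to illustrate the idea outside of Rockstor: on a plain Btrfs system with a new enough kernel and btrfs-progs, an existing pool could be converted to raid5 data with raid1c3 metadata along these lines (the /mnt/pool mount point below is only a placeholder for this sketch):

btrfs balance start -dconvert=raid5 -mconvert=raid1c3 /mnt/pool

The separate -dconvert and -mconvert filters are what allow data and metadata to end up on different raid levels within the same pool.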

On a side note, from the discussions I’ve read online, filesystem developers in general (not just Btrfs) seem to focus primarily on raid1/10 nowadays, so raid5/6 might not be their main priority. You can see quite a bit of work and discussion on raid5/6 on the Btrfs mailing list, though, so I believe it has seen big improvements recently in both speed and reliability. Operations like scrub on raid5/6 were famously very slow in older kernels compared to raid1/10, for instance, and that has improved. Disabling quotas is also recommended to improve speed/performance.
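For reference, toggling quotas on a Btrfs pool from the command line is a one-liner each way (the /mnt/pool path below is just an example mount point; I believe Rockstor also exposes a per-pool quotas toggle in the web UI):

btrfs quota disable /mnt/pool   # stop quota accounting (helps scrub/balance performance)
btrfs quota enable /mnt/pool    # turn it back on later if per-subvolume accounting is needed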

I know I haven’t really answered your question(s), but I hope I was able to bring some elements of answer nonetheless!


Cheers,
Yes, I saw some things about raid1c3 and raid1c4. Netgear are obviously doing their own thing, because it’s been that way for at least 6 years, but these new developments look promising as a standard way of providing something similar.

I think I have quotas off on my ReadyNAS; it’s really only me using it, so I didn’t see the point atm.

One thing it does seem to be able to tell me, though, is the disk space used by data and by snapshots, which I can appreciate is not straightforward.
Another thing…

Just had a closer look at the ReadyNAS. It’s configured as RAID5, and quotas are on… hmmm


@HarryHUK Hello again.

Re:

I don’t know, but I had assumed that ReadyNAS and the like use mdraid for their raid capability and layer btrfs on top, i.e. they don’t actually use the btrfs raid capability. Is that not actually the case?


This is entirely possible.

I managed to get Rockstor 4.0.4 built and then update the kernel and btrfs-progs to 5.9; it seems to work fine.

The RAID5 and RAID6 profiles in Rockstor should be using RAID1 automatically for metadata (if a disk goes missing while the pool has only raid1 metadata with raid6 data, a metadata-only balance should be triggered), at least until a 5.4 or newer kernel is in use, which supports RAID1c3 for metadata with the RAID6 profile (which is the strongly recommended option, so your filesystem metadata does not blow up).

(At the time of posting the stable kernel is 5.9.)
zypper addrepo -f http://download.opensuse.org/repositories/Kernel:/stable/standard/Kernel:stable.repo
zypper refresh
zypper dup -r Kernel_stable

(assuming kernel is 5.9)
zypper addrepo -f http://download.opensuse.org/repositories/filesystems/openSUSE_Leap_15.2/filesystems.repo
zypper refresh
zypper install btrfsprogs-5.9

Use the command below to create a pool with raid6 data and raid1c3 metadata:
mkfs.btrfs -L data -m raid1c3 -d raid6 /dev/sd[b,c,d,e,f,g]
Once done, go to Disks, refresh the page and then import the pool.
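To confirm the new pool really ended up with raid6 data and raid1c3 metadata, you can check it once mounted (the mount point below is only an example):

btrfs filesystem df /mnt/pool   # lists the current profile for the Data, Metadata and System chunks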

Note: if you add a disk via the GUI, it unfortunately passes the raid6 option on the add command (instead of just adding the disk with no options) and converts the metadata to raid6, so you then need to run “btrfs balance start -mconvert=raid1c3 /mount/point” to convert the metadata back to raid1c3.
I recommend using “btrfs device add /dev/sdX /mount/point” instead to avoid the above (make sure you do a balance after the disk add so the pool actually uses the new disk).
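So the whole grow operation done by hand would look something like this (device name and mount point are just examples):

btrfs device add /dev/sdh /mnt/pool
btrfs balance start --full-balance /mnt/pool   # spreads existing chunks across the new disk; can take a long time on raid6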

The ReadyNAS question:
You need quotas on to see snapshot sizes on a ReadyNAS (with them off you only see the actual data and the remaining space, with no clue how much space is being used by snapshots). On the older, slower ReadyNAS units quotas are turned off by default (that was throwing me for a loop, because I thought my snapshots were missing; mine was an ARM unit and it was just because quotas were disabled by default).
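With quotas enabled on a plain Btrfs box you can pull the same numbers yourself (the path below is just an example):

btrfs quota enable /mnt/data
btrfs qgroup show /mnt/data   # the "excl" column is roughly the space you would get back by deleting that snapshot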

Synology and ReadyNAS use standard mdraid with btrfs on top of it (both do the same voodoo that lets the btrfs layer trigger the md raid underneath to use its single or dual parity, or its mirror, to deliver the correct data back to btrfs).
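For anyone curious, a rough sketch of that layering on a generic Linux box would be the following (device names are only examples; the real appliances do this over per-disk partitions with their own management on top): build an md array first, then put a single-device btrfs filesystem on it, so btrfs itself never manages the redundancy:

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mkfs.btrfs -L data /dev/md0
mount /dev/md0 /mnt/data

Btrfs can still detect corruption through its checksums in that setup; the vendor magic is in getting the md layer to reconstruct the bad copy from parity or the mirror when that happens.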

ReadyNAS X-RAID is not a raid level; it is just automatic raid and automatic disk management, which can make it confusing because they have X-RAID 1 and X-RAID 2, which are the same thing.

Plug an empty, blank disk in and it will automatically add it to the pool and expand to use all the space; the same thing happens when replacing a failed disk or moving to a larger one: just hot-pull the old disk, plug in the new one, and off it goes and adds the disk (with Synology this is all manual; you have to tell it what to do when a disk has been replaced or added). The only exception to that rule is if you insert a disk that already has data on it, in which case you need to delete the foreign disk config from the GUI (it’s strongly recommended to wipe disks first with diskpart’s clean command, or the equivalent Linux command).

X-RAID automatically uses single with 1 disk > RAID1 with 2 disks > RAID5 up to 6 disks, and converts to RAID6 when the 7th disk is inserted. If you want to use RAID6 with 4-6 disks but keep the X-RAID automatic disk management enabled, you can get around this by switching to Flex-RAID, inserting the new disk > clicking on the new empty disk > adding it as a parity disk, and then turning X-RAID back on, to get RAID6 with fewer than 7 disks.
