BTRFS RAID alternatives?

Hey people,

After a month or so of using Rockstor I’m considering leaving.

My main problem is BTRFS RAID (not the filesystem itself). One of my drives is slowly dying: it had an End-to-End Error Pre-fail SMART warning, which I only discovered after checking each disk from a live Linux boot disk. The pool went read-only.

Rockstor apparently doesn’t warn about Pre-fail SMART attributes. Maybe I have to customise the SMART options, although I’d have thought this would be standard?
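For anyone wanting to check this by hand, a minimal sketch of what I did from the live disk (assumes smartmontools is installed; the attribute values and device names below are just examples):

```shell
# Flag SMART attributes of TYPE "Pre-fail" whose normalized VALUE has
# fallen to or below its THRESH -- the condition behind a Pre Failure
# warning. Field positions match the `smartctl -A` column layout.
flag_prefail() {
  awk '$7 == "Pre-fail" && $4+0 <= $6+0 {print "WARN:", $2}'
}

# Demo against a sample attribute line (hypothetical values):
printf '184 End-to-End_Error 0x0032 001 001 097 Pre-fail Always FAILING_NOW 1290\n' | flag_prefail
# prints: WARN: End-to-End_Error

# On a live system (example device names):
#   for dev in /dev/sd?; do smartctl -A "$dev" | flag_prefail; done
```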

If you have a drive that is failing slowly rather than dying outright, BTRFS will probably go into irreversible read-only mode on RAID 1/10/5/6.

https://btrfs.wiki.kernel.org/index.php/Status

Although you won’t lose all your data, you will need some other disks to copy everything to, and slowly, because btrfs restore is painful. Then recreate the pool, copy everything back, and cross your fingers. Doesn’t sound very redundant though?? :stuck_out_tongue:
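For reference, the recover-and-recreate cycle looks roughly like this (a sketch only; the device names and mount paths are made up, and since these commands are destructive the script just prints them instead of running them):

```shell
run() { echo "+ $*"; }   # dry run: print each command instead of executing it

# Pull whatever is still readable off the broken pool onto spare disks
run btrfs restore -v /dev/sdb /mnt/recovery

# Wipe and recreate the pool, then copy everything back
run mkfs.btrfs -f -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde
run mount /dev/sdb /mnt/pool
run rsync -a /mnt/recovery/ /mnt/pool/
```

Swap `echo` out of the `run` helper only once you’re sure of the device names.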

What I’d like to do instead is use Linux RAID (mdraid) to manage the redundancy and use BTRFS only as the filesystem on top.

Alternatives I’m looking at are Synology and unRAID.
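For what it’s worth, the mdraid-underneath layout I mean would be something like this (sketch only; device names are examples, and the commands are printed rather than executed because array creation is destructive):

```shell
run() { echo "+ $*"; }   # dry run: print each command instead of executing it

# One md RAID 6 array across the disks, with btrfs as a plain filesystem on top
run mdadm --create /dev/md0 --level=6 --raid-devices=10 "/dev/sd[a-j]"
run mkfs.btrfs -L data /dev/md0
run mount /dev/md0 /mnt/data
```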

But with mdraid underneath you’d still lack parity checksumming, and you’d lose the checksum/self-heal features of BTRFS as well.

Sure, BTRFS could still detect that a file is corrupted, but since it’s not handling the underlying resilience/RAID, it can’t do anything about it: there’s no second copy for it to rebuild the file from.

I’m aware of that; you wouldn’t have bitrot protection. However, at least the volume wouldn’t grind to a halt over a small problem like a hard drive starting to develop bad sectors.

This would be a temporary solution until BTRFS fixes how it handles bad hard drives.

How many disks are you working with? You lose a fair bit more disk space on raid1 than on raid6, but you only go into read-only mode if the drive count is reduced to 1. That is much less likely to happen with a 4- or 5-disk array.

I’ve got 10 drives in RAID 6.

Using btrfs restore, it’s taken me 2 days to do 3 TB; it’s painfully slow.

You have to answer the prompts when it gets stuck on a file.

This whole situation has really shown what a weak system it is.

They call it “mostly stable”. In the enterprise sector we’d call this alpha.

I haven’t used raid6 so I can’t comment on the rebuild process. Rebalancing, which I have done, does take a LONG time in general though. The good news is that for me it wasn’t a big deal, since my system was still up and running; I was just removing a VM drive so I could replace it with the real deal.

To address your original issue, in a 10 disk raid6, you will lose a significant amount of space going from raid6 to raid1, so I think switching to something else would be a good idea…
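To put rough numbers on that space difference (assuming ten equal drives; 4 TB each is just an example size, and filesystem overhead is ignored):

```shell
n=10; size=4   # example: ten 4 TB drives

# raid6 keeps two drives' worth of parity
echo "raid6: $(( (n - 2) * size )) TB usable"   # raid6: 32 TB usable

# btrfs raid1 stores two copies of every block
echo "raid1: $(( n * size / 2 )) TB usable"     # raid1: 20 TB usable
```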

The question is what. I would not recommend btrfs on top of raid6, as it negates most of the benefits of btrfs (benefits you don’t have today on btrfs raid6 anyway, btw). Have you given any thought to ZFS, specifically FreeNAS? raidz2 is a stable raid6 alternative, and the only real drawbacks I could name are that FreeNAS is significantly slower to boot, that the hardware support isn’t as broad (though that’s probably not an issue), and that you can’t easily change your raid setup by adding more drives. You can still grow the pool by swapping in new, bigger drives, and the total size grows once they are all upgraded, but things get messy if you want to change the total number of drives.
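For comparison, a ten-disk raidz2 pool goes in all at once at creation time, and that vdev shape is then fixed (sketch only; the pool name and FreeBSD-style device names are examples, and the command is printed rather than run):

```shell
run() { echo "+ $*"; }   # dry run: print the command instead of executing it

# All ten disks are committed at creation; you can't add an 11th disk
# to this raidz2 vdev later -- only replace members with bigger ones.
run zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
```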

I came here from ZFS on FreeNAS. The problem is that I’m planning upgrades in the future to restructure my array, and ZFS isn’t so flexible given there’s no Block Pointer Rewrite.

I was thinking BTRFS on top of RAID 6 because you then still get snapshots, for instance.

Plus FreeNAS messed everyone around by shipping 10.0 Corral as stable, then deciding it was alpha and releasing 11.0 based on the 9.0 code.