Stick with RAID1 or switch to RAID5? (TL;DR: sticking with RAID1)

So here I was, thinking I’d switch my 2+2+1TB RAID1 to a 2+2+2TB RAID5, instead of a 2+2+4TB RAID1, and still end up with the same amount of usable space. After reading threads like these, I’m hesitating. For me, most of the data would be either backed up remotely or easily replaceable, so any “real” problems I’d run into could, worst case, be solved by wiping the array and starting over. Still a hassle, of course.

I’m also wondering what scrub times are like for a <10TB array consisting of at most 4 drives. I keep hearing that it’s much worse than RAID1, but I’d be open to input about smaller arrays like mine (I don’t consider 3 or 4 2TB drives “a lot” - I’ve seen folks with way, way more) that are basically a home NAS / media storage / Nextcloud instance.

If switching to RAID5 is not discouraged as much as it used to be, I’m also wondering what the best order of things would be. Clear up some space, remove the 1TB (and rebalance, I suppose), add the 2TB and convert to RAID5 in one go? Or are there better, quicker ways?
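For what it’s worth, one possible sequence would look something like the following. This is only a sketch: the device names and the mount point `/mnt/pool` are placeholders, not anything from my actual setup.

```shell
# 1. Remove the 1TB drive; btrfs migrates its data to the remaining drives
#    (this is the implicit rebalance, so no separate balance is needed first):
btrfs device remove /dev/sdd /mnt/pool

# 2. Add the new 2TB drive:
btrfs device add /dev/sde /mnt/pool

# 3. Convert data to RAID5 in a single balance, keeping metadata on RAID1:
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool

# Progress can be checked from another shell while the balance runs:
btrfs balance status /mnt/pool
```

Steps 2 and 3 could presumably also be combined by adding the disk first and letting the convert balance spread data onto it, which avoids one full data shuffle.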

Edit: let’s add some more info, in case it’s relevant. My system is an HP Gen8 MicroServer (the Celeron-powered one) with 6GB of RAM. Currently, a scrub of the RAID1 takes about 3.5 hours. Data is a mix of backup files, websites, and movie stuff, so both smaller and larger files. The system is mostly used by just me. Anything else in terms of usage that might be relevant?

Edit 2: Possibly related - it is of course possible to use RAID1 for metadata rather than another level, and in coming kernel versions it seems like multi-copy RAID1 will be a thing, too, as well as increased performance. Maybe I should start out with going for that newer kernel, first… :wink:
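For anyone wanting to check what their pool currently uses, the per-profile breakdown is visible with standard btrfs-progs commands (the mount point below is just an example):

```shell
# Show which profiles data and metadata currently use, e.g.
# "Data, RAID1" vs "Metadata, RAID1":
btrfs filesystem df /mnt/pool

# A more detailed per-device view, including unallocated space:
btrfs filesystem usage /mnt/pool
```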


The question about going to Raid5/6 is one I’m considering as well.

A few months ago I did the conversion from Raid1 to Raid6. To try it out.
I have a UPS in front of my NAS, and should therefore be reasonably safe.

My array is 6 disks, 3x2TB and 3x3TB.

Scrub times before the conversion were about 7½ hours for the Raid1 array.

After conversion to Raid6 I had a scrub running for more than 48 hours, and not even at 40% finished.
So I stopped it, converted back to Raid1 and haven’t looked back.
That was more than 48 hours of constant activity for all 6 drives in the system. I would not want to do that to them once every month or so.

I’m still following the btrfs mailing list, trying to keep informed, but it does not seem that much work is going into optimizing Raid5/6.

It must be said that the version of btrfs I was running was older than the newest, so performance could be better today. I doubt it though.


Thanks for the information, @KarstenV! I, too, am thinking about that. Out of curiosity, were the scrub times you mentioned with quotas disabled?

That’s an interesting point, for sure - in my case, at least, quotas aren’t needed, so there’s no need for me to enable them and suffer the performance hit.

Another question, in that area: what about snapshots?

Quotas were enabled, but no snapshots.

Ah, that might explain the slowdowns - I’ve read about more folks that saw incredible speedups by disabling quotas.
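If anyone wants to try the same comparison, quotas can be switched off on a mounted filesystem and a fresh scrub timed afterwards. The mount point is a placeholder:

```shell
# Disable quota/qgroup tracking on the pool:
btrfs quota disable /mnt/pool

# Kick off a scrub and check its duration and rate when done:
btrfs scrub start /mnt/pool
btrfs scrub status /mnt/pool
```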

An article I ran into mentioned slow(er) write speeds for RAID5 (and mentions some other criticisms). Not sure how much of that is also the case with BTRFS?

There isn’t protection from a power outage or spike with RAID5; it would be much better to use RAID10 with BTRFS. If you use RAID5, you need to do so with the understanding that you could lose or corrupt your data.
Unlike a hardware-backed RAID5 with a battery backup, BTRFS does this in software, so a power disruption can have severe consequences, including corrupting the disks and rendering data unrecoverable.
Please don’t use RAID5 with BTRFS unless you are OK with the risk here. The only way you should entertain this is if you can guarantee no disruption in power.

Those are stern words. :wink:

From what I’ve read, a single power loss usually isn’t going to kill your array, but anything that happens while the array is recovering will.

I’ve kinda made up my mind: for my use case, I suppose RAID1 is better.

  • I have no UPS (nor do I want one)
  • I don’t have too many disks (my hardware is limited to 4) to “lose” disk space from
  • the total size of the array makes any space gains look interesting as a percentage, but in practice it’s just a few hundred MB more.
  • Scrub (and recovery, should the need arise) performance is going to be better
  • No risk from the write hole.

So I’ll replace the 1TB drive with a 4TB one, bringing the array to 2+2+4. Eventually I’ll add another 4TB, and once I start running out of space on that, the first two drives will have seen a couple of years of action and might be up for replacement. On that occasion I can either go with all 4TB drives, or bite the bullet and replace the two oldest 2TB drives with 8TB ones.
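The 1TB-to-4TB swap can be done with `btrfs replace`, which is generally faster than a remove-then-add cycle. Again, the device names and mount point are hypothetical:

```shell
# Replace the old 1TB drive with the new 4TB one in a single operation;
# btrfs copies (or rebuilds) the data onto the new device:
btrfs replace start /dev/sdc /dev/sde /mnt/pool
btrfs replace status /mnt/pool

# The new disk is larger than the old one, so grow the filesystem on it.
# The devid (3 here, as an example) comes from `btrfs filesystem show`:
btrfs filesystem resize 3:max /mnt/pool
```

Without the resize step, the pool would keep treating the 4TB drive as if it were still 1TB.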

But that’s a long way off, I think.

Solid plan. I’ve just seen too many stories of people losing data. It’s always been assumed that RAID5/6 isn’t reliable due to the potential power loss disruption. There isn’t anything to stop you from using hardware RAID5 underneath and exposing one drive, though, but that kind of defeats the purpose of using the BTRFS benefits.

I have been using RAID5 for a long time (3+ years), and during this time I have had power outages many times, without any losses. However, everyone should decide on this by weighing all the possible risks. I also want to note that I am constantly updating the kernel and btrfs-tools version, in which many bugs have been fixed compared to the vanilla Rockstor ones.
Also, with any type of the storage: backup your important data independently!

Same as @Eraser I have several RAID5 pools and don’t really run any protection, my UPS has been throwing a config failure error for almost 2 years and I haven’t gotten around to fixing it. I’ve probably had 2 or 3 power outages and those shares still work fine.

Status of BTRFS can always be found here: