How do I create a RAID1 where all (meta-)data is identical on all disks?

In the web GUI I can create a RAID1 pool with 2 or more disks (in my test case SSDs).
When using 2 disks, the available space in the pool is (approximately) that of one disk.

But when using 3 disks I get free space of about 1.5 disk sizes (?).
It seems that the (meta-)data is distributed across the disks.
It is documented that in this case a failure of one disk can be handled.

I'd like to have all data identical on all 3 disks, so that a failure of 2 disks can be tolerated.
How can I establish a RAID1 which is "fully" mirrored?
Can it be done via the terminal?
What is the mkfs command option to do so (if it is possible at all)?

PS: I have read a lot of wiki paragraphs, but I did not find this case…

Thanks in advance!

You can set up a standard RAID1 array over two disks, then add a 3rd disk and set up an incremental backup to that drive instead:
https://btrfs.wiki.kernel.org/index.php/Incremental_Backup
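A minimal sketch of that approach. All device names and mount points here are placeholders of my own (the thread does not name any), and the send/receive workflow follows the linked wiki page:

```shell
# Create a two-device btrfs RAID1 pool (data and metadata mirrored).
mkfs.btrfs -L pool -m raid1 -d raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# Prepare the third disk as an independent backup filesystem.
mkfs.btrfs -L backup /dev/sdd
mount /dev/sdd /mnt/backup

# Initial full backup: send a read-only snapshot to the backup disk.
btrfs subvolume snapshot -r /mnt/pool /mnt/pool/snap-1
btrfs send /mnt/pool/snap-1 | btrfs receive /mnt/backup

# Later, incremental backup: send only the changes since snap-1.
btrfs subvolume snapshot -r /mnt/pool /mnt/pool/snap-2
btrfs send -p /mnt/pool/snap-1 /mnt/pool/snap-2 | btrfs receive /mnt/backup
```

The `-p` (parent) option is what makes the second send incremental: only the delta between the two snapshots crosses to the backup disk.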

Or, add another drive and use RAID6

Thanks for your reply.

What does Rockstor do with a RAID1 over 3 disks?
My picture was that in a RAID1 all data is mirrored on all disks. Am I wrong about this?

Why, in this case, is the free space in Rockstor approximately 3/2 of one disk size? Is a different RAID level used instead?

A separate backup job means unnecessary reads on the source…

RAID6 is not an option for me, because in case of a failure I want to be able to extract data from one single disk.

Ok, now I found the keyword: “N-Way”

According to the btrfs wiki ( https://btrfs.wiki.kernel.org/index.php/UseCases ):
NOTE This does not do the ‘usual thing’ for 3 or more drives. Until “N-Way” (traditional) RAID-1 is implemented: Loss of more than one drive might crash the array. For now, RAID-1 means ‘one copy of what’s important exists on two of the drives in the array no matter how many drives there may be in it’.

… so Btrfs does not support my "N-Way" use case … up to now.
I hope it will in the future!

Hi @TB-UB,

The RAID-1 implementation in BTRFS is essentially RAID-1E, where there are 2 copies of everything spread across all the members in the array/pool.
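To illustrate the "1.5 disks" observation (my own sketch, not from the thread): with exactly 2 copies of every chunk, the usable space of a btrfs raid1 pool is roughly min(total / 2, total − largest device). A hypothetical helper, sizes in GiB:

```shell
#!/bin/sh
# Approximate usable space of a btrfs raid1 (2-copy) pool, sizes in GiB.
# Every chunk exists on exactly 2 devices, so usable space is capped both
# by half the total and by what the other devices can pair with the largest:
#   usable = min(total / 2, total - largest_device)
raid1_usable() {
    total=0
    largest=0
    for size in "$@"; do
        total=$((total + size))
        if [ "$size" -gt "$largest" ]; then
            largest=$size
        fi
    done
    half=$((total / 2))
    rest=$((total - largest))
    if [ "$half" -lt "$rest" ]; then
        echo "$half"
    else
        echo "$rest"
    fi
}

raid1_usable 100 100        # two equal disks  -> 100 (one disk's worth)
raid1_usable 100 100 100    # three equal disks -> 150 (the "1.5 disks" above)
```

With three equal disks the binding constraint is total / 2, which is exactly the 1.5-disk free space reported in the web GUI.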


@TB-UB Welcome to the Rockstor community.

That's about it, yes. N-way btrfs raid1 is, as quoted, currently only 2 copies: one on each of 2 independent devices within a pool.

There are plans, and some proposed code, in the btrfs developer community to do > 2-copy N-way RAID1, but they have yet to be fully reviewed and merged. So in time I fully expect this to come to fruition, but it is not likely to be available any time soon. Once it emerges, I expect Rockstor to effectively 'surface' it soon thereafter.

Thanks for sharing your findings thus far.

@TB-UB and @vesper1978

The latest proposed code for > 2-way RAID1 from David Sterba (of SUSE, and the btrfs-progs maintainer) has just surfaced again on the btrfs mailing list in the following thread:

https://lore.kernel.org/linux-btrfs/20190611095314.GC24160@twin.jikos.cz/T/#t
or via another reader:
https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg88317.html

Linking here for context.

There are 2 statements from D. Sterba:

btrfs: add support for 3-copy replication (raid1c3)
Add new block group profile to store 3 copies in a similar way that
current RAID1 does. The profile attributes and constraints are defined
in the raid table and used by the same code that already handles the
2-copy RAID1.
The minimum number of devices is 3, the maximum number of devices/chunks
that can be lost/damaged is 2. Like RAID6 but with 33% space
utilization.
Signed-off-by: David Sterba
( https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=47e6f7423b9196ad6832d26cae52b7015f81ee7f )

btrfs: add support for 4-copy replication (raid1c4)
Add new block group profile to store 4 copies in a similar way that
current RAID1 does. The profile attributes and constraints are defined
in the raid table and used by the same code that already handles the 2-
and 3-copy RAID1.
The minimum number of devices is 4, the maximum number of devices/chunks
that can be lost/damaged is 3. There is no comparable traditional RAID
level, the profile is added for future needs to accompany triple-parity
and beyond.
Signed-off-by: David Sterba
( https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8d6fac0087e538173f34ca7431ed9b58581acf28 )
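Assuming these profiles ship as proposed (they are present in the linked mainline commits; support arrived around Linux 5.5 and btrfs-progs 5.5), terminal usage should mirror the existing profiles. A sketch with hypothetical device names:

```shell
# Create a pool keeping 3 copies of both data and metadata
# (requires a kernel and btrfs-progs with raid1c3 support).
mkfs.btrfs -m raid1c3 -d raid1c3 /dev/sdb /dev/sdc /dev/sdd

# Or convert an existing, mounted raid1 pool in place:
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/pool
```

With raid1c3 on 3 disks, every chunk lives on all 3 devices, so 2 of them can fail — the original "fully mirrored" use case from the top of this thread.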

Q: Will raid1c3 and/or raid1c4 be supported by Rockstor? When ;o) ?

@TB-UB Re:

These are still very early additions upstream, but it looks like their basic function is now starting to round out, as we have the recently updated GitHub issue from December by kdave:

https://github.com/kdave/btrfs-progs/issues/221

So hopefully, as this progresses and becomes merged and more widely tested, we will in time see it in the openSUSE btrfs back-ports that we enjoy in our 'Built on openSUSE' variants. Thereafter we can look at incorporating it into our interface and back-end functions.

Just a heads up on the progress upstream on this one.
