RAID1 to RAID5 pool issues

Hi folks,

Hopefully someone can shed some light on an issue I’m experiencing after adding 2 more disks to my server, expanding the pool, and changing the raid level from raid1 to raid5.

So far I’ve completed two balances, but I’m still ending up with no new space, and the new disks (sdd and sde) are showing 100% allocated after being added and balanced onto raid5.

The server consists of 4x8TB disks, and the storage pool looks like the below:
[screenshot: Rockstor pool view showing the four disks and their allocation]
As you can see, it’s showing 100% allocated on the new disks.

I’ve run a few commands on the box to get a bit more info, though I’m very unskilled with btrfs:

[root@storage ~]# btrfs fi usage /mnt2/storage1/
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Overall:
    Device size:         14.55TiB
    Device allocated:       0.00B
    Device unallocated:  14.55TiB
    Device missing:         0.00B
    Used:                   0.00B
    Free (estimated):       0.00B  (min: 8.00EiB)
    Data ratio:              0.00
    Metadata ratio:          0.00
    Global reserve:     512.00MiB  (used: 288.00KiB)

Data,RAID5: Size:6.67TiB, Used:6.67TiB
/dev/sdb 6.66TiB
/dev/sdc 6.66TiB
/dev/sdd 7.27GiB
/dev/sde 7.27GiB

Metadata,RAID5: Size:8.00GiB, Used:7.00GiB
/dev/sdb 8.00GiB
/dev/sdc 8.00GiB

System,RAID5: Size:32.00MiB, Used:960.00KiB
/dev/sdb 32.00MiB
/dev/sdc 32.00MiB

Unallocated:
/dev/sdb 615.18GiB
/dev/sdc 615.18GiB
/dev/sdd 1.04MiB
/dev/sde 1.04MiB

[root@storage ~]# btrfs device usage /mnt2/storage1/
/dev/sdb, ID: 1
Device size: 7.27TiB
Device slack: 0.00B
Data,RAID5: 6.65TiB
Data,RAID5: 7.27GiB
Metadata,RAID5: 8.00GiB
System,RAID5: 32.00MiB
Unallocated: 615.18GiB

/dev/sdc, ID: 2
Device size: 7.27TiB
Device slack: 0.00B
Data,RAID5: 6.65TiB
Data,RAID5: 7.27GiB
Metadata,RAID5: 8.00GiB
System,RAID5: 32.00MiB
Unallocated: 615.18GiB

/dev/sdd, ID: 3
Device size: 7.27GiB
Device slack: 0.00B
Data,RAID5: 7.27GiB
Unallocated: 1.04MiB

/dev/sde, ID: 4
Device size: 7.27GiB
Device slack: 0.00B
Data,RAID5: 7.27GiB
Unallocated: 1.04MiB

[root@storage ~]# btrfs fi show
Label: 'rockstor_rockstor' uuid: 4dcbf3a9-b916-42e7-b142-9efc22dfb685
Total devices 1 FS bytes used 2.28GiB
devid 1 size 17.51GiB used 14.04GiB path /dev/sda3

Label: 'storage1' uuid: 0721fd63-fc56-442e-b1d5-dd3128978845
Total devices 4 FS bytes used 6.67TiB
devid 1 size 7.27TiB used 6.67TiB path /dev/sdb
devid 2 size 7.27TiB used 6.67TiB path /dev/sdc
devid 3 size 7.27GiB used 7.27GiB path /dev/sdd
devid 4 size 7.27GiB used 7.27GiB path /dev/sde

Sorry for the lack of information here, I’m a little out of my depth - hopefully someone can shed some light into what’s going on - happy to supply any other information I can!

Thanks in advance,
Shaun

@sfnz Hello again

We have already spoken a little on an earlier state of this post’s topic (per disk allocation) via support email. I think at the time you had 2 disks and had added 2 more but the extra space was not showing up.

As you can see from your

btrfs fi usage /mnt2/storage1/

command, btrfs is warning you that some ‘usage’ info is just not working as intended on the parity btrfs raid levels, 5 & 6.
Hence my suggestion in the email thread, and the tooltip during pool creation, that raid5/6 is not an appropriate choice for production (read: data you care about).

You will get better results with either raid1 or raid10.

Now as to the figures, and to what you can expect, take a look at the following:

https://carfax.org.uk/btrfs-usage/
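That calculator essentially simulates btrfs chunk allocation. As a rough sketch of the idea (my own illustration in Python, not the calculator’s actual code, and the device sizes are invented): raid5 allocates chunks in stripes across every device that still has unallocated space, with one device’s worth of each stripe going to parity.

```python
# Toy model of raid5 usable space under btrfs-style chunk allocation.
# This is an illustration of the concept only, not real btrfs code.

def raid5_usable(free_gib):
    """Greedy stripe allocation; free_gib is unallocated GiB per device."""
    free = list(free_gib)
    usable = 0
    while True:
        free.sort(reverse=True)
        width = sum(1 for f in free if f >= 1)  # devices that can join a stripe
        if width < 2:  # a raid5 stripe needs at least 2 devices
            break
        for i in range(width):
            free[i] -= 1  # one 1 GiB chunk per participating device
        usable += width - 1  # all but one chunk per stripe hold data
    return usable

# Four equal 8 GiB devices: 3/4 of the raw space is usable.
print(raid5_usable([8, 8, 8, 8]))  # 24
# Two big devices plus two nearly-full ones: the small pair adds almost nothing.
print(raid5_usable([615, 615, 1, 1]))  # 617
```

Note how the mixed pool gains almost nothing from the two near-empty devices: once they fill, the stripe width collapses to 2 and every further GiB of data costs 2 GiB of raw space.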

Rockstor uses the ‘btrfs device usage mount-point’ command to get the info for the bottom table, ie the per-disk allocation figures. Note here though that allocation is not usage. If you look up the btrfs allocation system: it basically does raid at the chunk level and then more or less fills those chunks with data. You could have a completely allocated drive made up of many, many partially occupied chunks. Usually a balance will collect up the free space and drop chunks that are then no longer needed. But again we come back to the parity raids. That “btrfs fi usage” command gives the following output:
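To illustrate the allocated-versus-used distinction, here is a toy model (invented numbers, not real btrfs internals): space is handed out in fixed-size chunks, and each chunk may be only partially filled until a balance repacks them.

```python
# Toy illustration of why a device can show 100% allocated while
# holding far less actual data. Not real btrfs code.

CHUNK_GIB = 1  # btrfs data chunks are typically 1 GiB

def allocated_vs_used(fill_fractions):
    """fill_fractions: how full each allocated chunk is (0.0 to 1.0)."""
    allocated = len(fill_fractions) * CHUNK_GIB
    used = sum(f * CHUNK_GIB for f in fill_fractions)
    return allocated, used

# 100 chunks, each only half full: the device reports 100 GiB
# allocated even though only 50 GiB of data is actually stored.
print(allocated_vs_used([0.5] * 100))  # (100, 50.0)
```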

WARNING: RAID56 detected, not implemented WARNING: RAID56 detected, not implemented WARNING: RAID56 detected, not implemented

then goes on to say the whole device size is 14 TB, which is of course untrue; Rockstor shows 7 TB. So this is all out of whack. I would suggest that you not use the parity raids, report your experience with the raid1 / raid10 variants, and we can take it from there. While the parity btrfs side is still outputting inconsistent info, such as with your 2 commands, we are in a difficult position to ascertain exactly what is happening. Note also that btrfs raid1 will only ever make 2 copies of the data and metadata, irrespective of the number of disks; it does raid per chunk, so it just makes sure that the 2 copies of each raid1 chunk end up on 2 different devices.
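That 2-copies-irrespective-of-disk-count behaviour can be sketched like this (again my own toy model, with invented device sizes): each chunk’s two copies go to the two devices with the most unallocated space, no matter how many devices the pool has.

```python
# Toy model of btrfs raid1 chunk placement. Not real btrfs code.

def raid1_usable(free_gib):
    """Greedily pair up 1 GiB chunks; return usable data capacity in GiB."""
    free = list(free_gib)
    usable = 0
    while True:
        free.sort(reverse=True)
        if len(free) < 2 or free[1] < 1:  # need 2 devices with space left
            break
        free[0] -= 1  # first copy of the chunk
        free[1] -= 1  # second copy, always on a different device
        usable += 1   # 2 GiB of raw space buys 1 GiB of data
    return usable

# Four 8 GiB devices: still only half the raw space, same ratio as 2 devices.
print(raid1_usable([8, 8, 8, 8]))  # 16
```

This also shows why one very large device in a raid1 pool of small ones is mostly wasted: once the small devices fill, there is no second device left to hold the other copy.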

We are undergoing a move to a ‘Built on openSUSE’ offering where many of the issues associated with our now rather old kernel and btrfs-progs are already sorted, so we hope in future to offer better reporting as a consequence of this move. But for the time being I’d not use the parity raids. They are far younger within btrfs than the raid1 / raid10 variants, and it particularly shows with the older in-kernel btrfs and tools in our current CentOS offering. But as stated, we are far along the path to being able to do better here.

Hope that answers your question, at least in part. But the key here is that Rockstor is likely slightly confused by the “WARNING: not implemented” output and the inconsistent reporting at the command line. But our focus is now more on moving to our openSUSE variant where, due to upstream btrfs maintenance and backports, we have a better set of legs to stand on.

See the following post by @Flox re openSUSE btrfs back-ports:


Thanks Phil - will change the raid on the pool to raid10 and report back.

SF

@sfnz Hope it goes well.

Note the conversion balance can be much faster with quotas disabled. But don’t do this if you are running the currently older CentOS based testing channel.

The latest Stable channel can handle quotas disabled, however.

Best to post here the output of:

yum info rockstor

so folks can see the particular version of rockstor you are working with.

Cheers.