BTRFS raid 6 won't remount after single drive failure

Hi All, Hopefully I can get some help.

A few weeks ago my Rockstor appliance failed overnight. I’ve been using it as a backup device, as well as for sharing files around our studio.

As far as I can figure out, there was a single drive failure in the six-drive RAID 6 array. There was also a (possibly unrelated) memory failure in the box.

The drive failure took the whole system offline, which seems unusual for a RAID 6; it should operate fine in a degraded state. Replacing the failed drive did nothing.

I then discovered the memory failure. I think the failed memory corrupted the OS/boot drive (a USB stick).

I rectified the memory issue and confirmed it with a 24-hour memtest, then created a new boot USB, but it still failed to remount the pool.

I attempted the recovery advice in the Rockstor manual. The boot drive became corrupted again.

I then purchased a new motherboard, CPU and ECC memory and tried again. The system boots from the new drive, but I'm still unable to remount the storage drives.

I re-connected the hard drives and got this when I tried to remount the pool using the web interface. Studio-Storage is the main drive pool.


    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 353, in _btrfs_disk_import
        mount_root(po)
      File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 142, in mount_root
        run_command(mnt_cmd)
      File "/opt/rockstor/src/rockstor/system/osi.py", line 98, in run_command
        raise CommandException(cmd, out, err, rc)
    CommandException: Error running a command. cmd = ['/bin/mount', '/dev/disk/by-label/Studio-Storage', '/mnt2/Studio-Storage']. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdf,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']


I also had the following show up on the appliance boot screen

    [ 22.666886] usbhid 1-1.3.1:.0: can't add hid device: -110
    [ 114.119442] BTRFS: failed to read the system array on sda
    [ 114.132233] BTRFS: open_ctree failed
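For reference, I believe those messages can be pulled back up after boot with something like this (just a sketch, nothing Rockstor-specific assumed):

    # Re-read the kernel log for btrfs-related messages after boot
    dmesg | grep -i btrfs
    journalctl -k | grep -i btrfs   # same idea via the systemd journal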

btrfs fi show brings up the following:

    Label: 'rockstor_rockstor'  uuid: 81e92640-151f-4855-a75d-11b5da754478
        Total devices 1 FS bytes used 1.45GiB
        devid    1 size 12.50GiB used 4.04GiB path /dev/sde3

    warning, device 1 is missing
    checksum verify failed on 12642209480704 found 22541675 wanted C6B7ABC3
    checksum verify failed on 12642209480704 found 2BFFA285 wanted 6FDAC69D
    checksum verify failed on 12642209480704 found 2BFFA285 wanted 6FDAC69D
    bytenr mismatch, want 12642209480704, have 116272015881216
    Couldn't read chunk tree
    Label: 'Studio-Storage'  uuid: 07c20ccd-ace8-4c9a-802e-dd582c6e91df
        Total devices 6 FS bytes used 3.99TiB
        devid    2 size 2.73TiB used 1.01TiB path /dev/sdc
        devid    3 size 2.73TiB used 1.01TiB path /dev/sdb
        devid    4 size 2.73TiB used 1.01TiB path /dev/sda
        devid    5 size 2.73TiB used 1.01TiB path /dev/sdg
        devid    6 size 2.73TiB used 1.01TiB path /dev/sdf
        *** Some devices missing

The last email I received from customer support, on 31 August, said that the drive names seem to have changed on reboot and interfered with the system pool. This was apparently a bug that has since been fixed.
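From what I've gathered, devices can be identified by their stable labels/UUIDs instead of the /dev/sdX names that shuffle around. A rough sketch (the label matches my pool; output will obviously differ per system):

    # List devices by filesystem label/UUID rather than by sdX name,
    # since sdX assignments can change between boots
    blkid
    ls -l /dev/disk/by-label/ /dev/disk/by-uuid/

    # e.g. a degraded mount by label instead of by device node
    mount -o degraded /dev/disk/by-label/Studio-Storage /mnt2/Studio-Storage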

Anyway, I’m pretty new to Linux, and to open source in general and would appreciate any help that I can get.

There are only a couple of things there that aren’t backed up elsewhere, but I’d like to know that this kind of thing can be recovered. There’s not much point having the newest, fanciest file system on the block if something like this loses all your data.

Edit: I should note that I’m running Rockstor 3.8-14.11 and have a stable updates subscription.

It looks like I have the pool mounted, first by following the documentation with

    mount -o degraded /dev/sda /mnt2/mypool

and then clicking the remount pool button in the web UI.

It isn't showing any data yet, but it's a start.
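For what it's worth, the pool's own view of its usage can be checked from the console; a rough sketch against the mount point above:

    # Ask btrfs how much data/metadata it thinks the pool holds
    btrfs filesystem df /mnt2/mypool
    btrfs filesystem usage /mnt2/mypool   # newer btrfs-progs; adds per-device detail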

Update two.

I’m now on 3.8-14-13 and managed to get the pool remounted and the new drive added to the pool.
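From what I can tell, the console-level equivalent of adding the replacement drive is roughly the sketch below (the device name is a placeholder, and on a degraded RAID 5/6 pool it may not complete cleanly):

    # Sketch: add a fresh disk to the degraded pool, then drop the record
    # of the dead one. /dev/sdh is a placeholder for the new drive.
    btrfs device add /dev/sdh /mnt2/Studio-Storage
    btrfs device delete missing /mnt2/Studio-Storage   # relocates the missing disk's data onto the rest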

It's showing my shares, but reporting that the pool isn't using any space.

The rebalance failed due to: Read-only file system.

It also threw a bunch of BTRFS errors during the rebalance, prior to the failure.

    BTRFS error (device sdf): bad tree block start 5611155484282765481 12644410621952   (happened 4 times)
    BTRFS error (device sdf): Error -5 accounting shared subtree. Quota is out of sync, rescan required.   (happened twice)
    BTRFS error (device sdf): bad tree block start 0 12644310335488   (happened 4 times)
    BTRFS: error (device sdf) in cleanup_transaction:1771: errno=-5 IO failure
    BTRFS: error (device sdf) in merge_reloc_roots:2421: errno=-5 IO failure
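From what I've read, when btrfs hits errors like these it usually forces the filesystem read-only to protect itself, which would explain the balance failing. The kernel log and the per-device error counters show what tripped it; a quick sketch:

    # Full context around the errors above
    dmesg | grep -i btrfs | tail -n 50
    # Per-device error counters (read/write/flush/corruption/generation)
    btrfs device stats /mnt2/Studio-Storage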

// sarcasm on //

Hmmmm, btrfs raid 5&6 eats some data? This goes out to all those "we need more raid 5&6 support" requests!

// sarcasm off //

Anyway, a VERY IMPORTANT rule of thumb with btrfs is:

"If it exploded: 1) do not mount it RW; 2) do not try any form of fs repair that modifies the file system."

So the procedure is to use btrfs rescue/restore to get your data onto separate storage. Some of your data may end up corrupted. Then, and ONLY then, attempt an fs repair; btrfs degraded mode for RAID 5/6 is not guaranteed to work.
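As a rough illustration of that salvage step (my own sketch, not exact commands: device names and the /mnt/rescue destination are placeholders, and the destination must be separate, known-good storage):

    # Sketch only: salvage data without writing to the damaged pool.

    # 1) Try a read-only, degraded mount first ('usebackuproot' on newer
    #    kernels, 'recovery' on older ones) and copy files off normally:
    mount -o ro,degraded,usebackuproot /dev/sdf /mnt2/Studio-Storage

    # 2) If it won't mount at all, btrfs restore can pull files straight
    #    off the unmounted devices onto separate storage:
    btrfs restore -D /dev/sdf /mnt/rescue   # -D = dry run, list what would be restored
    btrfs restore -v /dev/sdf /mnt/rescue   # actually copy the data out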

Personally, after rescuing my data, I would scrap this RAID 5/6 pool and build one with raid1 (or raid10), because those can actually survive a disk death very gracefully.

Thanks, Tomasz

I think I’ve now corrupted another disk or two while trying to follow recovery tutorials whilst realistically having no idea what I’m doing.

Surely raid 5/6 can’t be that bad that a single drive failure destroys the whole pool?

Anyway, most of the data was backed up elsewhere so I guess I’ll just give up and make a fresh pool.

I’d prefer to use raid 6 as the overhead for raid 1/10 gets fairly high as I add drives and expand the pool in the future.

It's a damn shame, as I love the theory of the expansion feature that lets me add a drive at a time as my storage needs grow, rather than FreeNAS and ZFS, which need you to either double the pool or make a whole new pool to expand.
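(As I understand it, that one-drive-at-a-time expansion is just a device add followed by a balance; a rough sketch with placeholder names:)

    # Sketch: grow an existing btrfs pool by one disk, then rebalance so
    # existing data gets spread across all members. Names are placeholders,
    # and a full balance can take a long time.
    btrfs device add /dev/sdX /mnt2/mypool
    btrfs balance start /mnt2/mypool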

HA! If you're unlucky, a single sector failure will destroy your raid 5 & 6 on btrfs :)))))))

Hey, if raid 5/6 gets fixed you'll get a free space boost :smiley: (yes, I know the raid5/6 situation sucks).

Hey, to be honest I've used raid 5/6 in the past, and yes, there is a space benefit, but I usually got poor performance … and the seek performance sucked so hard it was just unusable. Of course, for home storage, fair enough; it's a cold storage scenario …

Yeah, it’s cold storage / archive / backup, not a production server so speed isn’t critical, but space is.

As far as I could work out, I had file table errors on at least three or four of the disks, plus other weirdness, so I gave up on recovery, scrapped the pool, wiped the disks and made a new raid 1 pool instead.
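For reference, the console-level version of that wipe-and-rebuild is roughly the sketch below. It's illustrative only: the device names are placeholders, everything on the listed disks is destroyed, and they need to be double-checked against the OS disk first.

    # Sketch only: wipe old btrfs signatures and create a fresh raid1 pool.
    # WARNING: destroys everything on the listed disks (placeholder names).
    wipefs -a /dev/sdX /dev/sdY /dev/sdZ
    mkfs.btrfs -f -L Studio-Storage -d raid1 -m raid1 /dev/sdX /dev/sdY /dev/sdZ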

I’m not thrilled about the sacrifice in space, but until raid 6 is working properly I guess it’ll have to do.

Hey, I got hit by a problem around the late 3.x-ish kernels that caused over 100 large files on my FS to get truncated to 4 kB :slight_smile: and that's with raid10, just for laughs :))))

Enjoy the seek performance that comes with raid10!

Edit to my previous post: I meant RAID 1, not RAID 10. I'm a bit gun-shy at the moment, so I wanted to dumb things down as much as possible and make life as easy as possible in the event of future failures.

So no striping, just boring, slow old Raid 1.

Don't beat yourself up about it … you can still get a significant speed-up in certain scenarios. Just pick a very large file on your FS and from the console do

    dd if=/path/to/your/file of=/dev/null bs=10M

and see what results you get.
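One caveat worth adding: if the file has been read recently, Linux may serve it from the page cache and the figure reflects RAM rather than the disks. Dropping the cache first (as root) gives a more honest number; a quick sketch:

    # Flush dirty data and drop the page cache so dd measures the disks, not RAM
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/path/to/your/file of=/dev/null bs=10M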

Cheers. I’ll give it a go.

The transfers I've been doing have been maxing out our gigabit network, so I think the speed will do, given it's a cold storage/backup appliance.

How about SFP+? It's a perfect way of injecting a lot of bandwidth into your network … or even machine to machine …