Rockstor Fails to Import File System from Previous Installation

I have a 3 disk RAID that I created a few days ago with a test install of Rockstor. I decided that the USB was too slow, so I reinstalled to a SATA HDD.

After getting logged in on the fresh install, I tried to import my 3 Disk Array using the GUI. I have sda, sdb, and sdc listed as having BTRFS data on them. When I try to import any of the three disks, I get the same error:

Failed to import any pool on this device (sdb). Error: Error running a
command. cmd = ['/bin/mount', '/dev/disk/by-label/data', '/mnt2/data'].
rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad
superblock on /dev/sdc,', ' missing codepage or helper program,
or other error', '', ' In some cases useful info is found in
syslog - try', ' dmesg | tail or so.', '']

This may be helpful background: I started with four SATA disks, pulled one out and re-balanced to three. Everything was working fine before the reinstall, but now the filesystem is evidently unreadable.

If all of this data is lost, that’s not the end of the world because I still have it backed up elsewhere. I’d like to recover it and move forward, though. It would be great to be able to use RS on my main NAS.

Anyone have thoughts on this?

Hey @Learning2NAS, what’s the output of btrfs fi show?

And also, as in the error, there may be useful info in dmesg, perhaps towards the end.

The solution could be as simple as running another device scan (btrfs device scan), though that normally happens once during bootstrap. Any errors from the bootstrap service? systemctl status rockstor-bootstrap -l

Updates:

(1) If I watch the Rockstor unit’s monitor when I try to mount my previous array, the error reads:
"[104.851774] BTRFS: failed to read chunk tree on sda
[104.863088] BTRFS: open_ctree failed"

(2) btrfs fi show results in the following (excluding the system disk):

warning devid 3 not found already
Label: 'data'  uuid: BIGLONGNUMBERHERE
	Total devices 4 FS bytes used 294.14GB
	devid 1 size 465GB used 148GB path /dev/sda
	devid 2 size 465GB used 148GB path /dev/sdb
	devid 4 size 465GB used 148GB path /dev/sdc
	*** Some devices missing

(3) The end of dmesg shows the same error I reported in (1) above.

(4) No errors during bootstrapping, other than a few waits while the machine was booting. They resolved automatically once the rockstor service was online.
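The missing-device condition is visible right in that output: the header claims 4 devices, but only three devid lines are listed (devid 3 is gone). As a quick illustration, here is one way to cross-check a pasted fi show dump with standard shell tools (sample text reproduced from the post above; the uuid was elided there and is kept as-is):

```shell
# Sample 'btrfs fi show' output from this thread (uuid elided in the post).
fi_show="Label: 'data'  uuid: BIGLONGNUMBERHERE
	Total devices 4 FS bytes used 294.14GB
	devid 1 size 465GB used 148GB path /dev/sda
	devid 2 size 465GB used 148GB path /dev/sdb
	devid 4 size 465GB used 148GB path /dev/sdc
	*** Some devices missing"

# Header count vs. actual devid lines ('devid [0-9]' avoids matching
# the word 'devices' in the header line).
total=$(printf '%s\n' "$fi_show" | awk '/Total devices/ {print $3}')
present=$(printf '%s\n' "$fi_show" | grep -c 'devid [0-9]')
echo "pool expects $total devices, $present present"   # 4 vs 3 -> one missing
```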

This appears to be similar to an issue I had (and am still having) with one of my RAID5 arrays…

This was my error message from Rockstor:
"Error running a command. cmd = ['/bin/mount', u'/dev/disk/by-label/mainNAS', u'/mnt2/mainNAS', '-o', u',compress=no']. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdh,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']

I worked with the BTRFS guys and there will be a patch for my issue in the next release - https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg48547.html

Confirm your chunk tree is broken; if it is, give btrfs rescue chunk-recover a try.
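To confirm it, look for the telltale pair of messages in the kernel log. A minimal sketch, using the exact log line quoted earlier in this thread as sample input (the actual recovery command is left commented out because it rewrites metadata and should only be run against an unmounted pool, ideally after imaging the disks):

```shell
# A damaged chunk tree shows up in dmesg as 'failed to read chunk tree'
# followed by 'open_ctree failed'. Sample line reproduced from this thread:
kmsg='BTRFS: failed to read chunk tree on sda'

case "$kmsg" in
  *'failed to read chunk tree'*)
    verdict="chunk tree damaged" ;;
  *)
    verdict="no chunk tree error seen" ;;
esac
echo "$verdict"

# If confirmed, with the pool unmounted (destructive; image the disks
# first if the data matters):
#   btrfs rescue chunk-recover /dev/sdb
```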

Hey @ScottyEdmonds,

That error looks exactly the same. How do I confirm my chunk tree is broken? If you can point me toward a command I’ll give it a run and see what the result is.

Thanks

You know what stinks, too? I selected Mirror instead of RAID 5 because I wanted a stable platform. This shakes my confidence in BTRFS a bit.

@Learning2NAS I was a little skeptical about BTRFS as well after this, but I did extensive testing prior to committing that seemed to go very well, and I also have two other shares that have been running with no issues (one RAID5 and one RAID0). The most critical thing is that the support the BTRFS community has given me has been INCREDIBLE, and regardless of whether I get the data back I’ll continue to use BTRFS with Rockstor. Fortunately I back up the data that’s important, and I only lost tv/movies - mind you, it’s almost 7TB worth :frowning:

Here is the link to the long thread of discussion between myself and the BTRFS guys, you should be able to follow through the steps - http://www.spinics.net/lists/linux-btrfs/msg49033.html

What does btrfs check /dev/sdc and btrfs check /dev/sdb give you?
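For reference, btrfs check without --repair is read-only, so running it on the remaining members is safe. A guarded loop over the two devices named above (the guards are just so the sketch degrades gracefully on a box where a device or btrfs-progs is absent):

```shell
# Read-only consistency check on each remaining pool member.
# Device names are the ones from this thread.
for dev in /dev/sdb /dev/sdc; do
    echo "=== $dev ==="
    if [ -b "$dev" ] && command -v btrfs >/dev/null 2>&1; then
        btrfs check "$dev" || echo "check reported problems on $dev"
    else
        echo "skipped $dev (device or btrfs-progs not available)"
    fi
done
```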

I’m excited about the holidays because I’ll hopefully have time to get back to troubleshooting this stalled project.

Question: Now that the bootstrap fix for Rockstor has been released and btrfs has been updated to patch our issue, will updating to the latest release of Rockstor fix my broken array? Shouldn’t I be able to mount my array in an updated Rockstor install and have the issue fix itself?

We’re waiting on the btrfs-progs update, which has not been released yet.

I’ve started testing btrfs-progs 4.3.1, it’s available for Testing folks as of earlier this week.

Is there a particular update you are referring to @ScottyEdmonds ?

Yes @suman, it’ll be the btrfs-progs 4.3.2 patch.