Error mounting BTRFS on reinstall after crash

Hello
I am not experienced with mounting/unmounting filesystems on Linux, so forgive me if this is a basic question.

TLDR:
System crashed, reinstalled on a new SSD, tried to mount the original pool, got an error message. What should I do?

Whole story:
Something went wrong with my Rockstor box; I have no indication as to what caused it. The system was frozen one morning, and upon attempting a reboot it went into a kernel panic.

As it was running on a fairly old SSD and I had a newer one lying around, I replaced the SSD containing the system disk and reinstalled the system.

The documentation describes that it is possible to import the pool after a system reinstall (this is one of the reasons I felt that Rockstor was the system to use), so I reinstalled without thinking too much about it.

Upon importing the pool, I get the following error:

Failed to import any pool on this device(5). Error: Error running a command. cmd = /bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']

Is it possible to rescue this pool and the data on it?
If so, how do I proceed?

Hi @jopaulsen, and welcome!

Sorry you're having issues… I'm not an expert in this kind of recovery, but before someone more knowledgeable answers, have you tried looking into the dmesg logs?
You could try the dmesg | tail command suggested in the error message, for instance.
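If the tail of dmesg turns out to be mostly sound card / ACPI noise, filtering for filesystem-related messages may be more telling; nothing Rockstor-specific here, just plain grep (the pattern is only an example):

dmesg | grep -iE 'btrfs|sd[a-h]'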

Alternatively, you can use Rockstor's web-UI to check this log and others, using the Logs Manager found under Services > Logs Manager.


There, I suggest looking into dmesg and the Rockstor logs as well. You can either view them separately within the UI, or download them as an archive:

Hopefully these will provide helpful information. I know rescue operations can do more harm than good if done incorrectly, so pinning down exactly what the problem is likely represents the best first step.

Sorry I can't help further than this so far.

Thank you Flox for the quick answer.
I had a look at the logs as suggested; the last few lines are posted below.
As far as I understand, I do not see anything useful here, but as I said, I have limited understanding of these systems, so others might see more.
And as you stated, I know one can do more harm than good just trying stuff in situations like this. Any help is appreciated.
However, this is not a production system, so there is nothing critical on here, but it does hold some backup files (remote backups from another site; the on-site backups still exist) and private stuff, so I would rather save it than build the system up again. It would especially suck to lose the private files.

Rockstor logs:

CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', '       missing codepage or helper program, or other error', '', '       In some cases useful info is found in syslog - try', '       dmesg | tail or so.', '']
[13/Jul/2019 14:41:39] ERROR [storageadmin.util:44] exception: Failed to import any pool on this device(8). Error: Error running a command. cmd = /bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', '       missing codepage or helper program, or other error', '', '       In some cases useful info is found in syslog - try', '       dmesg | tail or so.', '']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
    mount_root(po)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
    run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', '       missing codepage or helper program, or other error', '', '       In some cases useful info is found in syslog - try', '       dmesg | tail or so.', '']

dmesg:

[    5.205232] input: HDA Intel Line Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11
[    5.205282] input: HDA Intel Line Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12
[    5.205334] input: HDA Intel Line Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13
[    5.205382] input: HDA Intel Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input14
[    5.205433] input: HDA Intel HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1b.0/sound/card0/input15
[    5.259851] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x0000000000000800-0x000000000000084F (\PMRG) (20160930/utaddress-247)
[    5.259857] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[    5.259859] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053F conflicts with OpRegion 0x0000000000000500-0x000000000000053F (\GPS0) (20160930/utaddress-247)
[    5.259862] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[    5.259862] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052F conflicts with OpRegion 0x0000000000000500-0x000000000000053F (\GPS0) (20160930/utaddress-247)
[    5.259865] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[    5.259865] lpc_ich: Resource conflict(s) found affecting gpio_ich
[    5.264751] ACPI Warning: SystemIO range 0x0000000000000400-0x000000000000041F conflicts with OpRegion 0x0000000000000400-0x000000000000040F (\SMRG) (20160930/utaddress-247)
[    5.264756] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[    5.300839] sd 2:0:0:0: Attached scsi generic sg0 type 0
[    5.300922] sd 2:0:1:0: Attached scsi generic sg1 type 0
[    5.300955] sd 4:0:0:0: Attached scsi generic sg2 type 0
[    5.300990] sd 4:0:1:0: Attached scsi generic sg3 type 0
[    5.301454] sd 3:0:0:0: Attached scsi generic sg4 type 0
[    5.301490] sd 5:0:0:0: Attached scsi generic sg5 type 0
[    5.301520] sd 8:0:0:0: Attached scsi generic sg6 type 0
[    5.301933] sd 9:0:0:0: Attached scsi generic sg7 type 0
[    5.407496] input: PC Speaker as /devices/platform/pcspkr/input/input16
[    5.483730] intel_powerclamp: No package C-state available
[    5.602121] ppdev: user-space parallel port driver
[    5.639484] Adding 4064252k swap on /dev/sdh2.  Priority:-1 extents:1 across:4064252k SSFS
[    5.663514] iTCO_vendor_support: vendor-support=0
[    5.665138] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[    5.665183] iTCO_wdt: Found a ICH10 TCO device (Version=2, TCOBASE=0x0860)
[    5.665275] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[    5.864173] EXT4-fs (sdh1): mounted filesystem with ordered data mode. Opts: (null)

I can't remember for certain, but I believe I had this exact message recently in the same situation. In my case the drive was perfectly fine; I didn't know that at the time. My old system disk was still operational, so I reconnected that drive pool to the old system boot disk, and then shut that system down. I swapped in the new system disk and it worked. My takeaway was that maybe the two systems had different btrfs software versions; maybe my old system was more up to date and a reboot fixed my new boot drive. However, my assessment should be taken with a grain of salt as I'm not a btrfs professional. That pool is still running perfectly, and I don't believe there was ever a problem with the drive or the filesystem.

Rockstor can absolutely import btrfs pools from other installations; I have done it successfully with probably 10 or more different pools, some single and some RAID1. I have only had this issue once, with one drive in a RAID1 pool.

Thank you, this is what I am hoping to be the case. However, the old system is not available anymore, so I cannot boot into it to disconnect the pool or shut it down gracefully.
SMART data on all disks seems fine, and as far as I know, no errors were reported before the system crashed.
What I am looking for is whether there is a command I can run on the set to correct this disk, or a way to force-mount the pool with one disk "missing" (the whole point of using a RAID level is to have fault tolerance).
I cannot find anywhere in the documentation where such operations are described.

@jopaulsen Hello. It may be that you have been affected by a known btrfs 'condition' :slight_smile:

Before you try anything that might, as you earlier referenced, do more harm than good, I would first ensure that you are definitely not affected by the following issue:

There is a known occasional behaviour in btrfs that emerges when mounting multi-device volumes (pools in Rockstor speak) by label:

Quoting from: Problem FAQ - btrfs Wiki

"
one volume of a multi-volume filesystem fails when mounting, but the other succeeds:

# mount /dev/sda1 /mnt/fs
mount: wrong fs type, bad option, bad superblock on /dev/sdd2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
# mount /dev/sdb1 /mnt/fs
#

Then you need to ensure that you run a btrfs device scan first:

# btrfs device scan
"

There have been a number of improvements in Rockstor over time where we try to detect this failure and run the above suggested command; one such was:

https://github.com/rockstor/rockstor-core/issues/1547

which gives an example of this failure in 'action'. You didn't report your Rockstor version, but the associated code for that issue was added in Rockstor version 3.9.1-0, so it is already in both our testing and stable channels:

https://github.com/rockstor/rockstor-core/pull/1737

We also try to fail over to mounting by each and every device known to the pool if a mount fails, but it may be that we fail, or have an outstanding bug here, in some circumstances. Given the occurrence of this can often be transient, it's a difficult one to reproduce; hence the given reproducer of a LUKS volume on top of bcache in the above issue / pull request set.

So in short, I've seen this happen when there are no filesystem issues as such, bar the referenced btrfs issue. And given mount by label can try a different drive on each power-up, it may be that your system will mount just fine after another shutdown and power cycle. Definitely worth a try before you attempt anything else, I'd say.

This may not relate to your particular circumstance, and as I say we do try to account for this behaviour in code, but depending on your Rockstor version, and whether we still have bugs in this area, it's a simple thing to try.

Hope that helps.

@Flox and @dlsound Copying in a reference to the relevant code area to hopefully ease further debugging if it's Rockstor code that's failing here:

There are further comments within that code area that may also be useful.


Hello Phillip, and thank you for a very detailed explanation.
First of all, I am sorry that I did not specify the version I am running on the reinstall. I downloaded the latest image from the Rockstor site, and the version shows as 3.9.1-0 in the GUI.

Regarding your reference to btrfs device scan: I read that article, and while I do not fully comprehend what is written (it seems to be part of an ongoing discussion), I gather that the command sorts through the devices, trying to find/resolve btrfs devices?

I ran the command, and while no output was generated in the terminal session, I tried to mount the drives again in the GUI (if this is supposed to generate output, please let me know).
I still got the same error:

[15/Jul/2019 13:10:03] ERROR [storageadmin.util:44] exception: Failed to import any pool on this device(8). Error: Error running a command. cmd = /bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', '       missing codepage or helper program, or other error', '', '       In some cases useful info is found in syslog - try', '       dmesg | tail or so.', '']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
    mount_root(po)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
    run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', '       missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', '       dmesg | tail or so.', '']

So I tried the last thing you suggested: power cycle and try to mount again.
To be sure, I also ran btrfs device scan after the power cycle, before trying to mount the pool.
I got the same result, but this time it referenced a different device name as the point of failure (sdf instead of sdc).
I also see, upon running btrfs fi show, that the devices are named differently this time than after the last reboot:
devid 1 (1.36TiB) was sdf last time, now it is sdb.
devid 4 (2.73TiB) was sdc last time, now it is sde.
devid 5 (2.73TiB) was sdd last time, now it is sdf.

(The pool consists of 3 devices of 1.36TiB and 3 devices of 2.73TiB.)

I power-cycled again and ran the same procedure (scan and mount).
Same result, with a different device name (sdd this time).

Is there anything I can try to generate more information about the error, so that a solution might be found?

@jopaulsen Thanks for the update.

Yes, on more modern machines the sda/sdd-type names are essentially arbitrarily assigned on each power cycle. Hence the suggestion to try this, so Rockstor can attempt its mount via another device (which it's supposed to do anyway, but still).

I'm not certain what you mean here. I'm assuming you are attempting to import a prior install's pool via the Disks page; is this correct? I.e. Import BTRFS Pool. The reason I ask is that there is currently no Web-UI element that allows one to specifically mount / unmount a given Pool (btrfs volume); but an import, along with quite a few other functions thereafter, implies a mount check / remount.

To aid folks on the forum in helping you in this mount issue could you give us the full output of:

btrfs fi show

That way we can all see what we are dealing with here and hopefully narrow down the issue. From your wording it would seem this pool is expected to have 6 device members, and that command should give us a clue as to their visibility.

I think the next step is to try a command line mount, such as is attempted by Rockstor on boot, so we can rule out Rockstor's potential failings in executing/interpreting this mount command during the pool import (if that is what you are doing).

With the output of the above command we can have a go at this to confirm whether we have a btrfs issue on your pool. Could you also summarise the history of this pool? Specifically, whether you have tried to re-create a pool with these devices (which would throw up an error if done via the Rockstor Web-UI), or whether you have only attempted an import as per the referenced doc entry. Also, during the re-install on the new SSD, did you make sure not to inadvertently affect the existing data pool, e.g. by disconnecting these drives prior to re-install? Just trying to narrow down what might have happened.

Hope that helps, and apologies if I've missed something you have already stated.

Mount in the GUI:
Yes exactly as described in the documentation, by clicking the little arrow next to one of the disks in the pool.

Full output:
btrfs fi show
Label: 'rockstor_rockstor'  uuid: 13d57399-7027-45f4-ab7b-b5a8d7a77f20
	Total devices 1 FS bytes used 1.92GiB
	devid    1 size 107.42GiB used 5.02GiB path /dev/sdf3

Label: 'BigHomeDisk'  uuid: e869f7d1-a92e-4204-8a1f-35124bc353ce
	Total devices 6 FS bytes used 6.86TiB
	devid    1 size 1.36TiB used 1.36TiB path /dev/sdh
	devid    2 size 1.36TiB used 1.36TiB path /dev/sdg
	devid    3 size 1.36TiB used 1.36TiB path /dev/sdb
	devid    4 size 2.73TiB used 1.59TiB path /dev/sdc
	devid    5 size 2.73TiB used 1.59TiB path /dev/sdd
	devid    6 size 2.73TiB used 1.59TiB path /dev/sda

As to the history of the pool, I have not tried to re-create it or run other commands outside of the import described in the documentation you refer to, having some experience with "things doing more harm than good if you are not sure what you are doing"… :slight_smile:

To recap:
Something went wrong with my Rockstor box; I have no indication as to what caused it. To my knowledge there were no errors or failures, and the pool was not full. The system was frozen one morning, and upon attempting a reboot it went into a kernel panic.

As it was running on a fairly old SSD and I had a newer one lying around, I replaced the SSD containing the system disk and reinstalled the system (having read that importing the pool in a new installation should be possible).

Upon importing the pool, I got the error described previously.

@jopaulsen Thanks for the full command output. So all disks are seen as attached which is the first step.

Now, given there have been a number of improvements throughout the system, including I think (from memory) in the import code, you would be best advised to use the newest Rockstor code you can. In this case we are most interested in the newer kernel that comes from subscribing to either update channel. Currently the Stable channel is newer, but both include a kernel update from 4.10 to 4.12. So first off, make sure you are running the 4.12 kernel by subscribing to either update channel and applying all updates that come your way. As there are a ton of upstream updates, give it a good few minutes to apply them; depending on your machine's speed it could take quite some time, and rebooting mid-update will most likely break your install. So we are after getting your system up to at least the 4.12 kernel via either update channel.

The current kernel version is displayed in the top right of the Rockstor Web-UI.

So once you are all updated and have successfully rebooted into the new kernel, we can then try to re-create the Rockstor-reported error on the command line; assuming, of course, that the updated kernel and Rockstor code haven't managed to import the pool successfully already. That is worth a try before you proceed to the following command line reproducers of the mount issue.

To confirm you are running the newer 4.12 kernel offered in either update channel run the following as the root user:

uname -a

and then try:

mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk

This should give the same result as you have been getting through the Web-UI.

With the following giving an indication of which drive the label (for this boot) is set to use.

ls -la /dev/disk/by-label/BigHomeDisk

And, given the indicated known issue states one is supposed to be able to mount a pool from any of its members, you could also substitute any member device name, i.e. /dev/sdg, for the label dev reference.
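For example, assuming /dev/sdg is still listed as a member of this pool by btrfs fi show:

mount /dev/sdg /mnt2/BigHomeDisk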

But first see if an update helps, and be sure not to reboot during the update as it will break the install. Also note that during larger updates the Web-UI will become unavailable while the rockstor service and its associated dependencies are updated. A reboot will be necessary thereafter if you are not already running the 4.12 kernel, which is the main point of this update, at least initially.

Hope that helps.

Ok, I have subscribed for a new activation code, as my old one did not work (because I use a new SSD for the system, maybe?).
However, when adding it, the new one generated an error message, so I contacted support to resolve that issue (had to do that last year when purchasing also).
I will enable stable updates and try what you describe here, and I'll respond when the issue with the activation code has been resolved.

@jopaulsen Re:

Most likely because your system has a known non-unique product_uuid for its motherboard, or there is already an existing product_uuid on record in the existing system that is blocking this one.

This is further indication of the same non-unique motherboard product_uuid, or of an existing problematic product id on record.

Yes, rather annoying, this. We do have a 'self service' web app in beta testing but it has yet to 'go live'. It's what sent you the "TRIAL BETA TEST SYSTEM FOR ACTIVATION CODES" email that you were asked to ignore, due to the old system still being authoritative: assuming you received one of these, of course. We currently have 2 parallel systems dishing out activation codes, and the test system's activation codes will not work for the existing repos.

I'm afraid I can't as yet resolve your activation code issue, assuming you waited long enough for the 'real' activation code to arrive by email (usually 10 mins or so).

Let us know how it goes, and if you have further queries about your exact appliance id you can private message me on this forum to avoid exposing your appliance id or activation code publicly (I'm currently the only active forum moderator but we hope to have another one soon :slight_smile:).

Hopefully your activation code issue will be sorted soon and we can continue. Apologies for the slight cascade of issues you are having, but we are working on all areas of this. Plus the stable release code does have some improvements to the import mechanism, so there's that. And once our 'in testing' Appliance ID Manager takes over the activation code management, we should be in a better position to more rapidly sort these frustrating delays of having to email about activation code problems. At least that's the hope.

Thanks for helping to support Rockstorā€™s development by the way.

Short update: Received the new code, updating as we speak.

@jopaulsen Glad to hear that.

Do be patient, as there will also be hundreds of MB of upstream CentOS updates. And remember there will be a rockstor reload in this, which will kill the Web-UI for a bit.

Keep us updated.

No problem with supporting the development; I have used the system (and supported it) for a long time. This is the first serious problem I have experienced.

Ok, so I finished updating.
After the update, the system asked to be power-cycled, so I did that first and then ran the commands you suggested. No luck.

Command return:

uname -a
Linux rockstorhome1 4.12.4-1.el7.elrepo.x86_64 #1 SMP Thu Jul 27 20:03:28 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux

mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
ls -la /dev/disk/by-label/BigHomeDisk
lrwxrwxrwx 1 root root 9 juli  15 18:40 /dev/disk/by-label/BigHomeDisk -> ../../sdc

EDIT:
Tried to run the GUI import option also, result here:

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 743, in _btrfs_disk_import
    mount_root(po)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 408, in mount_root
    run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/BigHomeDisk /mnt2/BigHomeDisk -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', '       missing codepage or helper program, or other error', '', '       In some cases useful info is found in syslog - try', '       dmesg | tail or so.', '']

I am stumped as to what to try next.

@jopaulsen Glad the update went ok.
Re:

So you are now using the newer of the kernels so we have that out of the way.

And we have confirmation that the mount by label behaves as previously reported by Rockstor's Web-UI.

I'd now try mounting at the command line by each of the pool (btrfs volume) members in turn.
Remember to reference a fresh execution of:

btrfs fi show

to find the current device names for this pool.

I.e. if the above command lists, say, /dev/sdg as a current pool member, as it did before, then try, for example:

mount /dev/sdg /mnt2/BigHomeDisk

and each of the other disks in turn, to properly check for this known issue. It is looking like your pool is poorly, but there is still stuff we can try. Just check first that a default mount via each of the devices in turn is not working. Also, no harm in executing the:

btrfs device scan

command beforehand, but it should only be required once per boot and Rockstor has most likely already run it.

So let's see if trying to mount by each specific device in turn gets us any further.
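If you want to save some typing, a simple shell loop over the current member names does the same thing; this is only a sketch, so substitute the device names from your own fresh btrfs fi show output:

for dev in /dev/sdh /dev/sdg /dev/sdb /dev/sdc /dev/sdd /dev/sda; do
    echo "Trying ${dev}"
    mount "${dev}" /mnt2/BigHomeDisk && { echo "Mounted via ${dev}"; break; }
done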

Ok, so I tried to mount via each device, results below:

[root@rockstorhome1 ~]# btrfs fi show
Label: 'rockstor_rockstor'  uuid: 13d57399-7027-45f4-ab7b-b5a8d7a77f20
	Total devices 1 FS bytes used 2.00GiB
	devid    1 size 107.42GiB used 6.02GiB path /dev/sdf3

Label: 'BigHomeDisk'  uuid: e869f7d1-a92e-4204-8a1f-35124bc353ce
	Total devices 6 FS bytes used 6.86TiB
	devid    1 size 1.36TiB used 1.36TiB path /dev/sdh
	devid    2 size 1.36TiB used 1.36TiB path /dev/sdg
	devid    3 size 1.36TiB used 1.36TiB path /dev/sdb
	devid    4 size 2.73TiB used 1.59TiB path /dev/sdc
	devid    5 size 2.73TiB used 1.59TiB path /dev/sdd
	devid    6 size 2.73TiB used 1.59TiB path /dev/sda

[root@rockstorhome1 ~]# mount /dev/sdh /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdh,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@rockstorhome1 ~]# mount /dev/sdg /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdg,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@rockstorhome1 ~]# mount /dev/sdb /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@rockstorhome1 ~]# mount /dev/sdc /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@rockstorhome1 ~]# mount /dev/sdd /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdd,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@rockstorhome1 ~]# mount /dev/sda /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sda,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@rockstorhome1 ~]# 

No luck, I am afraid.

@jopaulsen Re:

OK, at least we tried.

So, given we have now tested for the known issue re mounting by label, you are best advised to follow the below openSUSE guide on broken/unmountable volumes:

https://en.opensuse.org/SDB:BTRFS

The sub-section you want is How to repair a broken/unmountable btrfs filesystem

And if you substitute their "/mnt" mount point for your "/mnt2/BigHomeDisk", then if you do get it mounted you should be able to import as per normal (even if you end up having to mount read-only).
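As an example of a gentler read-only attempt (usebackuproot is a standard btrfs mount option on recent kernels such as your 4.12, and /dev/sdg is just one of your current pool members):

mount -o ro,usebackuproot /dev/sdg /mnt2/BigHomeDisk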

And from a quick look it would seem that their next suggested step is to initiate a scrub by specifying pool devices directly so in your case that would be:

btrfs scrub start /dev/sdg

Or any of the other devices in that pool, assuming sdg is still one of the relevant pool members.

On that page they show how you can then monitor the progress of that scrub on the command line; assuming it doesn't throw an error on attempting to start it.
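From memory the progress check is along these lines (btrfs-progs accepts either the mount point or a member device here):

btrfs scrub status /dev/sdg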

Apologies for not getting you any further, but given SUSE/openSUSE have a great deal of expertise in btrfs, I'm betting that guide is at least a good start. If you still make no progress then there may be others on the forum who can help you further, as we have quite a few experienced btrfs users among our members. It's also a good idea to keep a record of the commands you have tried from here on in, as you start to try riskier commands; this could help others advise you better given they will know the command history to date.

And before you try anything else, are we safe to assume that these drives are attached to the same system they were working happily on previously? Just a thought, as some controllers map drives differently; but if their method of attachment is identical or equivalent to their previous attachment then it's a moot point and you should move on with that openSUSE guide.

Take care to note which steps in the guide are potentially destructive, as some can indeed make things worse. If the data on these drives lives nowhere else and is of sufficient value, then you are best not to execute any of the said potentially destructive commands, and to consult btrfs experts instead. They will most likely want you to use the latest kernel, and the easiest route to that is an openSUSE Tumbleweed LiveCD/Rescue CD, which should carry the most recent viable kernel and btrfs-progs. I'm currently working on expanding our buildbot capabilities to provide Rockstor on top of Tumbleweed and openSUSE Leap 15.1, but I have unfortunately, as yet, not completed this task and have no ETA; apologies there also.

See the following page for Tumbleweed LiveCDs/Rescue CD options:
https://en.opensuse.org/openSUSE:Tumbleweed_installation
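For completeness, writing one of those live images to a USB stick from an existing Linux machine is usually just a dd copy; the ISO filename below is a placeholder for whichever image you download, and /dev/sdX must be replaced with the USB stick's device name (picking the wrong device here will destroy its contents):

dd if=openSUSE-Tumbleweed-Rescue-CD-placeholder.iso of=/dev/sdX bs=4M && sync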

In the case that you use Tumbleweed, your mount attempts could use the generic "/mnt" mount point suggested by their above referenced docs.

Do let us know how you get on with this, and please do ask again here if you still make no progress, as there are many others here who can advise on this topic better than myself. But that guide looks like your next best step and is very approachable.

Hope that helps and do keep us posted. Also note that the referenced guide indicates a data retrieval method if you are unable to achieve a successful mount.
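If I remember correctly that retrieval route is based on btrfs restore, which copies files off an unmountable filesystem onto another, healthy one; a rough sketch, with the destination path purely as an example (it must be on a different filesystem with enough free space), would be:

btrfs restore /dev/sdg /mnt/rescue-target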

First of all, yes, the drives are attached to the same system they were previously mounted on. The only thing that has changed is the exchange of the SSD the system was installed on; all other drives are connected exactly as they were.
Secondly: I tried to go through the suggested commands, results below.

[root@rockstorhome1 ~]# mount /dev/sdb /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Tried all drives, same result.

[root@rockstorhome1 ~]# btrfs scrub start /dev/sdh
ERROR: '/dev/sdh' is not a mounted btrfs device

I also tried this one from the SUSE guide.

[root@rockstorhome1 ~]# mount -o usebackuproot /dev/sdb /mnt2/BigHomeDisk
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

I am at this point looking through the destructive commands to see which I would chance running. However, at this point I am also considering whether to accept the data as lost and move on (this would also allow a more destructive approach…).
But it really bugs me that I have no idea what caused this failure. Moving on would mean accepting an "unknown" reason for this crash, with no idea whether it will happen again.

@jopaulsen Re:

Yes, I know what you mean, but btrfs is definitely a work in progress, and although it's very good its goals are very lofty. I'd say stick with it for a bit longer, as there are many successful reports of data retrieval and as yet we have actually tried very little. And do remember the Tumbleweed LiveCD/Rescue CD option, as it may very well be that you get very different results with a much newer kernel. And once the pool (btrfs volume) is mountable there, it should again be mountable within Rockstor thereafter, as the newer kernels and btrfs-progs have many fixes. Hence our push to re-base on openSUSE Leap 15.1 and Tumbleweed.

See how you get on with the Tumbleweed LiveCD/Rescue CD (USB) options, as it may end up just mounting fine, or at least put you in a position to be able to query the experts/developers on the btrfs mailing list. It may very well be that you have encountered a robustness bug, as all you have done to date is a single unclean shutdown from when your system was found to be unresponsive, though admittedly with an older kernel. But once you are in Tumbleweed land you are essentially bang up to date, release-kernel wise.

https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list
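Once booted into the live environment it is worth confirming how new the tooling actually is before retrying the mount; both of these are standard commands:

uname -r
btrfs --version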

Hope that helps.