Dracut Emergency Shell

Whenever I attempt to start Rockstor, I get the attached error and messages after it attempts to boot.

This is on a different system from before because I had a direct lightning strike on my house and the former system has issues. I’m hoping someone can provide some guidance on what to do. I have about 4TB worth of data, and it would be catastrophic if I were to lose any of it.

Hi @eetheredge806,

Is this on an existing Rockstor install?
If so, it appears that your root installation disk is dead.

The good news is that in a standard configuration, your Rockstor data should be on disks other than your root disk, and you’ve probably only lost your configuration.

If you’re familiar with Linux, please provide:

  • the contents of rdsosreport.txt
  • the output of journalctl -b
  • the output of blkid

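If it helps, all three can usually be gathered from the dracut emergency shell (or any root shell on the box) along the following lines; the rdsosreport.txt location shown is the usual dracut default and may differ on your system:

    cat /run/initramfs/rdsosreport.txt   # dracut's own boot report
    journalctl -b                        # journal messages for the current boot
    blkid                                # filesystem labels and UUIDs the system can see
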
If you’re not familiar with Linux, debugging this will be a hard slog; I strongly suggest you replace the system disk, reinstall, and import your existing BTRFS pool(s) using the instructions here.


I have tried in vain to import my disks, pools and shares, but I keep getting this message:

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
        mount_root(po)
      File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
        run_command(mnt_cmd)
      File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
        raise CommandException(cmd, out, err, rc)
    CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/aviganis.pool /mnt2/aviganis.pool. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']

@eetheredge806
In some cases mount by label can fail; see:

https://btrfs.wiki.kernel.org/index.php/Problem_FAQ
subsection “Filesystem can’t be mounted by label”, which in turn leads to
“Only one disk of a multi-volume filesystem will mount”.

quoting from there:

“Then you need to ensure that you run a btrfs device scan first:”

and so we try to fail over to mounting by device, with a device-specific scan run beforehand for each.

But those device-specific scans may be failing in this case.

Try as root:

btrfs device scan

as per the linked btrfs FAQ wiki above and then attempt the import again.

You could also try the following command before the above device scan command.

udevadm trigger

From https://github.com/rockstor/rockstor-core/issues/1606
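
For reference, a minimal manual version of that fail-over, run as root, might look like the following; the pool label, the /dev/sdc device name and the mount point are taken from your error output above, so substitute your actual pool member devices as appropriate:

    udevadm trigger    # ask udev to re-run its rules so the by-label links are (re)created
    btrfs device scan  # register all btrfs member devices with the kernel
    mount /dev/disk/by-label/aviganis.pool /mnt2/aviganis.pool || \
        mount /dev/sdc /mnt2/aviganis.pool   # fall back to mounting by member device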

The output of:

btrfs fi show

may also be useful to help other forum members with further suggestions.

Hope that helps and let us know how you get on.

Still getting this. I guess I will just have to realize and understand my data is irrecoverable and explain to my wife and daughter that all of our family pictures are history. Something told me not to go this route to keep my data backed up.

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
        mount_root(po)
      File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
        run_command(mnt_cmd)
      File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
        raise CommandException(cmd, out, err, rc)
    CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/aviganis.pool /mnt2/aviganis.pool. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdc,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']

What does it mean when it indicates “Disk is unusable because it has an existing whole disk BTRFS filesystem on it. Click to configure or wipe”?

Of course it has data on it; I’m trying to recover it, but it seems like there are a ton of roadblocks.

@eetheredge806

It means exactly what it is stating. There at least appears to be a whole disk btrfs file system, as indicated by lsblk (Rockstor’s default pool arrangement is whole disk), and so this message is to indicate that state and block its use in other pools.
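
If you want to double check what Rockstor is seeing here, something like the following (run as root) should show a whole disk btrfs filesystem, i.e. btrfs in the FSTYPE column against the bare device (e.g. sdc) rather than against a partition:

    lsblk -f    # list block devices with filesystem type, label and UUID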

That is not necessarily the case. Btrfs will refuse to mount an unhealthy volume and there are several ways to mount that volume even if it is damaged. And then still other procedures to recover undamaged info even if a mount is not possible. But no one here, or anywhere else, can inform you unless you give us some information about this volume. And one place to start would be by answering the question I asked of you previously.

Please answer this question (the command must be run as root on a local Rockstor terminal or via an ssh session to your Rockstor) and you may then at least be helping others help you from a more informed stance.

As I referenced, there is a known issue with regard to mounting via label, and it may just be that. You could also manually mount the volume, but you really need to answer the given question so that others can help. We don’t even know if this was a multi-disk pool (volume); I’m just assuming this for the time being, as that is usually the case.

Do not wipe this or any other disk (if there are any others, which has yet to be established in this thread: hence the command output request) and do not attempt any repair. Currently there is no indication of actual corruption (see my FAQ reference again), and a manual mount is also entirely possible; I and many others on the forum can help with that, but not until you answer the given question.

To be clear: my understanding of your current scenario, given this thread’s info, is that you had a lightning strike and lost a system and/or system disk. This seems to be indicated by your transferring that system disk to another system and it failing. Presumably you re-installed Rockstor successfully, although we don’t know your exact version:

yum info rockstor

will tell us that (ignore the Web-UI for this info for the time being); that command gives the canonical truth of the matter, assuming you haven’t built from source code, that is.

The current roadblock is the lack of any information about your data pool. Please try to help us help you: without any information, some advice is best not given, as it can cause more harm than good.

Let us know the full story and everything you have tried so far. For instance, did you re-install, and are you absolutely sure you didn’t re-install over one of your data disks? Incidentally, even if you did, and there were at least 2 disks in a raid1, all your data may still be retrievable, even without a mount, but we must have some initial info and a record of what happened.

Yes, that would have been nice, but we can only go from here for the time being. The replication feature is one you may be interested in for the future, i.e. auto replicating a share from one Rockstor machine to another on an interval basis.

Hope that helps, and let’s see if the forum and yourself can methodically work through this. It may just be that you need to mount degraded, but again this is potentially dangerous territory: if you attempt it, you may only get one shot at mending the pool. But again, to stress, this depends on the raid level (i.e. raid0, raid1, etc.) and the number of disks remaining in the pool, which the first command above should tell us.
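
Purely for reference, and only once the raid level and remaining device count are confirmed, a cautious read-only degraded mount might look like the following; this is a sketch using the label and mount point from your earlier error output, not a recommendation to run it right now:

    btrfs device scan
    mount -o ro,degraded /dev/disk/by-label/aviganis.pool /mnt2/aviganis.pool
    # if that mounts, copy any irreplaceable data off before attempting repairs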

Please also let the forum know if my requests are inappropriately pitched. That is, are you able to get a local terminal or an ssh session into this box, for example?

So, in short, I wouldn’t lose hope just yet: by default btrfs avoids mounting damaged or degraded (missing devices) volumes, but this may still be a simple known quirk which we can get around, though only with the requested information.

Thanks.

Ok, I think the biggest cause of my issue was a version mismatch when it came to Rockstor. At first, I was trying to get it to work with 4.10.xxxx, but when the lightning zapped my old server, it was running 4.12.xxxx. I was able to get my license changed over to the new UUID and once I activated and then updated to 4.12.xxxx, I was able to import my old disks, pools and shares. Everything is working now.

@eetheredge806 Thanks for the update and glad you got things sorted.

Yes, the kernel version could have affected this, but there have also been quite a lot of updates and improvements between the associated Rockstor versions.

We do have the following advice in our Reinstalling Rockstor howto, in the Data Import subsection:

"Once Rockstor has been reinstalled and you have applied the updates via the automated prompt in the WebUI and rebooted if prompted to do so you can import the data that was present on your previous Rockstor install’s data disks; assuming you had separate data disks of course.

N.B. given this is a new install it is advisable to reboot anyway to make sure all is well before doing the data import, this will ensure you are using all of what has just been updated."

Also, as this is a newly subscribed stable channel install, do make sure you are actually running the latest Rockstor via:

yum info rockstor

as there are quite a few other improvements/fixes that have been added, and there is a known issue re the Web-UI version display (which will be fixed in the next iso release).

Thanks again for the update and for helping to support Rockstor’s development via a stable subscription.