Failed to import any pool on this device(2). Error: Error running a command. cmd

Brief description of the problem

Drive will not import

Detailed step by step instructions to reproduce the problem

Rockstor caused my shares to randomly disappear. I reinstalled Rockstor and attempted to import the drives to recover the data.

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
    mount_root(po)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
    run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/genepool /mnt2/genepool. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sda,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']

@UC_Nightmare Welcome to the Rockstor community.

along with your current report of a failure to mount:

suggests that you may have a ‘poorly’ drive or pool.

What is the output of:

btrfs fi show

as this may help forum members help you, and should hopefully give info on the nature of your pool, i.e. whether it consists of only one drive, for example.
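The mount error in your traceback also points at the kernel log, so it may be worth checking that for the underlying btrfs error as well; a couple of standard commands (nothing Rockstor-specific) should show it:

dmesg | tail -n 30
journalctl -k | grep -i btrfs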

A picture of the Disks page might also help, along with the version of Rockstor you are using.

Hope that helps.

btrfs fi show returns:
[root@rockstor ~]# btrfs fi show
Label: 'rockstor_rockstor'  uuid: 1f3cb771-2bb0-4b42-abb1-14613054d3b3
        Total devices 1 FS bytes used 1.61GiB
        devid    1 size 12.95GiB used 4.04GiB path /dev/sdb3

warning, device 2 is missing
Label: 'genepool'  uuid: fd6342ca-30a8-4ec8-9a73-f9c317f03f2a
        Total devices 2 FS bytes used 368.28GiB
        devid    1 size 465.76GiB used 376.01GiB path /dev/sda
        *** Some devices missing

Screenshot of drives page:

The other drive that was paired with this one no longer exists.

I’m only interested in recovering the data that is/was on this drive. If that’s not possible, it’s not the end of the world. Basically, tell me if I’m wasting my time with this.

I’m not sure of the Rockstor version, but the ISO was downloaded and freshly installed within the last week.

Hi @UC_Nightmare,

What RAID level was in use on your pool labelled ‘genepool’?
I’m assuming it was RAID0 or RAID1 due to the disk count.
You can check this with:

btrfs fi df
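Note that btrfs fi df needs a mounted path (e.g. btrfs fi df /mnt2/genepool once the pool is mounted). For reference, on a RAID1 pool the output looks roughly like this (the sizes here are only illustrative):

Data, RAID1: total=376.00GiB, used=368.28GiB
System, RAID1: total=32.00MiB, used=64.00KiB
Metadata, RAID1: total=1.00GiB, used=500.00MiB
GlobalReserve, single: total=512.00MiB, used=0.00B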

If it was RAID0, you’re probably out of luck (every nth stripe missing).
If it was RAID1 you should (hopefully) be able to mount the filesystem manually as a degraded array.
Something like this would be a good start:

mount -o degraded,rw,relatime,space_cache,subvol=/genepool /dev/sda /mnt2/genepool

Note, I largely pulled that command out of my [redacted] with some google-fu and details from your btrfs fi show, I cannot guarantee that it’s correct as I’m at work and unable to test it until I get home.

@Haioken and @UC_Nightmare An important caveat on the following fine suggestion:

With kernels pre-4.14 or so, such as Rockstor’s current defaults, you only get one chance to mount rw (read-write). This means you might want to mount degraded,ro (read-only) first, until you have all your data off, and then attempt a repair later once in a less precarious position.

Returning the pool to regular service could be done either by adding a disk to achieve the 2-disk minimum for raid1, or by changing the pool raid level to single (where a pool of one device is legitimate); both of these will require the ‘one try’ degraded,rw (read-write) mount. If the latter is chosen then redundancy is removed from the pool, but it can also be much quicker and so would stress the remaining drive (and potentially the only copy of the data) less as well.

But if the data is nowhere else, or is inconvenient to retrieve / outdated in backups, it’s best to focus on retrieval via a degraded,ro (read-only) mount before doing anything else: assuming a ro,degraded mount works, that is; otherwise it’s down to btrfs restore, which is very slow and rather a last resort when a pool (btrfs vol) is unmountable even as degraded,ro.
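For reference, a rough sketch of the two return-to-service routes mentioned above, to be attempted only after the data is safely copied off; both assume the pool is mounted degraded,rw at /mnt2/genepool, and /dev/sdc is only a placeholder for a replacement disk:

# Route 1: restore the 2-disk raid1 by adding a replacement disk,
# then remove the missing device (this re-replicates the chunks).
btrfs device add /dev/sdc /mnt2/genepool
btrfs device delete missing /mnt2/genepool

# Route 2: convert the pool to single (redundancy is lost), then drop
# the missing device. -f is needed when reducing metadata redundancy.
btrfs balance start -f -dconvert=single -mconvert=dup /mnt2/genepool
btrfs device delete missing /mnt2/genepool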

@Haioken would you agree with this? I’m kind of looking forward to moving to a post-4.14 kernel by default so we don’t have to consider such things: this should happen shortly. Oh well, bit by bit.

Hope that helps, and also note that almost all pool repair procedures are currently limited to command line intervention, although we are in the process of improving this situation: i.e. we now support:

and

But in the case of an unimported / non Rockstor-managed pool these don’t help.

I also intend to address the following issue as soon as I am able:

Although it in turn has a background dependency that I’ve begun working on but with nothing to show as yet.

Yes, that’s pretty well on-point. I was trying to directly address @UC_Nightmare’s issue of “I just want to get the data out” - though I probably should have thought to swap rw/ro mount options.
I think it’ll be great to move to a later upstream Kernel.

The related issues and associated pull-requests (where applicable) are looking good. Hopefully these will pave the way toward some dedicated ‘recovery’ UI elements.

@Haioken @phillxnet
Sorry for the late reply. Yes the array was originally a RAID1.

I attempted to mount the drive as degraded (read only) with the slightly modified command:
mount -o degraded,ro,relatime,space_cache,subvol=/genepool /dev/sda /mnt2/genepool

Unfortunately the console returns: mount: mount(2) failed: No such file or directory.

I am currently googling the issue, but I don’t have much experience with the command line, so your feedback would be helpful.

Edit: mount -o degraded,ro /dev/sda /mnt2/genepool
ran without any errors. I hope I have not done something terribly wrong.

Edit 2: the command seemed to have no effect. When I attempted to re-import the drive via the GUI it now throws this error:
Failed to import any pool on this device(2). Error: Error running a command. cmd = /sbin/btrfs quota enable /mnt2/genepool. rc = 1. stdout = ['']. stderr = ['ERROR: quota command failed: Read-only file system', '']

I believe running: mount -o degraded /dev/sda /mnt2/genepool will rid me of the read only error, but I will wait for further input from those who actually know what they are doing.

The above quote is incorrect, I meant to quote:

That command makes some general assumptions about your configuration: that the subvolume is labelled ‘/genepool’, and that the mountpoint /mnt2/genepool exists.
Either of these assumptions being incorrect could cause the mount to fail as per your first error.
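If you want to check those two assumptions, something like the following should be safe, since it only mounts read-only at the top-level subvolume (subvol=/):

mkdir -p /mnt2/genepool                        # make sure the mountpoint exists
mount -o degraded,ro /dev/sda /mnt2/genepool   # read-only mount of the top level
btrfs subvolume list /mnt2/genepool            # lists the subvolume names that actually exist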

Yes, the RS GUI has a general disagreement with importing read-only shares due to the reliance on quotas. Mounting rw would resolve this, however as @phillxnet mentioned, you’ll only be able to do this mount once.

If all you want is to retrieve the data, I would mount it manually and retrieve the data manually via the command line (or manually export the share with SMB or NFS to access it remotely).

Otherwise, I would strongly recommend attaching a new disk to the system of equal or greater size, adding to the existing btrfs filesystem, and performing a mount and balance prior to any further major use.

Do you have any suggestions for a command that would successfully mount the drive in such a way that I will be able to retrieve the data with minimal risk of data loss?

@UC_Nightmare

As @phillxnet mentioned above, you simply need to mount read-only (which you did).
This will, however, prevent Rockstor’s UI from being able to use the drive, but the disk itself is mounted and accessible.

Upon review on my own NAS, I think the proper mount for your situation is:

mount -o degraded,ro,relatime,space_cache,subvol=/ /dev/sda /mnt2/genepool

This will enable access to the disk locally, allowing you to copy via shell to any other locally mounted filesystem.
If you wish to copy over a network, that will require a little more work, and some detail on what you’re planning to copy to.
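For example, if the destination is another Linux machine reachable over SSH, rsync would do it; the host name and destination path here are just placeholders:

rsync -ah --progress /mnt2/genepool/ user@otherhost:/path/to/backup/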


So if I’m understanding this right: I can format another disk as NTFS, connect it to the rockstor machine, and copy my files from the original drive to the NTFS drive?

Do you have any suggestions for commands to achieve this? A quick Google search shows that this isn’t a very popular topic.

@UC_Nightmare,

That is one possibility yes. To get NTFS write-support (last I checked) you’ll need to use NTFS-3G.
I’m not convinced that this will be pre-installed, so to install it:

yum install ntfs-3g

Beyond that, the following is conjecture. Make sure you read and understand what you’re doing here.

You’ll need to identify the disk and partition you’ve attached (after it’s attached, obviously). You should be able to get this with:

fdisk -l | grep -i ntfs | awk '{print $1}'

after attaching the NTFS drive. If the output is blank, Linux has likely not recognized the attached disk.
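A quick way to double-check what the kernel sees, independent of filesystem type, is:

lsblk -f      # lists block devices with their filesystem types and labels
dmesg | tail  # shows whether the newly attached disk was detected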

Create a mountpoint and mount the NTFS partition.

mkdir /mnt/windisk && mount -t ntfs-3g /dev/<disk_and_partition> /mnt/windisk

You’ll also need to mount your original partition read only for safety, as shown earlier:

mount -o degraded,ro,relatime,space_cache,subvol=/ /dev/sda /mnt2/genepool

Then it’s simply a matter of copying. There are a hundred million ways of copying files between two locations in Linux. I typically suggest a tarpipe (which is quite advanced, but offers good throughput regardless of file sizes).
I would start by creating a backup directory, then tarpipe the complete contents of /mnt2/genepool to it:

mkdir /mnt/windisk/rs_backup
cd /mnt2/genepool
tar cf - . | (cd /mnt/windisk/rs_backup && tar xBf -)
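If you’d like some feedback on throughput, an optional variation is to put pv in the middle of the pipe; pv is not installed by default and may need the EPEL repository:

yum install pv
tar cf - . | pv | (cd /mnt/windisk/rs_backup && tar xBf -)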

It seems trying to mess with NTFS was a bad idea. After many errors I attempted to reformat the new drive as exFAT, but I am really unfamiliar with the Linux command line (and I didn’t plan to spend my day trying to get familiar with it).

Perhaps I will take my chances with mounting the drive as degraded,rw, in the hope that I may be able to simply copy the files over the network.

Thanks for all the help you have provided. I apologize for wasting your time.

@UC_Nightmare

No problem. Rockstor is still quite young, thus the reliance on command line for many more complex operations.

N.B. Phillip’s warning relating to mounting the pool as read-write: you can only perform that action once.

If you would prefer to access the share over the network, and not mount rw for safety, perhaps you could try setting up a temporary samba config with a single exported share.

To do this, we need to create an alternative config file, say /root/smb_alt.conf

The config would be something like:

[global]
    workgroup = WORKGROUP
    netbios name = broken_nas
    security = share
[data]
    comment = temporary_share
    path = /mnt2/genepool
    force user = root
    force group = root
    read only = Yes
    guest ok = Yes
    browseable = Yes

Once this content is in place, we can start samba (the windows file sharing system) with:

/usr/sbin/smbd -i -s /root/smb_alt.conf

Again, untested because I’m not at home, but some googling should confirm most of this.
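To sanity-check that the share is actually being exported, smbclient (from the samba-client package) can list it locally; -N skips authentication, matching the ‘guest ok’ setting above:

smbclient -L localhost -N

From a Windows machine the share should then be reachable as \\<rockstor-ip>\data.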

I believe I have the exfat drive working.

I followed your commands and just ran “tar cf - . | (cd /mnt/windisk/rs_backup && tar xBf -)”

The shell still hasn’t returned the prompt so I assume it is still working. Is there any status notification or shall I just wait a few (~30) minutes?

Hi @UC_Nightmare

Most Linux copy commands are pretty garbage at feedback, so don’t worry too much about that.
Can you hear disk activity?

For further verification, you can open up a second shell (do not terminate the first one, as a partially completed tarpipe copy can’t really be resumed; you would need to start again) and check the contents of /mnt/windisk/rs_backup:

ls /mnt/windisk/rs_backup

Or for a list of files modified in the last minute, try:

find /mnt/windisk/rs_backup -mmin -1

You can change the ‘-1’ to ‘-n’, where n is the number of minutes you want to look into the past.
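Another rough progress indicator is to watch how much data has landed on the destination so far, e.g. refreshing a disk-usage total every minute:

watch -n 60 du -sh /mnt/windisk/rs_backup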

Regarding how long it’ll take, there’s never a fixed time, but I can provide guesstimates if you tell me:

  • How much data do you have on the source disk?
  • What type of disks are the source and destination? (SSD? 7200 RPM rotating rust?)
  • How is the data distributed? (Mainly files larger than 50MB, mainly smaller files, or an even distribution of file sizes?)
  • Does it have a complex folder structure, or only a few folders?

It still hasn’t completed, but it is showing some slow progress (but progress nonetheless). It’s currently copying some movies (approx. 2-5GB each), with a moderately complex folder structure (each movie could be 4-5 folders deep).

The drives are both 2.5" hard drives of unknown speed. Not the fastest, but I was expecting a bit faster than this.

I think the source drive has approx. 450GB of files. The majority (300GB-ish) is a Plex folder containing movies and TV shows, all of which are >50MB. The rest are various small files. It appears those haven’t been copied yet.

At 70% (to account for overhead) of the estimated max speed for those drives, I figured the transfer would be complete in 1.5-2 hours. I believe I may have grossly overestimated the speed of copying via the CLI. It shouldn’t be a problem though. If nothing goes wrong I will just let it run overnight.

EDIT: I ran “find /mnt/windisk/rs_back -mmin -120” just to see what all might have been copied so far. It appears the transfer is stuck on the first file (in this case a movie in .mkv)
I believe I may have partitioned the exfat drive wrong. It seemed to work at the time, so I was unsure. Could this be the cause?

Wow, stuck on a single file, that’s quite odd :confused:
I can’t imagine a mistake in partitioning that would cause this.

Assuming you mean 2.5" mechanical drives, they are typically quite slow - though not that slow.

Are you sure the source drive is still in good condition?

Can you check for the tar processes?

pgrep tar

Also check the mountpoints

mount | grep "genepool\|windisk"

Post the output of those two.
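It may also be worth checking whether those tar processes are stuck in uninterruptible I/O wait (state ‘D’), which would point at a drive problem rather than tar itself; a sketch using standard ps/pgrep:

ps -o pid,stat,etime,cmd -p "$(pgrep -d, tar)"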

It might also be worth checking /var/log/messages for btrfs issues and posting that too:

grep btrfs /var/log/messages

It might be prudent to try a traditional Linux copy rather than a tarpipe in this instance, as a traditional copy is more likely to exit with a failure if something goes wrong.
I’d like to see the output of the above first; however, if you want to jump straight to the traditional copy, you’ll need to terminate the tarpipe with Ctrl-C and instead run:

cp -R /mnt2/genepool/* /mnt2/windisk/rs_backup

I’ll be around for another 2 hours, then I’ll be unavailable for about an hour or so while travelling.

[root@rockstor ~]# pgrep tar
12703
12705

[root@rockstor ~]# mount | grep "genepool\|windisk"
/dev/sdb on /mnt/windisk type ext2 (rw,relatime,block_validity,barrier,user_xattr,acl)
/dev/sda on /mnt2/genepool type btrfs (ro,relatime,degraded,space_cache,subvolid=5,subvol=/)

[root@rockstor ~]# grep btrfs /var/log/messages
Feb 3 12:05:10 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:08:16 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:18:52 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:24:51 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:36:08 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:57:16 rockstor dracut: *** Including module: btrfs ***
Feb 4 08:59:33 rockstor dracut: *** Including module: btrfs ***
Feb 5 15:12:33 rockstor dracut: *** Including module: btrfs ***
Feb 6 14:21:33 rockstor dracut: *** Including module: btrfs ***
Feb 6 16:29:53 rockstor dracut: *** Including module: btrfs ***

Nothing appears too terribly out of order to me, but I’m not experienced in this field.

After posting this I will attempt a new copy. Depending on the outcome I may go to bed shortly after that. I will keep you posted.

Edit: Ctrl-C returns nothing but a blinking cursor in the shell (no response). Would it be unwise to close out of the shell and open a new one to begin another copy? I am unsure if closing an SSH session will also kill the copy attempt.

Edit 2: spamming Ctrl-C (and once ‘q’) finally led to a response:
"You have new mail in /var/spool/mail/root
[root@rockstor genepool]#"
Seems odd to me, but I will continue as planned.

Edit 3: "cp -R /mnt2/genepool/* /mnt2/windisk/rs_backup" returns:
"cp: target ‘/mnt2/windisk/rs_backup’ is not a directory"
Perhaps I should use "/mnt/windisk/rs_backup"?

Good luck!

You’re correct in that everything there looks fine, with the notable exception that your ‘windisk’ mount is NOT exFAT, it’s Linux ext2 (aka “Second Extended”).
If you’re planning to attach this to a Windows machine later, I would probably suggest resolving that before continuing.

To create an exFAT partition on Linux, you need exfat-utils, which is not usually available on CentOS.
This will require the CentOS EPEL and nux-dextop package repositories.

wget http://mirror.nsc.liu.se/fedora-epel/7/x86_64/e/epel-release-7-5.noarch.rpm
wget http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
yum localinstall nux-dextop-release-0-1.el7.nux.noarch.rpm
yum localinstall epel-release-7-5.noarch.rpm
yum update
yum install exfat-utils fuse-exfat

You’ll want to unmount the disk and wipe the existing filesystem signatures first. Be sure you want to do this:

umount /mnt/windisk
wipefs -a /dev/sdb

Then we create a primary partition (shamelessly pulled from Stack Overflow for a non-interactive method):

(
    echo o # Create a new empty DOS partition table
    echo n # Add a new partition
    echo p # Primary partition
    echo 1 # Partition number
    echo   # First sector (Accept default: 1)
    echo   # Last sector (Accept default: varies)
    echo w # Write changes
) | fdisk /dev/sdb

Now we need to make the filesystem:

mkfs.exfat /dev/sdb1

You can then mount it as before.
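To confirm the result and mount it (assuming the new partition came out as /dev/sdb1; mount -t exfat relies on the fuse-exfat package installed above):

blkid /dev/sdb1    # should now report TYPE="exfat"
mkdir -p /mnt/windisk
mount -t exfat /dev/sdb1 /mnt/windisk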

Be careful with your operations, ask questions, I’m happy to help out as much as possible.