Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
mount_root(po)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
run_command(mnt_cmd)
File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/genepool /mnt2/genepool. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sda,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']
btrfs fi show returns:
[root@rockstor ~]# btrfs fi show
Label: 'rockstor_rockstor' uuid: 1f3cb771-2bb0-4b42-abb1-14613054d3b3
Total devices 1 FS bytes used 1.61GiB
devid 1 size 12.95GiB used 4.04GiB path /dev/sdb3
warning, device 2 is missing
Label: 'genepool' uuid: fd6342ca-30a8-4ec8-9a73-f9c317f03f2a
Total devices 2 FS bytes used 368.28GiB
devid 1 size 465.76GiB used 376.01GiB path /dev/sda
*** Some devices missing
The other drive that was paired with this one no longer exists.
I'm only interested in recovering the data that is/was on this drive. If that's not possible, it's not the end of the world. Basically, tell me if I'm wasting my time with this.
I'm not sure of the Rockstor version, but the ISO was downloaded and freshly installed within the last week.
What RAID level was in use on your pool labelled 'genepool'?
I'm assuming it was RAID0 or RAID1 due to the disk count.
You can check this with:
btrfs fi df
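Note that btrfs fi df needs a mounted path, so it will only answer once a (degraded) mount has succeeded. A minimal sketch, assuming the pool ends up mounted at /mnt2/genepool:
btrfs fi df /mnt2/genepool
# Output lines look like "Data, RAID1: total=..., used=...";
# the word after "Data," / "Metadata," is the profile in use.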
If it was RAID0, you're probably out of luck (every nth stripe missing).
If it was RAID1 you should (hopefully) be able to mount the filesystem manually as a degraded array.
Something like this would be a good start:
mount -o degraded,rw,relatime,space_cache,subvol=/genepool /dev/sda /mnt2/genepool
Note, I largely pulled that command out of my [redacted] with some google-fu and details from your btrfs fi show; I cannot guarantee that it's correct as I'm at work and unable to test it until I get home.
With kernels pre 4.14 or so, such as Rockstor's current defaults, you only get one chance to mount a degraded pool rw (read-write). This means you might want to mount degraded,ro (read-only) first, get all your data off, and only attempt a repair later from a less precarious position. Returning the pool to regular service could be done either by adding a disk to reach the 2-disk minimum for raid1, or by changing the pool raid level to single (where a pool of one device is legitimate); both of these require the 'one try' degraded,rw (read-write) mount. If the latter is chosen then redundancy is removed from the pool, but it can also be much quicker and so stresses the remaining drive (potentially the only copy of the data) less as well. But if the data is nowhere else, or is inconvenient to retrieve or outdated in backups, it is best to focus on retrieval via a degraded,ro (read-only) mount before doing anything else (assuming a degraded,ro mount works at all); otherwise it's down to btrfs restore, which is very slow and rather a last resort for when a pool (btrfs vol) is unmountable even as degraded,ro.
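As a very rough sketch of those two repair routes (untested, device names are placeholders, and only to be tried after the data is safely copied off):
# Option 1: keep raid1 by adding a replacement disk (uses the one-try degraded,rw mount)
mount -o degraded,rw /dev/sda /mnt2/genepool
btrfs device add /dev/sdX /mnt2/genepool    # /dev/sdX = the new disk
btrfs device delete missing /mnt2/genepool

# Option 2: drop to a single-device pool instead (removes redundancy)
mount -o degraded,rw /dev/sda /mnt2/genepool
btrfs balance start -dconvert=single -mconvert=dup /mnt2/genepool
btrfs device delete missing /mnt2/genepool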
@Haioken would you agree with this? I'm kind of looking forward to moving to a post-4.14 kernel by default so we don't have to consider such things: this should happen shortly. Oh well, bit by bit.
Hope that helps, and also note that almost all pool repair procedures are currently limited to command line intervention, although we are in the process of improving this situation: i.e. we now support:
and
But in the case of an unimportant/non Rockstor managed pool these donât help.
I also intend to address the following issue as soon as I am able:
Although it in turn has a background dependency that I've begun working on, but with nothing to show as yet.
Yes, that's pretty well on-point. I was trying to directly address @UC_Nightmare's issue of "I just want to get the data out", though I probably should have thought to swap rw/ro mount options.
I think it'll be great to move to a later upstream kernel.
The related issues and associated pull-requests (where applicable) are looking good. Hopefully these will pave the way toward some dedicated 'recovery' UI elements.
@Haioken @phillxnet
Sorry for the late reply. Yes the array was originally a RAID1.
I attempted to mount the drive as degraded (read only) with the slightly modified command:
mount -o degraded,ro,relatime,space_cache,subvol=/genepool /dev/sda /mnt2/genepool
Unfortunately, the console returns: mount: mount(2) failed: No such file or directory.
I am currently googling the issue, but I do not have much experience with the command line, so your feedback would be helpful.
Edit: mount -o degraded,ro /dev/sda /mnt2/genepool
ran without any errors. I hope I have not done something terribly wrong.
Edit 2: the command seemed to have no effect. When I attempted to reimport the drive via the GUI it now throws this error:
Failed to import any pool on this device(2). Error: Error running a command. cmd = /sbin/btrfs quota enable /mnt2/genepool. rc = 1. stdout = ['']. stderr = ['ERROR: quota command failed: Read-only file system', '']
I believe running: mount -o degraded /dev/sda /mnt2/genepool will rid me of the read only error, but I will wait for further input from those who actually know what they are doing.
That makes some general assumptions about your configuration: that the subvolume is named '/genepool', and that the mountpoint /mnt2/genepool exists.
Either of these assumptions being incorrect could cause the mount to fail as per your first error.
Yes, the RS GUI has a general disagreement with importing read-only shares due to the reliance on quotas. Mounting rw would resolve this; however, as @phillxnet mentioned, you'll only be able to do this mount once.
If all you want is to retrieve the data, I would mount it manually, and retrieve the data manually via the command line (or manually export the share with SMB or NFS to access it remotely).
Otherwise, I would strongly recommend attaching a new disk to the system of equal or greater size, adding to the existing btrfs filesystem, and performing a mount and balance prior to any further major use.
Do you have any suggestions for a command that would successfully mount the drive in such a way that I will be able to retrieve the data with minimal risk of data loss?
As @phillxnet above mentioned, you simply need to mount read-only (which you did).
This will, however, prevent Rockstor's UI from being able to use the drive, but the disk is mounted and accessible.
Upon review on my own NAS, I think the proper mount for your situation is:
mount -o degraded,ro,relatime,space_cache,subvol=/ /dev/sda /mnt2/genepool
This will enable access to the disk locally, allowing you to copy via shell to any other locally mounted filesystem.
If you wish to copy over a network, that will require a little more work, and some detail on what you're planning to copy to.
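If the destination ends up being another Linux box reachable over ssh, one common approach is rsync (a sketch only; the user, address and destination path below are placeholders):
rsync -avP /mnt2/genepool/ backupuser@192.168.1.50:/path/to/backup/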
So if I'm understanding this right: I can format another disk as NTFS, connect it to the Rockstor machine, and copy my files from the original drive to the NTFS drive?
Do you have any suggestions for commands to achieve this? A quick Google search shows that this isn't a very popular topic.
That is one possibility, yes. To get NTFS write-support (last I checked) you'll need to use NTFS-3G.
I'm not convinced that this will be pre-installed, so to install it:
yum install ntfs-3g
Beyond that, the following is conjecture. Make sure you read and understand what you're doing here.
You'll need to identify the disk and partition you've attached (after it's attached, obviously). You should be able to get this with:
fdisk -l | grep -i ntfs | awk '{print $1}'
after attaching the NTFS drive. If the output is blank, Linux has likely not recognized the attached disk.
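As a quick sanity check you can also list all block devices regardless of filesystem type:
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT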
Create a mountpoint and mount the NTFS partition.
mkdir /mnt/windisk && mount -t ntfs-3g /dev/<disk_and_partition> /mnt/windisk
You'll also need to mount your original partition read-only for safety, as shown earlier:
mount -o degraded,ro,relatime,space_cache,subvol=/ /dev/sda /mnt2/genepool
Then it's simply a matter of copying. There are a hundred million ways of copying files between two locations in Linux. I typically suggest a tarpipe (which is quite advanced, but offers good throughput regardless of file sizes).
I would start by creating a backup directory, then tarpipe the complete contents of /mnt2/genepool to it:
mkdir /mnt/windisk/rs_backup
cd /mnt2/genepool
tar cf - . | (cd /mnt/windisk/rs_backup && tar xBf -)
It seems trying to mess with NTFS was a bad idea. After many errors I attempted to reformat the new drive as exFAT, but I am really unfamiliar with the Linux command line (and I didn't plan to spend my day trying to get familiar with it).
Perhaps I will take my chances with mounting the drive as degraded rw, in the hope that I may be able to simply copy the files over the network.
Thanks for all the help you have provided. I apologize for wasting your time.
No problem. Rockstor is still quite young, hence the reliance on the command line for many of the more complex operations.
N.B. Phillip's warning relating to mounting the pool as read-write: you can only perform that action once.
If you would prefer to access the share over the network, and not mount rw for safety, perhaps you could try setting up a temporary samba config with a single exported share.
To do this, we need to create an alternative config file, say /root/smb_alt.conf
The config would be something like:
[global]
workgroup = WORKGROUP
netbios name = broken_nas
security = share
[data]
comment = temporary_share
path = /mnt2/genepool
force user = root
force group = root
read only = Yes
guest ok = Yes
browseable = Yes
Once this content is in place, we can start Samba (the Windows file sharing system) with:
/usr/sbin/smbd -i -s /root/smb_alt.conf
Again, untested because I'm not at home, but some googling should confirm most of this.
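If you want to confirm the share is actually being offered before going to a Windows machine, something like this (also untested, run from a second shell on the Rockstor box) should list it:
smbclient -N -L localhost -s /root/smb_alt.conf
# -N = no password (guest), -L = list shares, -s = use the alternative config file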
I followed your commands and just ran 'tar cf - . | (cd /mnt/windisk/rs_backup && tar xBf -)'
The shell still hasn't returned the prompt, so I assume it is still working. Is there any status notification or shall I just wait a few (~30) minutes?
Most Linux copy commands are pretty garbage at feedback, so don't worry too much about that.
Can you hear disk activity?
For further verification, you can open up a second shell (do not terminate the first one; you can't really recover a partially completed tarpipe copy and would need to start again) and check the contents of /mnt/windisk/rs_backup:
ls /mnt/windisk/rs_backup
Or for a list of files modified in the last minute, try:
find /mnt/windisk/rs_backup -mmin -1
You can change the '-1' to '-n', where n is the number of minutes you want to look into the past.
Regarding how long it'll take, there's never a fixed time, but I can provide guesstimates if you tell me:
How much data do you have on the source disk?
What type of disks are the source and destination? (SSD? 7200 rpm rotating rust?)
How is the data distributed? (Mainly files larger than 50MB, mainly smaller files, or an even distribution of file sizes?)
Does it have complex folder structure, or only a few folders?
It still hasn't completed, but it is showing some slow progress (but progress nonetheless). It's currently copying some movies (approx. 2-5GB each), with a moderately complex folder structure (each movie could be 4-5 folders deep).
The drives are both 2.5" hard drives of unknown speed. Not the fastest, but I was expecting a bit faster than this.
I think the source drive has approx. 450GB of files. The majority (300GB-ish) of that is a Plex folder containing movies and TV shows, all of which are >50MB. The rest are various small files. It appears those haven't been copied yet.
At 70% (to account for overhead) of the estimated max speed for those drives, I figured the transfer would be complete in 1.5-2 hours. I believe I may have grossly overestimated the speed of copying via the CLI. It shouldn't be a problem though. If nothing goes wrong I will just let it run overnight.
EDIT: I ran 'find /mnt/windisk/rs_back -mmin -120' just to see what all might have been copied so far. It appears the transfer is stuck on the first file (in this case a movie in .mkv format).
I believe I may have partitioned the exFAT drive wrong. It seemed to work at the time, so I was unsure. Could this be the cause?
Wow, stuck on a single file, that's quite odd.
I can't imagine a mistake in partitioning that would cause this.
Assuming you mean 2.5" mechanical drives, they are typically quite slow - though not that slow.
Are you sure the source drive is still in good condition?
Can you check for the tar processes?
pgrep tar
Also check the mountpoints
mount | grep "genepool\|windisk"
Post the output of those two.
Also might be worth checking /var/log/messages for btrfs issues and posting that too:
grep btrfs /var/log/messages
It might be prudent to try a traditional Linux copy rather than a tarpipe in this instance, as a traditional copy is more likely to exit with an error if something goes wrong.
I'd like to see the output of the above first; however, if you want to jump straight to the traditional copy, you'll need to terminate the tarpipe copy with Ctrl-C and instead run:
cp -R /mnt2/genepool/* /mnt2/windisk/rs_backup
I'll be around for another 2 hours, then I'll be unavailable for about an hour or so while travelling.
[root@rockstor ~]# mount | grep "genepool\|windisk"
/dev/sdb on /mnt/windisk type ext2 (rw,relatime,block_validity,barrier,user_xattr,acl)
/dev/sda on /mnt2/genepool type btrfs (ro,relatime,degraded,space_cache,subvolid=5,subvol=/)
[root@rockstor ~]# grep btrfs /var/log/messages
Feb 3 12:05:10 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:08:16 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:18:52 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:24:51 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:36:08 Rockstor dracut: *** Including module: btrfs ***
Feb 3 12:57:16 rockstor dracut: *** Including module: btrfs ***
Feb 4 08:59:33 rockstor dracut: *** Including module: btrfs ***
Feb 5 15:12:33 rockstor dracut: *** Including module: btrfs ***
Feb 6 14:21:33 rockstor dracut: *** Including module: btrfs ***
Feb 6 16:29:53 rockstor dracut: *** Including module: btrfs ***
Nothing appears too terribly out of order to me, but I'm not experienced in this field.
After posting this I will attempt a new copy. Depending on the outcome I may go to bed shortly after that. I will keep you posted.
Edit: Ctrl+C returns nothing but a blinking cursor in the shell (no response). Would it be unwise to close out of the shell and open a new one to begin another copy? I am unsure if closing an SSH session will also kill the copy attempt.
Edit 2: spamming Ctrl-C (and once 'q') finally led to a response.
"You have new mail in /var/spool/mail/root
[root@rockstor genepool]#"
Seems odd to me, but I will continue as planned.
Edit 3: 'cp -R /mnt2/genepool/* /mnt2/windisk/rs_backup' returns:
"cp: target '/mnt2/windisk/rs_backup' is not a directory"
Perhaps I should use '/mnt/windisk/rs_backup'?
You're correct, and everything there looks fine, with the notable exception that your 'windisk' mount is NOT exFAT, it's Linux ext2 (aka 'Second Extended').
If you're planning to attach this to a Windows machine later, I would suggest resolving that before continuing.
To create an exFAT partition on Linux, you need exfat-utils, which is not usually available on CentOS.
This will require the CentOS EPEL and nux-dextop package repositories.
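A sketch of the install steps (package names as carried by nux-dextop; check that project's page for the current release RPM used to enable the repo):
yum install -y epel-release
# enable the nux-dextop repo via its release RPM, then:
yum install -y fuse-exfat exfat-utils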
You'll want to unmount the drive and remove the existing filesystem signatures first. Be sure you want to do this:
umount /mnt/windisk
wipefs -a /dev/sdb
Then we create a primary partition (shamelessly pulled from Stack Overflow for a non-interactive method):
(
echo o # Create a new empty DOS partition table
echo n # Add a new partition
echo p # Primary partition
echo 1 # Partition number
echo   # First sector (accept default)
echo   # Last sector (accept default: use the whole disk)
echo w # Write changes
) | fdisk /dev/sdb
Now we need to make the filesystem:
mkfs.exfat /dev/sdb1
You can then mount as previous.
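i.e. something like this, assuming the same disk and mountpoint as before:
mount -t exfat /dev/sdb1 /mnt/windisk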
Be careful with your operations and ask questions; I'm happy to help out as much as possible.