How to mount disks from a Rockstor system on another Linux system to recover data

Is there anything special that I need to do in order to be able to read the disks from my old Rockstor system on my Arch Linux laptop? I need to retrieve some of the data from the disks. I have tried connecting them to a VirtualBox install of Rockstor, and while it sees the disk, it will not allow me to import it. I get the error below.

Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 700, in _btrfs_disk_import
mount_root(po)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 252, in mount_root
run_command(mnt_cmd)
File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /bin/mount /dev/disk/by-label/Store_1 /mnt2/Store_1. rc = 32. stdout = ['']. stderr = ['mount: wrong fs type, bad option, bad superblock on /dev/sdb,', ' missing codepage or helper program, or other error', '', ' In some cases useful info is found in syslog - try', ' dmesg | tail or so.', '']

Would it simply be a matter of spinning up the old system again with all of the drives attached and removing them from the existing pool that they are in? Or would this cause data loss?

Thanks in advance for any help I can get

@snakeeyes88 Welcome to the Rockstor community.

No. We currently use elrepo kernel-ml packages for our kernel, with a move to default openSUSE kernels once we have completed our move from our current CentOS distro base to those offered by openSUSE (Leap 15.x and Tumbleweed presently). And our userland btrfs-progs is as per its released version number.
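
If it is of any use, you can confirm what your recovery machine (the Arch laptop in this case) is working with via the standard commands; nothing here is Rockstor specific:

uname -r
btrfs --version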

This error can result from a known sporadic issue with mounting by label on multi-device volumes: i.e. see the btrfs wiki section "Only one disk of a multi-volume filesystem will mount", quoted below for convenience, but it is best to read the original for context:

"… or if one volume of a multi-volume filesystem fails when mounting, but the other succeeds:

mount /dev/sda1 /mnt/fs
mount: wrong fs type, bad option, bad superblock on /dev/sdd2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
mount /dev/sdb1 /mnt/fs

Then you need to ensure that you run a btrfs device scan first:

btrfs device scan
"

I have seen this a number of times myself, i.e. the pool (btrfs volume) will not mount when given one drive, but will mount when given another. Mostly a simple reboot, which often results in the by-label udev link pointing to another drive, will resolve the import difficulty. We do try to fail over from this error but I think we fall short on the import side. So I would say try again after a reboot and you may well find it works just fine. Thereafter our fail-over of trying each and every disk in turn kicks in, and subsequent mounts can work around this sporadic known issue.
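
As a minimal sketch of doing the same by hand on your Arch laptop (assuming btrfs-progs is installed, that the pool label is Store_1 as per your traceback, and that the device names below are examples only):

# let the kernel register all attached btrfs member devices
btrfs device scan
# check that every member of the pool is listed ("Total devices" should match the devices shown)
btrfs filesystem show
# mount read-only by label via any one member
mkdir -p /mnt/recovery
mount -o ro /dev/disk/by-label/Store_1 /mnt/recovery
# if that fails with the same error, try another member device directly
# mount -o ro /dev/sdc /mnt/recovery

Mounting read-only first is simply a precaution while you copy the data off.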

I'm pretty sure we also make more use of 'btrfs dev scan' in later updates than in those included on the current ISO, so it is always best to get to the latest Rockstor code your chosen update channel facilitates as well.

Don't remove the disks from their pool if you want the data on them: when you remove a disk from a pool, its contents are moved over to the remaining pool members (if sufficient devices remain), leaving the removed disk blank as it no longer forms part of that pool.
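
For reference only, the operation that triggers that data relocation is the btrfs device removal itself, so this is exactly what you do not want to run here (device and mount point are examples):

# relocates this disk's data onto the remaining pool members, then detaches it
btrfs device remove /dev/sdX /mnt2/Store_1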

However you may find that simply re-attaching them to their prior Rockstor instance and booting up again will restore their prior access, assuming you have not removed their prior pool entry from that instance of Rockstor. I.e. if you simply disconnected them they will show as detached in that prior system. And upon their re-attachment the Rockstor UI should pickup where it left off assuming they retain their prior serial numbers (most likely).
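
If you want to double check the serial numbers Rockstor will be matching against, something like the following (run with the drives attached; the columns assume a reasonably recent util-linux lsblk) shows them alongside the filesystem labels:

lsblk -o NAME,SERIAL,LABEL,FSTYPE,SIZE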

There is no 'export' concept akin to ZFS within btrfs. The pool (btrfs vol) and its associated subvols (shares in Rockstor speak) are simply unmounted / remounted.

Hope that helps. Just remember that all drives must be attached simultaneously, as otherwise you run the risk of a 'split brain' scenario where the 'age' of each disk differs, which will definitely complicate things.
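
Should you ever suspect the members have diverged, comparing their superblock generation numbers is one way to check; matching generations across all members is what you want to see (device names are again examples):

btrfs inspect-internal dump-super /dev/sdb | grep generation
btrfs inspect-internal dump-super /dev/sdc | grep generation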

So, in short, this message can result from a known issue upstream. I am unaware if it still exists in newer kernels, but it is bound to be sorted at some point, and it only really affects us at import time, and then only rarely.