Migrated disks to new system, POOL appears, but share doesn't

Hello.

I previously had Rockstor 4.0.8-0 running on an Arm64 (aarch64) based system. I got help a while back for some small issues I observed with the disk activity plot not working (here).

Over the past few days, I’ve had to move to a newer Arm64 (aarch64) based system. There too I have openSUSE 15.3 with Rockstor 4.0.8-0 installed. I have migrated the two disks (set up as RAID1) from the old system to the new one.

The disks immediately showed up under the POOLS and I was able to select the option to import the configuration from the disks. I see that out of 3.64TB, 678.56 GB is in use. See the image below.

There was a single share configured on the RAID that is taking up the 678.56 GB. The share does not appear, though, after importing the configuration. Under Storage → Shares, I only see the home share, which is on the boot device.

And when I try to create a new share, I see that it only allows me to use the remaining empty space, and it shows in red: “(678.6 GB) Space that is provisioned for other shares and is in use”.

How can I reinstate/re-create the share to get access back to that 678 GB of data? Any pointers on how to rectify this would be appreciated.

Some other output which may be helpful.

rockstor:~ # btrfs fi show
Label: 'ROOT'  uuid: 89169da9-8d69-abcd-a34d-c0d2ebff97b9
        Total devices 1 FS bytes used 29.52GiB
        devid    1 size 39.49GiB used 31.54GiB path /dev/vda3

Label: 'RAKTAR'  uuid: 96b37a40-11c0-abcd-8762-8abcffeb10b0
        Total devices 2 FS bytes used 677.42GiB
        devid    1 size 3.64TiB used 680.03GiB path /dev/vdb
        devid    2 size 3.64TiB used 680.03GiB path /dev/vdc

Thanks!

@gsamu Hello again.

Re:

It very much looks like you have yet to import your data pool named “RAKTAR”. The “ROOT” pool is the system pool with its accompanying “home” subvolume. Once you import your data pool, via the Disks overview page, you will also have its associated shares (btrfs subvols). Take a look at the following doc entry for importing pools:

“Import BTRFS Pool”: Disks — Rockstor documentation

You can import a pool, with all its members, from any one of its prior members.

However, I think I’m potentially misreading your situation. Is the picture of the pools the current situation, post pool import? In which case you are saying there is a failure to pick up a prior subvolume.

It is expected that all prior shares of a pool are imported along with their pool, and we have no recent or even older reports of a failure in this respect. Do NOT try to create a share of the same name, as this would be bad. Let’s first find out what’s going wrong or what is misplaced. Execute the following as the ‘root’ user to enable more detailed logs, which should provide more information.

/opt/rockstor/bin/debug-mode on

Then take a look at:

/opt/rockstor/var/log/rockstor.log

for any clues as you refresh, say, the Shares page. The log should indicate what it is trying to pick up from the pool, and on each page refresh the system tries to update its state with that of the pool.
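A simple way to watch that log live while you refresh the Web-UI (plain shell usage, nothing Rockstor-specific) is:

tail -f /opt/rockstor/var/log/rockstor.log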

The following command:

btrfs subvol list /mnt2/RAKTAR

should give a list of the subvolumes.
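For illustration only (with a hypothetical share name of “myshare”), a pool with one share would normally list something like:

ID 257 gen 12345 top level 5 path myshare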

Was this subvol by chance named “.beeshome”? Just a thought. As of 4.0.6-0:

via:

we ‘excluded’ that subvolume.

Hope that helps. Let’s get the answers to these questions and the command outputs, and hopefully folks here on the forum can get this mysteriously missing subvol (Share in Rockstor speak) sorted.


Hello and thanks for the tips.

The screenshots I provided were post-POOL-import, so the share was not picked up when I did the POOL import.

I’ve enabled debug and tried to import the POOL once again. This is what I see in the logfile. Of note are the errors:

[09/Sep/2021 08:43:00] DEBUG [system.osi:469] --- Inheriting base_root_disk info ---
[09/Sep/2021 08:43:02] ERROR [storageadmin.views.command:137] Exception while refreshing state for Pool(RAKTAR). Moving on: Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/RAKTAR /mnt2/RAKTAR -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/RAKTAR: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.', '']
[09/Sep/2021 08:43:02] ERROR [storageadmin.views.command:139] Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/RAKTAR /mnt2/RAKTAR -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/RAKTAR: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.', '']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 128, in _refresh_pool_state
    mount_root(p)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 551, in mount_root
    run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 201, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/RAKTAR /mnt2/RAKTAR -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/RAKTAR: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.', '']
[09/Sep/2021 08:43:02] ERROR [storageadmin.views.pool:780] Exception while updating disk state: ('PoolDetailView' object has no attribute '_update_disk_state').
[09/Sep/2021 08:43:02] DEBUG [fs.btrfs:786] Skipping excluded subvol: name=(@).
[09/Sep/2021 0

The subvol name was not .beeshome. This is the output from btrfs subvol:

filer:/opt/rockstor/var/log # cd /
filer:/ # btrfs subvol list /mnt2/RAKTAR
filer:/ # mount |grep RAKTAR
/dev/sda on /mnt2/RAKTAR type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
filer:/ # ls /mnt2/RAKTAR
filer:/ #

@gsamu OK, so it looks like Rockstor has begun the pool import but is failing the mount by label:

Mounting by label can sometimes fail and is a known issue with multi-device pools. We are meant to fail over to trying a mount by each member, but it seems the initial import may occasionally not do this.

See: “Filesystem can’t be mounted by label”: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ

Then you need to ensure that you run a btrfs device scan first:

btrfs device scan

Worth a try.
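For reference, a rough manual sequence would be (a sketch only; device names are taken from your earlier btrfs fi show output, and adjust the first step depending on whether a stale mount is already in place):

umount /mnt2/RAKTAR
btrfs device scan
mount /dev/disk/by-label/RAKTAR /mnt2/RAKTAR
btrfs subvol list /mnt2/RAKTAR

and if the by-label mount still fails, try one of the member devices directly, e.g. mount /dev/vdb /mnt2/RAKTAR.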

We need to mount the pool in order to see/view its subvols - hence the missing subvol/share.

OK, it was a long shot. But the log entry has led us further.

We see no output from “btrfs subvol list /mnt2/RAKTAR” as the pool failed to mount, which is our first problem.

From your prior output:

Label: 'RAKTAR'  uuid: 96b37a40-11c0-abcd-8762-8abcffeb10b0
        Total devices 2 FS bytes used 677.42GiB
        devid    1 size 3.64TiB used 680.03GiB path /dev/vdb
        devid    2 size 3.64TiB used 680.03GiB path /dev/vdc

It is confusing to now see mention of /dev/sda and /dev/sdb in your post. Can we have the current output of:

btrfs fi show

Thanks. vda/vdb are virtio devices (qemu); sda/sdb are SCSI (generic), more commonly associated with ‘real’ devices.

Also the output of the following just in case:

ls -la /dev/disk/by-id/

and to confirm the label pointer (sym-link)

ls -la /dev/disk/by-label/

It would also be useful to see the entire mount arrangement currently:

cat /proc/mounts
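For comparison, in a healthy case one would expect something along these lines (purely illustrative values; your device names, dates and sizes will differ):

ls -la /dev/disk/by-label/
lrwxrwxrwx 1 root root 9 Sep  9 08:40 RAKTAR -> ../../vdb
grep RAKTAR /proc/mounts
/dev/vdb /mnt2/RAKTAR btrfs rw,relatime,space_cache,subvolid=5,subvol=/ 0 0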

This info will hopefully help forum members chip in with what may have happened here.

Let’s see the output of these commands so we have a better overview and can explain the vda/sda changes; then hopefully the next stage will be clearer. Currently the pool is failing to mount properly, or there is an erroneous mount already in place. Clearing this up should help.

You may find that a reboot will have the mount ‘just work’; it may also help to clear up any previously failed or in-progress mounts.

Hope that helps.


OK, I decided to just trash what I had on the disks, as it was something I had a duplicate of on another system. So let’s consider this closed. The confusion around sda vs. vda was because I put the disks back in the original system to see if I could recover them. Rather than spend more time debugging, I just wiped everything.

I would like to know, though: is using virtio devices (qemu) risky or not?

@gsamu OK, at least you are up and running again.
Re:

Fine, if you ensure they have a serial. qemu SATA emulation ensures a serial is assigned. But note that Rockstor tracks devices by their serial, so in passing real drives through as virtio you may end up with different serials; it is as if the entire pool had been ‘ghosted’ by a look-alike pool member set with all-different serials. I’m pretty sure v4 (and later stable v3) can cope with this, however, as we have recently had improvements in handling such arrangements.
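For example (a sketch only, with a made-up image path and serial, not your exact setup), a serial can be set explicitly on a virtio disk on the qemu command line:

-drive file=/path/to/raktar1.img,if=none,id=drive-raktar1
-device virtio-blk-pci,drive=drive-raktar1,serial=RAKTAR-DISK-1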

Essentially, as long as the device ‘looks’ the same, all should be good. The vast majority of Rockstor development is done on virtio and SATA emulation within qemu. See the following technical wiki entry for our device management:

“Device management in Rockstor”: Device management in Rockstor
Subtitled “Rockstor’s Serial Obsession”

We follow udev and its requirements for the generation of by-id names.
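To check what serial (if any) the guest actually sees for a given device, standard tooling can be used, e.g.:

lsblk -o NAME,SERIAL,MODEL,SIZE
udevadm info --query=property --name=/dev/vdb | grep -E 'ID_SERIAL|ID_MODEL'

A virtio disk with no serial set will typically have no corresponding /dev/disk/by-id/ entry, which is why ensuring a serial is assigned matters here.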

Hope that helps.