Failed pool import on upgrade to SUSE

I just updated from the old CentOS version to the new SUSE version, and the pool import failed. It looks like it’s complaining about RAID56, but I changed from 5 to RAID 1 years ago. Not sure how to proceed. I still have my Rockstor V3 SSD, so I could fall back if the old version will still recognize the pools. If I can mount read-only and then do a conversion to RAID10 in the SUSE version I’m happy to try that, but I think mounting read-only takes that option off the table. Any suggestions greatly appreciated.
Thanks,
Del

Brief description of the problem

Old pools failed to import into Rockstor v4

Detailed step by step instructions to reproduce the problem

Clicked down arrow next to drive to import pools

Web-UI screenshot

##### Houston, we’ve had a problem.
Failed to import any pool on device db id (4). Error: (Error running a command. cmd = /usr/bin/mount -t btrfs -o subvolid=340 /dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1334PCJS2ZBS /mnt2/MyAdminFiles. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/MyAdminFiles: wrong fs type, bad option, bad superblock on /dev/sdf, missing codepage or helper program, or other error.', '']).

Error Traceback provided on the Web-UI

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 858, in _btrfs_disk_import
        import_shares(po, request)
      File "/opt/rockstor/src/rockstor/storageadmin/views/share_helpers.py", line 239, in import_shares
        mount_share(nso, "{}{}".format(settings.MNT_PT, s_in_pool))
      File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 667, in mount_share
        return run_command(mnt_cmd)
      File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
        raise CommandException(cmd, out, err, rc)
    CommandException: Error running a command. cmd = /usr/bin/mount -t btrfs -o subvolid=340 /dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1334PCJS2ZBS /mnt2/MyAdminFiles. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/MyAdminFiles: wrong fs type, bad option, bad superblock on /dev/sdf, missing codepage or helper program, or other error.', '']

I fell back to the old system disk and all the data was there; it started up with the pool and all shares intact. When I checked the RAID setting, the UI said RAID 10.

@D_Jones Hello again,

There can be issues moving from kernels as old as the ones we used in the v3 days, but it should be possible.
Take a look at the following section on importing ‘unwell’ pools. It may help in your case:

https://rockstor.com/docs/interface/storage/disks.html#import-unwell-pool

You may not even need the ro option. The initial mount can be tricky, but it will sometimes succeed via that method. Multi-disk pools can also be sensitive to which drive is used to mount them.
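
If you want to try that by hand first, here is a minimal sketch; /mnt2/MyPool and the pool name are placeholders for your actual pool, and the by-id device is simply the one from your error output:

    # Make a mount point and try a whole-pool (no subvolid) read-only mount
    # via one member device.
    mkdir -p /mnt2/MyPool
    mount -t btrfs -o ro /dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1334PCJS2ZBS /mnt2/MyPool

    # If that fails, try another member of the same pool, e.g.:
    mount -t btrfs -o ro /dev/sdb /mnt2/MyPool

Trying each member in turn simply rules out a problem reading the superblock of the particular device handed to mount.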

Definitely worth tapping away at this, as everything is far newer in v4.

This is normally an indication of an unwell pool. You might want to ensure all is OK with it before you try moving over again, i.e. refresh your backups and run a scrub. Also take a look at this section of our docs regarding ‘stray’ chunks under older raid levels. There is a balance command suggestion there to force all chunks to the desired raid level:
Re-establish redundancy
https://rockstor.com/docs/data_loss.html#re-establish-redundancy
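
Something along these lines is the sort of balance convert that section is getting at; treat this as a sketch only, with /mnt2/MyPool standing in for your pool’s real mount point and raid10 assumed as the target level since that is what your UI reports:

    # Convert any leftover chunks so data and metadata both sit at the target level.
    # The 'soft' filter skips chunks that are already at that level.
    btrfs balance start -dconvert=raid10 -mconvert=raid10,soft /mnt2/MyPool

    # Check progress from another shell.
    btrfs balance status /mnt2/MyPool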

Hope that helps.


@phillxnet Hello again, and thank you for the quick response. I just ran the usage -T command and got the following table.
                  Data       Metadata   System
    Id  Path      RAID10     RAID10     RAID10      Unallocated

     1  /dev/sdb    1.66TiB    2.25GiB    16.00MiB      1.98TiB
     2  /dev/sdc    1.66TiB    2.25GiB    16.00MiB      1.98TiB
     3  /dev/sdd    1.66TiB    1.75GiB    16.00MiB      1.98TiB
     4  /dev/sde    1.66TiB    2.50GiB           -      1.98TiB
     5  /dev/sdf    1.66TiB    1.25GiB    16.00MiB      1.98TiB

    Total           8.30TiB   10.00GiB    64.00MiB      9.88TiB
    Used            8.17TiB    9.04GiB   880.00KiB
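
(That is the table view of the usage command; something like the following, with /mnt2/MyPool standing in for the real mount point:)

    # Per-device allocation in table form, as pasted above.
    btrfs filesystem usage -T /mnt2/MyPool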

I’m guessing disk 4 might be my problem. Looks like a good scrub and then a balance are in order.
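
Roughly what I have in mind once it is mounted again, with /mnt2/MyPool as a stand-in for the real mount point:

    # Scrub first, then poll until it reports finished.
    btrfs scrub start /mnt2/MyPool
    btrfs scrub status /mnt2/MyPool

    # Per-device error counters are also worth a look, given the odd sde row.
    btrfs device stats /mnt2/MyPool

…followed by the convert balance from the doc section linked above.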
