[Please complete the below template with details of the problem reported on your Web-UI. Be as detailed as possible. Community members, including developers, shall try and help. Thanks for your time in reporting this issue! We recommend purchasing commercial support for expedited support directly from the developers.]
Brief description of the problem
Pools not mounted after restoring Rockstor virtual machine
Detailed step by step instructions to reproduce the problem
I have moved the Rockstor virtual machine to a new host, and imported and passed through the storage disks. The system sees the disks, but cannot import the pools on them.
Web-UI screenshot
[Drag and drop the image here]
Error Traceback provided on the Web-UI
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
yield
File "/opt/rockstor/src/rockstor/storageadmin/views/pool_scrub.py", line 49, in get_queryset
self._scrub_status(pool)
File "/opt/rockstor/.venv/lib/python2.7/site-packages/django/utils/decorators.py", line 185, in inner
return func(*args, **kwargs)
File "/opt/rockstor/src/rockstor/storageadmin/views/pool_scrub.py", line 59, in _scrub_status
cur_status = scrub_status(pool, btrfsprogs_legacy())
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 1936, in scrub_status
mnt_pt = mount_root(pool)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 795, in mount_root
"Command used {}".format(pool.name, mnt_cmd)
Exception: Failed to mount Pool(tank) due to an unknown reason. Command used ['/usr/bin/mount', '/dev/disk/by-label/tank', '/mnt2/tank']
While I don’t have an answer for you at this time, could you give a bit more detail on your setup, such as the version of Rockstor you’re running? What are you using for virtualization (VMware, VirtualBox, etc.)?
I assume you only moved the VM to a new host, and the disks were taken from your old setup/host combination, meaning the tank pool was originally created using the VM before you moved it?
The detached prefix usually indicates that the system does not see the disk, even though the disk’s attributes are still in the Rockstor database. I assume nothing changes when you hit the rescan button on the Disks page?
Since I don’t really know Proxmox operations/setup: on the new host, do you have to “introduce” the disks again explicitly to the VM so it can see them? And could it be that, when that’s done, the device UUIDs/IDs change?
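One way to narrow this down from a shell inside the VM is to check whether the kernel even exposes the pool’s filesystem label, and then retry the exact mount command from the traceback by hand. A minimal Python 3 sketch (the label path and mount point are copied from the traceback above; run as root, and only as an illustration of the check, not Rockstor’s own code):

```python
import os
import subprocess

LABEL_PATH = "/dev/disk/by-label/tank"  # device path from the traceback


def label_visible(path):
    """True if the kernel exposes a symlink for the filesystem label."""
    return os.path.exists(path)


if __name__ == "__main__":
    if not label_visible(LABEL_PATH):
        # The passed-through disks are probably not reaching the VM at all;
        # 'btrfs filesystem show' should list them if they were visible.
        print("'tank' label not visible to the kernel")
    else:
        # Retry the exact mount command from the traceback by hand.
        res = subprocess.run(
            ["/usr/bin/mount", LABEL_PATH, "/mnt2/tank"],
            capture_output=True,
            text=True,
        )
        print(res.returncode, res.stderr.strip())
```

If the label symlink is missing, the problem is upstream of Rockstor (the hypervisor isn’t presenting the disks); if the manual mount fails, its stderr should say why, which the Web-UI exception above does not.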
After hitting “rescan”, the names of the drives change, but they are still detached.
Proxmox, at least on the hypervisor side, sees the disks. Shall I try to re-pass them through by their ID names and start the VM, in the hope that Rockstor will then import them automatically? I hope that’s the idea, at least?
This is always good information to know about, since quite a few forum members (myself included, for testing) use VMs for either their production setup or for trying out Rockstor, so these kinds of details pertaining to the virtualization solution (in this case Proxmox) can likely be helpful for others down the line.
I think this change in IDs is related to how we represent detached disks. Looking at @phillxnet’s wikified entry for device management:
where a fake UUID is generated in conjunction with the detached status, hence a rescan of these types of devices will result in changed names, but the status will remain the same.
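The mechanism described above can be sketched roughly like this (an illustrative assumption only, not Rockstor’s actual code; the exact prefix and format of the placeholder may differ):

```python
import uuid


def placeholder_serial():
    # Hypothetical sketch: a disk the system can no longer see is kept in
    # the database under a freshly generated placeholder identifier.
    return "detached-" + str(uuid.uuid4())


# Each rescan would generate a new random placeholder, so the displayed
# device "name" changes, while the detached status itself persists.
first = placeholder_serial()
second = placeholder_serial()
print(first)
print(second)
```

This would explain the behaviour reported above: rescanning changes the drive names (new random placeholders) without changing the underlying detached state.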