Detailed step by step instructions to reproduce the problem
I tried to create an NFS share in the GUI and the following error occurred.
Since I deleted an HDD from the array because it's broken, I guess the system is searching for the wrong disk. The btrfs array itself is working.
Error Traceback provided on the Web-UI
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 40, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/nfs_exports.py", line 166, in post
    mount_share(s, mnt_pt)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 286, in mount_share
    return run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 104, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = ['/bin/mount', '-t', 'btrfs', '-o', 'subvol=mnt', '/dev/disk/by-id/detached-51323b25dd284839a9ac2fffcfc4782f', '/mnt2/mnt']. rc = 32. stdout = ['']. stderr = ['mount: special device /dev/disk/by-id/detached-51323b25dd284839a9ac2fffcfc4782f does not exist', '']
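For context on where that exception comes from: run_command() in system/osi.py is evidently a thin wrapper that raises CommandException on any non-zero return code. A minimal sketch, reconstructed from the traceback rather than from the actual source:

```python
import subprocess


class CommandException(Exception):
    def __init__(self, cmd, out, err, rc):
        self.cmd, self.out, self.err, self.rc = cmd, out, err, rc
        super(CommandException, self).__init__(
            'Error running a command. cmd = %s. rc = %d. '
            'stdout = %s. stderr = %s' % (cmd, rc, out, err))


def run_command(cmd):
    # Run cmd, split stdout/stderr into line lists, and raise on a
    # non-zero return code, as system/osi.py appears to do.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, universal_newlines=True)
    out, err = p.communicate()
    out, err, rc = out.split('\n'), err.split('\n'), p.returncode
    if rc != 0:
        # /bin/mount exits with rc = 32 on a mount failure, matching the
        # traceback above for the missing detached-... device node.
        raise CommandException(cmd, out, err, rc)
    return out, err, rc
```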
@g6094199 Welcome to the Rockstor community and thanks for the report.
Yes, as you say, it looks like it's trying to mount the subvol via a detached disk, which it finds not to exist (unsurprisingly).
All removed disks are retained within the database along with their custom settings, but have their system names replaced by a “detached-(long-random-number)” style name. If a detached disk is re-attached, its db entry is re-assigned a ‘real’ device name and it re-inherits its custom settings, if any.
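Incidentally, the 32-character hex suffix in your error looks like a uuid4 hex string. Purely as an illustration of the naming scheme (an assumption on my part, not the actual Rockstor code):

```python
import uuid

def detached_name():
    # Placeholder name for a disk that remains in the db but is no
    # longer attached; the 32-char hex suffix matches uuid4().hex.
    return 'detached-' + uuid.uuid4().hex

print(detached_name())  # e.g. detached-51323b25dd284839a9ac2fffcfc4782f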
Device names in recent Rockstor releases are normally by-id; where a by-id name is not available, for example when a device has no serial number, the device's temporal name (the sdX type) is used instead and some restrictions are put in place.
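A rough illustration of that by-id preference, assuming nothing beyond the standard /dev/disk/by-id symlink farm (the helper name here is made up):

```python
import os

def by_id_name(kernel_name):
    """Return a /dev/disk/by-id path that resolves to /dev/<kernel_name>,
    or None when the device exposes no by-id link (e.g. no serial number),
    which is when the temporal sdX name has to be used instead."""
    byid_dir = '/dev/disk/by-id'
    target = os.path.join('/dev', kernel_name)
    for name in sorted(os.listdir(byid_dir)):
        link = os.path.join(byid_dir, name)
        if os.path.realpath(link) == target:
            return link
    return None

print(by_id_name('sda'))  # None when no by-id link exists
```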
Anyway, to your issue: it looks like, as you suspect, the db still has this detached device as a member of the pool. To correct this you can delete the device entry from the Storage Disk page, although it may take a few “Rescan button” then “bin icon” attempts due to the name changing underneath us (a known but relatively harmless bug). Deleting the entry indicates to Rockstor that you understand this ‘detached-’ device is no longer to be considered part of the pool. My understanding of the pool update mechanism is still in ‘development’ I'm afraid, though I'm a little more up on the disk tracking; if you want to double-check pool membership independently of Rockstor's database, see the sketch just below.
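Something along these lines queries the btrfs tooling directly (the pool label is a guess based on the 'subvol=mnt' in your mount command, so adjust to taste):

```python
import subprocess

def pool_devices(label):
    """Parse 'btrfs fi show <label>' for the paths of the pool's current
    member devices; truly missing devices appear as a warning line
    ('*** Some devices missing') rather than a devid entry."""
    out = subprocess.check_output(
        ['btrfs', 'fi', 'show', label], universal_newlines=True)
    return [line.split()[-1] for line in out.splitlines()
            if line.strip().startswith('devid')]

print(pool_devices('mnt'))  # label 'mnt' is an assumption
```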
Hope that helps.
I have opened the following issue so that this inelegant behaviour might be addressed:
As you suggested I tried to delete the disk, but then another error popped up:
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 301, in delete
    disk = Disk.objects.get(name=dname)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/manager.py", line 151, in get
    return self.get_queryset().get(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/query.py", line 310, in get
    self.model._meta.object_name)
DoesNotExist: Disk matching query does not exist.
The disk name in the GUI is “detached-f57c87a2b7a0478fa12ac705067f146a”.
@g6094199 No, this is the known but relatively harmless bug I referred to earlier. Please see my earlier advice on this one:
[quote="phillxnet, post:2, topic:1928"]
Although it may take a few “Rescan button” then “bin icon” attempts due to the name changing underneath us
[/quote]
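To illustrate why the delete can fail: the DoesNotExist in your traceback comes from the name lookup in disk.py's delete(), and a rescan can regenerate the 'detached-...' name between the page render and the delete request, leaving the UI holding a stale name. A hedged sketch of a more forgiving lookup (the helper name and model import path are assumptions, not the actual Rockstor fix):

```python
from django.http import Http404

from storageadmin.models import Disk  # import path is an assumption

def get_disk_for_delete(dname):
    # The name shown in the Web-UI can go stale between render and
    # delete, which is exactly when Disk.objects.get() raises
    # DoesNotExist, as in the traceback above.
    try:
        return Disk.objects.get(name=dname)
    except Disk.DoesNotExist:
        raise Http404('Disk %s no longer exists; rescan and retry '
                      'the delete.' % dname)
```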
@g6094199 Great, though it shouldn't have been necessary to reboot for the disk removal; at least you are up and running again. It is rather a silly bug, a bit of an emergent property of an earlier deep change, and we need to get to it to avoid this frustrating side effect. Thanks for bringing the prior issue to light; I have added a note on the relevant issues to update this thread when significant progress is made.
Just a notification that, as of testing channel updates release 3.9.0-14, your reported issue of a subvol mount attempt using a detached disk should now be sorted, and the associated issue opened as a result of your report has been closed.