Error on disk rescan

Brief description of the problem

Running ROCKSTOR 3.9.1-16 Testing Updates
Linux: 4.12.4-1.el7.elrepo.x86_64

Can’t rescan disks to add new drive.

Error running a command. cmd = /usr/bin/ls -l /dev/disk/by-id. rc = 2. stdout = ['']. stderr = ['/usr/bin/ls: cannot access /dev/disk/by-id: No such file or directory', '']

Detailed step by step instructions to reproduce the problem

On my qemu hypervisor I created a small qcow2 disk image that hosts the Rockstor system installation. Some time later, I created a large LVM volume group and passed it to the virtual machine, so it is visible as a separate disk inside the VM.

So the problem is that the script executed when I press the ‘Rescan’ button in the Storage → Disks menu looks only for the /dev/disk/by-id directory, which is not populated in my setup (I guess it’s a udev thing); other device identification methods are still available.
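For anyone hitting the same symptom, the state can be confirmed from a shell with something like the following (illustrative commands, not from the bug report; `/dev/vda` is an example device name):

```shell
# GNU ls exits with status 2 when the directory itself is missing,
# which is the rc = 2 seen in the error above.
ls -l /dev/disk/by-id || echo "no by-id links - devices likely lack serials"

# Other identification methods may still be populated:
ls -l /dev/disk/by-path /dev/disk/by-uuid 2>/dev/null

# Ask udev what identifiers it does know for a given device:
udevadm info --query=property --name=/dev/vda | grep -i -e serial -e '^ID_'
```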

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 383, in post
    return self._update_disk_state()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 86, in _update_disk_state
    byid_name_map = get_byid_name_map()
  File "/opt/rockstor/src/rockstor/system/", line 1617, in get_byid_name_map
    throw=True)
  File "/opt/rockstor/src/rockstor/system/", line 121, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/ls -l /dev/disk/by-id. rc = 2. stdout = ['']. stderr = ['/usr/bin/ls: cannot access /dev/disk/by-id: No such file or directory', '']
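For reference, here is a minimal sketch of the failing by-id lookup. This is not Rockstor’s actual code; the empty-map fallback is a hypothetical illustration of tolerating a missing by-id directory rather than raising on it:

```python
import os


def get_byid_name_map(byid_dir="/dev/disk/by-id"):
    """Map by-id symlink names to canonical device names (e.g. 'vda').

    Hypothetical defensive variant: returns an empty map instead of
    raising when udev has created no by-id directory at all, as happens
    when no attached device has a unique serial number.
    """
    name_map = {}
    if not os.path.isdir(byid_dir):
        # No unique serials -> udev creates no by-id links.
        return name_map
    for link in os.listdir(byid_dir):
        # Resolve the symlink, e.g. virtio-serial-6 -> ../../vda.
        target = os.path.realpath(os.path.join(byid_dir, link))
        name_map[link] = os.path.basename(target)
    return name_map
```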

@geexmmo Welcome to the Rockstor community and thanks for the detailed post.

Agreed: Rockstor absolutely depends upon by-id device names, which udev cannot populate if the given devices have no unique serial number, as would seem to be the case with your setup. The by-id name requirement, and consequently the unique device serial number, is therefore defined as a minimum system requirement; see our:

Minimum system requirements doc entry. However, your qemu layer should be able to ascribe a serial number to your vda (virtio) devices so that your setup meets Rockstor’s requirements.

That is debatable: once a device is completely wiped there is no fs or MBR UUID, so we are left with only bus info to track a device. But if that device is moved from one bus / port to another, there is no means by which it can be uniquely identified. Hence the hard line taken on serial numbers within Rockstor’s design. Please see the following wiki page for further exposition:

Device management in Rockstor

(Subtitle: Rockstor’s Serial Obsession)

One of Rockstor’s goals is to manage devices in a robust manner, and that requires tracking individual devices across various configurations. We use the unique serial for this purpose, and there is likely to be no compromise on this going forward: it gives us a known base upon which we can build, and hopefully lets us take on capabilities that are missing in systems which can only track a device once it has a partition table and/or fs on it. This way we can already provide device-stable custom SMART settings, for example, even if those devices are moved from one bus to another.

So in short, I believe your setup can easily be returned to proper function by using qemu’s capability to assign a serial number manually to all your virtio devices (vda and vdb), which by default don’t have unique serial numbers; unlike sata devices emulated under qemu, such as in your disk page screen grab, which appears not to be using virtio (presumably the prior config).

Let us know how you get on with this device serial number issue. For my own virtio delivered devices, within Rockstor development systems, I simply ascribe a serial via the VMM GUI to qemu via the following setting:


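For those driving libvirt from domain XML rather than the VMM GUI, the GUI setting above corresponds to a `<serial>` element inside the disk definition. The fragment below is illustrative (paths and the serial string are assumptions, chosen to match the by-id output that follows):

```xml
<!-- Illustrative libvirt domain XML fragment: a virtio disk given an
     explicit serial so udev can create a virtio-<serial> by-id link. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/rockstor-data.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <serial>serial-6</serial>
</disk>
```

After adding the serial and restarting the guest, udev should populate /dev/disk/by-id accordingly.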
which results in the following by-id name:

lrwxrwxrwx 1 root root   9 Jul  3 11:03 virtio-serial-6 -> ../../vda

and in the disk page:

I fully acknowledge that the handling here (big red error splash) is inelegant. In your case I think that is because the system drive is now a virtio device, whereas prior to the now-failing disk scan update it looks like it was sata/ata, with the serial QM00002 auto-assigned by qemu under that bus emulation setting. But we have improved this behaviour for the system drive as of issue:

via pull request:

which was released towards the beginning of the current (3.9.2-x) stable channel updates, so it is not included in your current (end of testing channel) 3.9.1-16 version. We have moved, possibly temporarily, to releasing rpm updates only for the stable channel, as a value-add for stable channel subscribers. This means that for the last few months stable has been getting progressively ahead of the current testing channel release, 3.9.1-16.

We may still have an inelegance here. You are supposed to be presented with a nice big message explaining the missing serial, with a link to the minimum system requirements page, as per the wiki indicated earlier. This was tested to work with the system disk as well (after the indicated code change), but your live change on an existing install may have unearthed a further area for improvement.

Hope that helps, and let us know how you get on with ascribing a serial to your new virtio driven devices. You should be fine from there on, hopefully.

Philip, Thank you very much.
I’m shocked to see such a deep and detailed answer to this trivial thing; you don’t normally expect that kind of response on the internet while discussing problems with some free(dom) stuff that runs files over computers :smile:
I was never deep enough into the Linux hardware side to find out that ‘by-id’ is populated only with links to devices that have serials assigned to them, so thank you for that too.

I added following to my qemu lines:
-drive file=/dev/KVM2GroupVM/NAS,format=raw,if=none,id=drive-virtio-disk1,serial=huge-storage-drive,cache=none,aio=native

And of course it is working now:

Now almost ready to replace our old OMV NAS installation.

@geexmmo Thanks for the update and glad you’re up and running again.

Thanks also for posting your qemu config as this is bound to help others that do it this way.

If your testing channel Rockstor experiments pan out, do consider helping to support the project’s development by subscribing to the stable channel updates. And remember to chime in here on the forum with any ideas / bug reports. Note that the stable channel is now the only channel receiving regular rpm updates and is now a fair bit ahead of the last testing channel release (3.9.1-16 testing vs 3.9.2-28 stable). Also note that the stable channel is currently taken directly from the latest master branch code on GitHub on each release, and can be built by hand by following the instructions for developers at:

Contributing to Rockstor - Overview - Developers (subsection)

But be aware that, as this is for development use, it will delete all existing settings / database contents on each fresh build. Still worth noting, as we are proud of being completely open source.

Please see the following GitHub page for some of the fixes since testing channel version 3.9.1-16 which became stable channel version 3.9.2-0:

Unfortunately we have neglected our previous practice of maintaining a forum thread detailing each change as it came out, but more recently we have linked to each closed issue per release on the referenced GitHub page.