Failed to configure drive role, or wipe existing filesystem, or do LUKS format on device id (2). Error: (Error running a command. cmd

Hello,

I am trying to move some drives from an old pool to another one, but after deleting the old pool, if I try to wipe the drives so as to add them to the new pool, I get the following error:
“Device or resource busy”
I have checked whether any of the drives is mounted somewhere else, but none are. I don’t see what process could be using those drives either.
In the “Disks” tab, Rockstor still sees the drives with a BTRFS partition on them and proposes to import it.
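For reference, the kind of checks I mean would look something like this (a sketch only, using the by-id path that appears in the traceback below; adjust for your own drive):

lsblk -f /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV   # any partitions mounted?
btrfs filesystem show                                               # is the old filesystem still registered?
fuser -v /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV   # any process holding the block device open?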

Brief description of the problem

I want to move the drives from one pool to another, but they appear as “busy”.

Detailed step by step instructions to reproduce the problem

  1. Deleted old pool.
  2. Removed all scheduled tasks related to that pool.
  3. Tried to wipe the drives so I can add them to another pool; this is not working (see the command sketch just after this list).
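The Web-UI wipe boils down to a wipefs -a on the whole device, as the traceback below confirms. A minimal sketch of the same attempt from a shell, assuming the by-id path from that traceback (note: -a is destructive and removes all filesystem signatures), would be:

/usr/sbin/wipefs -a /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV

This fails with the same “Device or resource busy” error shown in the traceback below.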

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 921, in _role_disk
    return self._wipe(disk.id, request)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 616, in _wipe
    wipe_disk(disk_name)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 865, in wipe_disk
    return run_command([WIPEFS, '-a', disk_byid_withpath])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/sbin/wipefs -a /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV. rc = 1. stdout = ['']. stderr = ['wipefs: error: /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV: probing initialization failed: Device or resource busy', '']

Thank you for your support!
George.

@G_Man_be Hello again and thanks for the report.

Btrfs can sometimes be quite reluctant to release a drive, but in this case it may just be a Rockstor process that has not properly released it. Normally, if one removes a drive from a pool, it is released properly to be re-used, but it may be that we have an issue here when a pool is deleted.

Now that you have deleted the pool, try a reboot. It may then work as intended. If so, we need to look into this from our side and see whether the same behaviour still holds with our openSUSE builds.

Let us know if this disk is properly released after a reboot. Now that you have deleted its prior pool, it should no longer have anything to do with Rockstor as such and thus be freed up ‘properly’. I strongly suspect a bug on our part with this one, but let us know if the reboot does it. That will ensure it is properly released from both our side and btrfs’s side.
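If you fancy double-checking from the command line after the reboot, before re-trying the wipe in the Web-UI, something along these lines should now come back without the ‘busy’ error (a sketch only, re-using the by-id path from your traceback):

lsblk -f /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV   # should show nothing mounted
wipefs /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VCA7JV     # read-only probe: just lists any remaining signatures

Note that wipefs without -a only lists signatures, so it is safe to run as a test.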

Hope that helps.

Hi @phillxnet!

Rebooting did solve my issue, thank you very much!
Is there any way I could help you figure out whether it was BTRFS or a Rockstor process?

@G_Man_be Glad you’re now sorted.

You already have, by way of a simple reproducer. As a result, I have opened the following issue:

That way, when anyone is next available, they can reproduce this issue and hopefully track down what’s going wrong.

Thanks for the nice clear report, much appreciated.

One more thing that would help here is confirmation of the exact version of Rockstor that you are running. And given we had a painful bug when moving from testing to stable, could you double-check via the output from:

yum info rockstor
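or, if the yum output proves long, the shorter:

rpm -q rockstor

Either should show the installed package version.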

Thanks for your help in clearly reporting this. I’ve seen the like of it before but hadn’t gotten around to a clear report, so at least now we have this. Cheers.

Sure! Here is the exact version I have installed:

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile

I am glad I could help!
