Error when I try to configure drive Role / Wipe existing Filesystem

Detailed step by step instructions to reproduce the problem

I get the attached error when I try to configure drive Role / Wipe existing Filesystem.
Version - ROCKSTOR 4.6.1-0

Web-UI screenshot

Error Traceback provided on the Web-UI

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/", line 1006, in _role_disk
        return self._wipe(, request)
      File "/opt/rockstor/.venv/lib/python2.7/site-packages/django/utils/", line 185, in inner
        return func(*args, **kwargs)
      File "/opt/rockstor/src/rockstor/storageadmin/views/", line 680, in _wipe
        wipe_disk(disk_name)
      File "/opt/rockstor/src/rockstor/system/", line 1024, in wipe_disk
        return run_command([WIPEFS, "-a", disk_byid_withpath])
      File "/opt/rockstor/src/rockstor/system/", line 246, in run_command
        raise CommandException(cmd, out, err, rc)
    CommandException: Error running a command. cmd = /usr/sbin/wipefs -a /dev/disk/by-id/ata-HGST_HUH721010ALE604_7JJS4E8G. rc = 1. stdout = ['']. stderr = ['wipefs: error: /dev/disk/by-id/ata-HGST_HUH721010ALE604_7JJS4E8G: probing initialization failed: Device or resource busy', '']

What was on that disk before? Rockstor may think a partition on that drive is mounted or otherwise in use, which can sometimes happen depending on what the disk was previously formatted with.
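A quick sketch of how you could check from the command line what is holding the device open. The by-id path is the one from your traceback; adjust it to each affected drive:

```shell
# Sketch: find out why the kernel reports the disk as busy.
# The by-id path is the one from the traceback above -- adjust to your drive.
DISK=/dev/disk/by-id/ata-HGST_HUH721010ALE604_7JJS4E8G
DEV=$(readlink -f "$DISK")     # resolve to the kernel name, e.g. /dev/sdX

lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT "$DEV"   # mounted partitions / signatures
grep "$DEV" /proc/mounts || echo "not mounted"
ls "/sys/block/$(basename "$DEV")/holders"     # md / device-mapper layers holding it
```

If the `holders` directory is non-empty, some other kernel subsystem (mdraid, device-mapper, etc.) has claimed the disk, which would explain the "Device or resource busy" error.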

There is a small section in the documentation about this:

Aside from the solutions described there, you could also try a forced wipe (-f flag) from the command line, like this:

/usr/sbin/wipefs -af /dev/disk/by-id/ata-HGST_HUH721010ALE604_7JJS4E8G
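If you want to see what wipefs does before pointing it at the real drive: it also accepts a regular file, so you can rehearse on a throwaway image (a sketch; the /tmp path is arbitrary):

```shell
# Safe rehearsal of wipefs on a throwaway image file -- no real disk touched.
truncate -s 10M /tmp/wipefs-demo.img   # create a sparse 10 MiB file
mkswap /tmp/wipefs-demo.img            # write a swap signature to it
wipefs /tmp/wipefs-demo.img            # lists the swap magic string
wipefs -af /tmp/wipefs-demo.img        # -a = all signatures, -f = force
wipefs /tmp/wipefs-demo.img            # prints nothing: signatures are gone
```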

In case it matters to you, from the man page:

   wipefs can erase filesystem, raid or partition-table signatures
   (magic strings) from the specified device to make the signatures
   invisible for libblkid. wipefs does not erase the filesystem
   itself nor any other data from the device.

If you want to ensure the data on the disk is absolutely, positively destroyed then you might want to go with the wiping process described in the Rockstor help section above.

  1. What was on the disk before
    Chia farm data store through Truenas

  2. I ran the command on all 4 drives with the same issue and have attached the screenshots. But when I try to configure the drive Role / Wipe existing Filesystem, I still get the same error.
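Since the drives held a Chia store on TrueNAS, they were almost certainly ZFS pool members, and ZFS keeps labels at both the start and the end of a device; the tail labels can survive a wipe of the front. A hedged sketch of two ways to clear them (note that ZFS userland is not shipped with Rockstor, so option 1 would need a live system that has it; option 2 is destructive, so triple-check `$DISK` first):

```shell
# Sketch: clearing leftover ZFS member labels from an ex-TrueNAS disk.
DISK=/dev/disk/by-id/ata-HGST_HUH721010ALE604_7JJS4E8G

# Option 1: if ZFS userland is available (e.g. from a live USB):
zpool labelclear -f "$DISK"

# Option 2: zero the first and last few MiB with dd, where the ZFS
# labels live (destructive -- verify $DISK before running):
SIZE_MIB=$(( $(blockdev --getsize64 "$DISK") / 1024 / 1024 ))
dd if=/dev/zero of="$DISK" bs=1M count=4 conv=fsync
dd if=/dev/zero of="$DISK" bs=1M seek=$((SIZE_MIB - 4)) count=4 conv=fsync
```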


Also when I do a disk Rescan, I get the below error.

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/rest_framework_custom/", line 41, in _handle_exception
      File "/opt/rockstor/src/rockstor/storageadmin/views/", line 522, in post
        return self._update_disk_state()
      File "/opt/rockstor/.venv/lib/python2.7/site-packages/django/utils/", line 185, in inner
        return func(*args, **kwargs)
      File "/opt/rockstor/src/rockstor/storageadmin/views/", line 376, in _update_disk_state
        p_info = dev_pool_info[dev_name]
    KeyError: '/dev/sdcd'
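The KeyError suggests the rescan found a device name (/dev/sdcd) with no matching entry in the pool information it gathered, likely more leftover-signature fallout. A few hedged checks that might narrow it down (the device name is the one from the traceback):

```shell
# Sketch: diagnose the device the rescan stumbled over.
ls -l /dev/disk/by-id/ | grep -w sdcd   # which by-id names map to sdcd?
blkid /dev/sdcd                         # any leftover signature blkid still sees?
btrfs filesystem show                   # devices btrfs itself claims (run as root)
```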

I think I have seen somewhere that for a drive previously used by TrueNAS you might have to actively wipe it at a lower level, using something like ShredOS or Darik's Boot and Nuke,
or use dd (which can be dangerous if you don't pick the right drive to wipe).
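Since dd came up: here is a sketch of a full zero-pass, rehearsed on a throwaway image file so nothing real is at risk. To use it on the actual drive you would replace the image path with the /dev/disk/by-id/... path, after triple-checking it is the right device:

```shell
# Rehearsal of a dd zero-wipe on an image file (no real disk touched).
TARGET=/tmp/dd-wipe-demo.img
truncate -s 8M "$TARGET"          # stand-in for the drive
mkswap "$TARGET" >/dev/null       # give it a signature to destroy

# The actual wipe: every byte overwritten with zeros.
dd if=/dev/zero of="$TARGET" bs=1M count=8 conv=fsync status=none

wipefs "$TARGET"                  # prints nothing: no signatures survive
```

Unlike `wipefs -a`, which only blanks the known magic strings, this overwrites every byte, so it also catches signatures hiding at the end of the device.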
