[RESOLVED] Failed to configure drive role, or wipe existing filesystem, or do LUKS format on device id (1).

Brief description of the problem

I can’t wipe disks.

Detailed step by step instructions to reproduce the problem

I initially used RAID 5 despite the warning, but after some experimentation and more reading I've now decided to go with RAID10.
I removed the installed Rock-On, deleted all the SFTP and Samba shares I had created, then deleted the Shares and the Pool.
When I tried to create a new pool there were no disks available, so I selected a disk and chose to wipe it (tick box).

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 921, in _role_disk
    return self._wipe(disk.id, request)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 616, in _wipe
    wipe_disk(disk_name)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 963, in wipe_disk
    return run_command([WIPEFS, "-a", disk_byid_withpath])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 176, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/sbin/wipefs -a /dev/disk/by-id/ata-ST2000DM008-2FR102_ZK301ZWC. rc = 1. stdout = ['']. stderr = ['wipefs: error: /dev/disk/by-id/ata-ST2000DM008-2FR102_ZK301ZWC: probing initialization failed: Device or resource busy', '']
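For what it's worth, reading the traceback, the failing path boils down to roughly the following. This is a simplified sketch based only on the names shown above (WIPEFS, run_command, CommandException), not the actual Rockstor source; the point is just that wipefs exits non-zero while the kernel still holds the device, and that non-zero return code is what gets raised.

import subprocess

WIPEFS = "/usr/sbin/wipefs"

class CommandException(Exception):
    def __init__(self, cmd, out, err, rc):
        msg = "Error running a command. cmd = %s. rc = %d. stdout = %s. stderr = %s" % (
            " ".join(cmd), rc, out, err)
        super(CommandException, self).__init__(msg)

def run_command(cmd):
    # Run the command, capture stdout/stderr, and raise on a non-zero return code.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True)
    out, err = p.communicate()
    if p.returncode != 0:
        raise CommandException(cmd, out.split("\n"), err.split("\n"), p.returncode)
    return out.split("\n"), err.split("\n"), p.returncode

def wipe_disk(disk_byid_withpath):
    # wipefs -a fails with "probing initialization failed: Device or resource busy"
    # when the kernel still considers the device part of a (deleted) btrfs pool.
    return run_command([WIPEFS, "-a", disk_byid_withpath])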

No problem. I rebooted Rockstor (something to do) and when it restarted I was able to wipe the drives.

@jmangan Welcome to the Rockstor community.

Yes, this can happen. Btrfs can often fail to 'drop' a drive from its 'busy' list even after the pool has been deleted. It's usually fine when removing a single drive from a pool via resize, but when dropping an entire pool a reboot is often required. There is ongoing kernel work to improve this kind of device management, which should in time make operations like yours behave as expected rather than throwing these resource-busy errors.
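If you want to check before rebooting, something along these lines will tell you whether the kernel still has the device registered to a btrfs filesystem. This is my own rough sketch, not part of Rockstor; it just resolves the by-id link and looks for it in the output of `btrfs filesystem show`, which lists the devices of every filesystem the kernel still knows about.

import os
import subprocess

def btrfs_still_claims(dev_byid):
    # Resolve the by-id symlink (e.g. to /dev/sdb) and look for it in the
    # device list reported by `btrfs filesystem show`.
    real_dev = os.path.realpath(dev_byid)
    out = subprocess.check_output(["btrfs", "filesystem", "show"],
                                  universal_newlines=True)
    return real_dev in out

if __name__ == "__main__":
    dev = "/dev/disk/by-id/ata-ST2000DM008-2FR102_ZK301ZWC"
    if btrfs_still_claims(dev):
        print("%s is still registered with btrfs; expect wipefs to report "
              "'Device or resource busy' until the kernel releases it "
              "(typically after a reboot)." % dev)
    else:
        print("%s looks free to wipe." % dev)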

All in good time, and thanks for reporting your findings. Glad you got it sorted. And yes, raid1/10 within btrfs is far more mature, and faster, than the parity raids of 5/6. It's a little like two file systems in one really, as the parity raids don't fully conform to the main remits of btrfs and probably should not have been added. But it's often easier to fix something that exists than to start over, so we will have to see how things pan out for the parity raids. There is some work going on in that direction, but most development seems to be on raid1/10, and there will soon be the btrfs Raid1c2 and Raid1c3 variants, which may be interesting.

Plus we hope, in time, to support picking the data and metadata raid levels independently. That should help the parity raids' feasibility a little, as one could then use, say, Raid1c3 for metadata and raid6 for data. Again, all in good time; we are not quite there just yet. Our next move in this area would be to surface within the Web-UI which raid levels the data and metadata are actually stored in. We can then use that to prove out expanding our capabilities re user-configurable data/metadata raid levels down the road. A sketch of what that split looks like at the command line follows below.
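To make the data/metadata split concrete: mkfs.btrfs already takes separate -d (data) and -m (metadata) profiles, and raid1c3 needs a reasonably recent kernel and btrfs-progs. The device names below are placeholders, and nothing here is exposed by the Web-UI yet; it is only a hedged illustration of what such an option might drive underneath.

import subprocess

# Hypothetical device list; substitute your own. The command is destructive,
# so it is only printed here rather than executed.
devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# -d sets the data profile, -m the metadata profile, independently of each other.
cmd = ["mkfs.btrfs", "-d", "raid6", "-m", "raid1c3"] + devices
print(" ".join(cmd))
# subprocess.check_call(cmd)  # uncomment only if you really mean it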

Hope that helps.

Thanks for the comprehensive response.

I haven't looked into Raid1c2 and Raid1c3 yet. I remember when there were 5 RAID levels and that was enough for anybody, damn it! :wink:

I'm also used to hardware-based RAID, but I can see now that doing it in software has its own pros and cons, so I have to adapt and learn.

Also, thanks for a great product which, ironically, I came to because it was based on CentOS (which I have a passing familiarity with), right around the time it is moving to SUSE. Not that that will deter me now.
