New (old) disk is "unusable" and I cannot wipe it

I have a RAID 1 consisting of two disks. I quickly needed a spare drive, so I borrowed one from the pool (the data on the pool is not extremely vital). I no longer need the drive, so I attempted to return it to the array. Now Rockstor shows “Disk is unusable as it contains partitions and no user assigned role”. When I attempt to wipe the drive, I get “Selected device is part of a Rockstor managed pool”. I cannot remove the drive from the pool because RAID 1 requires at least two drives. I have spent roughly an hour googling for answers, but I figured I would get a more accurate answer here in far less time.

Rockstor is mainly a side project and I have no idea what to do. Is there some other option I am missing? I would prefer not to have to nuke the array and rebuild, but I suppose I will do it if I have no other choice. Thanks in advance for any help you can provide.

@UC_Nightmare,

I’m not a Rockstor developer, just a *nix user, but hopefully the below will make sense.

The problem is most likely that the Rockstor DB still recognizes your disk (by its serial number) as being attached to the pool, which is confusing the situation.
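If you want to confirm what Rockstor is keying on, you can list the serial numbers the kernel reports for each drive (just a read-only sanity check, nothing Rockstor-specific):

lsblk -o NAME,MODEL,SERIAL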

It should be noted that you are moving into unknown and unsupported territory. RAID is designed to recover from disk failure, not the intentional and arbitrary removal of a functioning disk for another purpose.
As a result, this particular edge case was likely never considered, and thus not accounted for in the design of Rockstor’s disk management framework.

To get around this, I think you’ll need to manually delete and re-add the disk from the CLI.
As this is a RAID 1, and you’re not supposed to delete an active mirror, you’ll likely need to:

  1. Physically remove the drive again first
  2. Mount (or remount) degraded
  3. Remove the disk from the RAID
  4. Physically re-add the disk
  5. Add the disk to the existing RAID.
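Before mounting degraded, it may also be worth confirming how btrfs currently sees the pool and which device it considers missing (again, read-only):

btrfs fi show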

The commands below (untested, taken from the BTRFS wiki) should match parts 2, 3 and 5 above.

# Step 2: mount (or remount) the pool degraded
mount -o remount,degraded /mnt2/<pool_name>
# Step 3: remove the now-missing device from the RAID
btrfs device delete missing /mnt2/<pool_name>
# Step 5: add the disk back to the pool, then rebalance
btrfs device add /dev/<disk_device> /mnt2/<pool_name>
btrfs balance start /mnt2/<pool_name>
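One extra check I'd suggest afterwards (my own addition, not from the wiki): while the pool was degraded, some chunks may have been written with the single profile rather than raid1, in which case a plain balance won't restore full redundancy. You can check the per-profile allocation and, if needed, run a convert balance:

btrfs fi usage /mnt2/<pool_name>
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/<pool_name>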

Hope this helps.