After install to USB, other drives are added to rockstor_rockstor pool

After the RockStor installation to the USB drive completes, at startup two of the drives I intended to use as data drives are automatically added to the rockstor_rockstor pool.

Hardware:

  • HP DL380 Gen6
    • dual quad-core Xeon
    • 48GB memory
    • P410I RAID controller
    • 32GB Samsung USB drive (boot disk)
    • 8 × 1TB drives
      • each configured as a logical disk in the RAID controller configuration utility

Installation attempts

In my initial install attempt I had the hard drives installed and configured as individual logical drives before installing RockStor, and afterwards saw that the installation had tried to put all of the drives into the rockstor_rockstor pool, similar to what is described in this forum post.

For the second installation attempt, following phillxnet’s advice in the above linked post, I deleted all of the logical drives in the RAID controller’s utility and then detached all of the 1TB drives, to wipe any reference on the disks to the rockstor_rockstor pool. The installation was successful, and the disks page of the web management tool showed only the single USB boot disk, as expected. I then restarted the machine, added a logical disk for each 1TB disk using the RAID configuration tool, and let the machine continue booting. RockStor started successfully, but when I looked at the disks page of the web tool I saw that two of the 1TB drives had been added to the rockstor_rockstor pool.

Just to satisfy my curiosity, I again restarted, deleted the logical disks in the RAID controller utility, detached the drives, and re-installed RockStor, again successfully. I then restarted the machine, reconfigured the drives as a single RAID5 logical disk with the RAID controller tool, and allowed the boot to continue. This time there was no change on the disks page of the web tool: the RAID5 array didn’t get placed into the rockstor_rockstor pool. I suppose I could live with this configuration, but I really want to give BTRFS a spin.

For my fourth attempt, I repeated the steps from the second attempt: deleting the logical disk(s), detaching the drives, reinstalling RockStor, restarting, and re-creating the 8 logical disks. After startup, RockStor again placed two of the 1TB disks into the rockstor_rockstor pool, but interestingly, they were two different disks from the ones it added during my previous (second) attempt.

Not sure what else to try.

@eric Welcome to the Rockstor community and apologies for my slow response.

Yes, this is currently suspected to be a bug in our upstream Anaconda installer and its interplay with our kickstart config. See my latest comment in:

https://github.com/rockstor/rockstor-core/issues/1848

In your second attempt I suspect that deleting the logical drives didn’t actually wipe the disks, so they were still recognisable as btrfs with a label of ‘rockstor_rockstor’; when they were reattached after the initially successful single-attached-system-drive install, they automatically got associated back to that pool.

Rockstor currently uses pool labels to associate drives with a pool, and it denies the creation of duplicate labels. However, duplicate labels are an entirely legitimate arrangement on the raw btrfs front, as each pool (volume) is identified uniquely by a uuid (which we also store). So in this case the re-attached, non-wiped drives that were initially and inadvertently btrfs formatted as members of a rockstor_rockstor pool came back to haunt the system. In this situation it is best to make sure all drives are properly wiped; Rockstor internally, when it can, uses the following ‘all powerful and so dangerous’ command:

wipefs -a dev-name
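
For example, if one of the previously pool-member drives currently shows up as /dev/sdb (a hypothetical name used here for illustration only; confirm the real name on your system first), that would be:

wipefs -a /dev/sdb

This removes every filesystem signature on the given device, hence the ‘dangerous’ label, so double check the device name before running it.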

First off, you need to establish which drives are which, from a btrfs perspective. This can be done with:

btrfs fi show

You may well see the one successful single-disk rockstor_rockstor pool (volume) and then another volume, also labelled rockstor_rockstor, with 2 members. Those 2 members are likely your best candidates for a full-on wipe.
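
For illustration only (the device names, sizes, and uuids below are invented, not taken from your system), that situation might look something like the following, where the first single-device volume is the current system pool and the second 2-device volume is the left-over ‘ghost’ pool on the re-attached drives:

Label: 'rockstor_rockstor'  uuid: 11111111-1111-1111-1111-111111111111
        Total devices 1 FS bytes used 2.10GiB
        devid    1 size 26.00GiB used 4.00GiB path /dev/sda3

Label: 'rockstor_rockstor'  uuid: 22222222-2222-2222-2222-222222222222
        Total devices 2 FS bytes used 1.00MiB
        devid    1 size 931.51GiB used 2.01GiB path /dev/sdb
        devid    2 size 931.51GiB used 2.01GiB path /dev/sdc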

You may be able to achieve these wipes from within the installer, but given its btrfs behaviour in this respect you may be better off using the command line.

So essentially, deleting the logical disks and then restoring them identically is nothing more than disconnecting and reconnecting the drives; that is my guess given your observed and reported findings. Either wipe all drives and re-install, or, if you still have a single-disk install volume that you are running from, you may get away with manually wiping the ‘rogue’ ghost ones.

If in doubt, post your ‘btrfs fi show’ output here and we can see what the current state is. I’m guessing it’s going to be akin to that posted by @Spiceworld in the following forum thread:

who helped to establish this bug in the first place via that forum report.

Were they actually different disks, or just different temporary canonical-type names, i.e. sda, sdd, etc.? Those names can change from one boot to another.
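
One way to check (a suggestion only; the path below is the standard udev layout, not anything Rockstor specific) is to compare the persistent by-id names, which are derived from model/serial or wwn identifiers and so stay stable across boots, with the sdX names they currently point to:

ls -l /dev/disk/by-id/

If the same identifiers simply point at different sdX names from one boot to the next, then the pool membership hasn’t really moved between physical disks.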

It may be that you have uncovered a different bug, but I’d first try “wipefs -a” on every drive and take it from there, as then you should be good. But remember that the installer may tick drives for you when you don’t want them as part of the initial system pool, and that Rockstor can’t actually cope with more than a single btrfs member in its system (rockstor_rockstor) pool.
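
As a sketch only, assuming the 8 data drives currently appear as /dev/sdb through /dev/sdi and the USB system drive is /dev/sda (do verify this against your own ‘btrfs fi show’ and by-id output first), that per-drive wipe could be scripted as:

for d in /dev/sd[b-i]; do wipefs -a "$d"; done

If you intend to keep the current install rather than re-installing, leave the system drive out of that range.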

Hope that helps, and let us know how you get on. More info on the number of disks in your system may well be helpful, as well as the output of that ‘btrfs fi show’ command.

Sorry for the late response. My priorities are often not my own.

I did the ‘wipefs -a’ on all of the devices and that fixed it.

Thanks much.

@eric Glad you’re sorted and thanks for the update.

Well done for persevering and keep us posted on your experience and suggestions.