After install to USB, other drives are added to rockstor_rockstor pool

@eric Welcome to the Rockstor community and apologies for my slow response.

Yes, this is currently suspected to be a bug in our upstream Anaconda installer and its interplay with our kickstart config. See my latest comment in:

In your second attempt I suspect that the deleting of the logical drives didn’t actually wipe the disks, so they were still recognisable as btrfs with a label of ‘rockstor_rockstor’; when they were re-attached after the initially successful single-attached-system-drive install, they automatically got associated back to that pool. Rockstor currently uses pool labels to associate drives with a pool, and it denies the creation of duplicate labels; however, duplicate labels are an entirely legitimate arrangement on the raw btrfs front, as each pool (vol) is identified uniquely by a uuid (which we also store). So in this case the re-attached and non-wiped prior drives, which were initially and inadvertently btrfs formatted as members of a rockstor_rockstor pool, came back to haunt the system. In this situation it is best to make sure all drives are properly wiped, i.e. Rockstor internally, when it can, uses the following ‘all powerful and so dangerous’ command:

wipefs -a dev-name
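
As an aside, and purely as a sketch with a placeholder device name (yours will differ), running wipefs without the -a option just lists the filesystem signatures it finds on a device without erasing anything, which makes for a safer first look before committing to the full wipe:

wipefs /dev/sdb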

First off, you need to establish which drives are which, from a btrfs perspective. This can be done by:

btrfs fi show

You may well see the one successful single-disk rockstor_rockstor pool (volume) and then another volume also listed with the rockstor_rockstor label but with 2 members. Those are likely your best candidates for a full-on wipe.
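
As a rough illustration only of what that might look like, every uuid, size, and device name below is made up and yours will differ:

Label: 'rockstor_rockstor'  uuid: <uuid-of-your-current-system-pool>
        Total devices 1 FS bytes used 2.1GiB
        devid    1 size 14.5GiB used 4.0GiB path /dev/sda3

Label: 'rockstor_rockstor'  uuid: <a-different-uuid>
        Total devices 2 FS bytes used 1.1GiB
        devid    1 size 465.8GiB used 2.0GiB path /dev/sdb
        devid    2 size 465.8GiB used 2.0GiB path /dev/sdc

In this sketch the second, 2 member, listing would be the ghost pool from the earlier install attempt: note the identical label but the different uuid.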

You may be able to achieve these wipes from within the installer, but given its behaviour with btrfs in this respect you may be better off using the command line.

So essentially the deleting of the logical disks and their subsequent identical restoration was nothing more than disconnecting and reconnecting them. This is my guess given your observed and reported findings. Wipe all and re-install, or, if you still have a single-disk install volume that you are running from, you may get away with manually wiping the ‘rogue’ ghost ones.

If in doubt, post your ‘btrfs fi show’ output here and we can see what the current state is; I’m guessing it’s going to be akin to that posted by @Spiceworld in the following forum thread:

who helped to establish this bug in the first place via that forum report.

Were they actually different disks, or just different temporary canonical names (sda, sdd, etc.)? Those names can change from one boot to another.
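
If it helps to pin that down, and this is just a generic suggestion rather than anything Rockstor specific, the following lists the more stable by-id names (based on model/serial, where the drive reports one) alongside the sdX name each currently maps to, so you can tell whether the same physical disk has simply moved from, say, sda to sdd between boots:

ls -l /dev/disk/by-id/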

It may be that you have uncovered a different bug, but I’d first try “wipefs -a” on every drive and take it from there, as then you should be good. But remember that the installer may tick drives for you when you don’t want them as part of the initial system pool, and that Rockstor can’t actually cope with more than a single btrfs member in its system (rockstor_rockstor) pool.

Hope that helps, and let us know how you get on. More info on the number of disks in your system may well be helpful too, as well as the output of that ‘btrfs fi show’ command.