[quote="petermc, post:1, topic:4813"]
As soon as I connect it, it says it is part of my pool. That’s not right as I didn’t add it to my pool.
[/quote]
Thanks for the report: that’s quite a strange one.
I don’t think it’s related to the drive names as the kernel ensures that each drive has a unique name so we should be good there.
Yes, but we use device serial numbers to uniquely identify/track drives and their settings:
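For reference, one quick way to see those serial numbers from the command line is via `lsblk` (part of util-linux); note that the SERIAL column may be empty for some virtual or USB-bridged devices:

```shell
# List whole disks (-d) with the identifiers a tracking scheme
# like Rockstor's would rely on. Output varies by system.
lsblk -d -o NAME,SERIAL,MODEL,SIZE
```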
From this I’m assuming this is not a drive you have previously used with Rockstor, or have installed Rockstor on. If it is, you may need to do a proper wipe, i.e. via ‘wipefs -a’ (very carefully), as just removing the partitions is not enough to remove any prior btrfs signatures.
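The wipe step above can be sketched as follows; `/dev/sdX` is a placeholder for the drive in question, and only the last command actually erases anything:

```shell
# With no options, wipefs only LISTS the signatures it finds;
# nothing is erased at this stage.
wipefs /dev/sdX

# Dry-run the erase (-n = no-act) to preview what -a would remove.
wipefs -n -a /dev/sdX

# Only when you are sure you have the right device: erase all
# filesystem/RAID signatures. This is DESTRUCTIVE.
wipefs -a /dev/sdX
```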
Hopefully, yes. I can’t currently think how this could happen unless you are running a really old version of Rockstor, which from your previous posts is not likely (they were years ago now).
Could you first confirm your Rockstor version via a paste of the following command (run as root):
yum info rockstor
And to help diagnose what’s happening here, it would help if you could post a screen grab of both your Disks and Pools pages, with the problem drive attached, as well as the output of the following commands:
As I say above, I must be mistaken. This drive must have been used in Rockstor.
wipefs -a /dev/sdg
Did the trick. Thanks.
btrfs fi show
Label: 'rockstor_rockstor'  uuid: 7c01412f-5b44-4f2a-bb79-37c661976ded
    Total devices 1 FS bytes used 1.95GiB
    devid    1 size 1.81TiB used 5.04GiB path /dev/sda5

Label: 'MainPool'  uuid: 2508707c-81aa-4109-9158-2c5522423b80
    Total devices 5 FS bytes used 1.20TiB
    devid    1 size 1.82TiB used 311.47GiB path /dev/sdb
    devid    2 size 2.73TiB used 311.47GiB path /dev/sdd
    devid    3 size 2.73TiB used 311.47GiB path /dev/sde
    devid    4 size 3.64TiB used 311.47GiB path /dev/sdc
    devid    5 size 1.82TiB used 6.22GiB path /dev/sdg
Which looks less problematic now. Thanks.
I must say, I have come back to Rockstor after trying to use Windows 10 as a server, so it is a credit to you guys. I seem to have a machine which had a lot of driver issues and crashes. I am pleased to be back to stability. Thanks.
@petermc Thanks for the update and glad you managed to sort it.
Yes, we definitely have a weakness here, as we fail to pick up on forced pool label duplication (i.e. attaching a drive from a prior, identically named pool, which is allowed in btrfs but not in Rockstor). But we do take steps to avoid this scenario when initially creating pools.
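A minimal sketch of such a check, assuming `btrfs fi show` output in the format quoted earlier in this thread (the function name is mine, not part of Rockstor):

```shell
# Flag btrfs pools that share a label but have differing uuids, by
# parsing `btrfs fi show` style lines, e.g.:
#   Label: 'MainPool'  uuid: 2508707c-81aa-...
detect_dup_labels() {
  awk '
    /^Label:/ {
      label = $2; uuid = $4
      # Same label seen before with a different uuid: report it.
      if (label in seen && seen[label] != uuid)
        printf "duplicate label %s: %s vs %s\n", label, seen[label], uuid
      else
        seen[label] = uuid
    }
  '
}
```

Usage would be along the lines of `btrfs fi show | detect_dup_labels`; any output indicates the forced-duplication scenario described above.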
I’ve created an issue defining this buggy behaviour (i.e. we should flag the duplicate name / differing uuid) and have referenced this forum thread as evidence: