[quote="petermc, post:1, topic:4813"]
As soon as I connect it, it says it is part of my pool. That’s not right as I didn’t add it to my pool.
[/quote]
Thanks for the report: that’s quite a strange one.
I don’t think it’s related to the drive names, as the kernel ensures that each drive has a unique name, so we should be good there.
Yes, but we use device serial numbers to uniquely identify/track drives and their settings:
From this I’m assuming this is not a drive you have previously used with Rockstor, or have installed Rockstor on. If it is, then you may need to do a proper wipe, i.e. via 'wipefs -a' (very carefully), as just removing the partitions is not enough to remove any prior btrfs signatures.
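If you want to see what is actually on the disk before committing to a wipe, something along these lines can help (a minimal sketch; /dev/sdX is a placeholder for the drive in question, and the second command is destructive, so triple check the device name first):
# list any filesystem / partition-table signatures wipefs can find (read-only)
wipefs /dev/sdX
# only once you are sure it is the right disk: erase all of those signatures
wipefs -a /dev/sdX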
Hopefully, yes. I can’t currently think how this could happen unless you are running a really, really old version of Rockstor, which from your previous posts is not likely (it was years ago now).
Could you first confirm your Rockstor version via a paste of the following command (run as root):
yum info rockstor
and, to help diagnose what’s happening here, could you also post a screen grab of both your Disks and Pools pages, with the problem drive attached, as well as the output of the following commands:
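As an illustration (an assumption on my part, not necessarily the exact command set meant here): the key="value" drive detail lines quoted later in this thread look like the output of an lsblk call along these lines:
lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID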
It would also be good if you could post any drive / pool related messages you find within your Rockstor log:
less /opt/rockstor/var/log/rockstor.log
which is also accessible via the Web-UI from System - Logs Manager (thanks to @Flyer).
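If the full log is long, a quick way to narrow it down to disk / pool related lines (a rough sketch, assuming the default log location above) is:
grep -iE 'disk|pool|btrfs' /opt/rockstor/var/log/rockstor.log | less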
I’m due to have another look at drive management in the near future so it would be good to understand what’s happened with your setup prior to that stint.
Thanks again for the report, and let’s hope we can get this one sorted as it’s quite strange.
As I say above, I must be mistaken; this drive must have been used in Rockstor before.
Running this:
wipefs -a /dev/sdg
did the trick. Thanks.
btrfs fi show
Label: 'rockstor_rockstor'  uuid: 7c01412f-5b44-4f2a-bb79-37c661976ded
        Total devices 1 FS bytes used 1.95GiB
        devid    1 size 1.81TiB used 5.04GiB path /dev/sda5

Label: 'MainPool'  uuid: 2508707c-81aa-4109-9158-2c5522423b80
        Total devices 5 FS bytes used 1.20TiB
        devid    1 size 1.82TiB used 311.47GiB path /dev/sdb
        devid    2 size 2.73TiB used 311.47GiB path /dev/sdd
        devid    3 size 2.73TiB used 311.47GiB path /dev/sde
        devid    4 size 3.64TiB used 311.47GiB path /dev/sdc
        devid    5 size 1.82TiB used 6.22GiB path /dev/sdg
Which looks less problematic now. Thanks.
I must say, I have come back to Rockstor after trying to use Windows 10 as a server, so it is a credit to you guys. I seem to have a machine that had a lot of driver issues and crashes under Windows 10, and I am pleased to be back to stability. Thanks.
@petermc Thanks for the update and glad you managed to sort it.
Yes, we definitely have a weakness here, as we fail to pick up on forced pool label duplication (i.e. attaching a drive from a prior, identically named pool, which is allowed in btrfs but not in Rockstor). We do, however, take steps to avoid this scenario when initially creating pools.
I’ve created an issue defining this buggy behaviour (i.e. we should flag the duplicate name / differing uuid) and have referenced this forum thread as evidence:
We will probably have to do a follow-up check on the uuid and somehow flag the inconsistency within the UI; a rough command-line sketch of such a check follows the listing below.
The ‘impostor/legacy’ MainPool member:
and its existing legitimate MainPool members:
NAME="sde" MODEL="WDC WD30EFRX-68E" SERIAL="WD-WCC4N1HNDK0A" SIZE="2.7T" TRAN="sata" VENDOR="ATA " HCTL="3:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="2508707c-81aa-4109-9158-2c5522423b80"
see also sdd, sdb, and sdc in the same listing.
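For anyone wanting to spot this situation from the command line before the UI grows such a flag, here is a rough sketch (my own, not Rockstor’s actual check) that prints any label appearing with more than one uuid in lsblk’s key="value" output:
# print labels that occur with more than one distinct uuid
lsblk -P -o LABEL,UUID | sort -u | awk -F'"' '$2 != "" {print $2}' | sort | uniq -d
An empty result means every labelled filesystem maps to a single uuid; a printed label (e.g. MainPool in the situation reported above) means two different filesystems are sharing that label.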
Yes, the 'whole disk use' of a very few filesystems does tend to throw things from time to time, i.e. residual fs signatures etc.; we use that same command internally when wiping a 'prior use' disk:
Incidentally, due to changes in the development focus, our stable release is now quite a few months ahead of the testing channel, and thanks to such improvements as:
you would have seen red flashing warnings re the missing devices in the Web-UI header.
If you do fancy subscribing to the stable channel, note that you will initially thereafter have to execute a:
yum update rockstor
due to a catch-22 bug that is now fixed, but only via an update: