Adding a drive to Rockstor

Hi, I am trying to add another drive to my Rockstor machine. It is an external drive, a Seagate Expansion 2TB, 2.5 inch.

As soon as I connect it, it says it is part of my pool. That’s not right as I didn’t add it to my pool. My pool size doesn’t increase either.

I used SSH to remove the partitions on the drive, but the same thing happens.

I have a couple of drives which start with this name,

ata-ST2000LM003_HN-M201RAD

One is already in the pool, and the other is the one I am adding. I suspect it can’t distinguish between the drives. Is there anything I can do about this?

@petermc Hello again.

[quote=“petermc, post:1, topic:4813”]
As soon as I connect it, it says it is part of my pool. That’s not right as I didn’t add it to my pool.
[/quote]

Thanks for the report: that’s quite a strange one.

I don’t think it’s related to the drive names as the kernel ensures that each drive has a unique name so we should be good there.

Yes, but we use device serial numbers to uniquely identify/track drives and their settings:

From this I’m assuming this is not a drive you have previously used with Rockstor or have installed Rockstor on. If it is then you may need to do a proper wipe, ie via ‘wipefs -a’ (very carefully), as just removing the partitions is not enough to remove any prior btrfs signatures.
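If in doubt, note that `wipefs` with no options is read-only: it just lists the signatures it can see, so you can inspect a drive before committing to `-a`. A rough sketch of that list-then-wipe behaviour, using a scratch image file with a swap signature standing in for a leftover btrfs one (so no real drive is touched):

```shell
# Scratch image file standing in for a drive; mkswap plants a filesystem
# signature on it, much as a prior btrfs pool membership would leave behind.
truncate -s 10M /tmp/scratch-disk.img
mkswap /tmp/scratch-disk.img > /dev/null

wipefs /tmp/scratch-disk.img      # read-only: lists the detected signature(s)
wipefs -a /tmp/scratch-disk.img   # erases all signatures; triple-check the device name on real disks!
wipefs /tmp/scratch-disk.img      # now prints nothing: the signature is gone
```

On a real drive you would run the read-only form against the /dev/disk/by-id name first (e.g. /dev/disk/by-id/ata-ST2000LM003_HN-M201RAD_S34RJ9CG153167), so the embedded serial confirms you have the right disk before any `-a`.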

Hopefully, yes. I can’t currently think how this could happen unless you are running a really old version of Rockstor, which, going by your previous posts, is unlikely (they were years ago now).

Could you first confirm your Rockstor version via a paste of the following command (run as root):

yum info rockstor

and, to help diagnose what’s happening here, it would help if you could post a screen grab of both your Disks and Pools pages, with the problem drive attached, as well as the output of the following commands:

btrfs fi show
ls -la /dev/disk/by-id

and

lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID

It would also be good if you could post any drive / pool related messages you find within your:

less /opt/rockstor/var/log/rockstor.log

which is also accessible via the Web-UI from System - Logs Manager (thanks to @Flyer).

I’m due to have another look at drive management in the near future so it would be good to understand what’s happened with your setup prior to that stint.

Thanks again for the report, and let’s hope we can get this one sorted as it’s quite strange.

Read my next reply. This has been resolved.

This drive must have been used with Rockstor; look at the result from btrfs fi show below.

yum info rockstor

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.serversaustralia.com.au
 * epel: epel.mirror.digitalpacific.com.au
 * extras: mirror.internode.on.net
 * updates: mirror.internode.on.net
Installed Packages
Name        : rockstor
Arch        : x86_64
Version     : 3.9.1
Release     : 16
Size        : 85 M
Repo        : installed
From repo   : Rockstor-Testing
Summary     : RockStor – Store Smartly
License     : GPL
Description : RockStor – Store Smartly

Plus the screengrabs you requested,

btrfs fi show,

Label: 'rockstor_rockstor'  uuid: 7c01412f-5b44-4f2a-bb79-37c661976ded
	Total devices 1 FS bytes used 1.95GiB
	devid    1 size 1.81TiB used 5.04GiB path /dev/sda5

Label: 'MainPool'  uuid: 2508707c-81aa-4109-9158-2c5522423b80
	Total devices 4 FS bytes used 1.20TiB
	devid    1 size 1.82TiB used 312.52GiB path /dev/sdb
	devid    2 size 2.73TiB used 312.52GiB path /dev/sdd
	devid    3 size 2.73TiB used 312.52GiB path /dev/sde
	devid    4 size 3.64TiB used 312.52GiB path /dev/sdc

warning, device 6 is missing
warning, device 5 is missing
warning, device 2 is missing
warning, device 1 is missing
warning, device 4 is missing
bytenr mismatch, want=13955653173248, have=0
ERROR: cannot read chunk root
Label: 'MainPool'  uuid: 3c462d4d-a95f-44e4-ad2d-b04910136a5d
	Total devices 6 FS bytes used 1.95MiB
	devid    3 size 1.82TiB used 1.06TiB path /dev/sdg
	*** Some devices missing

You can see that MainPool appears twice, with different uuids, which is the problem.

ls -la /dev/disk/by-id

total 0
drwxr-xr-x 2 root root 560 May 26 12:07 .
drwxr-xr-x 7 root root 140 May 25 18:42 ..
lrwxrwxrwx 1 root root  9 May 25 07:23 ata-KINGSTON_SV300S37A240G_50026B725A00857B -> ../../sdf
lrwxrwxrwx 1 root root 10 May 25 07:23 ata-KINGSTON_SV300S37A240G_50026B725A00857B-part1 -> ../../sdf1
lrwxrwxrwx 1 root root  9 May 26 12:07 ata-ST2000LM003_HN-M201RAD_S34RJ9CG153167 -> ../../sdg
lrwxrwxrwx 1 root root  9 May 25 07:23 ata-ST2000LM003_HN-M201RAD_S362J9DH125113 -> ../../sdb
lrwxrwxrwx 1 root root  9 May 26 12:06 ata-ST2000LM007-1R8174_WDZ6DHAZ -> ../../sda
lrwxrwxrwx 1 root root 10 May 26 12:06 ata-ST2000LM007-1R8174_WDZ6DHAZ-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May 26 12:06 ata-ST2000LM007-1R8174_WDZ6DHAZ-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 26 12:06 ata-ST2000LM007-1R8174_WDZ6DHAZ-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 May 26 12:06 ata-ST2000LM007-1R8174_WDZ6DHAZ-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 May 26 12:06 ata-ST2000LM007-1R8174_WDZ6DHAZ-part5 -> ../../sda5
lrwxrwxrwx 1 root root  9 May 25 07:23 ata-ST3000DM001-1ER166_Z5005LSH -> ../../sdd
lrwxrwxrwx 1 root root  9 May 25 07:23 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1HNDK0A -> ../../sde
lrwxrwxrwx 1 root root  9 May 25 07:23 ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K2JV4UVK -> ../../sdc
lrwxrwxrwx 1 root root  9 May 26 12:07 wwn-0x50004cf20f098dc2 -> ../../sdg
lrwxrwxrwx 1 root root  9 May 25 07:23 wwn-0x50004cf211a32506 -> ../../sdb
lrwxrwxrwx 1 root root  9 May 25 07:23 wwn-0x5000c500673a06cb -> ../../sdd
lrwxrwxrwx 1 root root  9 May 26 12:06 wwn-0x5000c500aa2280a7 -> ../../sda
lrwxrwxrwx 1 root root 10 May 26 12:06 wwn-0x5000c500aa2280a7-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May 26 12:06 wwn-0x5000c500aa2280a7-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 26 12:06 wwn-0x5000c500aa2280a7-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 May 26 12:06 wwn-0x5000c500aa2280a7-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 May 26 12:06 wwn-0x5000c500aa2280a7-part5 -> ../../sda5
lrwxrwxrwx 1 root root  9 May 25 07:23 wwn-0x50014ee2610caf7b -> ../../sde
lrwxrwxrwx 1 root root  9 May 25 07:23 wwn-0x50014ee2b9faa189 -> ../../sdc
lrwxrwxrwx 1 root root  9 May 25 07:23 wwn-0x50026b725a00857b -> ../../sdf
lrwxrwxrwx 1 root root 10 May 25 07:23 wwn-0x50026b725a00857b-part1 -> ../../sdf1

lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID

NAME="sdf" MODEL="KINGSTON SV300S3" SERIAL="50026B725A00857B" SIZE="223.6G" TRAN="sata" VENDOR="ATA " HCTL="4:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sdf1" MODEL="" SERIAL="" SIZE="223.6G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="ntfs" LABEL="" UUID="28C66E32C66E0104"
NAME="sdd" MODEL="ST3000DM001-1ER1" SERIAL="Z5005LSH" SIZE="2.7T" TRAN="sata" VENDOR="ATA " HCTL="2:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="2508707c-81aa-4109-9158-2c5522423b80"
NAME="sdb" MODEL="BUP Slim SL " SERIAL="S362J9DH125113" SIZE="1.8T" TRAN="usb" VENDOR="Seagate " HCTL="7:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="2508707c-81aa-4109-9158-2c5522423b80"
NAME="sdg" MODEL="Expansion " SERIAL="S34RJ9CG153167" SIZE="1.8T" TRAN="usb" VENDOR="Seagate " HCTL="8:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="3c462d4d-a95f-44e4-ad2d-b04910136a5d"
NAME="sde" MODEL="WDC WD30EFRX-68E" SERIAL="WD-WCC4N1HNDK0A" SIZE="2.7T" TRAN="sata" VENDOR="ATA " HCTL="3:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="2508707c-81aa-4109-9158-2c5522423b80"
NAME="sdc" MODEL="WDC WD40EZRZ-00G" SERIAL="WD-WCC7K2JV4UVK" SIZE="3.7T" TRAN="sata" VENDOR="ATA " HCTL="0:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="2508707c-81aa-4109-9158-2c5522423b80"
NAME="sda" MODEL="Expansion " SERIAL="WDZ6DHAZ" SIZE="1.8T" TRAN="usb" VENDOR="Seagate " HCTL="6:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sda4" MODEL="" SERIAL="" SIZE="7.8G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="swap" LABEL="" UUID="815e870d-574b-4264-8386-269c6cdb73a1"
NAME="sda2" MODEL="" SERIAL="" SIZE="1M" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="" LABEL="" UUID=""
NAME="sda5" MODEL="" SERIAL="" SIZE="1.8T" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="7c01412f-5b44-4f2a-bb79-37c661976ded"
NAME="sda3" MODEL="" SERIAL="" SIZE="500M" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="ext4" LABEL="" UUID="f70d5ac9-1d25-4a26-94cc-a7e53117429d"
NAME="sda1" MODEL="" SERIAL="" SIZE="128M" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="" LABEL="" UUID=""

As I say above, I must have been mistaken: this drive must have been used with Rockstor before.

Running this,

wipefs -a /dev/sdg

did the trick. Thanks.

btrfs fi show

Label: 'rockstor_rockstor'  uuid: 7c01412f-5b44-4f2a-bb79-37c661976ded
	Total devices 1 FS bytes used 1.95GiB
	devid    1 size 1.81TiB used 5.04GiB path /dev/sda5

Label: 'MainPool'  uuid: 2508707c-81aa-4109-9158-2c5522423b80
	Total devices 5 FS bytes used 1.20TiB
	devid    1 size 1.82TiB used 311.47GiB path /dev/sdb
	devid    2 size 2.73TiB used 311.47GiB path /dev/sdd
	devid    3 size 2.73TiB used 311.47GiB path /dev/sde
	devid    4 size 3.64TiB used 311.47GiB path /dev/sdc
	devid    5 size 1.82TiB used 6.22GiB path /dev/sdg

Which looks less problematic now. Thanks.

I must say, I have come back to Rockstor after trying to use Windows 10 as a server, so it is a credit to you guys. That machine had a lot of driver issues and crashes under Windows, and I am pleased to be back to stability. Thanks.

@petermc Thanks for the update and glad you managed to sort it.

Yes, we definitely have a weakness here, as we fail to pick up on forced pool label duplication (i.e. attaching a drive from a prior, identically named pool, which is allowed in btrfs but not in Rockstor). But we do take steps to avoid this scenario when initially creating pools.

I’ve created an issue defining this buggy behaviour (i.e. we should flag the duplicate name / differing uuid) and have referenced this forum thread as evidence:

https://github.com/rockstor/rockstor-core/issues/1932

We will probably have to do a follow-up check on the uuid and somehow flag the inconsistency within the UI.

The ‘impostor/legacy’ MainPool member (sdg in your lsblk listing above):

NAME="sdg" MODEL="Expansion " SERIAL="S34RJ9CG153167" SIZE="1.8T" TRAN="usb" VENDOR="Seagate " HCTL="8:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="3c462d4d-a95f-44e4-ad2d-b04910136a5d"

and its existing, legitimate MainPool members:

NAME="sde" MODEL="WDC WD30EFRX-68E" SERIAL="WD-WCC4N1HNDK0A" SIZE="2.7T" TRAN="sata" VENDOR="ATA " HCTL="3:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="MainPool" UUID="2508707c-81aa-4109-9158-2c5522423b80"
see also sdd, sdb, sdc in same listing.
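Until the Web-UI flags this, the same duplicate-label / differing-uuid condition can be spotted from `lsblk -P` output with a little shell filtering. A rough sketch (a hypothetical helper, not Rockstor code; filesystem labels containing spaces are not handled):

```shell
# Hypothetical helper: reads `lsblk -P -o NAME,LABEL,UUID`-style KEY="value"
# lines on stdin and prints any filesystem label seen with more than one UUID.
find_dup_labels() {
  # Drop rows with empty labels, reduce each row to "LABEL UUID",
  # de-duplicate identical pairings, then report labels still appearing
  # more than once, i.e. the same label carrying different uuids.
  grep 'LABEL="[^"]' |
    sed 's/.*LABEL="\([^"]*\)".*UUID="\([^"]*\)".*/\1 \2/' |
    sort -u | awk '{print $1}' | uniq -d
}

# e.g.: lsblk -P -o NAME,LABEL,UUID | find_dup_labels
```

Fed your listing above it would print MainPool, since sdg carries that label with uuid 3c462d4d-… while the genuine members all carry 2508707c-….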

Yes, the ‘whole disk’ use of a very few filesystems does tend to throw things off from time to time, i.e. residual fs signatures etc.; we use that same command (wipefs) internally when wiping a ‘prior use’ disk.

Incidentally, due to changes in the development focus, our stable release is now quite a few months ahead of the testing channel, and thanks to such improvements as:

you would have seen red flashing warnings re the missing devices in the Web-UI header.

If you do fancy subscribing to the stable channel, note that you will initially thereafter have to execute a:

yum update rockstor

due to a catch-22 bug that is now fixed, but only via an update:

https://github.com/rockstor/rockstor-core/issues/1870

Thanks again for the feedback and well done on working through this rather tricky one and sharing your findings.