Warning! Disk serial number is not legitimate or unique

Hello again,

I have now removed that partition. This helps, but only to some extent:

[16/Feb/2016 22:29:14] ERROR [storageadmin.util:46] request path: /api/disks/sdc/btrfs-disk-import method: POST data: <QueryDict: {}>
[16/Feb/2016 22:29:14] ERROR [storageadmin.util:47] exception: Failed to import any pool on this device(sdc). Error: 'unicode' object has no attribute 'disk_set'
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 280, in _btrfs_disk_import
    import_shares(po, request)
  File "/opt/rockstor/src/rockstor/storageadmin/views/share_helpers.py", line 106, in import_shares
    cshare.pool.name))
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 261, in shares_info
    mnt_pt = mount_root(pool)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 125, in mount_root
    device = pool.disk_set.first().name
AttributeError: 'unicode' object has no attribute 'disk_set'
[16/Feb/2016 22:29:14] DEBUG [storageadmin.util:48] Current Rockstor version: 3.8-11.10
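
If I read the traceback correctly, mount_root() dereferences pool.disk_set, so it looks like it is being handed the pool name string rather than the pool object itself. A rough illustration of what I mean (Python 2, like the traceback; these classes are just stand-ins, not the real Rockstor models):

# Rough illustration only (Python 2, like the traceback); these classes are
# stand-ins for the shape of the problem, not the real Rockstor models.


class Disk(object):
    def __init__(self, name):
        self.name = name


class DiskSet(object):
    def __init__(self, disks):
        self.disks = disks

    def first(self):
        return self.disks[0]


class Pool(object):
    def __init__(self, name, disks):
        self.name = name
        self.disk_set = DiskSet(disks)


def mount_root(pool):
    # Fine when handed a Pool object ...
    device = pool.disk_set.first().name
    return '/dev/%s' % device


print(mount_root(Pool('4TB-HDD-R10', [Disk('sdc')])))  # -> /dev/sdc

# ... but handing it the pool *name* reproduces the error above:
try:
    mount_root(u'4TB-HDD-R10')
except AttributeError as e:
    print(e)  # 'unicode' object has no attribute 'disk_set'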

On that drive (sdc) these subvolumes exist:

btrfs subvolume list /mnt/test
ID 3096 gen 564398 top level 5 path Backups
ID 3098 gen 564398 top level 5 path Daten
ID 3099 gen 564396 top level 5 path Dokumente
ID 3101 gen 564396 top level 5 path Fotos
ID 3102 gen 564390 top level 5 path HomeVideo
ID 3107 gen 564390 top level 5 path Musik
ID 3108 gen 564395 top level 5 path Video
ID 22971 gen 519943 top level 5 path RockOn
ID 22972 gen 504314 top level 5 path PlexConfig
ID 23130 gen 564397 top level 5 path Docker
ID 23860 gen 519418 top level 23130 path Docker/storage/btrfs/subvolumes/978b0fee5c405109be80f7be732bbd1157392c0b8e2257b13e78b5f1c819a52a
ID 23861 gen 519419 top level 23130 path Docker/storage/btrfs/subvolumes/8e0aa5f79abe873cb3baa5f315327ec957822ef920c07c3dbc2f294a1e9463e5
ID 23862 gen 519420 top level 23130 path Docker/storage/btrfs/subvolumes/1bc8d72513b390c8466f3dbea2becbc7b4eb9de810ed51c6512b9fe00df59aef
ID 23863 gen 519421 top level 23130 path Docker/storage/btrfs/subvolumes/f38588d4d01f7e667fdfcabd0d2c0740b224a5b7b828211c9afeff082fc08c92
ID 23864 gen 519422 top level 23130 path Docker/storage/btrfs/subvolumes/5330d7e97082dca00c1bccaf1b172bfe3a753073bd0da16ae081c0040cc7d639
ID 23865 gen 519423 top level 23130 path Docker/storage/btrfs/subvolumes/8dde79df7e567543588f4c8b0159e1b1d23afdcc9e059fd710f3f284cf3ad2cb
ID 23866 gen 519424 top level 23130 path Docker/storage/btrfs/subvolumes/9e1ea34a70c02895d384685a8f0c1236a9527f3a54429600f927600918cedfc7
ID 23867 gen 519425 top level 23130 path Docker/storage/btrfs/subvolumes/5aecbb0c285c0762c6e5c3efc50f8729880d744f14d95441a811f9e05c645988
ID 23868 gen 519426 top level 23130 path Docker/storage/btrfs/subvolumes/f3d0833492c96cf8cddb7c17ff7a2a07e93a08b5c051bf8b336dfc731fe9700a
ID 23869 gen 519427 top level 23130 path Docker/storage/btrfs/subvolumes/f25fb2c46a5cba7a2b3f96f1fde356a555bb8009efd3464146fb2c309728f4e2
ID 23870 gen 519428 top level 23130 path Docker/storage/btrfs/subvolumes/6f41bbecb8a6fe7685f804b7c5d3154c0b09cb21471b2c3f8f4326d411276bb5
ID 23871 gen 519429 top level 23130 path Docker/storage/btrfs/subvolumes/59474ca4ed6c68f1b9a20e9f73de8954fedb37d3a90f54cdd1445dc3653537d3
ID 23872 gen 519430 top level 23130 path Docker/storage/btrfs/subvolumes/195c88231a80802b4752ae6a34efd8be021a9334f305153c407734311fa0e9b0
ID 23873 gen 519431 top level 23130 path Docker/storage/btrfs/subvolumes/2181fb7190a545556df8a34c3001d6d4fc2f125df6aa79fb61ae6263a9a98a92
ID 23874 gen 519436 top level 23130 path Docker/storage/btrfs/subvolumes/0cf14593b8a71134f48a6215cb8ee11f70fc699badfad4d277e1858993c71631
ID 23876 gen 519437 top level 23130 path Docker/storage/btrfs/subvolumes/92ec6d044cb3e39ae00500126a88c9ac342678d0591675f2231daafbf0877778
ID 23878 gen 519475 top level 23130 path Docker/storage/btrfs/subvolumes/5a52a1d23303d12dc74119b5587bc4aa8b19b741d7a4fa253f24024c60692780
ID 23879 gen 519438 top level 23130 path Docker/storage/btrfs/subvolumes/2ef91804894a296102a9ce1e38bca8c00d2703c414166f59e08541ce8542ab1e
ID 23880 gen 519439 top level 23130 path Docker/storage/btrfs/subvolumes/f80999a1f330b7e680bdf16ae900455abe50121d7c69ba2b5e4fc1fc10e3cc70
ID 23881 gen 519451 top level 23130 path Docker/storage/btrfs/subvolumes/6cc0fc2a5ee3bc506e9a0c41eb25431b4d163b1e407c927ee3ac18169d53cc1a
ID 23890 gen 519454 top level 23130 path Docker/storage/btrfs/subvolumes/6227b6b8580ed194155df9fa3ad97aa870af1405278f7bfcf53d851d1e92672b
ID 23891 gen 519456 top level 23130 path Docker/storage/btrfs/subvolumes/77f3be1cdfd236adec9fa055c3f3a67ec4bf0e84d559bbbcbeed7b3b1931f945
ID 23892 gen 519455 top level 23130 path Docker/storage/btrfs/subvolumes/b5013af453ed30fb88b5653d6f5d4d446adf373f92fc569f5581754a50042cd8
ID 23893 gen 520229 top level 23130 path Docker/storage/btrfs/subvolumes/9d0a60b3fc3bfac520687cfb309796edf28e5832089e9d0733bf878f03d7fbde
ID 23896 gen 519463 top level 23130 path Docker/storage/btrfs/subvolumes/3230102aa42cc9b1fb67bbc210d32141d2bb6fdf4fac2b974c5651090ce3f8db
ID 23897 gen 519457 top level 23130 path Docker/storage/btrfs/subvolumes/84f1253e3996f344fb5f7c577c0d6d666eff34677b4986048c2211186816e817-init
ID 23898 gen 519458 top level 23130 path Docker/storage/btrfs/subvolumes/c40fe59aa44a5e7685681eecb225af542424c08a9467ca2fa153206df683d572-init
ID 23899 gen 519485 top level 23130 path Docker/storage/btrfs/subvolumes/84f1253e3996f344fb5f7c577c0d6d666eff34677b4986048c2211186816e817
ID 23900 gen 519485 top level 23130 path Docker/storage/btrfs/subvolumes/c40fe59aa44a5e7685681eecb225af542424c08a9467ca2fa153206df683d572
ID 23901 gen 519460 top level 23130 path Docker/storage/btrfs/subvolumes/53f4d02b59e83262bd8346888ff043394643f14a281f672c806329769ce21b60-init
ID 23902 gen 519485 top level 23130 path Docker/storage/btrfs/subvolumes/53f4d02b59e83262bd8346888ff043394643f14a281f672c806329769ce21b60
ID 23905 gen 519466 top level 23130 path Docker/storage/btrfs/subvolumes/2a0b105db03aa1adf11209f1289086dacdb65b630f8b4fd96da4f25b3f9202fe
ID 23908 gen 519469 top level 23130 path Docker/storage/btrfs/subvolumes/9a200def196d497dc472f69d97e4bafda686881b1bd990dbbbf36aae3cc9b843
ID 23911 gen 519472 top level 23130 path Docker/storage/btrfs/subvolumes/1c6eb33115b02635f6f3b99649b9260ad949aa2b2dd2ddf6393d78b3ec03188e
ID 23914 gen 519473 top level 23130 path Docker/storage/btrfs/subvolumes/b8bd0cd216328acac1bfb8c76f3114ea8125541a5107d009207358895deed165
ID 23917 gen 519478 top level 23130 path Docker/storage/btrfs/subvolumes/c1d802caf7cbeb141d551034be2cdea421f45ffb92043aa147893c8be001b1cf
ID 23920 gen 519479 top level 23130 path Docker/storage/btrfs/subvolumes/daad25024289474233b7825537a827f4098413c9d684b6a4a976757d8ab7fabc
ID 23921 gen 519480 top level 23130 path Docker/storage/btrfs/subvolumes/d74d4ee8b3242476aa130e8c445c6010f05fc5f870115627038fc08276663c62-init
ID 23922 gen 519485 top level 23130 path Docker/storage/btrfs/subvolumes/d74d4ee8b3242476aa130e8c445c6010f05fc5f870115627038fc08276663c62
ID 23925 gen 520016 top level 23130 path Docker/storage/btrfs/subvolumes/9c53af3f0e9df986e1c76170e7d7a933b59fa21edc94cde343814ea0cb7e73f9
ID 23928 gen 520019 top level 23130 path Docker/storage/btrfs/subvolumes/380640c16dbd97b9dc9d6e83592ec8cc706d9f63decf566bf09c122bbedbc8ae
ID 23931 gen 520022 top level 23130 path Docker/storage/btrfs/subvolumes/bd692530f1b8727292fa660c1a2fcbd34410277ba98f797781253ad34584b318
ID 23934 gen 520025 top level 23130 path Docker/storage/btrfs/subvolumes/e29683eab6e931f6989406aeebde78bc05562023664ed0ebcb714f9db029dae8
ID 23937 gen 520028 top level 23130 path Docker/storage/btrfs/subvolumes/6d0aae68947a621c6bb774d75fd21c9efc75aa2618e92249bf4250338405169c
ID 23940 gen 520031 top level 23130 path Docker/storage/btrfs/subvolumes/0af90b37e5648a57c0f5a9eb34c9e6b74013bf27731f45980075670873e56f08
ID 23943 gen 520078 top level 23130 path Docker/storage/btrfs/subvolumes/b64053bb87a7a9b963673c3efd8034d4915edb826596b322007a99b283cf39c0
ID 23944 gen 520033 top level 23130 path Docker/storage/btrfs/subvolumes/2b2ee21f9edb8f2c9dd1d53337e3aa717deb3b1513f8f55e5f49bfeca3ce0806-init
ID 23945 gen 520034 top level 23130 path Docker/storage/btrfs/subvolumes/2b2ee21f9edb8f2c9dd1d53337e3aa717deb3b1513f8f55e5f49bfeca3ce0806
ID 23946 gen 520052 top level 23130 path Docker/storage/btrfs/subvolumes/8f68c1a030790e5c4f702b8085c2fb555f53d1c04f87d63f8b7daa32754e6742-init
ID 23947 gen 520053 top level 23130 path Docker/storage/btrfs/subvolumes/8f68c1a030790e5c4f702b8085c2fb555f53d1c04f87d63f8b7daa32754e6742
ID 23948 gen 520071 top level 23130 path Docker/storage/btrfs/subvolumes/eadc5c97085f242109fcdec2b276c836843500ebced294350fb208fd96bbf672-init
ID 23949 gen 520072 top level 23130 path Docker/storage/btrfs/subvolumes/eadc5c97085f242109fcdec2b276c836843500ebced294350fb208fd96bbf672
ID 23950 gen 520079 top level 23130 path Docker/storage/btrfs/subvolumes/44ce3d179cd90425e203338031807f3d37d5c0fb7c63873138df9cd2143efa18-init
ID 23951 gen 520080 top level 23130 path Docker/storage/btrfs/subvolumes/44ce3d179cd90425e203338031807f3d37d5c0fb7c63873138df9cd2143efa18
ID 23954 gen 520232 top level 23130 path Docker/storage/btrfs/subvolumes/803cf45639119d365bd07f220385118fad228c0691fd17f46812e5952f3fdda0
ID 23957 gen 520235 top level 23130 path Docker/storage/btrfs/subvolumes/a3a7dd71f1c9e5a0333be3bb2aacb4473bf29b2f31086a1aabd055ead546d663
ID 23960 gen 520238 top level 23130 path Docker/storage/btrfs/subvolumes/d579bc62c749eddbb0d69593b6e025cee0a75411307c6723ebd0605d0f36eca7
ID 23963 gen 520241 top level 23130 path Docker/storage/btrfs/subvolumes/362d79cb1d4abac0b5612441104f924cc3cfc28c59135ddb75b0635709c0ff2a
ID 23966 gen 520244 top level 23130 path Docker/storage/btrfs/subvolumes/90424ad26c9bb72aa21bb63a9819a91a0c8467533ce5c67dd02b92da5ec855ac
ID 23969 gen 520248 top level 23130 path Docker/storage/btrfs/subvolumes/037d79171c0dcdfde87d733f2af01a03a89bdac6cc150604c0620b53d0732fd4
ID 23972 gen 520253 top level 23130 path Docker/storage/btrfs/subvolumes/a76faa5dc869a43e43aa079297ab0d6464f896fac1936a422ca95de32464255a
ID 23975 gen 520254 top level 23130 path Docker/storage/btrfs/subvolumes/31bdc7280165c646a0e466f5ebb7aaaf53464042a2de5aa3c8939fbccc2cc2cd

I have read the following threads, but they don’t solve my issue.

This is what I have before installing the new drives:
2x internal SSDs for Rockstor, configured with mdraid.
6x 4TB “WDC WD40EFRX-68W” (RAID10, main datastore)
1x 2TB Seagate Barracuda (unused)
They all work fine until I try to add more drives; then the UUIDs disappear and I see the error in the thread title on the Rockstor console.
I have tried adding:
4x 750GB Seagate Barracuda drives
or
5x 2TB Seagate Barracuda drives

I first thought the 750GB drives were bad and causing this issue, but I have since validated that this is not true. The 2TB drives were in use in another system only an hour earlier, and they cause the same weird problem when added.

It seems I can only have my base configuration plus two more drives before things break and the UUIDs get mucked up. If I remove the extra drives, everything returns to normal.

Could this be related to my PSU?
At boot time the drives draw more current, and maybe my 450W unit is not sufficient, causing instability in the drives. What is the recommendation here?
Or is there a limitation on the number of drives the system can manage?
Or is there a bug?
Or is it something else I have missed?

Please help.

lsblk looks like this for the previously good 4TB drives; the others are just missing.
NAME="sdc" MODEL="WDC WD40EFRX-68W" SERIAL="" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="6:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sdd" MODEL="WDC WD40EFRX-68W" SERIAL="" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="6:0:1:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sde" MODEL="WDC WD40EFRX-68W" SERIAL="" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="6:0:2:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""

Cheers

Is anyone able to provide some guidance or advice on my issue?

Please help?

Perhaps @phillxnet would have some insights?

I think he is the go-to guy for these kinds of problems, as he understands the inner workings of Rockstor in this regard.


@phillxnet
Hi Phillip,

Are you able to provide insight here?

I do think it is power related, but it is a strange bug. I just don’t want to go out and buy a larger PSU only to find that this is actually some other bug.

@GIDDION Hello again. I am just about to write a technical manual wiki entry on device / serial management in Rockstor and want to reference it here, so hang in there. And yes, there is a suspected bug in serial management, but I have yet to root it out and have only seen it occur on nvme devices so far, i.e. in the following forum thread:

In short, no db entry for serial should be null, and yet in the above thread that was found to be the case: hence the suspected bug. But I would rather have more info before creating a targeted issue on this, as it is still a little hazy.

Maybe we can root it out here if you are also affected and game.

I’ll get this wiki entry done first, then circle back around to your serial issue. But as a quick note, your lsblk readout does indicate that those 4TB drives are not reporting their serial. Try the udevadm commands from the above referenced forum thread, but on your problem drives, and see if the serial numbers are extracted correctly that way (posting the full output of both here will also help), as when lsblk reports no serial Rockstor fails over to udevadm to retrieve it. Sorry, I need to read more on your issue as reported, but the wiki will also be needed. Also note the contents of your rockstor.log (System - Logs Manager) when you press the Rescan button on the Disks page. Essentially Rockstor parses the lsblk output to know what drives are connected and manages them via their serial numbers, which are required to be unique.
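
As a rough sketch of that lsblk-then-udevadm fallback (illustrative only, assuming lsblk and udevadm are on the PATH; this is not Rockstor's actual scan_disks() code):

# Sketch only: the shape of the lsblk-then-udevadm fallback described above,
# not Rockstor's actual code. Assumes lsblk and udevadm are on the PATH.
import re
import subprocess


def lsblk_disks():
    """Return one dict per lsblk -P line, keyed by column name."""
    out = subprocess.check_output(
        ['lsblk', '-P', '-o',
         'NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID'])
    return [dict(re.findall(r'(\w+)="([^"]*)"', line))
            for line in out.decode().splitlines()]


def udev_serial(dev_name):
    """Fallback: ask udevadm for ID_SERIAL_SHORT (or ID_SERIAL) of a device."""
    out = subprocess.check_output(['udevadm', 'info', '--name=' + dev_name])
    props = dict(line[3:].split('=', 1)
                 for line in out.decode().splitlines()
                 if line.startswith('E: ') and '=' in line)
    return props.get('ID_SERIAL_SHORT') or props.get('ID_SERIAL', '')


for disk in lsblk_disks():
    if disk.get('TYPE') != 'disk':
        continue
    serial = disk.get('SERIAL') or udev_serial(disk['NAME'])
    print('%s -> serial %r' % (disk['NAME'], serial))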

Hope that helps.

Back in a bit.

Edit: and screen grabs of the Disks page would be good. Thanks.

@GIDDION We now have a “Device management in Rockstor” technical manual entry, so at least that’s there to refer to now. It is mainly intended as a developer reference, but it still needed to be done. I await your info as previously requested in this thread.

Cheers.


Thanks, I will have a look when I can. We just had a baby, so time is limited.
This is important to me, so I will endeavour to action it asap.

I also have a number of other tests to try, including rescanning the bus manually. I will also look at updating the firmware of the HBA, but this requires UEFI, which I have not done before, so it may take me a little while to figure out.

Thanks, I’ll update soon. :slight_smile:

This may be very long.

On the system before adding the new drives.

lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID

NAME="sda" MODEL="INTEL SSDSC2BW12" SERIAL="CVTR608202EC120AGN" SIZE="111.8G" TRAN="sata" VENDOR="ATA     " HCTL="3:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sda1" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:swap" UUID="6ed15611-9fad-55c3-4e40-22853491d3b1"
NAME="md127" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="swap" LABEL="" UUID="577f18f5-a582-441c-be48-7b48e042a8f4"
NAME="sda2" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:boot" UUID="2243c4de-53da-af02-25ce-4f0c402ed04b"
NAME="md125" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="ext4" LABEL="" UUID="171488f2-3597-442f-a1c1-df22384b46af"
NAME="sda3" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:root" UUID="2ab65b40-c4a3-f737-663d-3c522b91cd52"
NAME="md126" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="99b26017-c5bd-48a1-ab12-f849534083a7"
NAME="sdb" MODEL="INTEL SSDSC2BW12" SERIAL="CVTR6082019U120AGN" SIZE="111.8G" TRAN="sata" VENDOR="ATA     " HCTL="4:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sdb1" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:swap" UUID="6ed15611-9fad-55c3-4e40-22853491d3b1"
NAME="md127" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="swap" LABEL="" UUID="577f18f5-a582-441c-be48-7b48e042a8f4"
NAME="sdb2" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:boot" UUID="2243c4de-53da-af02-25ce-4f0c402ed04b"
NAME="md125" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="ext4" LABEL="" UUID="171488f2-3597-442f-a1c1-df22384b46af"
NAME="sdb3" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:root" UUID="2ab65b40-c4a3-f737-663d-3c522b91cd52"
NAME="md126" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="99b26017-c5bd-48a1-ab12-f849534083a7"
NAME="sdc" MODEL="WDC WD40EFRX-68W" SERIAL="WD-WCC4ECKJHA99" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="4TB-HDD-R10" UUID="bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa"
NAME="sdd" MODEL="WDC WD40EFRX-68W" SERIAL="WD-WCC4EAKDT0XU" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:1:0" TYPE="disk" FSTYPE="btrfs" LABEL="4TB-HDD-R10" UUID="bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa"
NAME="sde" MODEL="WDC WD40EFRX-68W" SERIAL="WD-WCC4E7R91DJD" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:2:0" TYPE="disk" FSTYPE="btrfs" LABEL="4TB-HDD-R10" UUID="bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa"
NAME="sdf" MODEL="WDC WD40EFRX-68W" SERIAL="WD-WCC4EM0WNP5L" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:3:0" TYPE="disk" FSTYPE="btrfs" LABEL="4TB-HDD-R10" UUID="bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa"
NAME="sdg" MODEL="WDC WD40EFRX-68W" SERIAL="WD-WCC4E7R91LV4" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:4:0" TYPE="disk" FSTYPE="btrfs" LABEL="4TB-HDD-R10" UUID="bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa"
NAME="sdh" MODEL="WDC WD40EFRX-68W" SERIAL="WD-WCC4E4AH7JC2" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:5:0" TYPE="disk" FSTYPE="btrfs" LABEL="4TB-HDD-R10" UUID="bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa"

udevadm info --name sdd

P: /devices/pci0000:00/0000:00:01.1/0000:02:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdd
N: sdd
S: disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4EAKDT0XU
S: disk/by-id/wwn-0x50014ee2b52b24d9
S: disk/by-label/4TB-HDD-R10
S: disk/by-path/pci-0000:02:00.0-sas-0x443322110d000000-lun-0
S: disk/by-path/pci-0000:02:00.0-sas-phy13-lun-0
S: disk/by-uuid/bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa
E: DEVLINKS=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4EAKDT0XU /dev/disk/by-id/wwn-0x50014ee2b52b24d9 /dev/disk/by-label/4TB-HDD-R10 /dev/disk/by-path/pci-0000:02:00.0-sas-0x443322110d000000-lun-0 /dev/disk/by-path/pci-0000:02:00.0-sas-phy13-lun-0 /dev/disk/by-uuid/bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa
E: DEVNAME=/dev/sdd
E: DEVPATH=/devices/pci0000:00/0000:00:01.1/0000:02:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdd
E: DEVTYPE=disk
E: ID_ATA=1
E: ID_ATA_DOWNLOAD_MICROCODE=1
E: ID_ATA_FEATURE_SET_HPA=1
E: ID_ATA_FEATURE_SET_HPA_ENABLED=1
E: ID_ATA_FEATURE_SET_PM=1
E: ID_ATA_FEATURE_SET_PM_ENABLED=1
E: ID_ATA_FEATURE_SET_PUIS=1
E: ID_ATA_FEATURE_SET_PUIS_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY=1
E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=510
E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=510
E: ID_ATA_FEATURE_SET_SMART=1
E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
E: ID_ATA_ROTATION_RATE_RPM=5400
E: ID_ATA_SATA=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
E: ID_ATA_WRITE_CACHE=1
E: ID_ATA_WRITE_CACHE_ENABLED=1
E: ID_BUS=ata
E: ID_FS_LABEL=4TB-HDD-R10
E: ID_FS_LABEL_ENC=4TB-HDD-R10
E: ID_FS_TYPE=btrfs
E: ID_FS_USAGE=filesystem
E: ID_FS_UUID=bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa
E: ID_FS_UUID_ENC=bbea75b0-74bb-4fe7-b8d6-f8ff1a79faaa
E: ID_FS_UUID_SUB=5c4c3493-e351-4534-a139-ce160c90369d
E: ID_FS_UUID_SUB_ENC=5c4c3493-e351-4534-a139-ce160c90369d
E: ID_MODEL=WDC_WD40EFRX-68WT0N0
E: ID_MODEL_ENC=WDC\x20WD40EFRX-68WT0N0\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
E: ID_PATH=pci-0000:02:00.0-sas-0x443322110d000000-lun-0
E: ID_PATH_TAG=pci-0000_02_00_0-sas-0x443322110d000000-lun-0
E: ID_REVISION=80.00A80
E: ID_SAS_PATH=pci-0000:02:00.0-sas-phy13-lun-0
E: ID_SERIAL=WDC_WD40EFRX-68WT0N0_WD-WCC4EAKDT0XU
E: ID_SERIAL_SHORT=WD-WCC4EAKDT0XU
E: ID_TYPE=disk
E: ID_WWN=0x50014ee2b52b24d9
E: ID_WWN_WITH_EXTENSION=0x50014ee2b52b24d9
E: MAJOR=8
E: MINOR=48
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=837749

After adding the drives;

lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID

NAME="sda" MODEL="INTEL SSDSC2BW12" SERIAL="CVTR608202EC120AGN" SIZE="111.8G" TRAN="sata" VENDOR="ATA     " HCTL="3:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sda1" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:swap" UUID="6ed15611-9fad-55c3-4e40-22853491d3b1"
NAME="md127" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="swap" LABEL="" UUID="577f18f5-a582-441c-be48-7b48e042a8f4"
NAME="sda2" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:boot" UUID="2243c4de-53da-af02-25ce-4f0c402ed04b"
NAME="md126" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="ext4" LABEL="" UUID="171488f2-3597-442f-a1c1-df22384b46af"
NAME="sda3" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:root" UUID="2ab65b40-c4a3-f737-663d-3c522b91cd52"
NAME="md125" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="99b26017-c5bd-48a1-ab12-f849534083a7"
NAME="sdb" MODEL="INTEL SSDSC2BW12" SERIAL="CVTR6082019U120AGN" SIZE="111.8G" TRAN="sata" VENDOR="ATA     " HCTL="4:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sdb1" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:swap" UUID="6ed15611-9fad-55c3-4e40-22853491d3b1"
NAME="md127" MODEL="" SERIAL="" SIZE="7.5G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="swap" LABEL="" UUID="577f18f5-a582-441c-be48-7b48e042a8f4"
NAME="sdb2" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:boot" UUID="2243c4de-53da-af02-25ce-4f0c402ed04b"
NAME="md126" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="ext4" LABEL="" UUID="171488f2-3597-442f-a1c1-df22384b46af"
NAME="sdb3" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:root" UUID="2ab65b40-c4a3-f737-663d-3c522b91cd52"
NAME="md125" MODEL="" SERIAL="" SIZE="100.6G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="99b26017-c5bd-48a1-ab12-f849534083a7"
NAME="sdc" MODEL="WDC WD40EFRX-68W" SERIAL="" SIZE="3.7T" TRAN="sas" VENDOR="ATA     " HCTL="0:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""

udevadm info --name sdc

P: /devices/pci0000:00/0000:00:01.1/0000:02:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sdc
N: sdc
S: disk/by-path/pci-0000:02:00.0-sas-0x4433221109000000-lun-0
S: disk/by-path/pci-0000:02:00.0-sas-phy9-lun-0
E: DEVLINKS=/dev/disk/by-path/pci-0000:02:00.0-sas-0x4433221109000000-lun-0 /dev/disk/by-path/pci-0000:02:00.0-sas-phy9-lun-0
E: DEVNAME=/dev/sdc
E: DEVPATH=/devices/pci0000:00/0000:00:01.1/0000:02:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sdc
E: DEVTYPE=disk
E: ID_PATH=pci-0000:02:00.0-sas-0x4433221109000000-lun-0
E: ID_PATH_TAG=pci-0000_02_00_0-sas-0x4433221109000000-lun-0
E: ID_SAS_PATH=pci-0000:02:00.0-sas-phy9-lun-0
E: MAJOR=8
E: MINOR=32
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=71913

I am unable to upload all of the dmesg output, so here is just the last part, after the command below.

udevadm trigger

[  536.703003] scsi_io_completion: 10 callbacks suppressed
[  536.703008] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  536.703010] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  536.703011] blk_update_request: 10 callbacks suppressed
[  536.703012] blk_update_request: I/O error, dev sdc, sector 7814036992
[  536.704996] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  536.705000] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  536.705001] blk_update_request: I/O error, dev sdc, sector 7814036992
[  536.705664] Buffer I/O error on dev sdc, logical block 976754624, async page read
[  536.910466] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  536.910470] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  536.910471] blk_update_request: I/O error, dev sdc, sector 7814036992
[  536.911445] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  536.911447] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  536.911449] blk_update_request: I/O error, dev sdc, sector 7814036992
[  536.912142] Buffer I/O error on dev sdc, logical block 976754624, async page read
[  545.379671] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  545.379674] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  545.379676] blk_update_request: I/O error, dev sdc, sector 7814036992
[  545.380624] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  545.380626] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  545.380627] blk_update_request: I/O error, dev sdc, sector 7814036992
[  545.381329] Buffer I/O error on dev sdc, logical block 976754624, async page read
[  545.581149] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  545.581153] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  545.581154] blk_update_request: I/O error, dev sdc, sector 7814036992
[  545.581888] sd 0:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  545.581890] sd 0:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
[  545.581891] blk_update_request: I/O error, dev sdc, sector 7814036992
[  545.582615] Buffer I/O error on dev sdc, logical block 976754624, async page read
[  549.483577] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
[  549.483599] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
[  549.483611] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
[  609.789630] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
[  609.789636] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
[  609.789640] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO

more to come as I explore…


@phillxnet
Where can I send the rockstor.log? It is way too large to post here.

This is now with most drives removed and only 2 extras installed. If I add 1 more, it all goes funny as in the post above.

@GIDDION First off congratulations on the new baby.

My current thought is that you have a dodgy drive or port, as the following excerpt from your logs indicates a blk_update_request I/O error, always on the same sector. Could it be that this drive (most likely the drive, I think) is upsetting the controller and causing all drives attached to that controller, or part thereof, to fail detection?

From the Rockstor perspective, if lsblk doesn’t report a device then it doesn’t exist: hence the bunch of detached / removed drives whenever this problem appears, as the db knows they used to be attached and that they no longer are. That is how we determine a detached device. Incidentally, in testing these drives are now given a “detached-uuid4” name, rather than just a uuid4 name, to make this clearer.
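
In rough pseudo-code terms, the detached logic amounts to something like this (illustrative only, not the actual db code):

# Illustrative only: the gist of the detached naming described above, not the
# actual db / model code.
import uuid


def flag_detached(db_disk_names, lsblk_disk_names):
    """Return {old_name: new_name} for db disks no longer seen by lsblk."""
    renames = {}
    for name in db_disk_names:
        if name not in lsblk_disk_names:
            # In testing these now get a clearer 'detached-<uuid4>' name.
            renames[name] = 'detached-' + str(uuid.uuid4())
    return renames


print(flag_detached(['sdc', 'sdd', 'sde'], ['sdd']))
# e.g. {'sdc': 'detached-9b3f...', 'sde': 'detached-1a2c...'}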

It also seems that the “WDC WD40EFRX-68W” drive, when it was sdc with the above quoted errors, also failed to report its serial (i.e. it is not showing in the lsblk output), and as a result udev wasn’t able to assign a by-id name, as those require a serial. Hence the no-serial Web-UI report.

Now I know that earlier on you tried adding 4 drives and things went wrong, so you sensibly tried adding 4 completely different drives, and the same thing occurred. This is very strange, but judging from the quoted log entry, something is definitely not happy at quite a low level here.

Essentially, the lsblk output has to see the drives for Rockstor to work with them, and it, or udevadm, also has to be able to extract their serial. This in turn means we get a by-id name, which will soon be required too, but that is essentially just an extension of needing a serial in the first place.

A quick note on the screen grab pics: they are really low res, so I can hardly make them out. If you just drag and drop the original it will be uploaded and auto-sized, and then, when clicked on, will provide a full (or near enough) version.

So in your last post you say adding an additional drive throws things off; is that if you add any drive, and on any spare port? There are no limits on the number of drives Rockstor can manage beyond those of the underlying hardware and CentOS, by the way. There is currently a known limit of 9 partitions on the system drive, but that is irrelevant here.

I can’t quite get away from the “blk_update_request: I/O error”. Could it be that your interface has more connections than it is able to present to the OS simultaneously with its current config, i.e. it can present only a maximum of 8 drives to the system but can connect, say, twice as many (in its current or maybe any config, another question)? I.e. one can connect 16 real drives but only present them to the OS as 8 virtual drives (via a hw raid arrangement). Sorry, just guessing here, but it is rather strange. That might explain why after 8 drives you have issues, as the card fails to report the devices correctly. Also, Rockstor currently doesn’t deal with multipath devices; not sure if that’s relevant here.

For the time being I would stick to the low-level output of lsblk, keep an eye on the logs for the I/O errors, and ensure udevadm is able to see serials for all devices, as without those the underlying config of the hardware or OS is the problem. Once that is sorted, we are back into Rockstor land. So continue with your hw-juggling diagnostics and we should have more information with which to work this problem.

Incidentally, the bug I suspected in the previously referenced issue does not seem to be in play here, at least not yet anyway: there, a device did present its serial via udev, but Rockstor misinterpreted it because it didn’t recognise the device type / name and messed up when enforcing unique serials. The device also had to be a system drive, I believe.

I think you are close to sorting this one simply by elimination; maybe someone with more experience with your particular hardware can chime in with wiser words. What is the hardware arrangement, i.e. the SAS controller make / model / spec etc.?

Hope that helps.

Each port/cable on the LSI HBA (a quad-port HBA) is a SAS 4-lane connection to the SAS backplane of my Norco RPC-4224 chassis, which enables 4 drives per cable/connection. So, with a quad-port card I can physically connect to 4 of the backplane ports, which is how it is set up: supporting 16 drives directly, without any virtual-drive stuff going on, I would have to believe.
I will connect up the 6x 2TB HDDs I have, instead of the 6x 4TB drives, and report that output, before adding the 4TB drives back into the setup and reporting that, all without using the very old 750GB drives.

As for a potentially faulty drive, I thought this at first too, and went through 12x 750GB HDDs, writing “DEAD” on them in big black text, but the drives work fine if the total stays below the 8-drive threshold.

I will eliminate the 750GB HDDs as a potential issue, which I previously did, by only using the 2TB and 4TB drives. We know the 4TB drives work fine, as they have all my data on them. I also know the 2TB drives work fine, as they were in use (in an alternate system) only recently, before the data was relocated to the new 4TB configuration.
By pulling the 4TB drives, setting up the 2TB drives into a configuration, gathering data, then adding the 4TB drives back in, we should see the same weird behaviour. That eliminates both a potential drive issue and a port/backplane issue, as different ports/backplanes are used for each drive type.

I do need to look at the firmware of the HBA.

Hi @phillxnet, I have been catching up on the various responses you’ve made on this topic and was hoping you could clarify something for me. I have just installed Rockstor on a Pi4 and have an ORICO 4-bay USB 3.0 to SATA enclosure.

I have run the command $ lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID | grep -i -e d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6 and the results were as follows:

NAME="sdb" MODEL="USB3.0_DISK00" SERIAL="ZFL6NWMZ" SIZE="1.8T" TRAN="usb" VENDOR="External" HCTL="1:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="btrfs-raid10" UUID="d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6"
NAME="sdc" MODEL="USB3.0_DISK01" SERIAL="ZFL6NBB2" SIZE="1.8T" TRAN="usb" VENDOR="External" HCTL="1:0:0:1" TYPE="disk" FSTYPE="btrfs" LABEL="btrfs-raid10" UUID="d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6"
NAME="sdd" MODEL="USB3.0_DISK02" SERIAL="ZFL6P5MC" SIZE="1.8T" TRAN="usb" VENDOR="External" HCTL="1:0:0:2" TYPE="disk" FSTYPE="btrfs" LABEL="btrfs-raid10" UUID="d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6"
NAME="sde" MODEL="USB3.0_DISK03" SERIAL="ZFL6P5PQ" SIZE="1.8T" TRAN="usb" VENDOR="External" HCTL="1:0:0:3" TYPE="disk" FSTYPE="btrfs" LABEL="btrfs-raid10" UUID="d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6"

It seems as though in this instance the serials for the HDDs are available and unique. Is there any advice you can give on how I could use these with my setup?

Thanks in advance

@redplague Welcome to the Rockstor community forum.
Re:

Hopefully I can:

As you have likely read, some of these enclosures have been problematic in that they don’t issue unique serials, or they issue serial numbers relevant only to the particular bay, which means Rockstor can only track bays, not actual devices.

But:

Yes, it does, doesn’t it. Those serials look like regular Seagate serials or the like. Each drive will actually have its serial printed on it somewhere (sometimes on the very end), so if those serials match exactly what you see in this list then it’s job done and all is well. If that is the case, let us know the exact model of this enclosure, as ORICO make some nice equipment and it’s a shame they based some of it on low-end, basically faulty, chips that obfuscate the real drive serials.

Our current code to flag known problematic enclosures has the following notes/entries:

It would be useful to know if udev also returns unique, and hopefully the original, serials. It likely will.

Our future preferred serial retrieval will likely be via udev directly, i.e. this procedure:

So double check what the following command returns, serial info wise, for each of your drives in that enclosure:

udevadm info --name=device_name

where “device_name” is, from the above procedure’s docstring:

:param device_name: eg /dev/sda as per lsblk output used in scan_disks()

The likelihood is that they will match those returned by lsblk.

As for how you could use these drives, if the above double check pans out, they should be usable however you like. You are presumably still running through a single USB port, but at least this means that when the USB bus falters, which it is known to do on various whims, all of the enclosed drives become inaccessible simultaneously. And likewise they all then come back, as one, after the bus has reset. So I would just advise that you do not create pools that have members both within and outside of this enclosure. I.e. if all of a pool’s members are within the enclosure they will come and go as one with the vagaries of the USB bus. This avoids a common problem where one drive is on one USB bus (adapter) and another drive (in the same pool) is on another USB bus: one bus goes down and activity continues on the remaining drive, then the first drive returns after a USB bus blip and you potentially have a split-brain situation in the making. Btrfs is still very sensitive to drives dropping out and then returning.

Let us know how it goes and what the performance is like. It could be quite a nice setup. Maybe send a picture of this enclosure if possible. We are always on the look-out for well-behaved, well-reported devices, so do keep us informed of how this device holds up under use.

I see from the lsblk output that the drives have LABEL=“btrfs-raid10”; is this from you having already tried them out within Rockstor, or from a prior life they have had?

Hope that helps, and that this is in fact a perfectly usable multi-drive external enclosure.


Hi @phillxnet

Thanks for the speedy response!

  • I can confirm that those are indeed the Seagate Barracuda drive serial numbers. I found them printed on the original boxes the drives arrived in as SN:xxxxxxx.
  • The ORICO model is: 9548RU3 (4-bay RAID, USB 3.0)

Rockstor:~ # udevadm info --name=/dev/sda
P: /devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb2/2-2/2-2:1.0/host0/target0:0:0/0:0:0:0/block/sda
N: sda
L: 0
S: disk/by-id/usb-External_USB3.0_DISK00_20170331000DA-0:0
S: disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0
S: disk/by-label/btrfs-raid10
S: disk/by-uuid/d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6
E: DEVPATH=/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb2/2-2/2-2:1.0/host0/target0:0:0/0:0:0:0/block/sda
E: DEVNAME=/dev/sda
E: DEVTYPE=disk
E: MAJOR=8
E: MINOR=0
E: SUBSYSTEM=block
E: USEC_INITIALIZED=87233884517
E: DONT_DEL_PART_NODES=1
E: ID_VENDOR=External
E: ID_VENDOR_ENC=External
E: ID_VENDOR_ID=152d
E: ID_MODEL=USB3.0_DISK00
E: ID_MODEL_ENC=USB3.0\x20DISK00\x20\x20\x20
E: ID_MODEL_ID=0567
E: ID_REVISION=0103
E: ID_SERIAL=External_USB3.0_DISK00_20170331000DA-0:0
E: ID_SERIAL_SHORT=20170331000DA
E: ID_TYPE=disk
E: ID_INSTANCE=0:0
E: ID_BUS=usb
E: ID_USB_INTERFACES=:080650:
E: ID_USB_INTERFACE_NUM=00
E: ID_USB_DRIVER=usb-storage
E: ID_PATH=platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0
E: ID_PATH_TAG=platform-fd500000_pcie-pci-0000_01_00_0-usb-0_2_1_0-scsi-0_0_0_0
E: ID_FS_LABEL=btrfs-raid10
E: ID_FS_LABEL_ENC=btrfs-raid10
E: ID_FS_UUID=d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6
E: ID_FS_UUID_ENC=d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6
E: ID_FS_UUID_SUB=618922bc-87f2-49a0-8512-d5612e1e6a14
E: ID_FS_UUID_SUB_ENC=618922bc-87f2-49a0-8512-d5612e1e6a14
E: ID_FS_TYPE=btrfs
E: ID_FS_USAGE=filesystem
E: COMPAT_SYMLINK_GENERATION=2
E: ID_BTRFS_READY=1
E: DEVLINKS=/dev/disk/by-id/usb-External_USB3.0_DISK00_20170331000DA-0:0 /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0 /dev/disk/by-label/btrfs-raid10 /dev/disk/by-uuid/d93a79d7-d6c3-4e1b-8b8d-ecf697c48cb6
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:

Comparing this output to the other drives in the enclosure revealed:

Rockstor:~ # udevadm info --name=/dev/sda | grep -i -e serial
E: ID_SERIAL=External_USB3.0_DISK00_20170331000DA-0:0
E: ID_SERIAL_SHORT=20170331000DA
Rockstor:~ # udevadm info --name=/dev/sdb | grep -i -e serial
E: ID_SERIAL=External_USB3.0_DISK01_20170331000DA-0:1
E: ID_SERIAL_SHORT=20170331000DA
Rockstor:~ # udevadm info --name=/dev/sdc | grep -i -e serial
E: ID_SERIAL=External_USB3.0_DISK02_20170331000DA-0:2
E: ID_SERIAL_SHORT=20170331000DA
Rockstor:~ # udevadm info --name=/dev/sdd | grep -i -e serial
E: ID_SERIAL=External_USB3.0_DISK03_20170331000DA-0:3
E: ID_SERIAL_SHORT=20170331000DA

Unfortunately, it looks as though all the drives return incorrect serial information, and it is not unique according to udevadm.

This is a left over from when I manually created a raid10 configuration on the command line: sudo mkfs.btrfs -f -L "btrfs-raid10" -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

I have attached an image of the enclosure even though it’s not as well behaved as I had hoped. I would like to have been able to use it with Rockstor:

[image: ORICO 4-bay enclosure]


@phillxnet

One more question: when I look in the Rockstor UI at the existing manually/externally created raid setup, I can see that I have the option to import pools from the first disk, which does not show a warning. What would you expect to happen if I did import the pool and tried to use the setup as usual within Rockstor?


@redplague Thanks for the follow-up and extra info.
Re:

Ok, that’s a shame. And a little surprising but there we go.

The following looks familiar from these types of devices:

/dev/sda - ID_SERIAL=External_USB3.0_DISK00_20170331000DA-0:0
/dev/sdb - ID_SERIAL=External_USB3.0_DISK01_20170331000DA-0:1
/dev/sdc - ID_SERIAL=External_USB3.0_DISK02_20170331000DA-0:2
/dev/sdd - ID_SERIAL=External_USB3.0_DISK03_20170331000DA-0:3

and the same ID_SERIAL_SHORT=20170331000DA across all devices too. This also explains your Disks page serial warnings. The first drive is not marked with a warning as it was simply the first to be found, with what initially looks like an unflagged serial number. It is only once the system realises that there is then a second/third/fourth repeat that it marks those as repeats. So basically this is not going to work. An import will not get you anything more than further down a path of confusion regarding disk management. Rockstor needs an anchor with which to track devices, and that is serial numbers. But they are all the same via udev!
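
In other words the flagging is essentially first-seen wins; a rough sketch of that behaviour (illustrative only, not the actual Web-UI code):

# Sketch of the first-seen behaviour described above; illustrative only, not
# the real Web-UI / db code.
def flag_repeat_serials(devices):
    """devices: list of (name, serial) tuples; return names flagged as repeats."""
    seen = set()
    flagged = []
    for name, serial in devices:
        if serial in seen:
            flagged.append(name)  # second/third/fourth occurrence gets flagged
        else:
            seen.add(serial)      # first occurrence passes unflagged
    return flagged


devs = [('sda', '20170331000DA'), ('sdb', '20170331000DA'),
        ('sdc', '20170331000DA'), ('sdd', '20170331000DA')]
print(flag_repeat_serials(devs))  # ['sdb', 'sdc', 'sdd'] - sda escapes the flag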

It is also interesting how the model column is populated, i.e. USB3.0 DISK00, USB3.0 DISK01, etc.

There is hardware obfuscation afoot here and that is problematic for us. I’m just a little surprised we didn’t pick up on the earlier serial numbers within lsblk’s output.

And having a look at your copied in full output from:

udevadm info --name=/dev/sda

I don’t see a single reference to the actual serial anywhere. Yet lsblk has it! I was certain that lsblk, these days, used udev to get this info. But if it has retrieved the serial, there is hope for us to somehow do the same.

If you can find a standard program to retrieve the ‘real’ serial, as per what is printed on the drive, we could pop in some compatibility for these types of devices. Maybe it can be retrieved via, for example, smartmontools or the like?

Apologies for offering little in the way of a workaround currently. Ideally I would need one of these enclosures in-house to experiment with. But given you have this in hand, do let us know if you find a quick/simple method of retrieving the drive serial from the dev name. That is all we need in this case. Then we can, on seeing such devices, revert to this back-up method. Udev is somehow playing along with this obfuscation, by design or by accident.

Let us know how your investigations go. But as-is this is not Rockstor compatible. I’d really like to add support for these devices, but they are currently just not behaving like regular independent drives on the same bus, as they have the same serial (as per udev)!! I’ll puzzle some more as I go and hopefully in time we can add a clause for these devices, as they are rather nice.

Hope that helps in some way. There may well be a third way to retrieve the original hardware-assigned serials that we are currently just missing. That would be the clincher for gaining compatibility.


@redplague I couldn’t resist; I had to do a quick search again on this.
Re:

This has actually cropped up before, and I think it is what I was thinking of when I instead defaulted to smartmontools. Apparently hdparm can tell us serials!!

What do the enclosure drives return when you try hdparm on them as I’ve just done here:

hdparm -i /dev/sdX

for each of the drive names therein.

If it can retrieve them we may have a potential work-around.

Hope that helps, and further suggestions are welcome. What we are after, really, is something cheap/quick and built-in that we can ideally parse rapidly.


@phillxnet Thank you for your investment of time, I know how precious it is!

For some reason I haven’t looked into deeply enough, the -i flag didn’t work for me and instead returned the error below:

$ sudo hdparm -i /dev/sdb

/dev/sdb:
HDIO_GET_IDENTITY failed: Invalid argument

However, looking at hdparm -h showed an alternative option:

 -i   Display drive identification
 -I   Detailed/current information directly from drive

Promising results on the first drive:

$ sudo hdparm -I /dev/sdb
/dev/sdb:
ATA device, with non-removable media
Model Number: ST2000DM008-2UB102
Serial Number: ZFL6NWMZ
Firmware Revision: 0964
Standards:
Supported: 7 6 5 4
Likely used: 7
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63

CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 3907029168
SG_IO: bad/missing sense data, sb[]: 70 00 0b 00 00 00 00 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Logical Sector size: 512 bytes
Physical Sector size: 512 bytes
device size with M = 1024*1024: 1907729 MBytes
device size with M = 1000*1000: 2000398 MBytes (2000 GB)
cache/buffer size = unknown
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec’d by Vendor, no device specific minimum
R/W multiple sector transfer: Max = 1 Current = 1
Advanced power management level: disabled
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=240ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* NOP cmd
Advanced Power Management feature set
SET_MAX security extension
* 48-bit Address feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Software settings preservation
Security:
supported
not enabled
not locked
frozen
not expired: security count
not supported: enhanced erase
more than 508min for SECURITY ERASE UNIT.
Checksum: correct

For the remaining disks I got:

$ sudo hdparm -I /dev/sdc | grep -i -e number
Model Number: ST2000DM008-2UB102
Serial Number: ZFL6NBB2
$ sudo hdparm -I /dev/sdd | grep -i -e number
Model Number: ST2000DM008-2UB102
Serial Number: ZFL6P5MC
$ sudo hdparm -I /dev/sde | grep -i -e number
Model Number: ST2000DM008-2UB102
Serial Number: ZFL6P5PQ

This seems hopeful, what do you think?
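
In case it’s useful, here is roughly how that could be pulled out programmatically (just a sketch on my part, assuming hdparm is available and run as root; not suggesting this is how Rockstor should do it):

# Just a sketch: pulling the serial out of 'hdparm -I' programmatically.
# Needs root and assumes hdparm is installed; not suggesting this is the
# final Rockstor approach.
import re
import subprocess


def hdparm_serial(dev):
    """Return the 'Serial Number' reported by `hdparm -I <dev>`, or ''."""
    out = subprocess.check_output(['hdparm', '-I', dev]).decode()
    match = re.search(r'Serial Number:\s*(\S+)', out)
    return match.group(1) if match else ''


for dev in ('/dev/sdb', '/dev/sdc', '/dev/sdd', '/dev/sde'):
    print('%s -> %s' % (dev, hdparm_serial(dev)))
# e.g. /dev/sdb -> ZFL6NWMZ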
