Device-mapper managed disk - unusable as pool member

Brief description of the problem

Adding a new composite/hybrid disk, handled via Linux device-mapper, results in:
Warning! Disk unusable as pool member - serial number is not legitimate or unique.

Detailed step by step instructions to reproduce the problem

I’m experimenting with a hybrid disk setup in which an SSD and two HDDs are combined into a single large disk: the SSD serves as a buffer and also stores the metadata, while the HDDs only store data blocks. The whole setup is handled via dm-zoned and is working as expected.

Problem is, I get this

Warning! Disk unusable as pool member - serial number is not legitimate or unique.

in the “Serial” column of the Storage->Disks page,
and later, when I click on the question-mark sign next to the disk’s Name (dmz-20178B801832) in an attempt to assign a role to the disk, I get the traceback listed below.

How can I overcome this? Can I manually assign a made-up ‘unique’ serial number to the dm-managed disk, for example? Or is there some other trick that would allow me to use this hybrid disk in Rockstor?
Its performance seems impressive, and I would like to get it running properly in Rockstor.
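
One idea I have been wondering about (completely untested, and the file name and match values below are made up for illustration): a udev rule that copies the stable DM_UUID into ID_SERIAL, the property a disk serial is normally read from.

# /etc/udev/rules.d/99-dmz-serial.rules (hypothetical, untested)
# Copy the stable device-mapper UUID into ID_SERIAL so that tools which
# key off the serial see a unique value for this dm device.
KERNEL=="dm-*", ENV{DM_UUID}=="dmz-*", ENV{ID_SERIAL}="$env{DM_UUID}"

I would then reload the rules with udevadm control --reload and re-process the device with udevadm trigger, though whether the Rockstor disk scanner would accept the device even with a serial present is another question.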

I’m still using Rockstor 3.9 - not sure if things will be better in the new 4-series - I don’t have the HW to test those on yet.

Regards,

  • Stefan

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk_smart.py", line 43, in _validate_disk
    return Disk.objects.get(id=did)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: Disk matching query does not exist.
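
From what I can tell, the traceback boils down to an unguarded database lookup; here is a minimal sketch of the pattern, reconstructed from the traceback rather than taken from the actual Rockstor source:

# Minimal sketch of the pattern behind the traceback above - reconstructed,
# not the shipped Rockstor code. The dm device never made it into the Disk
# table, so the primary-key lookup raises DoesNotExist instead of
# returning a row.
from storageadmin.models import Disk  # model name as per the traceback

def _validate_disk(did):
    try:
        return Disk.objects.get(id=did)  # line 43 in disk_smart.py
    except Disk.DoesNotExist:
        # In the 3.9 code this exception escapes to the Web-UI as above.
        raise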

@pepsov, @phillxnet will likely be able to provide more targeted guidance on what to investigate and where, but you could have a look at this thread, starting around here, which goes into detail about the udev usage and disk IDs:

Lots of it is probably not applicable to your situation, as that thread deals with an HP server setup, but you might be able to extract some details that help you overcome this situation.
I don’t believe the 4.x version of Rockstor will make any difference here, but of course, if there is a permanent fix to be had, it would only be released in that latest development line.


@Hooverdan Thank you for the input, Dan!

I checked and tested what was described in that thread, but without any success.
The point in my case is the complete lack of a Serial number altogether: the hybrid dm-managed disk does not have one, by construction.

I can imagine this being a rather general case - the device-mapper in Linux is quite popular and powerful, after all - and one general enough to deserve the attention of the developers (@phillxnet ?).

It looks like DM assigns a unique ID to the hybrid disk it manages, in the form of a “DM_UUID” - see below. Can’t that be used somehow?

  • Stefan

[root@rn626x ~]# /usr/bin/udevadm info /dev/mapper/dmz-20178B801832
P: /devices/virtual/block/dm-0
N: dm-0
S: disk/by-id/dm-name-dmz-20178B801832
S: disk/by-id/dm-uuid-dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1
S: disk/by-label/multi-dm-z
S: disk/by-uuid/a3979d8c-9628-4408-b031-12c5e9b5b831
S: mapper/dmz-20178B801832
E: DEVLINKS=/dev/disk/by-id/dm-name-dmz-20178B801832 /dev/disk/by-id/dm-uuid-dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1 /dev/disk/by-label/multi-dm-z /dev/disk/by-uuid/a3979d8c-9628-4408-b031-12c5e9b5b831 /dev/mapper/dmz-20178B801832
E: DEVNAME=/dev/dm-0
E: DEVPATH=/devices/virtual/block/dm-0
E: DEVTYPE=disk
E: DISKSEQ=21
E: DM_ACTIVATION=1
E: DM_NAME=dmz-20178B801832
E: DM_SUSPENDED=0
E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
E: DM_UDEV_RULES_VSN=2
E: DM_UUID=dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1
E: ID_FS_LABEL=multi-dm-z
E: ID_FS_LABEL_ENC=multi-dm-z
E: ID_FS_TYPE=xfs
E: ID_FS_USAGE=filesystem
E: ID_FS_UUID=a3979d8c-9628-4408-b031-12c5e9b5b831
E: ID_FS_UUID_ENC=a3979d8c-9628-4408-b031-12c5e9b5b831
E: MAJOR=253
E: MINOR=0
E: MPATH_SBIN_PATH=/sbin
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=317709455

Come to think of it again, Rockstor already handles multi-device (aka software RAID) disks properly, and I assume it uses the unique “MD_UUID” parameter for that purpose (see below). It should be possible, and fairly easy, to enable the use of the “DM_UUID” in a similar manner.
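
A purely illustrative sketch of what I have in mind - the function below is mine, not Rockstor’s, and the property names are taken from the udevadm outputs in this thread:

# Illustrative sketch only, not the Rockstor code. 'props' stands for a
# dict of the E: key/value pairs reported by 'udevadm info' for a device.
def pick_serial(props):
    # Prefer a real hardware serial when udev reports one.
    serial = props.get("ID_SERIAL_SHORT") or props.get("ID_SERIAL")
    if not serial:
        # mdraid arrays carry a stable MD_UUID instead of a hw serial,
        # which Rockstor apparently already uses (see the md output below).
        serial = props.get("MD_UUID")
    if not serial:
        # The proposed parallel for device-mapper devices such as mine.
        serial = props.get("DM_UUID")
    return serial

# pick_serial({"DM_UUID": "dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1"})
# -> "dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1"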

Here is what my RAID1 array looks like (pretty damn close to the device-mapper case in the previous post):

[root@rn626x ~]# /usr/bin/udevadm info /dev/md123
P: /devices/virtual/block/md123
N: md123
L: 100
S: disk/by-id/md-name-0a452c24:data-0
S: disk/by-id/md-uuid-cd43dd38:be922ee6:d89cb081:f0a87671
S: disk/by-label/0a452c24:data
S: disk/by-uuid/004bb35f-ae0d-461a-9996-afd25bee9231
S: md/0a452c24:data-0
E: DEVLINKS=/dev/disk/by-id/md-name-0a452c24:data-0 /dev/disk/by-id/md-uuid-cd43dd38:be922ee6:d89cb081:f0a87671 /dev/disk/by-label/0a452c24:data /dev/disk/by-uuid/004bb35f-ae0d-461a-9996-afd25bee9231 /dev/md/0a452c24:data-0
E: DEVNAME=/dev/md123
E: DEVPATH=/devices/virtual/block/md123
E: DEVTYPE=disk
E: DISKSEQ=17
E: ID_FS_LABEL=0a452c24:data
E: ID_FS_LABEL_ENC=0a452c24:data
E: ID_FS_TYPE=btrfs
E: ID_FS_USAGE=filesystem
E: ID_FS_UUID=004bb35f-ae0d-461a-9996-afd25bee9231
E: ID_FS_UUID_ENC=004bb35f-ae0d-461a-9996-afd25bee9231
E: ID_FS_UUID_SUB=91ec6667-c6f4-4078-a3a6-6f6bb6568dc6
E: ID_FS_UUID_SUB_ENC=91ec6667-c6f4-4078-a3a6-6f6bb6568dc6
E: MAJOR=9
E: MD_DEVICES=2
E: MD_DEVNAME=0a452c24:data-0
E: MD_LEVEL=raid1
E: MD_METADATA=1.2
E: MD_NAME=0a452c24:data-0
E: MD_UUID=cd43dd38:be922ee6:d89cb081:f0a87671
E: MINOR=123
E: MPATH_SBIN_PATH=/sbin
E: SUBSYSTEM=block
E: SYSTEMD_WANTS=mdmonitor.service
E: TAGS=:systemd:
E: UDISKS_MD_DEVICES=2
E: UDISKS_MD_DEVICE_dev_sda3_DEV=/dev/sda3
E: UDISKS_MD_DEVICE_dev_sda3_ROLE=0
E: UDISKS_MD_DEVICE_dev_sdc3_DEV=/dev/sdc3
E: UDISKS_MD_DEVICE_dev_sdc3_ROLE=1
E: UDISKS_MD_DEVNAME=0a452c24:data-0
E: UDISKS_MD_LEVEL=raid1
E: UDISKS_MD_METADATA=1.2
E: UDISKS_MD_NAME=0a452c24:data-0
E: UDISKS_MD_UUID=cd43dd38:be922ee6:d89cb081:f0a87671
E: USEC_INITIALIZED=73301

The RAID1 md disk shows up with the Name “md-uuid-cd43dd38:be922ee6:d89cb081:f0a87671” in the “Storage->Disks” table, and is clearly assigned the MD_UUID as its “Serial”:


@pepsov Hello there.
Re:

Indeed, but we have to limit our scope as we have very limited developer ‘resources’; however, if you fancy having a look at this, as you have been doing actually:

Hopefully/possibly:

Indeed, I did do some work on supporting mdraid, as there was interest in this; however, we initially only had support for it in a very limited sense. Our focus is very much on having btrfs manage the disks, i.e. via the pool/pool-members type approach. However, as you see, we do already have some preliminary capabilities regarding the md raid arrangement, but such capability often brings unwelcome complexity and can end up hindering our progress. E.g., not directly related, but when we added the capability to handle partitions as members, we did this ‘properly’ only on the data drives; there is still a kind of special treatment of partitions on the system disk. I would very much like to normalise the data-disk approach to partitions across to the system disk as well. That is a side line to this, but indicative of a need to prioritise ‘stuff’.

However, if you are interested, do take a look at how we ‘handle’ both the ‘meta’ device created by mdraid and its members, as there are two parts to this issue: recognising the members of such multi-level constructions (so they can be labelled as such in the disk page), and recognising their higher-order creations (the md0, for example), so they can be included as potential members of pools.

But all this is in the context that if btrfs doesn’t know of a device ‘structure’ under it, it is necessarily undermined by that structure. I.e., with mdraid under it, data can and will change beneath btrfs without it knowing which copy is correct. Hence our focus on using only btrfs as the multi-device manager.

We also do not, and likely will not for the foreseeable future, support such things as LVM. It’s just too complex, and we need to keep our focus on keeping things simple.

Take a look at the underlying code in say:

And keep in mind that almost all of that code also has extensive tests to prove its function and to guard against breakages; all code that in turn uses that code must also be aware. Again, the modifications to accommodate md were only aimed at the system drive initially, and are far from complete. But a small extension along the same lines may serve your needs and begin a similar process of enabling more compatibility. We are likely not to support it officially, though, as it’s just more complexity and more points of failure under the fs that threaten data integrity. But horses for courses, and it may be of greater interest.

Also do take note of the following Wiki entry that should also help with what modifications would be required:

Hope that helps. The initial problem is extending the serial retrieval mechanism, which does have tests associated with it, so they, in turn, may help in further development if that is what takes your fancy. Hopefully it’s a ‘sister’ modification to the one that introduced our fledgling md compatibility, such as it is.
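
To give a flavour only - the names and fixture below are entirely hypothetical and not our actual test-suite - a ‘sister’ test for a DM_UUID fallback could look something like:

# Hypothetical sketch only; serial_from_udev() stands in for whatever the
# extended serial retrieval ends up being - it is not a Rockstor function.
import unittest

def serial_from_udev(props):
    # Illustrative fallback chain: hardware serial, then md, then dm.
    return (props.get("ID_SERIAL_SHORT") or props.get("ID_SERIAL")
            or props.get("MD_UUID") or props.get("DM_UUID"))

class DmSerialFallbackTest(unittest.TestCase):
    def test_dm_uuid_used_when_no_hw_serial(self):
        props = {"DM_UUID": "dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1"}
        self.assertEqual(serial_from_udev(props),
                         "dmz-2b97bfbe-ffdd-413c-bfc7-e94c65ac40a1")

if __name__ == "__main__":
    unittest.main()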


Thank you, @phillxnet!
Looks like I will have to become a developer of sorts :)
I’ll try, and will report back if I achieve anything meaningful.
Cheers,

  • Stefan

@pepsov
Re:

So will I! I’m afraid I referenced the wrong starting point!! Let me have another go (the first reference is related, but not the best place to start).

Take a look first at the following procedure:

get_disk_serial()

and specifically the following lines:

There is more to it, but that looks like a good place to begin, if the parallel with md holds.
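
As a rough illustration only - this is not our implementation - the raw material that function works from can be fetched like so; the udevadm invocation mirrors the manual commands earlier in this thread:

# Rough illustration, not the Rockstor implementation: collect the udev
# E: key=value properties for a device into a dict.
import subprocess

def udev_properties(dev_name):
    """Return udev properties for e.g. dev_name='dm-0' as a dict."""
    out = subprocess.check_output(
        ["/usr/bin/udevadm", "info", "--query=property",
         "--name=/dev/{0}".format(dev_name)])
    props = {}
    for line in out.decode().splitlines():
        key, _, value = line.partition("=")
        props[key] = value
    return props

# udev_properties("dm-0").get("DM_UUID") would yield the DM_UUID value
# shown earlier in this thread.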

But note also the somewhat complex caller of this function:

That could be your second port of call. And remember that when you change your installed instance of these files (in /opt/rockstor/src/rockstor), you will have to restart the rockstor services for the changes to be noticed by the running version of the code; otherwise it will carry on regardless with the versions from before you made your changes.
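
For example (assuming the stock service name on your install):

# restart the main Rockstor service so edited .py files are re-imported
systemctl restart rockstor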

This can all be done on a non-production, regular install of Rockstor, but don’t do it if there is any data connected to that instance, as such changes are by definition experimental, with unknown consequences.

For a full development setup, using our yet to be released in rpm form (soon hopefully) testing branch, just take a look at our following doc section:
https://rockstor.com/docs/contribute/contribute.html#developers

And any feedback on that document would be appreciated. It has undergone a few improvements but there are always more to be had. And your perspective in this case may be all the more valuable if you are in fact a beginner of sorts in one or more associated areas.

Do ask here on the forum, likely in focused new threads, any questions you may have during this endeavour. Our code is now pretty well documented, and Python is also fairly easy to follow; plus there are now many current and former code contributors here on the forum. But we do still have some complex stuff that needs to be simplified, and you may well come across some of that, unfortunately. Also, this is critical-path stuff, so we are unlikely to be able to just drop in any modifications without associated tests, but that is another bridge and another time.

Hope that helps, and my apologies for the higher-level reference earlier; it is at the lower level that we must first manage the categorisation of the disks, or their exclusion. This is not a trivial task, but we hopefully have a parallel already in place. And again, that device-management link should be a good first read, at least to understand the relevance and central nature of our use of the disk serial.
