Rockstor crash recovery

Brief description of the problem

Disk failed, and Rockstor won’t allow me to wipe it or otherwise correct the situation.

Detailed step by step instructions to reproduce the problem

Due to BTRFS issues with RAID 5/6, this is a whole-disk partition on a hardware RAID controller. A disk failure in the RAID set caused the logical disk volume to fail, and now Rockstor won’t allow wiping and restoring the volume.

Web-UI screenshot

Error Traceback provided on the Web-UI

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 884, in _role_disk
        disk = self._validate_disk(did, request)
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 519, in _validate_disk
        handle_exception(Exception(e_msg), request)
      File "/opt/rockstor/src/rockstor/storageadmin/util.py", line 48, in handle_exception
        status_code=status_code, detail=e_msg, trace=traceback.format_exc()
    RockStorAPIException: ['Disk id (5288) does not exist.', 'Traceback (most recent call last):\n File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 516, in _validate_disk\n return Disk.objects.get(id=did)\n File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method\n return getattr(self.get_queryset(), name)(*args, **kwargs)\n File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 334, in get\n self.model._meta.object_name\nDoesNotExist: Disk matching query does not exist.\n']

So Rockstor apparently needs a bit of work on the crash recovery front. I’m not sure at this point how to make that disk go away in Rockstor and re-attach the newly recovered logical disk.

@jcdick1 Could you indicate the version of Rockstor you are experiencing this with?

Also note that irrespective of this, there can be issues with disks being ‘let go’ in btrfs itself. Often a reboot or a btrfs device scan can help with this.
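
If you want to try the scan route before a full reboot, something like the following (run as root, and assuming the underlying device is still visible to the kernel) re-registers btrfs devices with the kernel and then lists what btrfs can currently see:

    # re-scan all block devices for btrfs filesystems
    btrfs device scan

    # list the btrfs filesystems and member devices btrfs now recognises
    btrfs filesystem show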

Let us know if a reboot helps and the version of Rockstor you are currently using.

Cheers. Although, from the error message, this does look more like a db issue actually.

However, I also note from a zoom-in that your disk has a fake serial!! So its name/serial will likely change with every refresh of the page. Could you give us some more info / background on what led to this situation, as it’s not possible to create a pool within Rockstor on a disk with an unreliable serial number.

Along with the details of what led to this apparent no-win situation, could you also post a pic of the disks overview and the associated pool details view.

As we re-assign a serial to detached disks, we also re-name them as detached-, whereas in your case this disk still has its ‘proper’ name. Also, a detached disk can just be removed via the disk overview if it has first been removed from the pool via a ReRaid operation. Hence the request for more info on your situation here.

And it always will, given the nature of breakage being breakage: we, and our upstream, have to account for the unknown by definition. It may be broken in new ways, but yes, we are in a spot here, as I’m not sure why there is a fake serial while a by-id name is in play. You may have multiple disks with this same serial, for instance, hence the request for more info.

Hope that helps.


This is on my new openSUSE-based 4.1.0 install on an HP DL380 G8 using a P420i controller. I suffered the “thermal runaway” issue of the controller disliking SMART data from the disks, and so it failed three of them out.

That caused the logical disk/RAID set to collapse. Upon restoration of the physical disks and rebuilding, it won’t allow me to do anything with the previously identified logical disk in Rockstor, because - at least I assume - the rebuilt logical disk is considered something new even though it matches the old one in every way.

But a new disk doesn’t show up. Only the old one is listed, and I can’t do anything with it.

@jcdick1 Hello again.
Re:

Oh dear. That’s not good. More info for others potentially running the same/similar hardware would be helpful.

But to your:

OK, hence the identical serials. That explains it. Again pics of the requested Web-UI components would help to confirm what’s going on.

But if you have very little config here you might as well do a re-install; it’s only a few minutes and you end up with a fresh system. Your situation is rather unique in that you apparently have a ghost-disk situation, as otherwise Rockstor would have just picked up the same disk, assuming they all still have unique serials and there are no duplicates.

Rough ride on this one, I know, but we haven’t had to touch the disk management code for a long time now. We have weathered complete VM resets where all disks had been transitioned via some VM-level backup program that really stretched our capabilities re tracking drives, but we got there in the end with that last change.

Let’s see the requested screen grabs to hopefully confirm the current situation. From your description your drives are back with the same serials. But a fake serial is generated by Rockstor itself if it finds 2 serials the same; hence the drive scan it does looks to be generating 2 attached disks with the same serial. We could still likely have a bug here, obviously, and it would be great to have a way out, or to fully understand the cause and have a reproducer, so that we can address the ease-of-use element here.

So are you using hardware raid not set to JBOD mode? I.e. the failing-the-disk-out bit. This is far from ideal and you lose the capabilities of btrfs raid with such an arrangement. You also undermine btrfs’s ability to repair. See our recently updated minimum system requirements.

Hope that helps and with a little more info we should be able to confirm the situation as-is.

Also, what is the output of:

lsblk -P -p -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID

and

 ls -la /dev/disk/by-id/

Cheers.


“Thermal runaway” is a term coined by David Soussan for a known issue with some 3rd party drives in HP servers. The controller doesn’t recognize the drive’s temp reporting, and so the system fans all kick up to 78% and eventually the controller will fail out the drive as overheated, even if that is not actually true. And there’s no reliable way to know if it will present itself until the drive is actually used.

Yes. As this is a 16 spindle volume (not including the hotspare), I’m not going to waste half of it in a mirrored configuration, and BTRFS isn’t stable with its native RAID 5/6. So I’m letting the controller do the heavy lifting in terms of physical disk, and letting BTRFS basically just handle data integrity.

If I do a full reinstall, will that cause it to generate a new appliance ID and impact my license key?


@jcdick1 Hello again, and thanks for the learning.
Re:

In which case you might like to consider a backported Stable kernel and use raid6 data with raid1c4 metadata. You will either have to create the pool via the command line, or use the Web-UI to create a raid6 pool and then rebalance the metadata to raid1c4 on the command line; a sketch of both routes follows the link. See our new How-to on the openSUSE Stable kernel backport install here:

https://rockstor.com/docs/howtos/stable_kernel_backport.html
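
As a rough sketch only (the device names and the pool label “mypool” below are placeholders to adapt, and Rockstor mounts its pools under /mnt2/ by default), the command-line route might look like:

    # create a pool with raid6 data and raid1c4 metadata (raid1c4 needs at least 4 devices)
    mkfs.btrfs -L mypool -d raid6 -m raid1c4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # or, having created a raid6 pool via the Web-UI, convert just the metadata afterwards
    btrfs balance start -mconvert=raid1c4 /mnt2/mypool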

This is currently our recommended way to host pools that have a 2-disk-failure tolerance requirement, such as you would likely want with 16 drives. We also have some info on accessing SMART info from behind a raid controller, as that can often be an issue, assuming it’s JBOD:

S.M.A.R.T through Hardware RAID Controllers: https://rockstor.com/docs/howtos/smart.html#s-m-a-r-t-through-hardware-raid-controllers
Although this approach does depend on the driver concerned.
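
For HP Smart Array controllers such as the P420i, smartmontools typically needs the cciss device type; something along these lines, where the drive index and device node are just examples to adapt to your setup:

    # query SMART data for the first physical drive behind an HP Smart Array controller
    smartctl -a -d cciss,0 /dev/sg0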

Also, although btrfs can sense checksum errors, it can’t correct them without a redundant pool. And hardware raid under btrfs lowers its reliability overall. But as always, horses for courses.

Possibly; this depends on whether your system has a recognised ‘fake’ appliance ID. If it does, then a different one will be generated. But if so, then just log in to Appman and edit the Appliance ID associated with your ‘Computer’ against the relevant subscription and it will take immediate effect. So no worries on that front really.

And do take a look at the btrfs raid6 data with raid1c4 metadata; that may be a viable option for you to reduce your layers. This combination with a stable backport still has some parity raid weaknesses in that it’s still less well developed, but the raid1c4 metadata side-steps some other issues regarding parity raid metadata write holes. Also, if you go this route, make sure to create the pool after installing and re-booting into the back-ported kernel, as you then get space_cache_v2 by default; the newer kernel / filesystem stack defaults to this now.
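
To double-check that, the following should confirm you are booted into the backported kernel before pool creation, and show which space cache is in use once the pool is mounted:

    # confirm the running kernel is the backported one
    uname -r

    # list mounted btrfs filesystems with their mount options; look for space_cache=v2
    findmnt -t btrfs -o TARGET,OPTIONS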

Hope that helps. And given you have already run into an instability on the controller/raid hardware mix, you may well be running far less of a risk by running a parity btrfs raid level. Assuming, of course, it doesn’t do the same in JBOD mode.
