Disk scan error on '/dev/sde'

Brief description of the problem

Fresh install of Rockstor 4.0.9-0 from ISO.
Trying to import a 4-disk btrfs-on-md-raid5 array.
Based on an Intel® Celeron® CPU N3150 (2 on-board SATA ports), plus a 2-port Marvell Technology Group Ltd. Device 9215 SATA controller and a 4-port ASMedia Technology Inc. ASM1062 SATA controller.

Error Traceback provided on the Web-UI

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
        yield
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 502, in post
        return self._update_disk_state()
      File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
        return func(*args, **kwargs)
      File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 357, in _update_disk_state
        p_info = dev_pool_info[dev_name]
    KeyError: '/dev/sde'

@g6094199 Hello again.

Could you give some more information along with this report?

I would hazard a guess that your system does not meet the minimum system requirements regarding disk serial numbers:

https://rockstor.com/docs/installation/quickstart.html#minimum-system-requirements
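
If it is a serial issue, a quick sketch like the following (lsblk is part of the stock install; the columns are simply what the kernel reports) will list what each drive presents. Every data drive should show a unique, non-empty serial:

    import subprocess

    # List top-level block devices with the serial and model the kernel reports;
    # each data drive should have a unique, non-empty serial number.
    print(subprocess.check_output(
        ["lsblk", "--nodeps", "-o", "NAME,SERIAL,MODEL"]
    ).decode())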

Hope that helps.


Nope, this is a 4-disk array of real disks:

Model Family: Seagate Archive HDD
Device Model: ST5000AS0011-1L5178
Serial Number: W4xxx2H9

I added the HW info to the initial post.

@g6094199 Re:

Cheers.

This makes sense. Md-raid5 indicates md software raid. We were once able to cope with these, on the system drive only I think, but it’s not ideal.

This all looks fine and is generic, so that’s good.

When you say md, it’s not dm is it? That’s a fudge between software and hardware raid. Md is the long-established software raid that is not recommended to be placed under btrfs because, unlike btrfs raid, it can’t tell which drive’s copy of the data is correct, whereas btrfs can. So it undermines btrfs’s capability to self-correct/repair a volume.
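
A quick, read-only way to see which layer /dev/sde actually sits under is something like this sketch (note /proc/mdstat only exists when the md module is loaded):

    import subprocess

    # Device tree with TYPE (disk/part/raid5/lvm/crypt) and FSTYPE
    # (linux_raid_member, btrfs, ...) so md vs dm membership is visible.
    print(subprocess.check_output(
        ["lsblk", "-o", "NAME,TYPE,FSTYPE,SIZE"]
    ).decode())

    # The md layer's own view of any assembled arrays.
    with open("/proc/mdstat") as f:
        print(f.read())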

Incidentally, it may be worth trying 4.1.0-0 (testing channel, but to become stable very soon), as it has a little tweak to drive management that may help.

I’ve not tested our md raid capability for a long time, and it’s likely buggy. We basically don’t support this on data drives and would rather not see it on the system drive.
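
For what it’s worth, the KeyError in your traceback fits this picture: the disk scan builds a dict of btrfs pool members keyed by device name, and an md member such as /dev/sde carries no btrfs pool of its own, so the indexed lookup has nothing to find. A minimal sketch of that pattern, with made-up device names and pool data (this is not Rockstor’s actual code):

    # Illustrative only: dev_pool_info maps btrfs pool member device names to
    # pool details; an md member device has no entry of its own.
    dev_pool_info = {
        "/dev/md0": {"label": "data_pool"},  # made-up entry for the md device itself
    }

    dev_name = "/dev/sde"

    try:
        p_info = dev_pool_info[dev_name]   # the KeyError seen in the Web-UI traceback
    except KeyError:
        p_info = None                      # a .get() style fallback would avoid the error

    print(p_info)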

Hope that helps, at least to narrow down what’s at play here.

By the way I’ve created the following issue as a result of your feedback here as I realised we haven’t included that note in our Minimum System Requirements section:

https://github.com/rockstor/rockstor-doc/issues/355

Thanks for helping to highlight that here.


The main cause for using the md/btrfs combination was a full array loss on btrfs in RAID5 a few years ago, and thus switching to OMV, which preferred md; I chose btrfs on top because of its enhancements. But migrating away from this 20TB setup (not my favourite anyway, but stable) is not that easy, as you may understand. Some years ago I moved from OMV (I like Debian very very much, but don’t like OMV as much as Rockstor) back to Rockstor 3.9. The import worked flawlessly, and so the NAS was fire-and-forget for a few years :wink: But since there are very annoying problems (show stoppers) with the elrepo kernel and the rtl8168 NIC drivers, I would like to take a step forward…


@g6094199 Re:

Yes, the parity raids of 5/6 within btrfs are a lot younger and consequently less developed than the other raid levels. In fact our upstream openSUSE/SuSE have gone as far as to default the btrfs parity raid levels to read-only in Leap 15.3, unless one makes efforts to disable this. Our answer here is to suggest the use of stable backport kernel versions, so that we simultaneously don’t second-guess their decision and still enable our migrating users to use such setups if they deem it a risk they are willing to take.

Take a look at the following new How-to of ours on this front:
https://rockstor.com/docs/howtos/stable_kernel_backport.html

When absolutely requiring 2-drive failure capability, one’s only choice is parity raid6; but given its current development status in btrfs, it’s recommended to use raid1c4 metadata in combination with raid6 data. Btrfs has the ability to use differing raid levels for data and metadata. Rockstor can be confused by this arrangement, but not much: it will simply revert the metadata profile on drive operations, or more specifically on the auto balance thereafter. But if you know this and can manage it from the command line for now, then it’s an option some Rockstor users are managing with.
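
For reference, the conversion itself is a single balance. This is only a sketch: the mount point is illustrative (Rockstor mounts pools under /mnt2/<pool-name>), and raid1c4 needs a kernel new enough to know it (5.5 onwards), hence the stable kernel backport above:

    import subprocess

    # Convert an existing pool's data profile to raid6 and its metadata profile to
    # raid1c4. A convert balance re-writes all chunks, so it can take a long time.
    subprocess.check_call([
        "btrfs", "balance", "start",
        "-dconvert=raid6",    # data profile
        "-mconvert=raid1c4",  # metadata profile
        "/mnt2/your_pool",    # illustrative mount point; substitute your pool's
    ])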

OMV has no native btrfs support (within its Web-UI) and so uses the far older mdraid. This is not as capable as btrfs’s raid, and we get great benefits from focusing purely on btrfs; that is our intended focus going forward. Transitioning mdraid data sets can always be done via the command line, given we are a generic Leap 15.3 under the hood. More How-tos can help with such transitions, however.
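
As a rough sketch of such a transition, assuming the old md-backed filesystem and a freshly created btrfs pool are both mounted and the destination has enough space (the paths here are illustrative only):

    import subprocess

    # Copy the data across, preserving permissions, hard links, ACLs and xattrs.
    subprocess.check_call([
        "rsync", "-aHAX", "--info=progress2",
        "/mnt/old_md_array/",     # source: the existing md-backed data (trailing slash matters)
        "/mnt2/new_btrfs_pool/",  # destination: the new btrfs pool's mount point
    ])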

Hope that helps.
