Rockstor on Suse console messages

I don’t know if this is the right place for this, or if the issue has already been recognised, but just in case it is useful…

I built a Leap 15.2 x86_64 installer, as per the instructions, on an openSUSE Leap 15.2 VM (Hyper-V host).

I ran that to create another VM in the same environment with three disks. I’ve installed to one and created a RAID 1 pool with the others.

The Web interface works and seems to be fully functional, but I’ve noticed a constant stream of messages on the console. I don’t know whether I’ve messed something up, whether this is a known issue, or whether I’ve found something you need to know about.



@jmangan Hello again and thanks for the report. And well done on getting the Rockstor 4 installer built and installed.

So this to me looks unrelated to Rockstor code as such and likely something to do with how the kernel’s storage drivers are interacting with your Hyper-V host environment. I would look further afield for similar reports.

Also, are there any more pointers earlier in the log? They may help with finding the context for this series of looping errors.
The main pointer so far is the:

[storvsc] Add. Sense: invalid command operation code

You could take a look in the main journal via:

journalctl

to see if there are any more pointers to help your search. But something is definitely not happy there somewhere. Let us know how your explorations go on this, but my guess is that it’s kernel/hypervisor related, i.e. an incompatibility or failure somewhere along the line. Sorry I can’t be of more help here. But there is likely more info in the main journal than what you are seeing on the terminal.
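To narrow that search down, you could filter the journal for the driver’s messages. A minimal sketch: the sample lines below are made up to illustrate the pattern (they are assumptions, not output from this system), so the snippet can be run anywhere; on the VM itself you would pipe the real journal instead.

```shell
# On the VM you would run something like:
#   journalctl -b | grep -i storvsc
# (-b restricts the journal to the current boot.)
# The sample text below stands in for real journal output.
sample_log='kernel: storvsc: Sense Key : Illegal Request [current]
kernel: storvsc: Add. Sense: Invalid command operation code'
printf '%s\n' "$sample_log" | grep -i 'storvsc'
```

The `-i` keeps the match case-insensitive, which is handy when you are not sure how the driver capitalises its messages.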

Our kernel for the Rockstor 4 release is currently exactly that of a fully updated Leap 15.2.

Hope that helps.


Much appreciated. I’ve just started using Hyper-V, so it could well be something I’ve missed.

I’ll keep playing with my VM in the meantime.

Thanks again.

Well, I’ve tried rebuilding and a couple of tweaks, but I’ve still got the same error. The Leap 15.2 VM I built to build the installer doesn’t have the issue, so it must be something I did.

On the ‘Change this’ line for the RPM version I wasn’t sure what to put, so I left it as 4.0.4 - is this correct?


Or it could still be something we do that is not default in Leap 15.2. An example might be our smart info probe.

This in turn calls:

Which ends up running:

/usr/sbin/smartctl --info dev-name-here

Although the above dev-name and any custom options are handled by:

But from memory, if no custom options have been specified, see:

then it should be just what we call internally the base device (no partitions), which is retrieved via:

and a path is added, and you have your device name.

Likely just the by-id name (with path) of the base device.
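Putting the above together, the end result should be something along these lines. The by-id name below is invented for illustration; substitute a real entry from /dev/disk/by-id/ on the VM.

```shell
# Hypothetical by-id name - replace with a real entry from /dev/disk/by-id/.
base_dev="scsi-360022480abcdef0123456789abcdef01"
dev_path="/dev/disk/by-id/${base_dev}"
# Echoed rather than executed so this sketch is safe to run anywhere;
# on the VM itself, drop the echo to run the actual probe.
echo "/usr/sbin/smartctl --info ${dev_path}"
```

Running the real command while watching the console should show quickly whether this probe is what triggers the looping error.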

It’s worth executing the resulting command, as above, to see if it triggers the same error, as then at least we would have a very likely candidate. But it may be completely unrelated to this call. It’s just the only thing that springs to mind, given your report of it not having been seen in generic Leap 15.2 which, as far as I know, doesn’t by default do this smart capability/availability probe.

There is another probe we do using hdparm, but that can be another post once we have checked this smart probe’s potential side effect.
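If it comes to that, the hdparm probe could eventually be tried by hand in the same way. A minimal sketch, assuming the call in question is the power-state check (`hdparm -C`) - that flag choice is my assumption, not confirmed above:

```shell
# Hypothetical device path; echoed rather than executed so this runs anywhere.
# On the VM you might try: hdparm -C /dev/disk/by-id/<base-device-id>
dev="/dev/disk/by-id/example-id"
echo "would run: hdparm -C ${dev}"
```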

Yes, that will do. If you subscribe to testing, you will then be offered our 4.0.5 testing rpm, but as that has a few too many changes it’s still only in testing and not in our Stable channel, which currently only has a placeholder of 4.0.4 as release candidate 5.

Hope that helps.


In the web console the SMART status of the drives is ‘Not Supported’, which makes some sense.

I’ve got the following devices:

And they all return the same information from the smartctl command:

I’m surprised they are shown as ‘thin provisioned’, because I allocated them fully at setup.

Does that help?