"Warning: /dev/disk/by-label/root does not exist" when installing on an MSI Spatium S270 SSD

The installation fails when I try to install to a 240 GB MSI Spatium S270 SSD. I tested the installation with 3 identical drives and 2 different computers (one has an H110 chipset and the other an H170 chipset).

Rockstor installed okay on a 1 TB Crucial SSD.

The 240 GB MSI Spatium S270 drives work with other operating systems, so it appears to be an incompatibility between that model of drive and Rockstor.

[ok] Finished dracut pre-mount hook
[depend] Dependency failed for File System Check on /dev/disk/by-label/ROOT.
[depend] Dependency failed for /sysroot.
[depend] Dependency failed for Initrd Root File System.
[depend] Dependency failed for Mountpoints Configured in the Real Root.
[ok] Stopped Dispatch Password Requests to Console Directory Watch.
[ok] Stopped target Basic System.
[ok] Stopped target Initrd File Systems.
[ok] Stopped target System Initialization.
[ok] Stopped dracut pre-mount hook.
[ok] Stopped dracut initqueue hook.
[ok] Stopped dracut pre-trigger hook.
[ok] Stopped dracut pre-udev hook.
[ok] Stopped dracut cmdline hook.
[ok] Stopped dracut ask for additional cmdline parameters.
[ok] Started Emergency Shell.
[ok] Reached target Emergency Mode.
Warning: /dev/disk/by-label/root does not exist

Generating “/run/initramfs/rdsosreport.txt”

Entering emergency mode. Exit the shell to continue.
Type “journalctl” to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

Press Enter for maintenance
(or press Control-D to continue):
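In case it helps anyone hitting the same dracut emergency shell, a few read-only checks quickly show whether the kernel sees the drive at all; these are standard udev/blkid tools, nothing Rockstor-specific:

```shell
# Does the kernel enumerate the disk at all?
cat /proc/partitions
# Which filesystem signatures/labels can libblkid actually probe?
blkid || true    # blkid exits non-zero when it finds nothing
# The symlinks dracut is waiting for (the directory is absent when no labeled fs exists):
ls /dev/disk/by-label/ 2>/dev/null || true
```

If the disk is missing from /proc/partitions the problem is at the driver level; if it shows up there but not in blkid, it is a signature/label problem.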

@strider789 welcome to the Rockstor community.

Two questions, did you wipe the entire Spatium drive before attempting to install Rockstor to ensure there is no dangling conflict?

Did you attempt the install with the Slowroll or Tumbleweed flavor of Rockstor? Sometimes that handles specific chipset/drive/graphics support (though graphics is not relevant here) better than Leap.


Yes, I wiped all partitions. I tried both Rockstor Leap and Rockstor Tumbleweed; both had the same issue.

I tried openSUSE Leap and it installed okay. Lubuntu, Fedora KDE, and TrueNAS SCALE also work.

Rockstor is the only OS that won’t install on that make/model of SSD. I tried a different brand of SSD and it was okay.

Thanks for posting this issue. I had an MSI Spatium S270 installed as the OS drive, but after a reboot it would break the BTRFS filesystem, and when that happened WDSS would not work anymore; I had to destroy the pool and rebuild it.

I am wondering whether, since the bare-metal install of Rockstor builds on the JeOS image with minimal packages, there is a kernel module missing that is required for the drive to work/be recognized. There were a few instances in the past where that was the case.

It could be worthwhile to compare the loaded kernel modules between the full Leap install and Rockstor. Also, if you’re up for trying it, you could install Rockstor on top of a vanilla Leap installation (one that recognizes the drive) and see whether you still run into the problem. If not, it would point to a firmware/kernel/product driver that would need to be installed on the bare-metal install of Rockstor. I think you could check/compare using something like:

zypper se -s kernel-firmware

But I am not entirely sure which specific firmware to look for; that might be another Google search away.

I have exactly the same problem on a Dell 5070.

In emergency mode I can see that /dev/sda4 is the root; I can mount it, but blkid does not show this partition or the label on it.

I tried btrfs filesystem label /mount_sda4 ROOT but it did not help.

debug log:

https://pastebin.com/f0i8awW9

EDIT (what I said below was patently wrong: the `kernel-default` package is used by Rockstor, and it is also the one required, as opposed to the stripped-down `kernel-default-base` package.)
Another search seems to indicate that in some cases the package `kernel-default-base` is required when NVMe drives are involved. I believe the Rockstor base install "only" contains `kernel-default`.

@koniuszko welcome to the Rockstor community. In your case, is it the same type of drive you’re having the issue with, or another type?

I still think the best bet (albeit a bit cumbersome) is to do a comparison with a vanilla LEAP install to see what might be missing on the JeOS based Rockstor image.


I resolved the problem.
Previously I had Proxmox installed on that disk, formatted with ZFS; that was the problem.

If you get “/dev/disk/by-label/ROOT does not exist”:

check whether you have any other filesystem signatures using the non-destructive (with -n) command:

wipefs -n </dev/your-labeled-root-part>

-n tells wipefs not to change anything on the filesystem, just to scan for signatures.

In my case there were a lot of zpool signatures in addition to the btrfs label=root.

So I cleaned the disk of all filesystem signatures:

WARNING: this clears all signatures, and the data on that partition is lost!

wipefs -a </dev/your-labeled-root-part>

After that you need to install Rockstor again.

Now the partition with LABEL=ROOT should work.
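For anyone who wants to see what those two wipefs calls do before pointing them at a real partition: wipefs also works on plain files, so the sequence can be rehearsed on a scratch image (the /tmp path and the use of mkfs.ext2 here are just for the demo):

```shell
# Make a small scratch image and stamp a labeled filesystem signature on it
truncate -s 8M /tmp/wipefs-demo.img
mkfs.ext2 -q -F -L ROOT /tmp/wipefs-demo.img

wipefs -n /tmp/wipefs-demo.img   # -n: only report signatures, change nothing
wipefs -a /tmp/wipefs-demo.img   # -a: erase every signature it found
wipefs -n /tmp/wipefs-demo.img   # prints nothing now: no signatures left
```

The same -n first, -a second order is the safe habit on real devices too: you see exactly which signatures (zpool, btrfs, …) are about to be destroyed before committing.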

The fundamental question:

Shouldn’t the installation do this itself, especially when it warns during installation that data on the specified disk will be overwritten/lost?

Why doesn’t it clean previous signatures if it formats the partition anyway?

Glad you were able to fix it.

What was found over the years is that the wipe doesn’t always seem to work from within the Rockstor application, for various reasons. It’s usually not a problem when destroying an existing btrfs pool, for example, and then recreating it. However, in some cases, e.g. when multiple partitions are on the disk (and considering Rockstor does not support, at least not yet, a partition-based vs. device-based setup), the deletion method used is not effective.

Hence there is the Pre-install Best Practice section in the Installation part of the Rockstor documentation. See this chapter on disk wiping:

My first question above:

was referring to that approach of cleaning “foreign” disks first before attempting the install. I could have been more explicit about it.
