[Solved] INSTALL Failed Same Way on two setups

Okay, maybe this is a “Feature” and not a bug. I will leave it to y’all to decide.

I just upgraded both my setups to the “latest” released stable version. Both setups had the same problem.

When doing the install, I input a 6 digit password and the install seemed to continue, with this message:


The install seemed to complete properly, I could login etc…

After install, shutdown, remove USB flash, reboot, I get this:


Reinstalling from scratch, I used a new 11 digit mixed password and everything came out fine.

Is this related to the other GRUB goofs in other posts?


Hi @Tex1954,

I’m very short on time so please pardon my extreme conciseness below.
The BAD PASSWORD message is just there to say that the password you chose is considered weak (perhaps not enough variety in characters: special characters, numbers, etc.); it is not related to your problem when trying to reboot.

The can't find Linux efi type of message highlights a problem related to Legacy BIOS vs UEFI boot. What likely happened here is that you “booted” your USB install flashdrive in UEFI mode, so Rockstor installed in UEFI mode. However, when rebooting, your motherboard BIOS/UEFI settings are such that the HDD on which you installed Rockstor is booting in Legacy BIOS mode (and not UEFI). The solution here would be to either:

  • make sure you start your USB install flashdrive in Legacy BIOS mode. This would be the easiest and least “risky” thing to try, in my opinion.
  • make sure your HDD is set to boot in UEFI mode.

Unfortunately these settings depend on your specific motherboard and how its BIOS/UEFI is set, so we can’t really give you more specific instructions. They can usually be found under BOOT options or something similar. If you want to go with the first option listed above, you can actually override the boot device for that one time in your BIOS, in which case make sure to select the “USB flashdrive” option and not the “UEFI USB flashdrive” option.
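For anyone unsure which mode their system actually booted in, a quick generic check (not Rockstor-specific) is to look for the EFI variables directory in sysfs, which the kernel only exposes when booted via UEFI:

```shell
# Determine whether the currently running Linux system booted
# via UEFI or Legacy BIOS: /sys/firmware/efi only exists when
# the kernel was started by UEFI firmware.
if [ -d /sys/firmware/efi ]; then
    BOOT_MODE="UEFI"
else
    BOOT_MODE="Legacy BIOS"
fi
echo "This system booted in: $BOOT_MODE"
```

Running this from the live installer environment tells you which mode the USB flashdrive was started in, so you can match it to your HDD boot setting.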
Sorry again for such a rushed post, but I wanted to at least point you in that direction.

Hope this helps,


I did indeed set the boot option to the UEFI Flash device in the boot menu. However, it’s weird that changing the PW to an acceptable one let it work and boot properly.

I will however give it another try since I am still in testing mode.



Okay, set it up to NOT use UEFI, did the whole install with a 6 digit password, did the zypper up and all that, rebooted, and guess what?

Everything worked perfectly!

Case closed!



@Tex1954 Glad you’re now up and running.

That is recommended to be:

zypper up --no-recommends

so that you don’t inadvertently bloat your system. We are JeOS (Just enough Operating System) based. Without the --no-recommends you end up pulling in everything and the kitchen sink, as it were. Just noting for clarity. You likely did as per this stage in the installer:


anyway, but noting in case others reading jump in with a straight zypper dup, as it adds tons of other stuff that is simply not needed.
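As a sketch of the above, and assuming an openSUSE-based system like Rockstor's: the one-off form is the command already quoted, and to my knowledge libzypp also has a config switch that makes "requires only" the default for all operations. The guard below just makes the snippet a no-op on non-zypper systems:

```shell
# One-off: update without pulling in "recommended" packages,
# keeping a JeOS-style install lean. Guarded so this does
# nothing on machines without zypper.
if command -v zypper >/dev/null 2>&1; then
    zypper up --no-recommends
    ZYPPER_PRESENT="yes"
else
    echo "zypper not found; this applies to openSUSE-based systems only."
    ZYPPER_PRESENT="no"
fi

# To make this behaviour the default, libzypp offers a solver
# option in /etc/zypp/zypp.conf (uncomment/set):
#   solver.onlyRequires = true
```

Setting solver.onlyRequires means a later plain `zypper up` won't silently re-introduce the recommended-package bloat.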

But in your case, the reported error was covered by @Flox’s explanation of a mismatch between the USB boot mode and the resulting system’s target device (HDD/SSD) boot mode. The grub setup is complaining that it’s not a UEFI install, suggesting, as per @Flox, a mismatch. UEFI is a pain and not ‘even’ across all hardware.

Hope that helps and thanks for the ‘sorted’ / [Solved] update. We have the following UEFI issue as well but it doesn’t affect all UEFI systems:

Our upstream kiwi-ng installer has had its UEFI issues also, and we are awaiting a linked issue’s resolution on our own improvement in that area, as it were. All in good time.


And another tidbit learned!

No worries, everything is in test mode more or less with separate backups for now.

I am about 3 parts and a couple days away from finalizing the hardware setup and buttoning everything up.

My new WS setup will be next, all water cooled, 5950X thing that will be able to run a ton of Folding@Home and BOINC tasks.

The only observation I will point out is that after doing the zypper update, the base openSUSE revision bumps from 59.37 to 59.43. I don’t know if that is due to a kernel update or just the other files being added. In any case, I see no harm as of yet, and I’m still only using 2.5 to 3.5 G of space on 240G SSDs.

And, besides, it seemed to me the setup process more or less told me to update everything! LOL!

But, I hear ya, and I can tell you it’s ZERO problem for me to re-install Rockstor WITHOUT detaching anything! I’m so sold on Rockstor.

With the latest 4.x update, all my problems went away. I will re-install yet again on both systems to stay in sync with your releases to avoid “other” problems in the final operational setups, so no worries!

I have the main and backup setups operating 100% and they have both survived at least a dozen each power shutdowns while busy writing without any problems whatsoever! Try that with ZFS!!! ( I did and it failed every time!)


@Tex1954 Glad it’s working out, bit by bit. And thanks again for all the feedback.

It’s advisable to run a scrub from time to time, especially if you are aware of ‘abuse’ such as power failure. Btrfs is super robust these days (outside of the parity raids of 5/6), but a scrub can help to correct stuff ahead of time: it basically re-reads everything to check that it can and, as with all operations in btrfs, checks each checksum to prove the data is correct. If not, and you are using a redundant profile, it can correct stuff en masse, rather than only when you happen to read/write a piece of data that may have been affected.

You can schedule such an operation via our scheduled tasks, setting it for, say, once every month or two depending on the activity of your data. It’s basically a health check / cleaning process that can be reassuring and can help avoid a buildup of issues that may compound into something more serious.

When your build is done, pictures would be nice, as always.

Hope that helps.