In future I think the priority should be to keep up with openSUSE’s releases. Currently, Stable is a couple of steps behind and has not received any security updates for months (which, for connected data storage, is kind of critical IMO).
Hi @Jorma_Tuomainen ,
Thank you for the feedback and the understanding. We do agree with you, and that’s the reason why we started setting Leap 15.6 as the target for our next Stable release even before its full release. Unfortunately, this brought its own set of work, as a few changes crept up for which Rockstor needed to adapt. This was on top of an already busy release cycle for this Rockstor Stable rpm, as we were tackling a very big change related to our technical debt: moving to a newer Python and to the latest Django LTS release. These two were enormous tasks for Rockstor-core’s code itself, but also for our backend infrastructure, which is why the current testing release cycle has been taking longer than we would have hoped. We are almost there, though, and it shouldn’t be too long anymore.
Once the next Stable rpm is released, we’re hoping to focus the next testing cycle on modernizing our front end so that we can more easily implement long-awaited features and improvements on that front. This should also greatly help attract and onboard new contributors, which would ultimately help speed up the pace of development and thus releases.
I hope this provides some reassurance on our seriousness and focus on providing up-to-date code and dependencies. At the very least, I hope I was able to provide some additional context here.
Thank you again for your continued contribution and feedback!
Hello @phillxnet
I just signed up for the Stable updates channel and am getting a similar error:
I understand that when the next release candidate (RC) becomes the ‘stable’ then the stable repo will be that release.
My question is: is there any “harm” in updating the openSUSE Leap 15.6 RPMs on the OS side (via zypper)? I didn’t want to affect my Stable 5.0.9-0 install, as I am putting this into production. Also, I have been running 5.0.13-0 very successfully, but reformatted to move to the stable release.
Really like Rockstor compared to TrueNAS Core!
@johnc Thanks for your patience on this front.
Re:
No.
The 5.0.9-0 rpm was an earlier Release Candidate (RC4 as it goes) testing channel rpm:
It was the latest available Stable-RC-status rpm when we last rebuilt our installers, but it is still pre-stable. This way we got folks off to a far newer start than having them run the older (and last available) 4.6.1-0 Stable rpm; and only the testing RC-status rpms had the fixes that allowed us to use the newer Leap OSs.
Good, and yes, 5.0.13-0 is our RC8 Stable Release Candidate:
Then it is best to use the latest available RC-status rpm, which today is 5.0.13-0 in testing; this week I hope to also release 5.0.14-0 (RC9) into testing. As this version is, to date, only available in the testing channel (we are in the late testing phase, hence the RC status), you can then move your install over to the Stable channel to avoid accidentally running into the beginning of the next testing phase.
The hope is that 5.0.14-0 will, if no show-stoppers appear, then be promoted into the Stable channel (a new repo in 15.6), and we will then kick off the next testing phase.
So, in summary: use the latest RC from testing, but then move back to Stable to hold the system at that version of the `rockstor` rpm. Then, when 5.0.14-0 (or whatever is earmarked as our first stable rpm in this latest run) is published in Stable, your system will just update to that, and to all subsequent smaller fixes released to the Stable channel.
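For anyone wanting to confirm what is currently installed before switching channels, something like the following can be used (a sketch only; the exact repo aliases vary by install):

```shell
# Show the installed rockstor rpm version:
rpm -q rockstor
# List configured repos to see which Rockstor channel
# (testing/stable) is currently enabled:
zypper repos --details | grep -i rockstor
```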
Thanks again for your patience and understanding here. I will soon begin working on a back-end to create an empty signed repo so we avoid these errors with the next release. But, as always, time is short, and these repo errors in zypper are purely cosmetic, if also quite distracting. Plus, we are now so close to the next Stable rpm that the repo can be created when that rpm is finally published.
We also plan, upon the first Stable release (which may be RC9, as above), to rebuild our installers so folks start out on our first released Stable rpm from the get-go (initial install). But again, resources, human and otherwise, have held us back on that front. As we develop, though, these restrictions are easing, especially as folks chip in to support our efforts. Thanks again for your Stable subscription on that front.
Hope that helps.
@phillxnet
Thank you for your detailed reply! I feel a little stupid for reformatting my NAS PC (with external drives) from the running 5.0.13-0 (RC8) to 5.0.9-0 (RC4)! I was thinking I had to start from the 5.0.9-0 ISO and then immediately activate my stable subscription. I will pay better attention to the forum announcements moving forward. I will follow your instructions to move to the testing channel to get 5.0.13-0, and then move back to the stable channel and await the 5.0.14-0 (RC9) release.
I am currently running a 24TB drive pool and it is working GREAT!
Just a quick update regarding:
This has now been accomplished: we now have a shiny & signed, but alas still empty, Stable updates repo for our 15.6-based installs. I.e., those experiencing the main issue reported in this thread (the missing repo) should no longer see the same “Valid metadata not found at specified URL” error.
As indicated earlier in this thread, we are looking at having already published our last significant code changes (before the Stable channel spin-off) for the `rockstor` rpm as of 5.0.14-0 in the testing channel:
We have some minor cosmetic/maintenance issues remaining in the current Milestone, but they are not significant code-wise; or, if they end up that way, they will be shunted to the next testing phase anyway.
Hope that helps, at least by way of some recent development context.
Today I tried upgrading my stable Rockstor installation (built on openSUSE 15.4) to the new Testing Release Candidate built on openSUSE 15.6 by doing a fresh installation with the latest installer:
Rockstor-Leap15.6-generic.x86_64-5.0.9-0.install.iso
Unfortunately, I get some errors and can’t finish the setup:
- While doing the interactive locale & keyboard-layout selection, a “Failed” message about the Network Manager Wait Online service appeared; it later showed up here:
- The internet connection was fine, and I could also upgrade via the CLI with `zypper dup`, as suggested for the current RC installer. Even after rebooting the upgraded system, the same service failed again.
The web interface was not reachable (on the default HTTPS port 443), although the `myip` command printed the correct IP address of my machine (I know the IP address because it is a “static-DHCP” address).
- I then had a look at the service, tried to start it manually, and printed the output using `journalctl`.
A few things I can mention to give context:
- the machine is bare-metal with 2 network interfaces, of which only 1 is physically connected
- I was running Rockstor Stable built on openSUSE 15.4 before, with no issues regarding Rockstor.
- I only had some issues with docker containers (I managed them manually without the use of Rock-ons), mainly regarding DNS resolution to other containers on docker networks. That’s why I wanted a fresh installation now.
I am currently trying to revert to a backup of my working installation built on openSUSE 15.4 (using `dd`), so unfortunately I can’t provide further log outputs. I hope that I have included some useful stuff above.
Cheers
Simon
Follow-up:
Fortunately, reverting to the old installation (Rockstor 4.6.1-0 built on openSUSE Leap 15.4) worked.
To me it looks like Network Manager Wait Online is also failing there, although Rockstor in general works without observable issues …
admin@Kolibri:~> sudo systemctl status NetworkManager-wait-online.service
[sudo] password for root:
× NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2024-09-20 17:35:03 CEST; 4min 6s ago
Docs: man:nm-online(1)
Main PID: 873 (code=exited, status=1/FAILURE)
Sep 20 17:34:33 Kolibri systemd[1]: Starting Network Manager Wait Online...
Sep 20 17:35:03 Kolibri systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Sep 20 17:35:03 Kolibri systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
Sep 20 17:35:03 Kolibri systemd[1]: Failed to start Network Manager Wait Online.
admin@Kolibri:~> sudo journalctl -xu NetworkManager-wait-online.service
Sep 20 17:34:33 Kolibri systemd[1]: Starting Network Manager Wait Online...
░░ Subject: A start job for unit NetworkManager-wait-online.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit NetworkManager-wait-online.service has begun execution.
░░
░░ The job identifier is 245.
Sep 20 17:35:03 Kolibri systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit NetworkManager-wait-online.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Sep 20 17:35:03 Kolibri systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit NetworkManager-wait-online.service has entered the 'failed' state with result 'exit-code'.
Sep 20 17:35:03 Kolibri systemd[1]: Failed to start Network Manager Wait Online.
░░ Subject: A start job for unit NetworkManager-wait-online.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit NetworkManager-wait-online.service has finished with a failure.
░░
░░ The job identifier is 245 and the job result is failed.
Cheers
Simon
@simon-77, yes there was a change implemented not too long ago via this issue:
and its associated PR:
And, as you observed, the failure of NetworkManager to come up in time was obscured before this fix was implemented.
The conclusion was based on the assumption that the network needs to be up for Rockstor to function as a NAS.
What you are seeing I’ve noticed on some VMs I had set up, though not on my main system. @Flox had pointed me to checking whether I had any (in this case, virtual) NICs that would not acquire an IP/connect. Once I removed them from the setup (disabled, really), that issue went away.
So, not sure whether it’s trying to get to your second NIC, failing and hence failing the whole thing, or something else is preventing it from starting.
Not a long-term solution, but usually if you restart Network Manager, you can then also start rockstor again (once the boot is completed). But for a lights-out setup, that is obviously rather annoying whenever a reboot is required.
What I can see in my setup when Rockstor fails (under 5.0.14-0 on Leap 15.6), via `journalctl -xe`, is this:
░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state.
Sep 20 11:39:15 rockwurst systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit NetworkManager-wait-online.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Sep 20 11:39:15 rockwurst systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit NetworkManager-wait-online.service has entered the 'failed' state with result 'exit-code'.
Sep 20 11:39:15 rockwurst systemd[1]: Failed to start Network Manager Wait Online.
░░ Subject: A start job for unit NetworkManager-wait-online.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit NetworkManager-wait-online.service has finished with a failure.
░░
░░ The job identifier is 235 and the job result is failed.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Build Rockstor.
░░ Subject: A start job for unit rockstor-build.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor-build.service has finished with a failure.
░░
░░ The job identifier is 241 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Tasks required prior to starting Rockstor.
░░ Subject: A start job for unit rockstor-pre.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor-pre.service has finished with a failure.
░░
░░ The job identifier is 251 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Rockstor startup script.
░░ Subject: A start job for unit rockstor.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor.service has finished with a failure.
░░
░░ The job identifier is 250 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Rockstor bootstrapping tasks.
░░ Subject: A start job for unit rockstor-bootstrap.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor-bootstrap.service has finished with a failure.
░░
░░ The job identifier is 249 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor-bootstrap.service: Job rockstor-bootstrap.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor.service: Job rockstor.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor-pre.service: Job rockstor-pre.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor-build.service: Job rockstor-build.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: Reached target Network is Online.
Checking `dmesg`, the NICs are recognized:
[ 13.017478] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 13.045627] e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 13.048746] e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
After boot (but with failed Rockstor startup), using `nmcli device` they show up like this:
DEVICE TYPE STATE CONNECTION
eth0 ethernet connected Wired connection 1
eth2 ethernet connected Wired connection 3
lo loopback connected (externally) lo
eth1 ethernet connecting (getting IP configuration) Wired connection 2
`eth1` shows as connecting. This is what’s causing the error that Network Manager start failed (driven by the `NetworkManager-wait-online.service`). This also happens more often when using a bonded (bridged) profile that keeps lingering in the same manner (@Flox pointed that out in another thread).
From what I understand, in the above case(s) one might have to either make the NIC invisible (if not used), e.g. via the BIOS, or, if it’s just taking a bit longer (beyond 60 seconds, I think), increase the timeout to see whether that takes care of it.
For reference, more info on the NetworkManager-wait-online.service.
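One way to raise that timeout would be a systemd drop-in overriding the service’s start command. A sketch only, untested here: `nm-online`’s `--timeout` option is standard, but check your distribution’s unit file first, since openSUSE’s wraps the timeout in an `NM_ONLINE_TIMEOUT` variable (visible in the status output later in this thread).

```ini
# /etc/systemd/system/NetworkManager-wait-online.service.d/timeout.conf
# Clear the packaged ExecStart, then re-run nm-online with a longer wait
# (300 seconds here, instead of the ~60-second default).
[Service]
ExecStart=
ExecStart=/usr/bin/nm-online -s -q --timeout=300
```

Followed by a `systemctl daemon-reload` for the drop-in to take effect.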
Since in my case, this is “just” occurring on VMs I experiment with, I either disable the adapter and reboot, or just restart Network Manager and then start Rockstor:
systemctl restart network
systemctl start rockstor
Thank you for the fast and extensive reply @Hooverdan
Just to be clear: my server is up and running again with the old installation, and I don’t need immediate support. You don’t need to bother with my problem if things are expected to work once the new Rockstor 5 is released.
Regarding the `NetworkManager-wait-online` service: I see the same behaviour on my “old” 4.6.1-0 / Leap 15.4 installation and in yesterday’s trial with the latest 5.0.9 / Leap 15.6 installer.
It just seems that the old installation continues working even though the `NetworkManager-wait-online` service failed, while the installer yesterday did not continue after this service failed.
This is the output of my “old” & currently working rockstor installation:
admin@Kolibri:~> sudo systemctl status NetworkManager-wait-online.service
× NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2024-09-21 16:13:13 CEST; 1min 35s ago
Docs: man:nm-online(1)
Main PID: 852 (code=exited, status=1/FAILURE)
Sep 21 16:12:43 Kolibri systemd[1]: Starting Network Manager Wait Online...
Sep 21 16:13:13 Kolibri systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Sep 21 16:13:13 Kolibri systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
Sep 21 16:13:13 Kolibri systemd[1]: Failed to start Network Manager Wait Online.
admin@Kolibri:~> sudo systemctl start NetworkManager-wait-online.service
Job for NetworkManager-wait-online.service failed because the control process exited with error code.
See "systemctl status NetworkManager-wait-online.service" and "journalctl -xeu NetworkManager-wait-online.service" for details.
I can also power-cycle the machine (the mainboard will reboot once power is applied) and, without any manual intervention, everything will start up: from NFS & SMB to all docker containers and even the Rockstor web-UI.
Regarding the network interfaces, I actually see 3 devices:
- eth0 (unavailable)
- eth1 (connected)
- usb0 (connecting)
The usb0 is in STATE `connecting (getting IP configuration)` (via `nmcli device`), which is exactly the reason why the `NetworkManager-wait-online` service fails, according to the documentation.
After about 10 minutes (definitely more than 3), the usb0 changes to state `disconnected`, after which I can manually start the `NetworkManager-wait-online` service and it exits with SUCCESS:
admin@Kolibri:~> sudo systemctl start NetworkManager-wait-online.service
[sudo] password for root:
admin@Kolibri:~> sudo systemctl status NetworkManager-wait-online.service
● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
Active: active (exited) since Sat 2024-09-21 16:52:25 CEST; 5s ago
Docs: man:nm-online(1)
Process: 17731 ExecStart=/bin/bash -c if [ ${NM_ONLINE_TIMEOUT} -gt 0 ]; then /usr/bin/nm-online -s -q --timeout=${NM_ONLINE_TIMEOUT} ; else /bin/true ; fi (code=exited, status=0/SUCCESS)
Main PID: 17731 (code=exited, status=0/SUCCESS)
Sep 21 16:52:25 Kolibri systemd[1]: Starting Network Manager Wait Online...
Sep 21 16:52:25 Kolibri systemd[1]: Finished Network Manager Wait Online.
admin@Kolibri:~>
So the physically unconnected NIC is not an issue with real hardware, because its state is simply “unavailable”; but the `usb0` interface is an issue in my setup.
To be honest, I have no clue about the `usb0` interface on my machine. I found an option to disable each of the ethernet interfaces in the BIOS, but no mention of a USB ethernet interface directly.
I do have a server motherboard (Asus P12R-I) with onboard management/KVM-over-IP functionality and also some kind of BMC. Although I love the management-over-IP & KVM functionality, I have not yet looked at what the BMC and Sideband Interface present on my motherboard actually are.
I will probably have a look at whether I can somehow disable some of this functionality and check whether I can make the usb0 interface disappear.
Cheers
Simon
A quick follow-up on my issue after fiddling around for a day:
As (server) motherboards have some interface to the Baseboard Management Controller (BMC), there is the relatively new Redfish standard, which is also implemented on my server running Rockstor.
In addition to the out-of-band interface, Redfish allows for an in-band interface where the host OS can access the Redfish server as well. This in-band Redfish server is implemented via an additional `usb0` interface on my motherboard (Asus P12R-I), which is what is causing trouble with the `NetworkManager-wait-online` service.
(Sorry for the following links being in German; these were the only sources I found.)
`dmidecode -t 42` can be used to identify a management controller host interface. To use this interface, a static IP must be configured manually (the Redfish server’s IP address can be changed in the UEFI).
I dug through all the BMC & UEFI settings of the motherboard today, and although the Redfish server can be disabled, the `usb0` interface was always visible to the host OS and always in state `connecting`, causing the `NetworkManager-wait-online` service to fail.
The only way I could find to disable this `usb0` interface was to blacklist the corresponding kernel module with:
# echo "blacklist cdc_ether" > /etc/modprobe.d/blacklist-cdc_ether.conf
After rebooting the machine, there was no `usb0` interface anymore, and the `NetworkManager-wait-online` service exited with SUCCESS, even though only one of my 2 ethernet ports was connected (because the service returns success when “devices are either activated or in a disconnect state”):
admin@Kolibri:~> systemctl status NetworkManager-wait-online.service
● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
Active: active (exited) since Mon 2024-09-23 17:21:34 CEST; 59min ago
Docs: man:nm-online(1)
Main PID: 850 (code=exited, status=0/SUCCESS)
admin@Kolibri:~> nmcli device | grep 'eth[[:digit:]] '
eth1 ethernet connected Wired connection 2
eth0 ethernet unavailable --
admin@Kolibri:~>
So, although I haven’t tried it with the new installer, I assume I would have to manually blacklist the kernel module (as described above), restart the server, and then continue with the installer.
Another option (although I have not tried it) should be to set a static IP on the interface manually via NetworkManager.
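That manual static-IP route could look something like the following via `nmcli`. A sketch only; the connection name and the link-local address here are purely illustrative:

```shell
# Give the BMC's usb0 interface a fixed address so NetworkManager
# stops waiting for DHCP on it (name and address are illustrative):
sudo nmcli connection add type ethernet ifname usb0 con-name bmc-usb0 \
    ipv4.method manual ipv4.addresses 169.254.10.2/24 ipv6.method disabled
sudo nmcli connection up bmc-usb0
```

With the interface in an “activated” state, `nm-online` should no longer block on it.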
Cheers Simon
Coming back to the topic - at least kind of …
If I would like to re-install my server with the latest RC9 (Rockstor 5.0.14 built on openSUSE Leap 15.6), there is still only the RC4 installer available for download, and the upgrade procedure is not completely clear to me.
So I install Rockstor using the RC4 installer, proceed with the locale & keyboard setup on the command line, and afterwards I have to manually run a `zypper dup` on the command line, because at the time the installer was built the OS (Leap 15.6) was not stable yet?
Can this distribution update be done before continuing with the setup on the web-UI?
Will the system afterwards be in more or less the same state compared to waiting for the newly built Rockstor installer based on a stable release of the openSUSE OS (Leap 15.6)?
And are there any rumours about the timeline for when a new Rockstor installer will be released? As the latest RC9 is now about a month old, I assume that a stable Rockstor version 5 is not too far in the future.
Cheers Simon
You should be able to do the distro update based on this:
https://rockstor.com/docs/howtos/15-5_to_15-6.html
And I believe you can do this before you continue with the Rockstor update/config.
Once it’s on 15.6, the subsequent installer should be very similar to what you did manually.
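In rough terms, the howto linked above amounts to something like the following. A sketch only; follow the linked documentation for the authoritative steps, including any Rockstor-specific repo handling:

```shell
# Point the repo definitions at 15.6 instead of 15.5:
sudo sed -i 's/15\.5/15\.6/g' /etc/zypp/repos.d/*.repo
# Refresh metadata and perform the distribution upgrade:
sudo zypper refresh
sudo zypper dup
```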
Depending on the non-Rockstor changes you have made/will be making to the system, the backup/restore works pretty well for most of the other settings you would otherwise make in the WebUI.
I think the timeline for the stable release depends on @phillxnet’s capacity at this time; however, it should not be too far in the future. Part of it will be deciding where to cut off (in terms of new issues being found and how critical they are).
On your other comment further up:
Since this hard dependency was created between Network Manager, the `NetworkManager-wait-online` portion, and a successful Rockstor start-up, it will at this time also be present in the latest installer. So you will likely have to continue to use the kernel-module blacklisting. Of course, you could also manually increase the timeout in the `NetworkManager-wait-online` service from 60 seconds to 10 minutes … but that would make waiting for a reboot rather painful.
Thank you for explaining the steps @Hooverdan
Otherwise I would really have missed that one or two of the repos had to be changed from “15.5” to “15.6”, although the latest installer is already based on 15.6.
Nevertheless, today I re-installed Rockstor and am now running v5.0.14 on openSUSE 15.6; afterwards, I switched back to the stable channel.
Regarding the `NetworkManager-wait-online` service:
While setting up the new Rockstor installation on the command line (there is only the locale & keyboard layout to be selected), an error about this service will interrupt and bring the “cli-gui” somewhat out of shape (but it is still usable). Once the installer has finished and the login prompt is presented, I can log in, but the Rockstor web-UI is not available.
After logging in, I blacklisted the kernel module and did the upgrade steps (which will not be necessary in the future). After rebooting, I could log into the web-UI and continue as usual.
Thanks for the support, my server is up and running successfully so far.
Although I am not sure if this belongs here, I would like to share some of my Rockstor fresh-installation-migration issues. If I should post them somewhere else, I am happy to do so.
After doing a full Rockstor configuration backup on my old system (4.6.1-0) and importing it into my new installation (5.0.14), there were some more or less crucial things missing:
- SSL certificates
- e-mail configuration
- the passwords of my Rockstor-created users (which were not on the new system before importing!)
- the groups I created were there, but the users were not assigned to the groups (to be fair, I had added the users to the groups via the CLI with `usermod -aG`, because I could not find a Rockstor web-UI setting for it)
- the SMB share “Custom configuration” set via the web-UI (I first imported the disks before restoring the Rockstor config from backup)
- all NFS shares were “read-only”
Just some minor inconveniences that I could fix quickly, I just wanted to share this experience.
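On the group-assignment point above: membership can be checked and repaired from the CLI. A quick sketch, demonstrated with `root` (which exists on any Linux system); the `scanner`/`alice` names are hypothetical:

```shell
# Print the groups a user belongs to; shown here for root:
id -nG root
# List the members of a group (hypothetical group name):
#   getent group scanner
# Re-add a user to a supplementary group the restore dropped
# (hypothetical user/group names):
#   sudo usermod -aG scanner alice
```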
Another, completely unrelated, issue with the new openSUSE Leap version was that the libvirt daemons (for KVM/QEMU virtualisation) apparently now run as “modular daemons” instead of a “monolithic daemon” and have to be enabled and started manually after installing:
for drv in qemu network nodedev nwfilter secret storage
do
sudo systemctl enable virt${drv}d.service
sudo systemctl enable virt${drv}d{,-ro,-admin}.socket
done
for drv in qemu network nodedev nwfilter secret storage
do
sudo systemctl start virt${drv}d{,-ro,-admin}.socket
done
This is definitely nothing that you (the Rockstor development team) have to care about, as virtual machines are not supported anyway.
It is actually really straightforward to install libvirt for virtual machines on openSUSE. I have been running (one) VM on top of Rockstor for about a month now, and was just struggling a bit to find the information I shared above for bringing the libvirt daemons up on openSUSE Leap 15.6.
It is working now perfectly on Rockstor 5.0.14 as well
I will also try running Cockpit on Rockstor, because it has only been available on openSUSE Leap since 15.6. This would add a nice web-UI for managing virtual machines, with only some additional packages required, all of which are provided in the standard openSUSE Leap 15.6 repos.
Then Rockstor could serve as a user-friendly (easy installation & management web-UI for VMs) Virtual-Machine Host.
But I will write more about this (probably in a new thread) once things are running.
Cheers
Simon
Great that you got it all working! And thanks for adding the additional details here as well. It also means that the documentation should be updated for a few things, and possibly some of your observations require further investigation.
On this:
- The SSL certificates: that’s a good point. Considering that the backup file is a fairly simple JSON, I don’t think the intent was to back up the SSL certificates. A corresponding note for a reinstall should be added to the documentation.
- E-mail configuration: that might be an item that could/should be backed up. Otherwise, a note in the documentation would also be helpful.
- password of rockstor-created users: no idea whether that should be part of the backup or not to be honest.
- The group assignments: since within the WebUI only a 1:1 mapping is possible, the backup would not catch the multi-assignments. Again, this could be considered for new development, or should be added to the documentation (since I assume that’s not such an uncommon setup).
- SMB share: was that the only one out of multiple where the custom config was not created, or was it also the only SMB share you had, and its custom config wasn’t created? I have noticed on one of my restores, where I had multiple SMBs, that not all were created (not just the custom config). This might require a new issue on GitHub. If you have any logs that might highlight what happened, that would of course be great.
- Read-only NFS shares: no idea; since I don’t really use them, I’m not sure whether that’s by design or has to do with the order in which those are generated (post user/group setup).
Also, thanks for the info on the virtualization observations. It should be helpful for folks who want to take Rockstor beyond its current boundaries … as should the Cockpit installation. As you’ve mentioned, it would be great if you could post a How-to for that.
This actually surprised me the most, because I expected everything to work after restoring the config, but I just could not authenticate my user when mounting the SMB share. It took me a while to figure out what was going wrong.
I had 2 SMB shares active, and each of them had a custom config; neither was restored. The other settings were fine for both shares.
Unfortunately, I don’t have any logs available; it was just me trying to get my server back up and running.
I think the default setting when creating a new NFS share is read-only, so maybe this setting is simply not included in the backup.
Looking at the code, currently a random password is created during the restore of the users (which is not mentioned in the documentation). Originally, all user passwords were set to the same value, but that was a bit too risky, hence the randomization. That helped with the problem of getting encrypted passwords transferred out and then back in, but one needs to know that the passwords have been reset:
On the Samba restore: I think the global config, as part of the service, is restored, but I also couldn’t see anything explicit about the share-level config. Probably another area that could be added. You could check your json file to see whether you can find any entry that reflects that share-level config; then we would know for sure whether it’s a missing feature or whether there might be an issue during the import of that config.
On the NFS exports, I can’t see anything special being done, so maybe your assumption that they get created with a default of read-only is correct:
I actually do see the lines of my SMB custom configuration, although they are spread out over multiple JSON objects (the lines belonging to the same config sit in different {} blocks).
{"model": "storageadmin.sambashare", "pk": 1, "fields": {"share": 39, "path": "/mnt2/scanner", "comment": "SMB share for scanner", "browsable": "yes", "read_only": "no", "guest_ok": "no", "shadow_copy": false, "time_machine": false, "snapshot_prefix": null}},
{"model": "storageadmin.sambashare", "pk": 2, "fields": {"share": 30, "path": "/mnt2/old-photos", "comment": "Samba-Export", "browsable": "yes", "read_only": "no", "guest_ok": "no", "shadow_copy": false, "time_machine": false, "snapshot_prefix": null}},
{"model": "storageadmin.sambacustomconfig", "pk": 1, "fields": {"smb_share": 1, "custom_config": "valid users = @scanner"}},
{"model": "storageadmin.sambacustomconfig", "pk": 3, "fields": {"smb_share": 1, "custom_config": "directory mask = 0775"}},
{"model": "storageadmin.sambacustomconfig", "pk": 4, "fields": {"smb_share": 1, "custom_config": "force group = scanner"}},
{"model": "storageadmin.sambacustomconfig", "pk": 6, "fields": {"smb_share": 2, "custom_config": "valid users = @family"}},
{"model": "storageadmin.sambacustomconfig", "pk": 10, "fields": {"smb_share": 2, "custom_config": "force user = admin"}},
{"model": "storageadmin.sambacustomconfig", "pk": 11, "fields": {"smb_share": 2, "custom_config": "force group = users"}},
{"model": "storageadmin.sambacustomconfig", "pk": 12, "fields": {"smb_share": 1, "custom_config": "create mask = 0664"}},
{"model": "storageadmin.sambacustomconfig", "pk": 15, "fields": {"smb_share": 2, "custom_config": "directory mask = 0755"}},
{"model": "storageadmin.sambacustomconfig", "pk": 16, "fields": {"smb_share": 2, "custom_config": "create mask = 0644"}},
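As a side note: since the custom-config lines are spread over separate JSON objects, they can be regrouped per share with a few lines of scripting. A minimal sketch, with the file path and sample data being illustrative only (point it at your real config backup):

```shell
# Hypothetical sample of a Rockstor config backup; replace with your real file.
cat > /tmp/backup-sample.json <<'EOF'
[{"model": "storageadmin.sambashare", "pk": 1, "fields": {"path": "/mnt2/scanner"}},
 {"model": "storageadmin.sambacustomconfig", "pk": 1, "fields": {"smb_share": 1, "custom_config": "valid users = @scanner"}},
 {"model": "storageadmin.sambacustomconfig", "pk": 4, "fields": {"smb_share": 1, "custom_config": "force group = scanner"}}]
EOF
python3 - <<'EOF'
import json
from collections import defaultdict

with open("/tmp/backup-sample.json") as f:
    entries = json.load(f)

# Collect all custom_config lines, keyed by their smb_share id.
by_share = defaultdict(list)
for e in entries:
    if e["model"] == "storageadmin.sambacustomconfig":
        by_share[e["fields"]["smb_share"]].append(e["fields"]["custom_config"])

for share, lines in sorted(by_share.items()):
    print(f"smb_share {share}:")
    for line in lines:
        print(f"  {line}")
EOF
```

With the sample above this prints the two config lines grouped under `smb_share 1`, which makes it easier to compare against what the Web-UI shows after a restore.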
For reference, this is a Rockstor screenshot of the “scanner” share:
Interestingly enough, there is an “editable” field in the JSON file, which is set to “rw” …
And the Host String was restored correctly to 10.71.128.0/24 in Rockstor.
{"model": "storageadmin.nfsexportgroup", "pk": 2, "fields": {"host_str": "10.71.128.0/24", "editable": "rw", "syncable": "async", "mount_security": "insecure", "nohide": false, "enabled": true, "admin_host": null}},
Note: the JSON file is exported from my old Rockstor 4.6.1 installation.
Hey guys, sorry for bothering again.
I just wanted to check whether the zypper repository setup is intended the way it is in Rockstor built on Leap 15.6:
admin@Kolibri:~> zypper lr -P
# | Alias | Name | Enabled | GPG Check | Refresh | Priority
---+------------------------------------+---------------------------------------------------------------------------------------------+---------+-----------+---------+---------
6 | home_rockstor_branches_Base_System | home_rockstor_branches_Base_System | Yes | (r ) Yes | Yes | 97
1 | Leap_15_6 | Leap_15_6 | Yes | (r ) Yes | Yes | 99
2 | Leap_15_6_Updates | Leap_15_6_Updates | Yes | (r ) Yes | Yes | 99
3 | Rockstor-Stable | Rockstor-Stable | Yes | (r ) Yes | Yes | 99
7 | repo-backports-debug-update | Update repository with updates for openSUSE Leap debuginfo packages from openSUSE Backports | No | ---- | ---- | 99
8 | repo-backports-update | Update repository of openSUSE Backports | Yes | (r ) Yes | Yes | 99
9 | repo-openh264 | repo-openh264 | Yes | (r ) Yes | Yes | 99
10 | repo-sle-debug-update | Update repository with debuginfo for updates from SUSE Linux Enterprise 15 | No | ---- | ---- | 99
11 | repo-sle-update | Update repository with updates from SUSE Linux Enterprise 15 | Yes | (r ) Yes | Yes | 99
5 | home_rockstor | home_rockstor | Yes | (r ) Yes | Yes | 105
4 | filesystems | Filesystem tools and FUSE-related packages (15.6) | Yes | (r ) Yes | No | 200
Don’t mind the last filesystems repo, which I have added manually.
I was just wondering whether the repo-backports-update repo is (1) intentionally enabled and (2) intentionally left at the default priority instead of a “lower” one (i.e. a number larger than 99).
While fiddling around with Cockpit, I apparently ran into an issue with the newer version from the backports repo, which zypper selected during installation: when repos have the same priority, the newer package version wins.
admin@Kolibri:~> sudo zypper search --details cockpit
[sudo] password for root:
Loading repository data...
Reading installed packages...
S | Name | Type | Version | Arch | Repository
---+------------------------+------------+-----------------+--------+----------------------------------------
i+ | cockpit | package | 321-bp156.2.9.1 | x86_64 | Update repository of openSUSE Backports
v | cockpit | package | 320-bp156.2.6.3 | x86_64 | Update repository of openSUSE Backports
v | cockpit | package | 316-bp156.2.3.1 | x86_64 | Update repository of openSUSE Backports
v | cockpit | package | 309-bp156.1.7 | x86_64 | Leap_15_6
| cockpit | srcpackage | 321-bp156.2.9.1 | noarch | Update repository of openSUSE Backports
| cockpit | srcpackage | 320-bp156.2.6.3 | noarch | Update repository of openSUSE Backports
| cockpit | srcpackage | 316-bp156.2.3.1 | noarch | Update repository of openSUSE Backports
My issue with Cockpit could be resolved by changing the repo priorities and installing the package from the Leap_15_6 repo, which is apparently working fine:
- Lower the priority of the backports repo:
sudo zypper mr -p 100 repo-backports-update
- To switch already-installed packages over to the other repo, run a dup:
sudo zypper dup
For a “stable” (NAS) server, I would have expected the backports repo to at least have a lower priority.
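For what it’s worth, an alternative I considered (just a sketch, with cockpit as the example package; this is not an official Rockstor recommendation) is to pin the one affected package instead of reprioritizing the whole backports repo:

```shell
# Reinstall cockpit explicitly from the Leap_15_6 repo (--force allows the
# downgrade), then lock it so later `zypper up`/`zypper dup` runs won't pull
# the newer backports version back in.
sudo zypper install --force --repo Leap_15_6 cockpit
sudo zypper addlock cockpit   # undo later with: sudo zypper removelock cockpit
```

The repo-priority change affects every package from backports, while a lock only touches the one package, so which approach fits better probably depends on how much else you pull from that repo.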