Missing repository Rockstor Stable on 15.6

Here is what I see in my setup when Rockstor fails to start (under 5.0.14-0 on Leap 15.6).

In journalctl -xe:

░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state.
Sep 20 11:39:15 rockwurst systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit NetworkManager-wait-online.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Sep 20 11:39:15 rockwurst systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit NetworkManager-wait-online.service has entered the 'failed' state with result 'exit-code'.
Sep 20 11:39:15 rockwurst systemd[1]: Failed to start Network Manager Wait Online.
░░ Subject: A start job for unit NetworkManager-wait-online.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit NetworkManager-wait-online.service has finished with a failure.
░░
░░ The job identifier is 235 and the job result is failed.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Build Rockstor.
░░ Subject: A start job for unit rockstor-build.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor-build.service has finished with a failure.
░░
░░ The job identifier is 241 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Tasks required prior to starting Rockstor.
░░ Subject: A start job for unit rockstor-pre.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor-pre.service has finished with a failure.
░░
░░ The job identifier is 251 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Rockstor startup script.
░░ Subject: A start job for unit rockstor.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor.service has finished with a failure.
░░
░░ The job identifier is 250 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: Dependency failed for Rockstor bootstrapping tasks.
░░ Subject: A start job for unit rockstor-bootstrap.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit rockstor-bootstrap.service has finished with a failure.
░░
░░ The job identifier is 249 and the job result is dependency.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor-bootstrap.service: Job rockstor-bootstrap.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor.service: Job rockstor.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor-pre.service: Job rockstor-pre.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: rockstor-build.service: Job rockstor-build.service/start failed with result 'dependency'.
Sep 20 11:39:15 rockwurst systemd[1]: Reached target Network is Online.

Checking dmesg, the NICs are recognized:

[   13.017478] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[   13.045627] e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[   13.048746] e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

After boot (but with a failed Rockstor startup), nmcli device shows them like this:

DEVICE  TYPE      STATE                                  CONNECTION
eth0    ethernet  connected                              Wired connection 1
eth2    ethernet  connected                              Wired connection 3
lo      loopback  connected (externally)                 lo
eth1    ethernet  connecting (getting IP configuration)  Wired connection 2

eth1 shows as connecting. This is what causes the error that Network Manager Wait Online failed (driven by the NetworkManager-wait-online.service). The same thing happens more often when a bonded (bridged) profile keeps lingering in the same way (@Flox pointed that out in another thread).

From what I understand, in the above case(s) one might have to either make the NIC invisible (if not used), e.g. via the BIOS, or, if it is just taking a bit longer (beyond 60 seconds, I think), increase the timeout to see whether that takes care of it.

For reference, more info on the NetworkManager-wait-online.service.
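
To see what the wait-online check is actually waiting for, the same nm-online call the service runs (visible in its ExecStart) can be issued by hand; a quick sketch, with the 30-second timeout just an example value:

# show which devices NetworkManager is still trying to activate
nmcli -f DEVICE,STATE,CONNECTION device status
# replicate the wait-online check; exits non-zero if startup is not complete within the timeout
nm-online -s -q --timeout=30; echo "nm-online exit code: $?"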

Since in my case, this is “just” occurring on VMs I experiment with, I either disable the adapter and reboot, or just restart Network Manager and then start Rockstor:

systemctl restart network
systemctl start rockstor
2 Likes

Thank you for the fast and extensive reply @Hooverdan

Just to be clear - my server is up and running again with the old installation and I don’t need immediate support. You don’t need to bother with my problem if things are expected to work once the new Rockstor 5 is released.


Regarding the NetworkManager-wait-online service, I see the same behaviour on my “old” 4.6.1-0 / Leap 15.4 installation and in yesterday’s trial with the latest 5.0.9 / Leap 15.6 installer.
It just seems that the old installation keeps working even though the NetworkManager-wait-online service failed, while the installer yesterday did not continue after this service had failed.

This is the output of my “old” & currently working rockstor installation:

admin@Kolibri:~> sudo systemctl status NetworkManager-wait-online.service 
× NetworkManager-wait-online.service - Network Manager Wait Online
     Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Sat 2024-09-21 16:13:13 CEST; 1min 35s ago
       Docs: man:nm-online(1)
   Main PID: 852 (code=exited, status=1/FAILURE)

Sep 21 16:12:43 Kolibri systemd[1]: Starting Network Manager Wait Online...
Sep 21 16:13:13 Kolibri systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Sep 21 16:13:13 Kolibri systemd[1]: NetworkManager-wait-online.service: Failed with result 'exit-code'.
Sep 21 16:13:13 Kolibri systemd[1]: Failed to start Network Manager Wait Online.

admin@Kolibri:~> sudo systemctl start NetworkManager-wait-online.service 
Job for NetworkManager-wait-online.service failed because the control process exited with error code.
See "systemctl status NetworkManager-wait-online.service" and "journalctl -xeu NetworkManager-wait-online.service" for details.

I can also power-cycle the machine (the mainboard will boot once power is applied) and without any manual intervention everything starts up, from NFS & SMB to all docker containers and even the Rockstor web-UI.


Regarding the network interfaces, I actually see 3 devices:

  • eth0 (unavailable)
  • eth1 (connected)
  • usb0 (connecting)
    The usb0 is in “STATE” (via nmcli device): connecting (getting IP configuration), which is exactly the reason why the NetworkManager-wait-online service fails, according to the documentation.

After about 10 minutes (definitely longer than 3) usb0 changes to state disconnected, after which I can manually start the NetworkManager-wait-online service and it exits with SUCCESS:

admin@Kolibri:~> sudo systemctl start NetworkManager-wait-online.service 
[sudo] password for root: 
admin@Kolibri:~> sudo systemctl status NetworkManager-wait-online.service 
● NetworkManager-wait-online.service - Network Manager Wait Online
     Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
     Active: active (exited) since Sat 2024-09-21 16:52:25 CEST; 5s ago
       Docs: man:nm-online(1)
    Process: 17731 ExecStart=/bin/bash -c if [ ${NM_ONLINE_TIMEOUT} -gt 0 ]; then /usr/bin/nm-online -s -q --timeout=${NM_ONLINE_TIMEOUT} ; else /bin/true ; fi (code=exited, status=0/SUCCESS)
   Main PID: 17731 (code=exited, status=0/SUCCESS)

Sep 21 16:52:25 Kolibri systemd[1]: Starting Network Manager Wait Online...
Sep 21 16:52:25 Kolibri systemd[1]: Finished Network Manager Wait Online.
admin@Kolibri:~> 

So a physically unconnected NIC is not an issue with real hardware, because its state is simply “unavailable” - but the usb0 interface is an issue in my setup.

To be honest, I have no clue about the usb0 interface on my machine. I can find an option to disable each of the ethernet interfaces in the BIOS, but no mention of a USB ethernet interface directly.

I do have a server motherboard (Asus P12R-I) with onboard management/KVM functionality and also some kind of BMC. Although I love the management-over-IP & KVM functionality, I have not yet looked at what the BMC and sideband interface on my motherboard actually are.

I will probably have a look at whether I can somehow disable some of this functionality and make the usb0 interface disappear.

Cheers
Simon

A quick follow-up on my issue after fiddling around for a day:

Server motherboards have an interface to the Baseboard Management Controller (BMC), and there is the relatively new Redfish standard for this, which is also implemented on my server running Rockstor.

In addition to the out-of-band interface, Redfish allows for an in-band interface through which the host OS can access the Redfish server as well. This in-band Redfish server is implemented via an additional usb0 interface on my motherboard (Asus P12R-I), which is what causes trouble with the NetworkManager-wait-online service.

(Sorry for the following links being in German; those were the only sources I found.)
# dmidecode -t 42 can be used to identify a management controller host interface; to use this interface, a static IP must be configured manually (the Redfish server IP address can be changed in the UEFI)

I dug through all the BMC & UEFI settings of the motherboard today, and although the Redfish server can be disabled, the usb0 interface was always visible to the host OS and always in state connecting, causing the NetworkManager-wait-online service to fail.

The only way I could find to disable this usb0 interface was to blacklist the corresponding kernel module with:

# echo "blacklist cdc_ether" > /etc/modprobe.d/blacklist-cdc_ether.conf

After rebooting the machine, there was no usb0 interface anymore and the NetworkManager-wait-online service exited with SUCCESS, even though only one of my two ethernet ports was connected (because the service returns success when “devices are either activated or in a disconnect state”):

admin@Kolibri:~> systemctl status NetworkManager-wait-online.service 
● NetworkManager-wait-online.service - Network Manager Wait Online
     Loaded: loaded (/usr/lib/systemd/system/NetworkManager-wait-online.service; enabled; vendor preset: disabled)
     Active: active (exited) since Mon 2024-09-23 17:21:34 CEST; 59min ago
       Docs: man:nm-online(1)
   Main PID: 850 (code=exited, status=0/SUCCESS)
admin@Kolibri:~> nmcli device | grep 'eth[[:digit:]] '
eth1             ethernet  connected               Wired connection 2 
eth0             ethernet  unavailable             --                 
admin@Kolibri:~> 

So, although I haven’t tried it with the new installer, I assume I will have to manually blacklist the kernel module (as described above), restart the server, and then continue with the installer.

Another option (although I have not tried it) would be to set a static IP on the interface manually via NetworkManager.
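
For reference, a rough sketch of that alternative with nmcli (the connection name and the link-local address are just placeholders; the BMC host-interface address on a given board may differ, and the “managed no” variant is, as far as I know, not persistent across NetworkManager restarts):

# give usb0 a manually configured address so it never sits in "connecting"
sudo nmcli connection add type ethernet ifname usb0 con-name redfish-usb0 \
    ipv4.method manual ipv4.addresses 169.254.0.2/24 ipv6.method disabled
sudo nmcli connection up redfish-usb0
# or tell NetworkManager to ignore the interface entirely (runtime only)
sudo nmcli device set usb0 managed no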

Cheers Simon

2 Likes

Coming back to the topic - at least kind of :sweat_smile:

Now that I would like to re-install my server with the latest RC 9 (Rockstor 5.0.14 built on openSUSE Leap 15.6), there is still only the RC 4 installer available for download, and the upgrade procedure is not completely clear to me.

So I install Rockstor using the RC4 installer, proceed with the locale & keyboard setup on the command line, and afterwards I have to manually run a zypper dup on the command line, because at the time the installer was built the OS (Leap 15.6) was not stable yet?
Can this distribution update be done before continuing with the setup in the web-UI?

Will the system afterwards be in more or less the same state as if I had waited for the newly built Rockstor installer based on the stable release of openSUSE Leap 15.6?

And are there any rumours about the timeline for when a new Rockstor installer will be released? As the latest RC9 is now about a month old, I assume a stable Rockstor version 5 is not too far in the future.

Cheers Simon

1 Like

You should be able to do the distro update based on this:

https://rockstor.com/docs/howtos/15-5_to_15-6.html

And I believe you can do this before you continue with the Rockstor update/config.

Once it’s on 15.6, the subsequent installer should leave you very close to what you did manually.
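
For anyone following along, the manual part boils down to something like this (a sketch only; the exact repo handling is described in the linked howto):

# check which repositories still reference 15.5
zypper lr -u
# after adjusting any remaining 15.5 repo URLs/aliases to 15.6 (per the howto):
sudo zypper refresh
sudo zypper dup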

Depending on the non-Rockstor changes you have/will be making to the system, the backup/restore works pretty well for most of the other settings you would otherwise make in the WebUI.

I think the timeline on the stable release depends on @phillxnet’s capacity at this time, but it should not be too far in the future. Part of it will be deciding where to cut off (in terms of new issues being found and how critical they are).

On your other comment further up:

Since a hard dependency was created between NetworkManager (the NetworkManager-wait-online portion) and a successful Rockstor start-up, it will also be present in the latest installer. So you will likely have to keep using the kernel module blacklisting. Of course, you could also manually increase the timeout of the NetworkManager-wait-online service from 60 seconds to, say, 10 minutes … but that would make waiting for a reboot rather painful.
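
If someone does want to try that, the openSUSE unit takes its timeout from the NM_ONLINE_TIMEOUT variable (visible in the ExecStart quoted earlier in this thread); a sketch of where to change it, untested here and with 600 seconds only as the example value:

# see where the unit picks up NM_ONLINE_TIMEOUT (Environment= or an EnvironmentFile=)
systemctl cat NetworkManager-wait-online.service
# on openSUSE the value usually lives in /etc/sysconfig/network/config, e.g. NM_ONLINE_TIMEOUT="600"
sudo vi /etc/sysconfig/network/config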

2 Likes

Thank you for explaining the steps @Hooverdan
Otherwise I would really have missed that one or two of the repos had to be changed from “15.5” to “15.6”, even though the latest installer is already based on 15.6.


Nevertheless, today I re-installed Rockstor and I am now running 5.0.14 on openSUSE Leap 15.6; afterwards I switched back to the stable channel.

Regarding the NetworkManager-wait-online service:
While setting up the new Rockstor installation on the command line (there is only the locale & keyboard layout to be selected), an error about this service interrupts it and brings the “cli-gui” somewhat out of shape (but it is still usable). Once the installer is finished and the login prompt is presented, I can log in - but the Rockstor web-UI is not available.

After logging in, I blacklisted the kernel module (as described above) and did the upgrade steps (which will not be necessary in the future). After rebooting, I could log into the web-UI and continue as usual.

Thanks for the support; my server is up and running successfully so far :blush: :+1:


Although I am not sure whether this belongs here, I would like to share some of the issues from my fresh-installation migration of Rockstor. If I should post them somewhere else, I am happy to do so.

After doing a full Rockstor configuration backup on my old system (4.6.1-0) and importing it into my new installation (5.0.14), there were some more or less crucial things missing:

  • SSL certificates
  • E-mail configuration
  • the passwords of my Rockstor-created users (which were not on the new system before importing!)
  • the groups I created were there, but the users were not assigned to them (to be fair, I had added the users to the groups via CLI usermod -aG, because I could not find a Rockstor web-UI setting for it)
  • the SMB share “Custom configuration” set via the web-UI (I first imported the disks before restoring the Rockstor config from backup)
  • all NFS shares were “read-only”

Just some minor inconveniences that I could fix quickly, I just wanted to share this experience.


Another, completely unrelated issue with the new openSUSE Leap version was that the libvirt daemons (for KVM/QEMU virtualisation) apparently now run as “modular daemons” instead of a “monolithic daemon” and have to be enabled and started manually after installing:

for drv in qemu network nodedev nwfilter secret storage
do
 sudo systemctl enable virt${drv}d.service
 sudo systemctl enable virt${drv}d{,-ro,-admin}.socket
done
for drv in qemu network nodedev nwfilter secret storage
do
 sudo systemctl start virt${drv}d{,-ro,-admin}.socket
done
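
As a quick sanity check afterwards (a sketch; virsh comes from the libvirt client tools, which may or may not already be installed):

# confirm the per-driver daemons/sockets are active, e.g. for QEMU
sudo systemctl status virtqemud.socket virtqemud.service
# and that libvirt answers
virsh -c qemu:///system list --all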

This is definitely nothing that you (the Rockstor development team) have to care about, as virtual machines are not supported anyway.

It is really straightforward to install libvirt for virtual machines on openSUSE; I have been running (one) VM on top of Rockstor for about a month now and was just struggling a bit to find the information I shared above for bringing the libvirt daemons up on openSUSE Leap 15.6.
It is working now perfectly on Rockstor 5.0.14 as well :blush: :+1:

I will also try running Cockpit on Rockstor, as it has only been available on openSUSE Leap since 15.6. It would add a nice web-UI for managing virtual machines, and only a few additional packages are required, all of which are provided in the standard openSUSE Leap 15.6 repos.
Then Rockstor could serve as a user-friendly virtual machine host (easy installation & a management web-UI for VMs). :innocent:

But I will write more about this (probably in a new thread) once things are running.

Cheers
Simon

3 Likes

Great that you got it all working! And thanks for adding the additional details here as well. It also means that the documentation should be updated for a few things, and possibly some items on your system require further investigation.

On this:

  • The SSL certificates: that’s a good point. Considering that the backup file is a fairly simple JSON, I don’t think the intent was to back up the SSL certificates. A corresponding note for a reinstall should be added to the documentation.
  • E-mail configuration: that might be an item that could/should be backed up. Otherwise, a note in the documentation would also be helpful.
  • Passwords of Rockstor-created users: no idea whether that should be part of the backup or not, to be honest.
  • The group assignments: since only a 1:1 mapping is possible within the WebUI, the backup would not catch the multi-assignments. Again, this could be considered for new development or should be added to the documentation (since I assume that’s not such an uncommon setup).
  • SMB share: was that the only one out of multiple where the custom config was not created, or was it also just the only SMB share you had and its custom config wasn’t created? I have noticed on one of my restores where I had multiple SMB shares that not all were created (not just the custom config). This might warrant a new issue on GitHub. If you have any logs that could highlight what happened, that would of course be great.
  • Read-only NFS shares: no idea, since I don’t really use them, so I’m not sure whether that’s by design or has to do with the order in which those are generated (after user/group setup).

Also, thanks for the info on the virtualization observations. That should be helpful for folks who want to take Rockstor beyond the current boundaries … As for the Cockpit installation: as you’ve mentioned, it would be great if you could post a how-to for that.

3 Likes

This actually surprised me the most, because I expected everything to work after restoring the config, but I just could not authenticate my user when mounting the SMB share. It took me a while to figure out what was going wrong.

I had 2 SMB shares active and each of them had a custom config - neither of them was restored - the other settings were fine for both shares.
Unfortunately I don’t have any logs available; it was just me trying to get my server back up and running :sweat_smile:

I think the default setting when creating a new NFS share is read-only, so maybe this setting is not included in the backup :man_shrugging:

1 Like

Looking at the code, a random password is currently created during the restore of the users (which is not mentioned in the documentation). Originally all user passwords were set to the same value, but that was a bit too on the nose, hence the randomization. That helped with the problem of getting encrypted passwords transferred out and then back in, but one needs to know that the passwords have been reset:

On the Samba restore, I think the global config is restored as part of the service, but I couldn’t see anything explicitly about the share-level config. Probably another area that could be added to. You could check your JSON file to see whether you can find any entry that reflects that share-level config; then we would know for sure whether it’s a missing feature or whether there might be an issue during the import of that config.
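
Assuming the backup file is the flat JSON array it appears to be, something like this (with jq, if installed; the filename is just a placeholder) would pull out any Samba-related entries:

# list all samba-related objects contained in the exported config
jq '.[] | select(.model | test("samba"))' rockstor-backup.json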

On the NFS exports I can’t see anything special being done, so maybe your assumption that they get created with a read-only default is correct:

2 Likes

I actually do see the lines of my SMB custom configuration, although they are spread out over multiple JSON elements (however they are called; the lines of the same config are in different {} blocks):

{"model": "storageadmin.sambashare", "pk": 1, "fields": {"share": 39, "path": "/mnt2/scanner", "comment": "SMB share for scanner", "browsable": "yes", "read_only": "no", "guest_ok": "no", "shadow_copy": false, "time_machine": false, "snapshot_prefix": null}}, {"model": "storageadmin.sambashare", "pk": 2, "fields": {"share": 30, "path": "/mnt2/old-photos", "comment": "Samba-Export", "browsable": "yes", "read_only": "no", "guest_ok": "no", "shadow_copy": false, "time_machine": false, "snapshot_prefix": null}}, {"model": "storageadmin.sambacustomconfig", "pk": 1, "fields": {"smb_share": 1, "custom_config": "valid users = @scanner"}}, {"model": "storageadmin.sambacustomconfig", "pk": 3, "fields": {"smb_share": 1, "custom_config": "directory mask = 0775"}}, {"model": "storageadmin.sambacustomconfig", "pk": 4, "fields": {"smb_share": 1, "custom_config": "force group = scanner"}}, {"model": "storageadmin.sambacustomconfig", "pk": 6, "fields": {"smb_share": 2, "custom_config": "valid users = @family"}}, {"model": "storageadmin.sambacustomconfig", "pk": 10, "fields": {"smb_share": 2, "custom_config": "force user = admin"}}, {"model": "storageadmin.sambacustomconfig", "pk": 11, "fields": {"smb_share": 2, "custom_config": "force group = users"}}, {"model": "storageadmin.sambacustomconfig", "pk": 12, "fields": {"smb_share": 1, "custom_config": "create mask = 0664"}}, {"model": "storageadmin.sambacustomconfig", "pk": 15, "fields": {"smb_share": 2, "custom_config": "directory mask = 0755"}}, {"model": "storageadmin.sambacustomconfig", "pk": 16, "fields": {"smb_share": 2, "custom_config": "create mask = 0644"}}, 

For reference, this is a rockstor screenshot of the “scanner” share:

Interestingly enough, there is an “editable” field in the JSON file which is set to “rw” …
And the host string was restored correctly to 10.71.128.0/24 in Rockstor.

{"model": "storageadmin.nfsexportgroup", "pk": 2, "fields": {"host_str": "10.71.128.0/24", "editable": "rw", "syncable": "async", "mount_security": "insecure", "nohide": false, "enabled": true, "admin_host": null}},

Note: the JSON file was exported from my old Rockstor 4.6.1 installation.

2 Likes

Hey guys, sorry for bothering you again :sweat_smile:

I just wanted to check whether the zypper repository setup is intended the way it is in Rockstor built on Leap 15.6:

admin@Kolibri:~> zypper lr -P
#  | Alias                              | Name                                                                                        | Enabled | GPG Check | Refresh | Priority
---+------------------------------------+---------------------------------------------------------------------------------------------+---------+-----------+---------+---------
 6 | home_rockstor_branches_Base_System | home_rockstor_branches_Base_System                                                          | Yes     | (r ) Yes  | Yes     |   97
 1 | Leap_15_6                          | Leap_15_6                                                                                   | Yes     | (r ) Yes  | Yes     |   99
 2 | Leap_15_6_Updates                  | Leap_15_6_Updates                                                                           | Yes     | (r ) Yes  | Yes     |   99
 3 | Rockstor-Stable                    | Rockstor-Stable                                                                             | Yes     | (r ) Yes  | Yes     |   99
 7 | repo-backports-debug-update        | Update repository with updates for openSUSE Leap debuginfo packages from openSUSE Backports | No      | ----      | ----    |   99
 8 | repo-backports-update              | Update repository of openSUSE Backports                                                     | Yes     | (r ) Yes  | Yes     |   99
 9 | repo-openh264                      | repo-openh264                                                                               | Yes     | (r ) Yes  | Yes     |   99
10 | repo-sle-debug-update              | Update repository with debuginfo for updates from SUSE Linux Enterprise 15                  | No      | ----      | ----    |   99
11 | repo-sle-update                    | Update repository with updates from SUSE Linux Enterprise 15                                | Yes     | (r ) Yes  | Yes     |   99
 5 | home_rockstor                      | home_rockstor                                                                               | Yes     | (r ) Yes  | Yes     |  105
 4 | filesystems                        | Filesystem tools and FUSE-related packages (15.6)                                           | Yes     | (r ) Yes  | No      |  200

Don’t mind the last filesystems repo, which I added manually.

I was just wondering whether the repo-backports-update repo is intentionally (1) enabled and (2) at the default priority instead of a “lower” priority (= a number larger than 99).

While fiddling around with Cockpit, I apparently ran into an issue with the newer version from the backports repo; it was selected by zypper during installation because, when repos have the same priority, the newer version wins.

admin@Kolibri:~> sudo zypper search --details cockpit
[sudo] password for root: 
Loading repository data...
Reading installed packages...

S  | Name                   | Type       | Version         | Arch   | Repository
---+------------------------+------------+-----------------+--------+----------------------------------------
i+ | cockpit                | package    | 321-bp156.2.9.1 | x86_64 | Update repository of openSUSE Backports
v  | cockpit                | package    | 320-bp156.2.6.3 | x86_64 | Update repository of openSUSE Backports
v  | cockpit                | package    | 316-bp156.2.3.1 | x86_64 | Update repository of openSUSE Backports
v  | cockpit                | package    | 309-bp156.1.7   | x86_64 | Leap_15_6
   | cockpit                | srcpackage | 321-bp156.2.9.1 | noarch | Update repository of openSUSE Backports
   | cockpit                | srcpackage | 320-bp156.2.6.3 | noarch | Update repository of openSUSE Backports
   | cockpit                | srcpackage | 316-bp156.2.3.1 | noarch | Update repository of openSUSE Backports

My issue with Cockpit could be resolved by changing the repo priorities and installing the package from the Leap_15_6 repo, which apparently works fine:

  1. Lower the priority of the backports repo:
sudo zypper mr -p 100 repo-backports-update
  2. If already-installed packages should switch repo, run a dup:
sudo zypper dup

For a “stable” (NAS) server I would have expected the backports repo to at least have a lower priority.

I believe the priority setup for the openSUSE repos just follows what upstream uses. Looking at a plain Leap installation, the priorities and enabled flags are set up the same way. If I remember correctly, the backports repo is what allows Leap to offer additional packages, maintained by volunteers unaffiliated with SUSE, without impacting the baseline scope of the kernel/package versions. While called backports, they’re still considered stable.

I found this article as well, though it’s a bit older:

https://en.opensuse.org/Portal:Backports

Not that it really matters, since you were able to resolve it, but it’s the first time I’ve heard of this causing a problem.

Out of curiosity, could an alternative have been to force a specific version install instead of changing the repo priorities?

2 Likes

Yep, that’s also possible.
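
E.g. something along these lines should do it without touching priorities (a sketch; the repo alias and version are taken from the zypper output above):

# install cockpit specifically from the Leap repository
sudo zypper install --from Leap_15_6 cockpit
# or pin an exact version
sudo zypper install 'cockpit=309-bp156.1.7'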


I apologize, my earlier conclusion was drawn too fast … :grimacing:

  1. I have just put the priority of the backports repo back to default (99) and observed the changes (made with zypper dup):
    It’s only the Cockpit-related packages that change version.

  2. Although the “older” Cockpit version from Leap 15.6 was working, it also had a bug that had already been fixed in the backports repo three months earlier.

So having the backports repo at the same priority seems like a sensible setting for openSUSE :blush: :+1:


Regarding my issue with cockpit:
Interestingly enough, the “new” version from backports is now running without any issues …
:drum: :drum: :drum:
… because they switched the system username used by Cockpit. Parts of the new openSUSE package still used the old username, which was not set up by the new package but had been set up when using the old package in the meantime :joy: :joy:

EDIT:
I just found that there is already a bug ticket for this:
https://bugzilla.suse.com/show_bug.cgi?id=1230546

3 Likes

Excellent!

FYI, for the Samba/NFS observations described above, I have opened an issue on GitHub:

3 Likes

Thank you :+1:


I have another question regarding some systemd units
… and I notice that this thread has already taken on the character of a diary of my journey of switching to Leap 15.6 :joy:

  1. I have written a custom script & systemd unit for a backup (using btrbk), and I have tapped into the mount points under /mnt2.
    My issue: when the system starts up, my script runs and fails because the mount points are not yet mounted under /mnt2 by Rockstor.
    Question: is there an easy relationship I could establish to other Rockstor systemd units so that my unit only runs after the volumes are mounted under /mnt2?

I had a look at the Rockstor unit files and I think that rockstor-bootstrap.service is the last unit to be loaded?
I already tried putting a Requires=rockstor-bootstrap.service relationship into my unit file, but it still fails because the directories under /mnt2 are not mounted yet.

This is not an important topic for me; I just want to check whether one of the developers knows if a systemd unit relationship can be established so that my unit is started only after /mnt2 is mounted.

  2. I have noticed that the service dmraid-activation.service is enabled and fails on every boot.
admin@Kolibri:~> sudo systemctl status dmraid-activation.service 
× dmraid-activation.service - Activation of DM RAID sets
     Loaded: loaded (/usr/lib/systemd/system/dmraid-activation.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Wed 2024-09-25 20:52:38 CEST; 22min ago
   Main PID: 780 (code=exited, status=1/FAILURE)
        CPU: 4ms

Sep 25 20:52:38 Kolibri systemd[1]: Starting Activation of DM RAID sets...
Sep 25 20:52:38 Kolibri dmraid[780]: no raid disks
Sep 25 20:52:38 Kolibri systemd[1]: dmraid-activation.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 20:52:38 Kolibri systemd[1]: dmraid-activation.service: Failed with result 'exit-code'.
Sep 25 20:52:38 Kolibri systemd[1]: Failed to start Activation of DM RAID sets.

This does not have any negative consequences; I just noticed it as the only unit that fails to start (it has a red flag in the Cockpit web-UI).

And I was just curious why this unit is even enabled, because the btrfs RAID philosophy discourages the use of (an underlying) dm-raid, and to my knowledge there is no Rockstor element that would configure a dm-raid.

Cheers Simon

1 Like

On the best dependency connection for your scenario, I’ll let @phillxnet or @Flox answer.
I think the dmraid-activation.service actually comes with the upstream JeOS image, as opposed to being instituted by Rockstor. If I’m correct in that, then it’s in line with trying to minimize tweaking of upstream layouts/setups.

Update: I actually found an issue on the repository that @phillxnet posted, stating that this is kept alive by Rockstor, but is also inconsequential to the functioning of Rockstor:

Could you use RequiresMountsFor= instead, which would tie it to “absolute” mount point(s)? It wouldn’t be as universal: if you decided to change your pools/shares, etc. in the future, you would possibly have to adjust it.

https://www.freedesktop.org/software/systemd/man/latest/systemd.unit.html#RequiresMountsFor=
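
As a rough sketch of what that could look like in the unit file (the path is only an example share mount point and would need adjusting):

[Unit]
# wait until the backup source is mounted before btrbk runs
RequiresMountsFor=/mnt2/old-photos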

and here is the mounting piece in systemd that the above option relies upon:

https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html

I have to give credit to this resolved GitHub issue, which highlighted some of the dependencies, etc.

3 Likes

First of all, I would like to announce that I have written a (somewhat extensive) post about using VMs with Cockpit on top of Rockstor.

Feel free to copy & modify this guide in case you would like to add a section about VMs to the Rockstor documentation.


I agree that it is inconsequential; I only noticed it because Cockpit warned me about a failing service with a red exclamation mark:

As it is the only failing service, I will probably simply disable it to get rid of the warning :sweat_smile:
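
For completeness, that would just be the usual (assuming nothing else on the system relies on dmraid):

# stop it now and keep it from starting at boot
sudo systemctl disable --now dmraid-activation.service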


Thank you for pointing that out; I was not aware of this feature, but it sounds great for my purpose.

Although, as I have to write the mount unit anyway regardless of external scripts, I will probably switch to another mount location (probably a subfolder of /mnt) so as not to entangle my script with the Rockstor scripts.

1 Like

@simon-77 Hello again,
re:

You may also need an After=, i.e. such as we do for our own systemd services:

This can help to serialise the services. Let us know if this works. I don’t think the suggested mount option will work, as we don’t register our bootstrap service as a systemd mount service as such. And I would have thought the indicated RequiresMountsFor= would only reference systemd-mounted mount points. But I don’t actually know.
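
A minimal sketch of the combination, assuming a custom unit name along the lines of btrbk-backup.service (a placeholder):

# e.g. in /etc/systemd/system/btrbk-backup.service
[Unit]
Requires=rockstor-bootstrap.service
After=rockstor-bootstrap.service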

But @Hooverdan’s issue link suggested the following addition to their docs:

Systemd requires a corresponding mount unit file to support the unit where the RequiresMountsFor= reference is placed. Systemd doesn’t automatically infer mount points that are outside of its control.

So it may be that we should ourselves be instantiating systemd mount unit files dynamically as we create/mount at least the Pools. More food for thought. If folks with more knowledge could chip in on this one with corrections etc., that would be great. We do try to fit in with systemd as much as we can, hence the more recent addition of our rockstor-build service.

@simon-77 Re your Cockpit how-to: nice. This may well make for a good howto in our docs. Care to present it via a PR, as there would then be proper attribution? But I do have one reservation: I think I remember reading something about openSUSE dropping support for Cockpit, something to do with a shared library or other that they are no longer willing to maintain. That might be important context to properly discern for a pending docs howto. My apologies for not being able to look this up currently: pulled in many directions etc.

Hope that helps, at least with some context. And once you have a working set of systemd options that ensures our mounts are all done, that would also make for a nice little how-to, as it is likely going to be super useful for others wanting the same.

1 Like

Re:

The following seems to boast about its inclusion, so I am not sure what I was remembering/misremembering then:

The inclusion of the Cockpit[1] package in openSUSE Leap 15.6 represents a significant enhancement in system and container management capabilities for users. This integration into Leap 15.6 improves usability and access as well as providing a link between advanced system administration and user-friendly operations from the web browser. The addition underscores openSUSE’s commitment to providing powerful tools that cater to both professionals and hobbyists. Leap does not come with a SELinux policy, so SELinux capabilities for Cockpit are not functioning.

2 Likes

Thank you @phillxnet for your extensive answer as well :blush:

Regarding my systemd unit file issue …

I have to admit that I “solved” it in the meantime by mounting the required filesystems under /mnt separately.

Regardless of the startup-sequence dependency, I think it is actually neater that my backup script is not tapping into the /mnt2 structure.

Although this sounds great at first, I have to admit that my expertise regarding systemd is very limited - I actually didn’t know about the mount unit files at all before @Hooverdan pointed them out.

My only thought on this is that systemd has always seemed like a “static” configuration to me (which requires reboots for certain changes to take effect, …), and I therefore don’t know how easy or complicated it is to modify these dynamically - especially as the Pools are only discovered dynamically, if I understood that correctly.

1 Like