Missing repository Rockstor Stable on 15.6

I actually do see the lines of my SMB custom configuration, although they are spread out over multiple JSON elements (not sure what the correct term is; the lines belonging to the same config end up in different {} blocks).

{"model": "storageadmin.sambashare", "pk": 1, "fields": {"share": 39, "path": "/mnt2/scanner", "comment": "SMB share for scanner", "browsable": "yes", "read_only": "no", "guest_ok": "no", "shadow_copy": false, "time_machine": false, "snapshot_prefix": null}}, {"model": "storageadmin.sambashare", "pk": 2, "fields": {"share": 30, "path": "/mnt2/old-photos", "comment": "Samba-Export", "browsable": "yes", "read_only": "no", "guest_ok": "no", "shadow_copy": false, "time_machine": false, "snapshot_prefix": null}}, {"model": "storageadmin.sambacustomconfig", "pk": 1, "fields": {"smb_share": 1, "custom_config": "valid users = @scanner"}}, {"model": "storageadmin.sambacustomconfig", "pk": 3, "fields": {"smb_share": 1, "custom_config": "directory mask = 0775"}}, {"model": "storageadmin.sambacustomconfig", "pk": 4, "fields": {"smb_share": 1, "custom_config": "force group = scanner"}}, {"model": "storageadmin.sambacustomconfig", "pk": 6, "fields": {"smb_share": 2, "custom_config": "valid users = @family"}}, {"model": "storageadmin.sambacustomconfig", "pk": 10, "fields": {"smb_share": 2, "custom_config": "force user = admin"}}, {"model": "storageadmin.sambacustomconfig", "pk": 11, "fields": {"smb_share": 2, "custom_config": "force group = users"}}, {"model": "storageadmin.sambacustomconfig", "pk": 12, "fields": {"smb_share": 1, "custom_config": "create mask = 0664"}}, {"model": "storageadmin.sambacustomconfig", "pk": 15, "fields": {"smb_share": 2, "custom_config": "directory mask = 0755"}}, {"model": "storageadmin.sambacustomconfig", "pk": 16, "fields": {"smb_share": 2, "custom_config": "create mask = 0644"}}, 

For reference, this is a rockstor screenshot of the “scanner” share:

Interestingly enough there is an “editable” field in the JSON file which is set to “rw” …
And the Host String was restored correctly to 10.71.128.0/24 in Rockstor.

{"model": "storageadmin.nfsexportgroup", "pk": 2, "fields": {"host_str": "10.71.128.0/24", "editable": "rw", "syncable": "async", "mount_security": "insecure", "nohide": false, "enabled": true, "admin_host": null}},

Note: the JSON file is exported from my old Rockstor 4.6.1 installation.

2 Likes

Hey guys, sorry for bothering again :sweat_smile:

I just wanted to check if the zypper repository setup is intended the way it is in Rockstor built on Leap 15.6:

admin@Kolibri:~> zypper lr -P
#  | Alias                              | Name                                                                                        | Enabled | GPG Check | Refresh | Priority
---+------------------------------------+---------------------------------------------------------------------------------------------+---------+-----------+---------+---------
 6 | home_rockstor_branches_Base_System | home_rockstor_branches_Base_System                                                          | Yes     | (r ) Yes  | Yes     |   97
 1 | Leap_15_6                          | Leap_15_6                                                                                   | Yes     | (r ) Yes  | Yes     |   99
 2 | Leap_15_6_Updates                  | Leap_15_6_Updates                                                                           | Yes     | (r ) Yes  | Yes     |   99
 3 | Rockstor-Stable                    | Rockstor-Stable                                                                             | Yes     | (r ) Yes  | Yes     |   99
 7 | repo-backports-debug-update        | Update repository with updates for openSUSE Leap debuginfo packages from openSUSE Backports | No      | ----      | ----    |   99
 8 | repo-backports-update              | Update repository of openSUSE Backports                                                     | Yes     | (r ) Yes  | Yes     |   99
 9 | repo-openh264                      | repo-openh264                                                                               | Yes     | (r ) Yes  | Yes     |   99
10 | repo-sle-debug-update              | Update repository with debuginfo for updates from SUSE Linux Enterprise 15                  | No      | ----      | ----    |   99
11 | repo-sle-update                    | Update repository with updates from SUSE Linux Enterprise 15                                | Yes     | (r ) Yes  | Yes     |   99
 5 | home_rockstor                      | home_rockstor                                                                               | Yes     | (r ) Yes  | Yes     |  105
 4 | filesystems                        | Filesystem tools and FUSE-related packages (15.6)                                           | Yes     | (r ) Yes  | No      |  200

Don’t mind the last filesystems repo, which I have added manually.

I was just wondering whether the repo-backports-update repository is intentionally (1) enabled and (2) left at the default priority instead of a “lower” one (= a number larger than 99).

While fiddling around with Cockpit I apparently ran into an issue with the newer version from the backports repo. It was selected by zypper during installation because the repos have the same priority, and with equal priorities the newer version wins.

admin@Kolibri:~> sudo zypper search --details cockpit
[sudo] password for root: 
Loading repository data...
Reading installed packages...

S  | Name                   | Type       | Version         | Arch   | Repository
---+------------------------+------------+-----------------+--------+----------------------------------------
i+ | cockpit                | package    | 321-bp156.2.9.1 | x86_64 | Update repository of openSUSE Backports
v  | cockpit                | package    | 320-bp156.2.6.3 | x86_64 | Update repository of openSUSE Backports
v  | cockpit                | package    | 316-bp156.2.3.1 | x86_64 | Update repository of openSUSE Backports
v  | cockpit                | package    | 309-bp156.1.7   | x86_64 | Leap_15_6
   | cockpit                | srcpackage | 321-bp156.2.9.1 | noarch | Update repository of openSUSE Backports
   | cockpit                | srcpackage | 320-bp156.2.6.3 | noarch | Update repository of openSUSE Backports
   | cockpit                | srcpackage | 316-bp156.2.3.1 | noarch | Update repository of openSUSE Backports

My issue with Cockpit could be resolved by changing the repo priorities and installing the package from the Leap_15_6 repo, which is apparently working fine:

  1. Lower the priority of the backports repo:
sudo zypper mr -p 100 repo-backports-update
  2. If already installed packages should switch to the other repo, run a distribution upgrade:
sudo zypper dup

For a “stable” (NAS) server I would have expected the backports repo to at least have a lower priority.

I believe the priority setup for the openSUSE repos just follows what upstream uses. Looking at a plain Leap installation, the priorities and enabled flags are set up the same way. If I remember correctly, the backports repo is what allows Leap to offer additional packages, maintained by volunteers unaffiliated with SUSE, without impacting the baseline scope of the kernel/package versions. While called backports, they’re still considered stable.

I found this article as well, though it’s a bit older:

https://en.opensuse.org/Portal:Backports

Not that it matters really, since you were able to resolve it, but this is the first time I’ve heard of this causing a problem.

Out of curiosity, could an alternative have been to force a specific version install instead of changing the repo priorities?
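
For example, a pinned install might have looked something like this (just a sketch, not tested; the version string is the one shown by the zypper search output above):

# Install the Leap 15.6 build of cockpit explicitly rather than the backports one
sudo zypper install --oldpackage cockpit=309-bp156.1.7
# ...or restrict the resolver to a single repository
sudo zypper install --from Leap_15_6 cockpit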

2 Likes

Yep, that’s also possible.


I apologize, my earlier conclusion was drawn too fast … :grimacing:

  1. I have just put the priority of the backports repo back to default (99) and observed the changes (made with zypper dup; see the sketch below):
    It’s only the cockpit-related packages that change version.

  2. Although the “older” cockpit version from Leap 15.6 was working, it also had a bug that had already been fixed in the backports repo three months ago.
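
For anyone retracing this, the check itself is only two commands (repo alias as listed in the zypper lr output above; --dry-run just previews what a dup would change):

sudo zypper mr -p 99 repo-backports-update
sudo zypper dup --dry-run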

So having the backports repo at the same priority seems like a sensible setting for openSUSE :blush: :+1:


Regarding my issue with cockpit:
Interestingly enough the “new” version from backports is running now without any issues …
:drum: :drum: :drum:
… because they switched the system username used by cockpit. Parts of the new openSUSE package still used the old username, which the new package no longer sets up; it only existed on my system because I had used the old package in the meantime :joy: :joy:

EDIT:
Just found that there is already a bug ticket for this:
https://bugzilla.suse.com/show_bug.cgi?id=1230546
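
If someone wants to check their own box, the cockpit-related system users and units can be listed quickly (generic checks only; the exact user names involved are the ones discussed in the bug ticket):

getent passwd | grep -i cockpit
systemctl list-units 'cockpit*' --all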

3 Likes

Excellent!

FYI, for the Samba/NFS observations described above, I have opened an issue on GitHub:

3 Likes

Thank you :+1:


I have another question regarding some systemd units
… and I notice that this thread has already turned into a diary of my journey of switching to Leap 15.6 :joy:

  1. I have written a custom script & systemd unit for a backup (using btrbk), and I have tapped into the mount points under /mnt2.
    My issue: when the system starts up, my script runs and fails because the mount points under /mnt2 have not yet been mounted by Rockstor.
    Question: is there an easy relationship I could establish to the Rockstor systemd units so that my unit only runs after the volumes are mounted under /mnt2?

I had a look at the Rockstor unit files and I think that rockstor-bootstrap.service is the last unit to be started?
I already tried putting a Requires=rockstor-bootstrap.service relationship into my unit file, but it still fails because the directories under /mnt2 are not mounted yet.

This is not an important topic for me, I just want to check whether one of the developers knows if a systemd unit relationship can be established so that my unit is started only after /mnt2 has been mounted.
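
For anyone digging into the same question, the Rockstor units and the ordering of the bootstrap service can be inspected like this (plain systemctl queries, nothing Rockstor-specific):

systemctl list-unit-files 'rockstor*'
systemctl show rockstor-bootstrap.service -p Requires -p After -p Wants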

  2. I have noticed that the service dmraid-activation.service is enabled and fails on every boot.
admin@Kolibri:~> sudo systemctl status dmraid-activation.service 
× dmraid-activation.service - Activation of DM RAID sets
     Loaded: loaded (/usr/lib/systemd/system/dmraid-activation.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Wed 2024-09-25 20:52:38 CEST; 22min ago
   Main PID: 780 (code=exited, status=1/FAILURE)
        CPU: 4ms

Sep 25 20:52:38 Kolibri systemd[1]: Starting Activation of DM RAID sets...
Sep 25 20:52:38 Kolibri dmraid[780]: no raid disks
Sep 25 20:52:38 Kolibri systemd[1]: dmraid-activation.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 20:52:38 Kolibri systemd[1]: dmraid-activation.service: Failed with result 'exit-code'.
Sep 25 20:52:38 Kolibri systemd[1]: Failed to start Activation of DM RAID sets.

This does not have any negative consequences, I just noticed it as the only unit that is failing to start (has a red flag in the cockpit web-UI).

And I was just curious why this unit is even enabled, because the btrfs RAID philosophy discourages the use of (an underlying) dm-raid, and to my knowledge there is no Rockstor element that would configure a dm-raid.

Cheers Simon

1 Like

On the best dependency connection for your scenario, I’ll let @phillxnet or @Flox answer.
I think the dmraid-activation.service actually comes with the upstream JeOS image, as opposed to being instituted by Rockstor. If I’m correct in that, then it’s in line with trying to minimize tweaking of upstream layouts/setups.

update: I actually found an issue on the repository that @phillxnet posted, stating that this is kept alive by Rockstor, but is also inconsequential to the functioning of Rockstor:

Could you use RequiresMountsFor= instead? That would tie it to “absolute” mount point(s), so it wouldn’t be as universal: if you decided to change your pools/shares etc. in the future, you would possibly have to adjust it.

https://www.freedesktop.org/software/systemd/man/latest/systemd.unit.html#RequiresMountsFor=
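
A minimal sketch of how that could look for the custom backup unit (the unit name btrbk-backup.service and the /mnt2 paths are just examples taken from this thread):

sudo systemctl edit btrbk-backup.service
# ...then add in the editor:
#   [Unit]
#   RequiresMountsFor=/mnt2/scanner /mnt2/old-photos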

and the mount handling in systemd that the above option relies upon:

https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html

I have to give credit to this resolved github issue that highlighted some of the dependencies, etc.

3 Likes

First of all I would like to announce that I have written a (somewhat extensive) post about using VMs with cockpit on top of Rockstor

Feel free to copy & modify this guide in case you would like to add a section about VMs to the Rockstor documentation.


I agree that it is inconsequential & I only noticed it because cockpit warned me about a failing service with a red exclamation mark:

As it is the only failing service, I will probably simply disable it to get rid of the warning :sweat_smile:
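
For the record, that is a one-liner (masking instead of disabling would additionally stop anything else from pulling it in):

sudo systemctl disable --now dmraid-activation.service
# or, more drastically:
sudo systemctl mask dmraid-activation.service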


Thank you for pointing that out, I was not aware of this feature but it sounds great for my purpose.

Although, as I have to write the mount unit anyway, I will probably switch to another mount location (probably a subfolder of /mnt) so as not to entangle my script with the Rockstor scripts.

1 Like

@simon-77 Hello again,
re:

You may also need an After=, i.e. such as we do for our own systemd services:

This can help to serialise the services. Let us know if this works. I don’t think the suggested mount option will work, as we don’t register our bootstrap service as a systemd mount service as such. And I would have thought the indicated RequiresMountsFor= would reference only systemd-managed mount points. But I don’t actually know.
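
As a sketch of that combination (the unit name btrbk-backup.service is just an example from earlier in this thread; note that Requires= alone does not imply any ordering, which is why the explicit After= matters):

sudo systemctl edit btrbk-backup.service
# ...and add:
#   [Unit]
#   Requires=rockstor-bootstrap.service
#   After=rockstor-bootstrap.service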

But the issue @Hooverdan linked suggested the following addition to their docs:

Systemd requires a corresponding mount unit file to back the unit where the RequiresMountsFor= reference is placed. Systemd doesn’t automatically infer mount points that are outside of its control.

So it may be we should ourselves be instantiating systemd mount unit files dynamically as we create/mount at least Pools. More food for thought. If folks could chip in on this one with more knowledge/corrections etc that would be great. We do try to fit-in re systemd as much as we can. Hence the more recent addition of our rockstor-build service.
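
Purely for illustration, a hand-written mount unit for a single pool could look like the sketch below (the pool label “mypool”, device path, and options are hypothetical; Rockstor does not generate such units today):

sudo tee /etc/systemd/system/mnt2-mypool.mount >/dev/null <<'EOF'
[Unit]
Description=Example Rockstor pool mount

[Mount]
What=/dev/disk/by-label/mypool
Where=/mnt2/mypool
Type=btrfs
Options=defaults

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now mnt2-mypool.mount
# (the unit file name must match the mount path, i.e. /mnt2/mypool -> mnt2-mypool.mount)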

@simon-77 Re your Cockpit how-to: nice. And this may well make for a good howto in our docs. Care to present it via a PR, as there would then be proper attribution? But I do have one reservation. I think I remember reading something about openSUSE dropping support for Cockpit: something to do with some shared library or other that they are no longer willing to maintain. That might be important context to properly discern re a pending docs howto. My apologies for not being able to look this up currently: pulled in many directions etc.

Hope that helps, at least with some context. And once you have a working set of systemd options to ensure our mounts are all done: that would also make for a nice little how-to, as this is likely going to be super useful for others wanting the same.

1 Like

Re:

The following seems to boast about its inclusion: not sure what I was remembering/mis-remembering then:

The inclusion of the Cockpit[1] package in openSUSE Leap 15.6 represents a significant enhancement in system and container management capabilities for users. This integration into Leap 15.6 improves usability and access as well as providing a link between advanced system administration and user-friendly operations from the web browser. The addition underscores openSUSE’s commitment to providing powerful tools that cater to both professionals and hobbyists. Leap does not come with a SELinux policy, so SELinux capabilities for Cockpit are not functioning.

2 Likes

Thank you @phillxnet for your extensive answer as well :blush:

Regarding my systemd unit file issue …

I have to admit that I “solved” it in the meantime by mounting the required filesystems under /mnt separately.

Regardless of the startup-sequence dependency, I think it is actually neater that my backup script does not tap into the /mnt2 structure.

Although this sounds great at first, I have to admit that my expertise regarding systemd is very limited - I actually didn’t know about the mount files at all before @Hooverdan pointed them out.

My only thought on this is that systemd has always seemed like a “static” configuration to me (requiring reboots for certain changes to take effect, …), so I don’t know how easy or complicated it would be to modify these dynamically - especially as the Pools are only discovered dynamically, if I understood that correctly.
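
For what it’s worth, units can also be added and picked up at runtime without a reboot; a minimal sketch (the unit and job names here are hypothetical):

# Drop a unit into the runtime directory and have systemd pick it up immediately
sudo cp my-example.service /run/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start my-example.service
# Or run a one-off command as a transient unit, with no unit file at all
sudo systemd-run --unit=example-job /usr/bin/true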

1 Like

@simon-77 Re:

Now fixed in the config-backup restore code against @Hooverdan’s spin-off issue, created as a result of your reporting here:

The following testing rpm now has this and a goodly number of other fixes:

Thanks again for your detailed reporting here.

Hope that helps.

2 Likes