Adding storage to Rock-on resets configuration

Hi all.

I added a new storage mapping to my Sonarr and Radarr Rock-ons, which resulted in the config for both applications being reset.

Since I can’t remove a storage mapping once it’s in place, backing the change out isn’t an option, and restarting the Rock-on doesn’t help.

Why does this happen?

Thanks.

@MainrtNr5 Hello again:
Similar to your previous question on re-configuring, it’s an uninstall/re-install Rock-on thing. This helps us to keep the UI simple and works well, as the Rock-on then gets re-established from the get-go, bearing the existing config with the common shares used during the re-install, of course.

Although I think I may not have quite grasped your actual question here.

Could you elaborate on your UI issue here? Also, @Flox is more up on the internals of the Rock-ons currently, so you may get a more authoritative answer in time.

Sketch out the path taken in the Web-UI and where it seemingly took a turn you didn’t agree with. Adding a share to a Rock-on does stop and start the Rock-on, but it doesn’t reset its other shares. It may be, though, that your particular mapping caused that by overriding some config element with the new path established to map the new share.

Apologies for not quite getting your meaning on this one.

Full details on how our existing Rock-ons work internally have been excellently documented by @Flox in our technical wiki entries here:

and

That may help explain what’s going on differently from what you are expecting. Unless we have a bug, in which case great: we can narrow that down and sort it to work as intended.

Hope that helps.

1 Like

Hi @MainrtNr5,

If I understand correctly, you added storage to both of these rock-ons and you are now experiencing difficulties with both of them, is that correct? The first thing that comes to mind to explain such an issue would be that the new storage was added with a path that conflicts with an existing path in each of these Rock-ons. I could be completely wrong, of course, but it all depends on the path that was chosen for the “rock-on directory” field (see below):
https://rockstor.com/docs/interface/overview.html#add-storage

I would thus try to use a path that is unlikely to exist in either Rock-on. For instance, you can use something like /rock-media-share, or anything similar that is almost certain not to exist in a Rock-on.
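
To illustrate the kind of collision I mean, here is a rough sketch using the docker-py client (Rockstor itself does not necessarily do it this way, and the share paths and image name are made up for illustration):

```python
import docker

client = docker.from_env()

# If the added storage is bound onto a path the Rock-on already uses
# (e.g. /config), the new, empty share shadows whatever was there:
conflicting = {"/mnt2/SSDdownload": {"bind": "/config", "mode": "rw"}}

# A path like /rock-media-share is almost certain not to collide:
safe = {"/mnt2/SSDdownload": {"bind": "/rock-media-share", "mode": "rw"}}

client.containers.run("linuxserver/radarr", volumes=safe, detach=True)
```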

Yes, that is a current limitation: the only way to remove an added storage is to uninstall and reinstall the Rock-on. If you reuse the same shares and config for the reinstall, you will be in the same spot as before and not lose anything. You can then re-add a storage with the “rock-on directory” of your choice. See the relevant section of our docs:
https://rockstor.com/docs/interface/overview.html#uninstall-of-a-rock-on

We do have plans to improve this, of course, so that one can remove an added storage without needing to uninstall/reinstall the Rock-on; so far, this is possible only for rocknets.

Hope this helps,

1 Like

Hi @phillxnet!

Well, that doesn’t seem to be working as intended. :slight_smile:

Since I can’t explicitly specify a config share for Radarr and Sonarr, I have to rely on Rockstor keeping track of the config for these Rock-ons, which it doesn’t seem to do after mounting new storage.

Simply restarting the Rock-on doesn’t affect the config, but as soon as I add more storage it is reset. I could replicate the problem by adding an existing share (used for another Rock-on) using a random path.

I don’t have an issue with the Rockstor UI (at least not in this case, but that’s another post :slight_smile:); it’s the fact that the config for these Rock-ons was reset that’s annoying.

Also, the shared Rock-ons share is brand new, as I had to destroy the filesystem after one of my disks failed (but that’s on btrfs and not on you :grin:), so there should be no “residual” config screwing things up.

Hi @Flox!

Yes and no. They still start and run, but my config was reset after I added the storage.

In this case I mounted my Downloaded share as SSDdownload (which should be safe to use) to the container, which resulted in the config for Radarr being reset.

Here’s my current config for Radarr (including the random path I used to replicate the problem):
[screenshot: current Radarr share mappings]
Sonarr also lost its config when I mounted the Downloaded share as SSDdownload.

I’ll try to spend some time these holidays reading up on containers on Linux, and on the way Rockstor does things, to see if I can’t be of a bit more help. No promises though; time flies, as I’m sure you’re well aware. :grin:

2 Likes

@MainrtNr5 Hello again.

I can chip in on this bit:

Rockstor doesn’t save any settings for a Rock-on other than those associated with the shares it uses. Each Rock-on is a little different, but they are all essentially the same from the point of view that they store their settings and/or data in the shares you attribute to them. Some will store both config and data in the same share. Some have no state saved at all, e.g. the simpler ones such as the http->https redirect. So don’t expect any config to be saved other than in the shares you attribute to them. The Rock-ons root is just there to host their ‘system’ bit; their data and/or config resides exclusively in the share(s) you assign them. Hence a new share equals no prior settings.
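
If you want to see for yourself where a given Rock-on keeps its state, a quick sketch (via the docker-py client; the container name here is hypothetical) is to list the container’s mounts:

```python
import docker

client = docker.from_env()

# Print each host path (share) backing a path inside the container.
# Paths *not* listed here live in the container's writable layer and
# do not survive the container being re-created.
for mount in client.api.inspect_container("radarr")["Mounts"]:
    print(mount["Source"], "->", mount["Destination"])
```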

Hope that helps at least with this bit.

1 Like

Hi @MainrtNr5,

Thanks a lot for the additional information and screenshot… it does make me wonder about one thing: maybe you simply cropped the screenshot, but your list of Shares does not look right to me. The /config share seems to be missing.

The Radarr rock-on defines 3 volumes:

  • /config
  • /movies
  • /downloads

…defined here:

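For reference, the shape of that definition looks roughly like this (paraphrased here as a Python dict; the real entry is a JSON file in our rockon-registry repository, and the descriptions are illustrative rather than verbatim):

```python
radarr_definition = {
    "Radarr": {
        "containers": {
            "radarr": {
                "image": "linuxserver/radarr",
                "volumes": {
                    # Each entry here becomes one share the install wizard asks for.
                    "/config": {"label": "Config Storage", "description": "Share for Radarr's configuration."},
                    "/movies": {"label": "Movies Storage", "description": "Share for the movie library."},
                    "/downloads": {"label": "Downloads Storage", "description": "Share for completed downloads."},
                },
            }
        }
    }
}
```
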
As a result, Rockstor should ask you to pick 3 shares during the Radarr Rock-on install wizard. The resulting mapping should look something like this:

Now, if you somehow do not have that /config volume defined, that would explain why your Radarr config does not stick. Would you thus be able to confirm whether or not you have that volume defined in the Rock-on settings before we go any further in trying to figure that one out?

That’s related to a big difference between what a Rock-on restart and adding more storage trigger under the hood. Indeed, a restart just triggers a docker stop <container> when you turn the Rock-on OFF, followed by a docker start <container> when you turn the Rock-on back ON. Adding more storage, on the other hand, triggers the removal of the container(s), followed by their re-creation with new settings that include the newly-added storage. This is required because Docker does not support binding a new volume into a live container. You can read more about this in our documentation if interested: Rock-ons (Docker Plugins) — Rockstor documentation
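
In code terms, the two paths look roughly like this (a sketch via the docker-py client rather than Rockstor’s actual code; the container name, image, and paths are made up):

```python
import docker

client = docker.from_env()
container = client.containers.get("radarr")  # hypothetical container name

# Rock-on OFF then ON: the container, and all its volume bindings, survive.
container.stop()
container.start()

# "Add storage": the container must be removed and re-created, because a
# new volume cannot be bound into an existing container.
container.stop()
container.remove()
client.containers.run(
    "linuxserver/radarr",
    name="radarr",
    volumes={
        # the pre-existing mappings must be passed again...
        "/mnt2/radarr-config": {"bind": "/config", "mode": "rw"},
        # ...plus the newly added one:
        "/mnt2/SSDdownload": {"bind": "/rock-media-share", "mode": "rw"},
    },
    detach=True,
)
```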

Let us know what your Rock-on settings summary looks like so that we can make sure your /config volume is indeed defined; we should be able to troubleshoot further after that.

Thanks!

4 Likes

Hi @phillxnet!

That’s how I understood it as well, but since the Radarr and Sonarr Rock-ons didn’t allow me to specify a config share, I assumed (which you never should do) that these were different.

Hi @Flox!

Nope, that’s all there is. There’s a bigger issue, though. This is the result I got when I updated the Rock-ons to make sure I had the latest versions:

Traceback:
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 132, in post
self._create_update_meta(r, rockons[r])
File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
return func(*args, **kwargs)
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 310, in _create_update_meta
handle_exception(Exception(e_msg), self.request)
File "/opt/rockstor/src/rockstor/storageadmin/util.py", line 48, in handle_exception
status_code=status_code, detail=e_msg, trace=traceback.format_exc()
RockStorAPIException: ['Cannot add/remove volume definitions of the container (sabnzb) as it belongs to an installed Rock-on (sabnzb). Uninstall it first and try again.', 'Traceback (most recent call last):\n File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 132, in post\n self._create_update_meta(r, rockons[r])\n File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner\n return func(*args, **kwargs)\n File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 310, in _create_update_meta\n handle_exception(Exception(e_msg), self.request)\n File "/opt/rockstor/src/rockstor/storageadmin/util.py", line 48, in handle_exception\n status_code=status_code, detail=e_msg, trace=traceback.format_exc()\nRockStorAPIException: ['Cannot add/remove volume definitions of the container (Sonarr) as it belongs to an installed Rock-on (Sonarr). Uninstall it first and try again.', 'Traceback (most recent call last):\n File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 132, in post\n self._create_update_meta(r, rockons[r])\n File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner\n return func(*args, **kwargs)\n File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 310, in _create_update_meta\n handle_exception(Exception(e_msg), self.request)\n File "/opt/rockstor/src/rockstor/storageadmin/util.py", line 48, in handle_exception\n status_code=status_code, detail=e_msg, trace=traceback.format_exc()\nRockStorAPIException: [\'Cannot add/remove volume definitions of the container (plex-linuxserver.io) as it belongs to an installed Rock-on (Plex). Uninstall it first and try again.\', \'Traceback (most recent call last):\\n File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 68, in run_for_one\\n self.accept(listener)\\n File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 27, in accept\\n client, addr = listener.accept()\\n File "/usr/lib64/python2.7/socket.py", line 206, in accept\\n sock, addr = self._sock.accept()\\nerror: [Errno 11] Resource temporarily unavailable\\n\']\n']\n']

I’ll run the “Clean database of broken Rock-ons” script once more, I probably missed something when cleaning up after my broken filesystem.

Thanks so far for all the help, I’ll get back to you when I’ve cleaned up the database.

[Edit]

The “Rock-ons database” was already clean, so running the script didn’t do anything. Something did something, though: when I re-installed Radarr, I did get the chance to specify the correct folders, and this is how the mapping looks now:
[screenshot: Radarr share mappings after re-install]
When installing Sonarr, it’s still broken, unfortunately:
[screenshot: Sonarr share mappings, still broken]
I’ll configure Radarr and see if it behaves as expected, but since Sonarr can’t even be installed properly, I’ll leave it alone until we figure out why it’s acting up.

1 Like

@MainrtNr5 Thanks for the updates.
Re:

Another brute force approach, which you have likely already used before, is to start over with a fresh Rock-ons-root: Rock-ons (Docker Plugins) — Rockstor documentation

I.e. after cleaning/wiping all you can, turn off the Rock-ons service, create a new share for the rock-ons-root, and configure it within the Rock-ons service. Then, when you turn Rock-ons back on again, it’s in a whole new world. You can then delete the old rock-ons-root. This way you freshly pull all the docker images. Definitely a brute force approach, but I’ve seen it required when things get really messed up. We are making ongoing robustness improvements to the Rock-ons, but given that config and data live in other shares, this is usually not nearly as disruptive as it initially appears.

Also remember that snapshots within the rock-ons-root are actually the docker images themselves. We currently don’t guard/warn against this, but will in time. This is good to know, as one can easily break Rock-ons/docker by cleaning rock-ons-root snapshots: kind of like cutting the legs off the Rock-ons while they are running.
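
If you are curious, you can see that overlap for yourself (a hedged sketch: this assumes Docker’s btrfs storage driver and a made-up rock-ons-root path):

```python
import subprocess

ROCKONS_ROOT = "/mnt2/rock-ons-root"  # hypothetical rock-ons-root share path

# The image layers as btrfs sees them (subvolumes/snapshots)...
subprocess.run(["btrfs", "subvolume", "list", ROCKONS_ROOT], check=True)

# ...and the same layers as Docker sees them (images).
subprocess.run(["docker", "images"], check=True)
```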

Could you create a GitHub issue for this here:

That way we have attribution. It may well be that we have a duff Rock-on and it needs fixing or deleting. We did cull quite a few just recently; it’s something we had previously neglected. We really want to get to having only working Rock-ons, but there are many moving parts, so reports of duff ones are most welcome.

Hope that helps.

2 Likes

@MainrtNr5 Re:

We have the following issue for this:

Just for context.

2 Likes

The thing is, I just did that when I had to rebuild my RAID after a failed disk, so I’d rather keep the troubleshooting a bit more methodical; that should be of more use to you guys as well. :slight_smile:

Yeah, I know about not messing about with snapshots in the Rock-ons root, just so that I don’t break anything by mistake.

I’ll create an issue for the Sonarr install.

[Edit] Nope, there’ll be no new issue created, because now I get the option to add all the required shares:
[screenshot: Sonarr install wizard now offering all required shares]
I wish I had a good explanation for why it works now, but I don’t. :thinking:

Let me know if you want me to do any extra digging around in logs or elsewhere.

3 Likes