So in the previous 15.3 (V4.5.5) version, I had set up two Rock-ons: one custom, and Transmission.
After a fresh reinstall to the 15.4 (V4.5.5) version, I imported the existing data disks, pools & shares into Rockstor, then proceeded to enable the Rock-ons service.
After its activation, the two Rock-ons are now running; however, they are not showing in the Installed list in the Web-UI.
I am not that bothered by the custom one, but the Transmission one should probably show up, no?
@aremiaskfa Hello again, and thanks for all the reporting.
When one imports a pool, it does not auto-configure the Rock-ons. That would be a config of the system, and Rock-ons are in fact intended to be restored via the config restore process:
“Configuration Backup and Restore” : https://rockstor.com/docs/interface/system/config_backup.html
I.e. “Rock-on configuration”.
When you "… enable[d] the Rock-ons service":
You re-instated the existing Rock-ons root, and so docker did its thing with the existing Rock-ons on disk; but we can't work backwards from them to reconstruct the Web-UI state. Our config restore, on the other hand, does re-install these. What you have now is the equivalent of non-Rockstor-managed docker containers running (mostly) as they would if you had installed them by hand. In those cases they are considered as running outside of Rockstor: not Rockstor managed. Keep in mind that, as with a goodly number of btrfs features (and linux features, come to think of it), we only support what we have set up, or have restored. So your 'old' docker images are now not considered as managed by the new Web-UI.
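If you want to see this state for yourself, here is a minimal sketch (it assumes only the stock docker CLI on the box; nothing in it is Rockstor-specific) that asks the docker daemon what it is running, which will include containers the fresh Web-UI is not yet tracking:

```python
# Minimal sketch: ask the docker daemon directly what it is running,
# independent of what the Rockstor Web-UI currently tracks.
# Assumes only the stock docker CLI on PATH and a running daemon.
import subprocess

result = subprocess.run(
    ["docker", "ps", "--all", "--format", "{{.Names}}\t{{.Image}}\t{{.Status}}"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    name, image, status = line.split("\t", 2)
    print(f"{name:<25} {image:<40} {status}")
```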
So, in short, importing the pool does as it states: vols/subvols/snapshots.
Restoring config should re-instate Rock-ons, assuming your system disk wasn't used for the Rock-ons root or any shares, of course. And if the suggested dedicated shares were used, that bodes well for success on this front.
The Fix, if you don't have a saved config, is simply to re-install them, choosing the exact same settings/shares/etc. They will then be momentarily uninstalled and then re-installed, but as managed docker images (Rock-ons).
Thanks for giving further testing on the new installer front, by the way; much appreciated. I'm assuming this is what you mean here:
But the new installer is currently 4.5.4-0 [EDIT: sorry, I was thinking of 4.5.6-0], as it just missed the 4.5.5-0 [EDIT: proposed 4.5.6-0], so do make sure to upgrade to that (which I'm assuming you did, actually) in testing as we move towards the next stable release.
This is not stated in the documentation at all. If not having Rock-on configs on the system drive is crucial for recovery purposes, then:
1. This should be clearly stated in the documentation. The documentation does state that a dedicated share should be created, but it doesn't say not to use the system drive.
2. Maybe the Wizard should not list shares created on the system drive when adding a Rock-on.
3. It might also be a good idea to mention this in the "Data-Loss prevention" documentation.
I can do some work in this area if I find the time this week.
The Fix mentioned by @phillxnet works. Both Rock-ons are now managed, with all previous data & config intact.
Re the documentation update: +1. However, since people don't read the manual, I suggest putting a warning banner in the Web-UI whenever something important needs to be noticed by the end-user.
I don't know if this is important enough to warrant such a banner, but I know I would be grateful if an "in case the backup cfg cannot be restored" tooltip with @phillxnet's Fix were present somewhere on the Backup Config page.
@stitch10925 Great to hear you are having a go at the docs. There are always improvements to be made there.
Re:
Note that all Rock-ons will then be installed into this shared area but each will remain independent
Yes, if the guidelines for using data and config etc. shares are followed. From memory, this relates to the Rock-ons-root, where we store the running-code (replaceable via download/re-install) bit of each Rock-on. They can easily share this bit, as the docker images (using our btrfs backend) each know the subvols/snapshots they are composed of. When docker uses a btrfs back-end, such as how we run it, it uses snapshots to manage the layers of the various docker images. Take a look at the following issue we have open on how we fail to inform folks about this:
Copying in here the included tech reference for this:
By science, I mean the docker system manages its own image layers; we just have to provide it the backing storage. Plus, well-formed docker images are ephemeral bar their intended config/data persistent-storage volumes: the bit we try to persuade folks to keep independent of one another, and independent between dockers/Rock-ons also.
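As a hedged illustration of "docker manages its own layers on storage we provide": listing the subvolumes under the Rock-ons root shows one subvol/snapshot per image layer. The /mnt2/rock-ons-root path below is an assumption; substitute whatever your own Rock-ons root share is called:

```python
# Hedged sketch: count the per-layer subvolumes docker's btrfs graph
# driver has created. The mount point below is an assumption; adjust it
# to your own Rock-ons root share. Needs root for the btrfs command.
import subprocess

ROCKONS_ROOT = "/mnt2/rock-ons-root"  # hypothetical share mount point

out = subprocess.run(
    ["btrfs", "subvolume", "list", ROCKONS_ROOT],
    capture_output=True, text=True, check=True,
).stdout
# Docker's btrfs backend keeps each image layer as its own
# subvolume/snapshot under <data-root>/btrfs/subvolumes.
layers = [line for line in out.splitlines() if "btrfs/subvolumes" in line]
print(f"{len(layers)} docker layer subvolume(s) under {ROCKONS_ROOT}")
```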
In fact, if two images share a base-layer image, I think the local snapshot/subvol is shared between them. So combining the non-persistent, ephemeral elements of multiple Rock-ons via the rock-ons-root can actually end up saving space and memory.
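To make that space-saving concrete, here is a small sketch comparing the RootFS layer digests of two images; the image names are only examples. Any leading digests they have in common are layers stored once and shared:

```python
# Hedged sketch: compare the layer digests of two images. Identical
# leading digests mean a shared base that is stored only once.
# The image names are illustrative examples, not Rockstor internals.
import json
import subprocess

def layer_digests(image: str) -> list[str]:
    out = subprocess.run(
        ["docker", "image", "inspect", image,
         "--format", "{{json .RootFS.Layers}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

a = layer_digests("linuxserver/transmission")  # example image
b = layer_digests("linuxserver/plex")          # example image
shared = sum(1 for x, y in zip(a, b) if x == y)
print(f"{shared} shared base layer(s) out of {len(a)} and {len(b)}")
```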
I think we need to establish that folks needn't worry about the base architecture of docker's btrfs backend, as that is all we are using here via the Rock-ons root.
Hope that answers your question. And thanks again for taking a look at the docs. Keep in mind also, somewhat self-referencing in the context of the area you are contributing to, that we have a docker image, set up by @Flox, that can in turn enable you to run a fully configured version of Sphinx appropriate for testing our docs. Take a look at the following doc section for instructions:
I know how Docker image layering works, but the section this quote comes from mainly talks about shares and where Rock-on data "lives". So to me, saying:
Note that all Rock-ons will then be installed into this shared area but each will remain independent
sounds like the Rock-on data and configuration are kept separate somehow, which, if I'm correct, is not the case. If it is, then why is it suggested to have a share for each Rock-on, and why was there a discussion about having sub-folders in a Rock-on share to keep config data separated? Or am I just really confused right now?
P.S.:
I created the following PR to update the documentation:
@stitch10925 Hello again, I think we are getting there:
Re:
But it is the case. That is why we tell folks to create dedicated shares, named such-and-such, plex-config for example, to store such things as the Plex config. That is the persistent storage mapped by docker/Rock-ons to store the config, in this case separately from 1. other Rock-ons, and 2. the appliance part of the docker install that constitutes the Rock-on.
I.e.
- rock-ons-root for the install of the docker applications, all of them.
- individual and dedicated shares for each of the mapped volumes, as suggested by every mouse-over on every Rock-on with this capability (almost all of them).
This is to separate concerns between persistent config/data and the ephemeral/replaceable download of the docker image (Rock-ons-root). The latter is non-unique. So we suggest folks create dedicated shares (subvols) to hold each specific install's configuration, to enable the persistent storage required by most of the Rock-ons (docker instances). What is maybe being confused here is docker program content versus docker data/config storage. We maybe need a "Don't re-use the rock-ons-root" notice, or a fence/prevention to that effect, as it seems like you are thinking folks should use the rock-ons-root for everything, not just the storage of the Rock-ons' program part.
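As a hedged sketch of that separation (share names and container paths here are illustrative placeholders, not exact Rockstor internals), a Rock-on install boils down to roughly this: the image content lands in docker's data-root on the Rock-ons root share, while each persistent volume maps to its own dedicated share:

```python
# Hedged sketch of the separation of concerns described above.
# Share names and container paths are illustrative placeholders.
import subprocess

subprocess.run(
    [
        "docker", "run", "-d", "--name", "plex",
        # dedicated share for persistent config: survives a re-install
        "-v", "/mnt2/plex-config:/config",
        # another dedicated share for the media data, also persistent
        "-v", "/mnt2/plex-data:/data",
        # the image itself is the ephemeral/replaceable part; it lives
        # in docker's data-root, i.e. the Rock-ons root share
        "linuxserver/plex",
    ],
    check=True,
)
```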
That is a further nuance that I am reluctant to address, because we have an instance here of exactly the confusion that arises even before we introduce yet another layer (sub-dirs within multi-use shares); which, incidentally, we have now blocked as of 4.5.6-0, specifically to avoid folks using one share for multiple purposes within a Rock-on's setup.
It is always difficult to plot a course between flexibility and usability. That is what we are trying to do with Rockstor:
1. Flexibility - use a base linux and configure via a near-infinite command line.
2. Usability - can't be doing with 1.; I accept some limitations.
At least one of us, and likely all of us actually. Hence having folks contribute to the docs, such as you have done; much appreciated. I'll try and take a look later, but from the discussion here so far we are still not quite eye-to-eye on how best to describe this so that we both understand what each is talking about. Maybe your PR helps in that regard; I'll take a look.
Keep in mind we are trying to steer folks towards best practice, but we also have a technical user base: we are, after all, a DIY NAS. Folks mainly install this OS and configure it themselves. They are thus of some technical prowess, and along with that they like/expect flexibility. It's quite the challenge, and I think the docs are the place to hash this out, followed closely by developer docs written in a non-exclusive language, followed a little way back by well-documented code.
Hope that helps, and I'll try and take a look at the doc PR soon if someone doesn't beat me to it.