Rock-on share required for config? By design? Discuss... ;-)

So I've been playing with rock-ons and I find it curious that each docker volume has to be a share, rather than a generic and more flexible filesystem path.

E.g. Syncthing: I need a share for config and a share for the shared folder.

That stops me from having:

  • a single share for syncthing with ‘config/’ and ‘shared/’ inside.
  • a share for all rock-ons called 'all-rockon-data', with 'syncthing/config' and 'syncthing/shared' inside, as well as all my other rock-on config directories.

If I have 10 rock-ons installed, I probably have 12-15 shares that I need to make? That seems annoying to back up: rather than just backing up 'syncthing/' or 'all-rockon-data/', I now have to back up both the syncthing-config and syncthing-shared shares.
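
To illustrate what I mean (the paths below are just made up for the example, not what Rockstor actually creates), here are the two layouts expressed as docker volume mappings:

```python
# Hypothetical paths only -- illustrating the layouts, not Rockstor's actual behaviour.

# Today: one Rockstor share per docker volume (two shares just for Syncthing).
share_per_volume = {
    "/mnt2/syncthing-config": "/config",
    "/mnt2/syncthing-shared": "/shared",
}

# What I'd prefer: one share ('all-rockon-data') with per-rock-on subdirectories.
single_share = {
    "/mnt2/all-rockon-data/syncthing/config": "/config",
    "/mnt2/all-rockon-data/syncthing/shared": "/shared",
}

def docker_volume_args(mapping):
    """Turn {host_path: container_path} into the -v arguments for docker run."""
    return [arg for host, cont in mapping.items() for arg in ("-v", f"{host}:{cont}")]

# Both produce perfectly valid mounts; only the number of shares (and the backup story) differs.
print(docker_volume_args(share_per_volume))
print(docker_volume_args(single_share))
```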

If you are going to go the 'force everything to be a share' route, why don't you auto-create them when the rock-on is installed? These config shares can't be shared with other rock-ons or anything else, so by design you seem to be forcing people through a two-step process, with no choice but to make a new share and then link it to config/ (or whatever). So why not just auto-create them?

(I get that syncthing-shared might be different, but I guess I'm talking more about the individual config share for every single rock-on that needs to save state.)

Funny you should say that; it's on the cards. But we have bigger fish to fry and are a small team, at least in terms of regular committers.

But it's still important to keep separate shares, as otherwise one Rock-on may interfere with another in unpredictable ways, especially now that we have dozens of them. Plus each docker image rather assumes it's not sharing its data/config with any other docker container.

However, the core point here is about improving the usability. See the following forum thread/post:


from September 2015!!

This was raised as an issue on GitHub, prior to completion of the discussion (and initially without attribution), here:

Thanks again for your input. Much appreciated. Pull requests are welcome and we have yet to get your current one in, so apologies for that. It should be next in the list for attention, and likely a merge as it goes.

Again, we are a small team taking on a lot of shifting sands. But once we have our Rockstor 4 out of the door and have addressed the majority of our technical debt (updating our Django and doing the Python 2 to 3 migration), we can again start to directly address our feature / usability improvements.

Cheers.


(Please just say so if I'm being annoying posting/reporting all of these thoughts. I tend to point things out because I'm an old-school unix sysadmin who just can't hold back his 2 cents, but I'm not a fanatic about the ideas/observations. I'm 100% OK with 'we are too busy' or 'won't do it' and you don't have to explain. You have a groovy project and I'd hate to take time away from that in any way. You folks rock! I'm just an old git giving it a whirl.)

"Rock-ons may interfere" / "not sharing data/config". Yep. But /mnt2/all-dockers/syncthing and /mnt2/all-dockers/redis are not going to collide either. It's not the share that is stopping the collision, it's the full filesystem path (which is independent of the share). I don't see what you gain by making them separate shares (other than maybe hard-linked directories not working? not sure with btrfs) vs just separate directories. They only need their own directory to mount as a volume inside docker; that doesn't have to be a separate share.

If you are going to ponder auto-creating shares for each rock-on's config, why not consider making a per-rock-on config folder inside the share you have for holding rock-on state? That solves the problem (each rock-on has its own config folder) and yet it doesn't create lots of shares (in fact it removes some of the user input you need).
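
To make that concrete, here's a minimal sketch assuming a single hypothetical 'rock-on state' share; the share name and helper are mine, nothing from the Rockstor code:

```python
import os

# Assumed mountpoint of a single share dedicated to rock-on state -- hypothetical.
ROCKON_STATE_SHARE = "/mnt2/rockon-state"

def config_dir_for(rockon_name: str) -> str:
    """Create (if missing) and return the per-rock-on config directory to mount as /config."""
    path = os.path.join(ROCKON_STATE_SHARE, rockon_name, "config")
    os.makedirs(path, exist_ok=True)
    return path

# e.g. the install step could then mount config_dir_for("syncthing") into the
# container as /config, with no extra share (and no extra user input) required.
```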

#include warning <I haven't read the code so I'm making assumptions here>

I'm actually interested in the idea of looking at making the docker stuff docker-swarm compatible, and it made me wonder how tightly the docker code is integrated into your UI/control suite. I was thinking it would be interesting to have an API/modular plugin system for the container controller. (À la webmin or something like it, where you have a 'front end' which only does UI stuff and a set of plugins that do the work.)

That way you could keep it 'hands off' from the implementation details. (I'm all for many small apps working together vs one unified app… I did say I was old-school unix, right? grep piped through word count kinda old school… haha.) I was pondering just how much the UI needs to know about docker, or what info it needs to pass in to let a docker module get its work done.
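
For what it's worth, the kind of boundary I'm imagining looks something like this hand-wavy sketch; none of these names exist anywhere, they're just to show how thin the contract could be:

```python
from abc import ABC, abstractmethod

class ContainerBackend(ABC):
    """Invented interface: the Web-UI would only ever talk to one of these."""

    @abstractmethod
    def install(self, rockon_spec: dict, volumes: dict) -> None:
        """Create and start the container(s) described by a rock-on definition."""

    @abstractmethod
    def status(self, rockon_name: str) -> str:
        """Return a coarse state such as 'running', 'stopped' or 'missing'."""

    @abstractmethod
    def uninstall(self, rockon_name: str) -> None:
        """Tear the container(s) down, leaving the data volumes untouched."""

# A plain-docker implementation and a docker-swarm implementation could then
# live side by side, with the UI not caring which one is doing the work.
```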

This is just me musing quietly while pondering ripping the rock-on config info down and seeing if I can get it playing with docker swarm. :wink:

E.g. a Rockstor box with lots of storage and exported shares, and a pile of rockon-workers who pull their config from that.

Syncthing? Its config is found at [rockstor:share]+/syncthing/config and media is saved to [rockstor:homeshare]+/blah. Logs go to [local:logs]+/syncthing.

DNS? Config for that says [clone_on_start:rockstor:share]+/dns. Logs saved to [local:logs]+/dns (no mounts or external stuff needed)

rockon-workers could pull the info they need (rock-ons, shares, usernames/UIDs) from the Rockstor box and maybe even auto-configure. The same code would work on the Rockstor machine itself; it would just use an API for pulling the data from the rockstor-queen box.
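
A very rough sketch of the kind of payload a worker might pull; every field and name here is invented:

```python
# Invented payload a rockon-worker might fetch from the rockstor-queen box.
desired_rockons = {
    "syncthing": {
        "config": "[rockstor:share]+/syncthing/config",
        "media":  "[rockstor:homeshare]+/blah",
        "logs":   "[local:logs]+/syncthing",
    },
    "dns": {
        "config": "[clone_on_start:rockstor:share]+/dns",
        "logs":   "[local:logs]+/dns",
    },
}

# Each worker would resolve the "[where:share]" prefixes against its own mounts
# (or local clones) before handing the resulting paths to docker as volumes.
LOCAL_MOUNTS = {
    "rockstor:share": "/mnt/queen/share",
    "rockstor:homeshare": "/mnt/queen/home",
    "local:logs": "/var/log/rockon-workers",
}
```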

(Yeah, this is pie-in-the-sky thinking that would be heaps of work, haha. But I'm a dreamer. I'd love to be able to plug in a couple of Raspberry Pis as backup DHCP and DNS for my network, and I was pondering that as a use for swarm mode :slight_smile: )

@DrC No worries on ideas, always welcome. But yes, it can take time to review/reference previous posts/issues on what has been discussed and planned over the years. We need some kind of road-map so folks can see. But again this is all time-consuming. Oh well, bit by bit.

The main reason we don't, at least yet, manage directories as a 'unit' is that we don't yet have such an extension to the subvolume management system that everything is based on. Plus, adding that degree of flexibility leads to many more possible scenarios that are very likely to introduce a whole host of new bugs, and our current aim is to get everything we currently have working on our new OS via the "Built on openSUSE" effort of the Rockstor 4 release. This, as it happens, has also accidentally headed off the far more recent CentOS change of stance, so that's a win we weren't expecting when we embarked on this. Thereafter we need to address the technical debt that the OS move has only compounded by diverting our limited contribution 'power'. But it's all good and we are getting there. However, I'm super keen not to have significant functional additions (read: features) until we have moved our Django and Python over.

Another element here is that a btrfs subvolume is little more expensive than a directory, bar the indicated overhead of UI management. And we get separation of concerns by default, which is always nice. A subvolume in btrfs shares its metadata with the parent pool, so it's not an entirely separate filesystem, but it looks rather like one. So a bit of the best of both worlds here. And again, flexibility brings exponential complexity, and we need to debug all that we have before introducing more complexity, or the project will become unmanageable and no good to anyone.

Not that much actually, as our existing UI is just a docker wrapper at heart. But the Web-UI and wrapper are tightly coupled via our existing database within Django. This is a system we have already developed, and we have many rock-ons that depend upon it that have taken years to be contributed. To start over and adopt an external project that is not matched to our 'ways' is potentially very complex and may ultimately never be a good fit.

However, there is nothing stopping folks running other docker managers on the base OS we run on. But to expect integration into our btrfs subvolume structure is unrealistic without more years of slow contribution and iteration. It is just fine, though, for advanced users who can configure the more advanced (read: flexible) specialist docker managers to do their bidding within the limitations of the subvolume structure that Rockstor itself works within, i.e. all mounts within /mnt2 and a limited depth therein, with snapshots etc. We need a technical doc for this really and don't as yet have an updated one: again, contributions to our docs on this front are welcome from those willing to put in the time required to explore and explain our own limitations. The simplest route to this is to watch what happens on disk when one does such things as subvol create, clone create, snapshot create, SMB shadow-copy compatibility via snapshots, etc.
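
As a rough sketch of that exercise (pool path and names are made up; only try this on a throwaway test pool, as root), something along these lines shows how shares and snapshots appear on disk:

```python
import subprocess

POOL = "/mnt2/test_pool"  # assumed mountpoint of a scratch btrfs pool

# Create a subvolume, snapshot it read-only, then list what the pool now contains.
for cmd in (
    f"btrfs subvolume create {POOL}/demo_share",
    f"btrfs subvolume snapshot -r {POOL}/demo_share {POOL}/demo_share_snap",
    f"btrfs subvolume list {POOL}",
):
    print(f"# {cmd}")
    subprocess.run(cmd.split(), check=True)
```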

Interesting. But is there a widespread desire for this capability within our user base? And any complex code extension we take on is likely going to have to be maintained indefinitely thereafter, so it will have to come with exemplary docs such as we have (but only recently) for our current rock-ons, thanks to @Flox. Again, the doc you need to reference, which in turn references the relevant code, is the following wiki entry in the forum:

What is deceptively difficult here is how easy what we already do looks from the Web-UI. Take a closer look at the code, and keep in mind that we have to continue to support such intricate mechanisms as our replication system if we are to add further complexity to our base unit of the btrfs subvolume. Sometimes, however, a little change in just the right place can yield some unexpected flexibility, and we have seen that. This all depends, as you likely appreciate, on our base design being robust enough to handle these extensions.

I am of the opinion that we must first propagate our existing btrfs-in-partition approach to the system drive. This is a non-trivial endeavour, but it will unify and simplify our currently two disparate management systems re partitions/subvolumes. We must also enable base features such as the ability to rename pools and subvolumes: a far more requested feature than docker swarm, and potentially relevant to our entire user base. But don't get me wrong, I'm all for exploration; we do rather have our hands full, though, with CentOS having dropped btrfs from even a technical preview dictating that we jump ship (OS) and abandon our then-ongoing technical debt endeavour. So it's a frustrating time to have so many feature requests.

But again, if you can become familiar with the code and our ways then dandy; we have more hands, and that has to help. But realise that any major feature / capability extension is expected to be drawn out in detail via a wiki article prior to code contribution, as there is then the possibility of wider community involvement (which may or may not take place) to help refine the ideas as presented. On the other hand, we are rather a do-ocracy here, so you likely get to choose the exact implementation if you are the one standing up to implement it. But again, we can't take on many more major changes, as we will soon have to convert all code to Python 3 in addition to the Django changes that may be involved.

So I say take a look at the code itself; we strive to have it self-documented where possible, and the forum wiki entries may also help in some places, i.e. UPS / Docker etc. A likely initial upshot is "so that's not right even as-is", which would be fantastic, as then you help to ensure our existing framework can more easily handle the grander ideas we would all like to see implemented in a sustainable manner.

My own favourite on this front would be GlusterFS via an easy setup across multiple Rockstor instances. Although a stepping stone on that front would be across multiple pools first. But if done right, that then leads to spanning Rockstor-hosted pools and the ability to down a machine in a cluster for maintenance and have it re-assemble / re-integrate into the cluster filesystem upon re-connection. We have to fully enable our core competencies (ideally) first, such as share (subvol) and pool renaming. We have made some progress on the latter, but the former still mounts by name, which is a broken approach if we introduce renaming. But the latter now mounts by subvol id (in later 3 and all 4 variants anyway).
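
For anyone following along, the name-vs-id distinction just mentioned comes down to the mount options used; the device and id below are placeholders only, not Rockstor's actual invocations:

```python
# Illustrating the two btrfs mount styles, not Rockstor's code.
mount_by_name = "mount -o subvol=my_share /dev/sdb /mnt2/my_share"   # breaks if the subvol is renamed
mount_by_id   = "mount -o subvolid=259 /dev/sdb /mnt2/my_share"      # survives a rename
print(mount_by_name)
print(mount_by_id)
```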

Sounds a little like my ideas re clustering capability, doesn't it? I.e. in scope and broad terms involving wavy hands. You should start contributing... oh wait, you have :). Nice. Again, I'll get to that PR soon.

You know we now run on Pi4s, I take it. See the Pi4 profiles in:

Nice to hear your enthusiasm, and apologies for being the stick in the mud here. Any familiarity you can gain with the code is most likely to breed at least some fixes for our existing systems, so do jump in anywhere you find interesting. I just can't promise much hand-holding for a while. But the code is really not in that bad a shape, just Python 2 and older Django. However, at least our oldest dependency recently got replaced by our newest (django-ztask to huey in 4.0.6), so we are getting there.

Hope that helps, at least with some context and links.

@Flox is our main Rock-on / docker wrapper maintainer, so I would ultimately back their chosen direction on this front, though I would expect an open discussion in the interim as it goes.
