@DrC No worries on ideas, always welcome. But yes, it can take time to review/reference previous posts and issues on what has been discussed and planned over the years. We need some kind of road-map so folks can see. But again this is all time consuming. Oh well, bit by bit.
The main reason we don’t, at least yet, manage directories as a ‘unit’ is because we don’t yet have an extension to our subvolume management system that everything is based on. Plus adding that degree of flexibility leads to many more possible scenarios that are very likely to introduce a whole host of new bugs, and our current aim is to get everything we currently have working on our new OS via the “Built on openSUSE” effort of the Rockstor 4 release. This, as it happens, has also accidentally headed off the far more recent CentOS change of stance. So that’s a win we weren’t expecting when we embarked on this. Thereafter we need to address the technical debt that the OS move has only compounded by diverting our limited contribution ‘power’. But it’s all good and we are getting there. However I’m super keen to not have significant functional additions (read features) until we have moved our Django and Python over.
Another element here is that a btrfs subvolume is only a little more expensive than a directory, bar the indicated overhead of UI management. And we get separation of concerns by default, which is always nice. A subvolume in btrfs shares its metadata with the parent pool, so it’s not an entirely separate filesystem, but it looks rather like one. So a bit of the best of both worlds here. And again, flexibility brings exponential complexity and we need to debug all that we have before introducing more complexity, or the project will become unmanageable and no good to anyone.
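To make that a little more concrete, here’s a rough Python sketch (not Rockstor’s actual code; the pool path and share names are just examples) of the kind of thing that happens under the hood: a subvolume costs about as much as a directory to create, yet can be snapshotted on its own.

```python
# Illustrative only, not Rockstor code. Assumes root and a btrfs pool mounted
# at /mnt2/mypool (hypothetical name). Shows why a subvolume is 'cheap' like a
# directory yet still gives us separation of concerns (independent snapshots).
import subprocess

def run(*cmd):
    """Run a command, return its stdout, raise on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# A plain directory: no independent snapshot / send / quota boundary.
run("mkdir", "-p", "/mnt2/mypool/plain_dir")

# A subvolume: looks like a directory on disk but is its own btrfs tree,
# so it can be snapshotted (and hence replicated) independently of its siblings.
run("btrfs", "subvolume", "create", "/mnt2/mypool/my_share")
run("btrfs", "subvolume", "snapshot", "-r",
    "/mnt2/mypool/my_share", "/mnt2/mypool/my_share_snap")

print(run("btrfs", "subvolume", "list", "/mnt2/mypool"))
```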
Not that much actually, as our existing UI is just a docker wrapper at heart. But Web-UI and wrapper are tightly coupled via our existing database within Django. This is a system we already have in development, and we have many rock-ons that depend upon it that have taken years to be contributed. To start over and adopt an external project that is not matched to our ‘ways’ is potentially very complex and ultimately may never be a good fit. However there is nothing stopping folks running other docker managers on the base OS we run on. But to expect integration into our btrfs subvolume structure is unrealistic without more years of slow contribution and iteration. But just fine for advanced users who can configure the more advanced (read flexible) specialist docker managers to do their bidding within the limitations of the subvolume structure that Rockstor itself works within. I.e. all mounts within /mnt2 and a limited depth therein, with snapshots etc. We need a technical doc for this really and don’t as yet have an updated one: again contributions are welcome to our docs on this front for those willing to put in the time required to explore and explain our own limitations. The simplest route to this is to watch what happens on disk when one does such things as subvol create, clone create, snapshot create, smb shadow compatibility via snapshots, etc.
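As a starting point for that exploration, something along these lines (purely illustrative, and the pool name is hypothetical) run before and after a Web-UI action will show what actually changed on disk:

```python
# A quick exploration helper, illustrative only and not Rockstor code: run it
# before and after creating a share, clone, or snapshot in the Web-UI to see
# what changes under /mnt2. Assumes a pool mounted at /mnt2/mypool.
import subprocess

POOL_MOUNT = "/mnt2/mypool"  # hypothetical pool name, adjust to suit

def show(title, cmd):
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    print(f"--- {title} ---\n{out}")

# All subvolumes in the pool, with parent ids, so clones/snapshots are visible.
show("subvolumes", ["btrfs", "subvolume", "list", "-p", POOL_MOUNT])

# What is mounted under /mnt2, and with which options (subvol= / subvolid=).
show("mounts under /mnt2", ["findmnt", "-R", "/mnt2"])
```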
Interesting. But is there a widespread desire for this capability within our user base? And any complex code extensions we take on are likely going to have to be maintained indefinitely thereafter, so will have to come with exemplary docs such as we have with our current rock-ons (but only recently) thanks to @Flox. Again the doc you need to reference, and it in turn references the relevant code, is the following wiki entry in the forum:
What is deceptively difficult here is how easy what we already do looks from the Web-UI. Take a closer look at the code and keep in mind that we have to continue to support such intricate mechanisms as our replication system if we are to add further complexity to our base unit of the btrfs subvolume. Sometimes however a little change in just the right place can yield some unexpected flexibility, and we have seen that. This all depends, as you likely appreciate, on our base design being robust enough to handle these extensions. I am of the opinion that we must first propagate our existing btrfs-in-partition approach to the system drive. This is a non-trivial endeavour but will unify and simplify our currently two disparate management systems re partitions/subvolumes. We must also enable base features such as the ability to rename pools and subvolumes. A far more requested feature than docker swarm and potentially relevant to our entire user base. But don’t get me wrong, I’m all for exploration, but we do rather have our hands full with CentOS having dropped btrfs from even a technical preview, dictating that we must then jump ship (OS) and abandon our then ongoing technical debt endeavour. So it’s a frustrating time to have so many feature requests. But again if you can become familiar with the code and our ways then dandy. We have more hands and that has to help. But realise that any major feature / capability extension is expected to be drawn out in detail via a wiki article prior to code contribution, as there is then the possibility of wider community involvement that may or may not take place to help refine the ideas as presented. However on the other hand we are rather a do-ocracy here, so you likely get to choose the exact implementation if you are the one standing up to implement it. But again we can’t take on many more major changes as we will soon have to convert all code to Python 3 in addition to the Django changes that may be involved.
So I say take a look at the code itself, we strive to have it self documented where possible, and the forum wiki entries may also help in some places, i.e. UPS / Docker etc, and a likely initial upshot is “So that’s not right even as-is”, which would be fantastic as then you help to ensure our existing framework can more easily handle the grander ideas we all would like to see implemented in a sustainable manner. My own favourite on this front would be GlusterFS via an easy setup across multiple Rockstor instances. Although a stepping stone on that front would be across multiple pools first. But if done right that then leads to spanning across Rockstor hosted pools and the ability to down a machine in a cluster for maintenance and have it re-assemble / re-integrate into the cluster filesystem upon re-connection. We have to fully enable our core competencies (ideally) first, such as share (subvol) and pool renaming. And we have made some progress on the latter, but the former still mounts by name, which is a broken approach if we introduce renaming. But the latter now mounts by subvol id (in later 3.x and all 4 variants anyway).
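For the rename issue specifically, the difference boils down to which mount option is used. A minimal sketch (not Rockstor’s actual mount code; device, share name and id are hypothetical):

```python
# Contrasts the two ways a btrfs subvolume can be mounted: a rename of the
# subvolume breaks the subvol= (name) form but not the subvolid= form.
import subprocess

DEV = "/dev/sdb"                # hypothetical pool member device
MOUNT_POINT = "/mnt2/my_share"  # hypothetical share mount point

def mount_by_name(name: str) -> None:
    # Refers to the subvolume by its path/name: broken once the share is renamed.
    subprocess.run(["mount", "-o", f"subvol={name}", DEV, MOUNT_POINT], check=True)

def mount_by_id(subvol_id: int) -> None:
    # Refers to the subvolume by its stable numeric id: survives a rename.
    subprocess.run(["mount", "-o", f"subvolid={subvol_id}", DEV, MOUNT_POINT], check=True)

# Pick one or the other; the id can be read from `btrfs subvolume list`.
mount_by_id(258)
```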
Sounds a little like my ideas re clustering capability, doesn’t it? I.e. in scope and broad terms involving wavy hands. You should start contributing, oh wait you have :). Nice. Again I’ll get to that PR soon.
You know we now run on Pi4s, I take it. See the Pi4 profiles in:
Nice to hear your enthusiasm, and apologies for being the stick in the mud here. Any familiarity you can gain with the code is most likely to breed at least some fixes to our existing systems, so do jump in anywhere you find interesting. I just can’t promise much hand holding for a while. But the code is really not in that bad a shape, just Python 2 and older Django. However at least our oldest dependency recently got replaced by our newest (django-ztask to huey in 4.0.6) so we are getting there.
Hope that helps, at least with some context and links.
@Flox is our Rock-on / docker wrapper main maintainer, so I would ultimately back their chosen direction on this front, though I would expect an open discussion in the interim as it goes.