@kageurufu Hello again.
Re:
I think I’d rather have them as commented blocks actually, as we already have quite a few profiles and there is quite a lot of ‘stuff’ that goes with an entire profile. Plus if it’s a profile, as you say, it’s a little too easy to just ‘select’ it on the command line and then not realise quite how out-of-scope that profile has taken you. Take a look at the definition and I think you will see what I mean.
Any command that runs outside of btrfs limitations will likely return a non-zero rc anyway, so the Web-UI should indicate the command that threw this error. I.e. if the Web-UI doesn’t catch that you want to remove a disk below the minimum count, for example, it will normally show a friendly pop-up / form error explaining this. But this bit will be broken. However the resulting nonsense command should then throw an error at the command level, and you will get the less friendly btrfs “I can’t remove a disk from this btrfs level” type thing, which it sounds like you will be OK with anyway.
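As a rough illustration of what catching that looks like from the Python side (the pool path, device, and error text below are made up for the example; Rockstor’s actual command wrapper differs):

```python
import subprocess

# Hypothetical example: ask btrfs to remove a device the raid level
# cannot spare; btrfs itself refuses with a non-zero return code.
cmd = ["btrfs", "device", "remove", "/dev/sdb", "/mnt2/mypool"]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    # On a 2-device raid1 pool, stderr reads something like:
    # "ERROR: error removing device '/dev/sdb': unable to go below
    # two devices on raid1"
    print(f"rc={result.returncode}: {result.stderr.strip()}")
```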
Re:
Not quite a see-before-run, but the following does increase the log level, so some commands that are otherwise silent will show up explicitly in the logs:
```
/opt/rockstor/bin/debug-mode
currently debug flag is False
Usage: /opt/rockstor/bin/debug-mode [-h] [ON|OFF]
```
A dry-run facility is going to be tricky as many of our ‘single’ Web-UI actions actually encompass quite a few sequential btrfs commands. You can take a look by refreshing the ‘Shares’ page, for example, while debug mode is on.
Yes, hopefully:
Doc entry first: “Developers” subsection of “Contributing to Rockstor - Overview”
https://rockstor.com/docs/contribute/contribute.html#developers
and our Wiki guide (in need of a little attention): “Built on openSUSE dev notes and status”
But those are really only relevant if you are contemplating contributing (which would be nice).
N.B. all source builds consider any rpm version an update and will necessarily wipe all settings.
And there is no update mechanism from one source build to another, other than via a db wipe and re-establishing itself as an rpm install. A source install is an unknown quantity, so we don’t support any update from any version to any other. It will show itself up as UNKNOWN VERSION or the like.
OK, we definitely fail on that front. The Django config we use has a single postgres db backend, so there is currently no way for two instances to co-exist. At least not without eating each other’s db. It could be done but would need a lot of work to differentiate each db. Also the following db bug/inelegance would need to be addressed first:
https://github.com/rockstor/rockstor-core/issues/2076
That way, once we add db names to our ‘versions’, a reset in a testing instance would only wipe the testing-related database holding that instance’s settings. The problem will likely also reach into contention on devices etc., but that could be partly approached with expectation-setting, i.e. a “… we don’t support concurrent versions running …” type thing. So mainly a db thing, I think. No plans for this however, and we have quite a lot on our plate, so it is not likely to appear any time soon, although pull requests are welcome. Keep in mind though that we use a now-unmaintained build system that is also Python 2 only, so we have much larger fish to fry; configs within this build system to approach this could well end up being deleted by those larger fish.
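For the flavour of what per-version db naming might look like, here is a minimal Django settings sketch. To be clear: the version constant, credentials, and name scheme below are all made up for illustration; this is not how Rockstor is currently configured.

```python
# settings.py sketch: give each install its own postgres database so
# parallel instances stop eating each other's db. Hypothetical only.
ROCKSTOR_VERSION = "4.1.0"  # made up; would come from the build/rpm

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # Suffixing the db name by version would let a testing instance
        # reset its own db without touching another instance's settings.
        "NAME": "storageadmin_" + ROCKSTOR_VERSION.replace(".", "_"),
        "USER": "rocky",      # made-up credentials
        "PASSWORD": "rocky",
        "HOST": "",           # empty = local unix socket
        "PORT": "",
    }
}
```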
I’d keep an eye on this stuff though. We concentrate on data raid levels currently and mostly (but not entirely) go with the metadata default. Good to have the feedback here though.
Agreed, and thanks for the input. However I think the initial step before this would be to surface within the Web-UI the metadata raid level, as well as our current data level, first. That way we have user-visible feedback before we start adding fences etc. Also note that a partial balance can leave some parts of a pool at the old raid level; such situations may well throw spanners in the works.
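For reference, both levels are already visible from “btrfs fi df”; here is a minimal sketch of pulling them out (the mount point is made up, and Rockstor’s actual parsing differs from this):

```python
import re
import subprocess

# Sketch: extract the data and metadata raid levels from "btrfs fi df"
# as a first step towards surfacing metadata in the Web-UI.
out = subprocess.run(
    ["btrfs", "fi", "df", "/mnt2/mypool"],  # made-up mount point
    capture_output=True, text=True, check=True,
).stdout
# Typical lines: "Data, RAID1: total=8.00GiB, used=6.45GiB"
levels = re.findall(r"^(Data|Metadata), ([\w-]+):", out, flags=re.MULTILINE)
print(levels)  # e.g. [('Data', 'RAID1'), ('Metadata', 'RAID1')]
# N.B. after a partial balance a pool can report two entries for the
# same type, e.g. both ('Data', 'single') and ('Data', 'RAID1').
```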
Best we break it down into steps as manageable as possible. It took us quite some time to reach our current ‘fence/constrain’ level, and more variables are likely to throw up way more permutations. Let’s ear-mark this feature (awareness / surfacing of data & metadata raid levels) for our next testing channel, shall we. If we are lucky it should slot in nicely. But given we are about to release our long-awaited next Stable, it’s just not appropriate quite yet.
Nice. Also note that we have some other basic restrictions, like no two subvols may have the same name (system-wide, actually). We have yet more ‘deep’ improvements to do on that front, i.e. bring our pool uuid tracking/management up to a state where it can replace our ‘by name’ approach. We are a little further along on the subvol side of that front, but it’s still in need of some improvement. But at least we mount by subvol id now :). It used to be by name !!
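For the curious, the id that frees us from name-based mounting is the one “btrfs subvolume list” reports; a quick sketch of mapping names to ids (the pool mount point is made up, and this is not Rockstor’s actual code):

```python
import subprocess

# Sketch: map each subvolume path to the id that allows mounting it
# independently of its name (mount -o subvolid=<id> ...).
out = subprocess.run(
    ["btrfs", "subvolume", "list", "/mnt2/mypool"],  # made-up mount
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    # Lines look like: "ID 258 gen 12 top level 5 path home"
    fields = line.split()
    print(f"{fields[-1]} -> subvolid={fields[1]}")
```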
We should probably discuss this in its own forum thread actually, as that sounds like something we can just set to be ignored. Take a look at the following pull request:
https://github.com/rockstor/rockstor-core/pull/2270
and issue:
and initiating forum thread:
We already have a system-pool-specific ‘filter’ for subvols we don’t want to surface. You may well just be able to edit one or the other to have these weird subvols ‘go away’ from the Web-UI. Let us know what works for you; it may well be something we can add if others are likely to run into this. Just editing the Python file with those exclusions in place and restarting the system should do it. But we can work through this in a dedicated thread if you fancy. It should be an easy fix, if still at the simple code edit stage.
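As a sketch of the kind of filter involved (the variable and function names here are hypothetical, not the actual identifiers in rockstor-core; see the linked pull request for the real code):

```python
# Hypothetical name-based subvol filter; illustration only.
HIDDEN_SUBVOL_PREFIXES = (".snapshots", "@/.snapshots")

def is_surfaced(subvol_path):
    """Return False for subvols the Web-UI should not list."""
    return not subvol_path.startswith(HIDDEN_SUBVOL_PREFIXES)

print(is_surfaced("home"))                   # True
print(is_surfaced(".snapshots/1/snapshot"))  # False
```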
Thanks for the engagement, and keep us informed of your adventures here. A new testing channel/branch is to start shortly, so we can begin throwing stuff at that soon.
Hope that helps.