@PDXUser, I can chip in on this one. @Flox has essentially got to the root cause of your issue: you are using the ‘root’ share instead of creating one specifically for the purpose. This is a usability bug and is now fixed in our latest code, which is actually in stable, not testing; see the intro in the following thread:
Intro
As this thread is starting retroactively in this now long release cycle I’ll first give some context and move to detailing only the most recent releases. The canonical reference for all code changes is in our open source GitHub repo:
This is the ‘rockstor’ package origin. This does not include the Rock-on definition files.
An attempt is made to keep GitHub Release / git tags in line with package release versions, but for a variety of reasons one can lag the other for short periods.
…
In short, testing stopped at 3.9.1-16, which was equivalent to what would have been 3.9.2-0 stable. We are now on 3.9.2-48 stable as of 2 days ago.
However if you did in fact update to stable:
PDXUser:
upgraded to latest,
Then it is best to confirm your actual running version via:
```
yum info rockstor
```
Going from testing to stable can end up showing a stable version (3.9.2-#) in the Web-UI while not actually installing it. This doesn’t happen if you go straight to stable from the iso.
In stable channel releases it is no longer possible to select the ‘root’ share, as it no longer appears in the Web-UI. We showed it previously (so it appears in older testing) but, during the work involved in moving to openSUSE, a bug was found whereby this share was not /root, as many would reasonably suspect, but actually ‘/’, which is of course very bad; especially if a Rock-on is given that as a ‘share’ to use. Hence the error @Flox wondered about, and hence your system breaking when you gave a Rock-on (docker) direct access to the root of your system.
This removal of surfacing root was in the following changes:
rockstor:master
← phillxnet:1931_add_root_subvol_exclusion_mechanism
opened 06:35PM - 08 Jun 18 UTC
Our current share/clone subvol surfacing mechanism assumes a top level dir exists for a given pool (vol). This assumption is most notably flawed in the case of a system root ('/') as by definition it is the top level and has no '/' + 'dir-name' counterpart. However our current default system subvol arrangement has '/' in a subvol named 'root' which coincidentally has the FHS counterpart subdir of '/root', and current btrfs commands return similarly for a subdirectory if that subdirectory is not also a subvol in its own right, which '/root' in our case is not. The mechanism chosen to initially address this issue is to exclude the Rockstor native re-mounting (in /mnt2) of known root subvolumes: i.e. 'root', '@', and/or subvols configured to be the default pool (vol) mount point (btrfs subvol get-default /), e.g. '/@/.snapshots/1/snapshot' (i.e. a default snapper root config).
The subvol exclusion mechanism, as instantiated in this commit/pr, also serves to surface subvols of excluded subvols; ie '@/home' where @ is itself a subvol: a more common arrangement in modern root on btrfs configurations. This is due to our current share/clone single depth subvol consideration.
A wrinkle in subvol path exclusion practice is that when the '/' subvolume does not share a direct common subvol as parent, such as in a default snapper root config boot to snapshot arrangement, the path expressed for '/home' is then e.g. '@/home' as opposed to our previously expected 'home' (relative to the shared direct parent '@'). This is accounted for, in this commit's / PR's first pass treatment, by simply doubling up the relevant exclusions and hard wiring a blind stripping of leading '@/' chars: a sub optimal but proven functional (by included unit tests) fix. It is suggested that this approach be iteratively improved upon.
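The exclusion list plus blind '@/' stripping described above can be sketched as follows. This is an illustrative simplification only, with hypothetical names, not Rockstor's actual implementation:

```python
# Illustrative sketch of the root subvol exclusion mechanism plus the
# blind stripping of a leading '@/' described above. Note the "doubled
# up" exclusions: both the '@/' prefixed and stripped forms are listed.
ROOT_SUBVOL_EXCLUSIONS = [
    "root",
    "@",
    "@/.snapshots/1/snapshot",
    ".snapshots/1/snapshot",
]

def strip_at_prefix(subvol_path):
    """Brute-force conversion to a relative path: strip a leading '@/'."""
    return subvol_path[2:] if subvol_path.startswith("@/") else subvol_path

def is_excluded(subvol_path):
    """True if this subvol should be skipped (not surfaced as a share)."""
    return subvol_path in ROOT_SUBVOL_EXCLUSIONS

print(strip_at_prefix("@/home"))  # home
print(is_excluded("root"))        # True
print(is_excluded("home"))        # False
```

So '@/home' is surfaced under the relative name 'home', while known root subvols are skipped entirely.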
Summary:
1. Establish a named root subvol exclusion list.
2. Add a function to identify the default subvolid for the pool hosting ‘/’.
3. Use (1.) and (2.) above to 'skip' consideration of the associated subvols, by path and/or by id respectively (debug logged).
4. Refactor shares_info() (creating get_property() and snapshot_idmap()) to help remove code duplication and improve code clarity, clone identification logic, and testability.
5. Add a redirect to '/' in cases where we are examining an excluded subvol which has no /mnt2 counterpart.
6. Brute force conversion to currently recognized relative paths, ie strip @/ chars.
7. Normalise use of toggle_path_rw() for immutable flag manipulation.
8. Remove redundant/unused function; see Note * for details.
9. Remove an outer scope variable name collision.
10. Account for upstream parent-child column inversion - group_is_assigned().
Note that item 10 above is unrelated to issue #1931 but is trivial in nature; an upstream issue reference is included in the code comments.
Note *
Removed subvol_list_helper() and its sole remaining user. It was introduced 4 years 8 months ago in #105 to mitigate an rc=19 subvol list issue. This helper was not universally employed and we have had no recent reports of that issue; hence the removal. A comment was left at the prior sole use location, along with the related remaining rc = 19 exception clause.
Fixes #1931
Please see referenced issue text for differently worded exposition of the referenced root subvol anomaly re subvol path and FHS dir name collision.
@schakrava Ready for review.
Testing:
The following elements (indicated by their names) were created (and later deleted), via the Web-UI, and observed to be persistent post page refresh ('refresh-share-state' / 'refresh-snapshot-state') and post reboot ('bootstrap') on current (legacy) and proposed root subvol arrangements; proposed subvol arrangements considered installs with and without snapper root config.
system pool:
sys-share
sys-share-snap
sys-share-snap-rw - clone from this - clone-from-snap
sys-share-snap-rw-visible
clone-sys-share
data pool, ie rock-pool:
rock-share
rock-share-snap
rock-share-snap-rw - clone from this - clone-from-rock-share-snap
rock-share-snap-rw-visible
A full data-pool share replication cycle ( > 5 rep tasks) was also tested successfully between a legacy root arrangement and a proposed system subvol arrangement (snapper root config active).
Additional unit test output:
...
test_parse_snap_details (fs.tests.test_btrfs.BTRFSTests) ... ok
...
test_snapshot_idmap_home_rollback (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_home_rollback_snap (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_mid_replication (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_no_snaps (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_snapper_root (fs.tests.test_btrfs.BTRFSTests) ... ok
...
test_get_property_all (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_property_compression (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_property_ro (fs.tests.test_btrfs.BTRFSTests) ... ok
...
test_shares_info_legacy_system_pool_fresh (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_legacy_system_pool_used (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_system_pool_post_btrfs_subvol_list_path_changes (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_system_pool_used (fs.tests.test_btrfs.BTRFSTests) ... ok
...
test_get_snap_2 (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_snap_legacy (fs.tests.test_btrfs.BTRFSTests) ... ok
Caveats:
We have some additional hard wiring added by this PR, i.e. within some references to 'system': these are to be re-visited / addressed in related later PRs sharing the same project level goal. I will endeavour to reference this PR upon their submission. These references should not affect the function of our existing installs.
against the following issue:
opened 07:23PM - 25 May 18 UTC
closed 12:50PM - 13 Jun 18 UTC
Currently we assume that a system pool subvol is represented by a top level directory in '/' corresponding to the subvol's name. This is not a universally safe assumption and most notably falls over, in our current code / linux base, for the root subvol. This appropriately, though arbitrarily, named subvol has, as a consequence of directly holding the root filesystem, the assumed /root path component within it. However this represents an arbitrary name collision between the /root directory and the subvol name 'root'. This subvol/dir name collision effectively covers up an issue specific to the pool containing the system, and particularly the subvol mounted (via fstab) at /.
Essentially, whereas the 'home' subvol is mounted at /home, the root subvol is mounted not at '/root' but at '/'. The name collision, with the already mounted root fs having, as standard, a /root directory within its fs, conceals this anomaly: it is specific to an already mounted root whose subvol name coincides with an existing top level directory within '/'.
As this anomaly is understood only to exist on the system pool (the one containing the linux '/' install) it is proposed that we add an exclusion mechanism for this and other 'sensitive' system subvols. This effectively removes the anomaly dependency and, in collaboration with the changes introduced in:
"improve subvol mount code robustness. Fixes #1923" for issue #1924
adds the ability to deal with the more common arrangement, for btrfs root installs, of having a root fs consisting of an arbitrarily named subvol (ie '@') which itself has subvols separating the various concerns of a root fs.
So by adding a subvol exclusion mechanism to the system pool (at the base level of current code) we are then able to process the more common arrangement of a root fs that is composed of a collection of sub-subvols. But this anomaly exclusion mechanism will remove, on our current linux base, the surfacing of the root subvol, which is only successfully interpreted by way of the aforementioned '/root' dir name collision and the fact that, currently, a btrfs command executed on a subdirectory of a subvol returns the same result as if it were given the top directory of that subvol mount point. I.e. within our current code, when processing the subvol named 'root', we execute the following:
In storageadmin/views/share_helpers - import_shares() - we have, roughly via debug messaging:
```
---- Share name = root.
Updating pre-existing same pool db share entry.
Running command: /sbin/btrfs subvolume list /mnt2/system
Running command: /sbin/btrfs qgroup show /mnt2/system/root
```
Note that in the last command we execute our qgroup show command assuming we are referencing the root subvol mount point, when in fact we are referencing an arbitrary directory whose name coincides with our subvol name and which happens to exist within our intended subvol. But as the following 2 commands are equivalent, we have our silent anomaly:
```
/sbin/btrfs qgroup show /mnt2/system/root
/sbin/btrfs qgroup show /mnt2/system
```
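To see why the collision goes unnoticed, consider the naive path construction (pool mount point joined with subvol name); this sketch uses hypothetical helper names, not the actual Rockstor function:

```python
import os

# Sketch of the naive assumption described above: a share's path is the
# pool mount point joined with the subvol name. For the subvol named
# 'root' this yields '/mnt2/system/root', which is really the FHS /root
# directory inside '/', not the subvol's own mount point; btrfs then
# silently answers for the enclosing subvol, masking the error.
def naive_share_path(pool_mount_point, subvol_name):
    return os.path.join(pool_mount_point, subvol_name)

print(naive_share_path("/mnt2/system", "home"))  # /mnt2/system/home (correct)
print(naive_share_path("/mnt2/system", "root"))  # /mnt2/system/root (collision)
```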
By way of comparison, our home subvol is correctly referenced by its path, thus:
```
---- Share name = home.
Updating pre-existing same pool db share entry.
/sbin/btrfs subvolume list /mnt2/system
/sbin/btrfs qgroup show /mnt2/system/home
```
Here we see that '/mnt2/system/home' correctly corresponds to its top level path.
So this issue's proposal, and proof of concept PR (to follow), is to address the anomaly detailed above: the false assumption that for the actual fs root subvol there is a corresponding top level directory contained within. We do this by not surfacing the 'root' subvol within the Web-UI, by way of an exclusion mechanism that in turn allows the surfacing of system pool sub-subvols, which are inherently more informative / appropriate than the entire '/' anyway. This same subvol exclusion mechanism, proposed for and only tested on the os pool, also allows for the exclusion of other noisy/irrelevant, and potentially problematic to re-mount, sub-subvols such as 'tmp', 'boot/grub2/x86_64-efi', and 'var'.
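As a rough illustration of the proposed behaviour (hypothetical names and list contents, not the actual code), excluding root and noisy system subvols while still surfacing the informative sub-subvols might look like:

```python
# Hypothetical sketch of the proposed exclusion mechanism: skip root and
# noisy/irrelevant system subvols, and surface the rest as shares, with
# any leading '@/' stripped to give a relative share name.
EXCLUDED = {"root", "@", "tmp", "var", "boot/grub2/x86_64-efi"}

def surfaced_shares(subvol_paths):
    shares = []
    for path in subvol_paths:
        rel = path[2:] if path.startswith("@/") else path
        if rel not in EXCLUDED:
            shares.append(rel)
    return shares

print(surfaced_shares(["root", "@/home", "@/tmp", "@/var", "@/srv"]))
# ['home', 'srv']
```

The net effect is that '/' itself is never offered as a share, while e.g. '@/home' and '@/srv' still are.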
It was a bad show on our part, but it has now been fixed since stable release version 3.9.2-24, released June 2018.
So in short: we previously shouldn’t have surfaced ‘/’, offered as the “root” share name, and in the latest code we don’t. The UI should also have guided you better on creating specific shares for each Rock-on’s requirements; maybe those tooltips should be text under/against each text box (@Flox, your thoughts on this idea would be welcome).
E.g. in the Sonarr Rock-on definition we have:
```
  "8989": {
    "description": "Sonarr WebUI port. Suggested default: 8989",
    "host_default": 8989,
    "label": "WebUI port",
    "protocol": "tcp",
    "ui": true
  }
},
"volumes": {
  "/config": {
    "description": "Choose a Share for Sonarr configuration. Eg: create a Share called sonarr-config for this purpose alone.",
    "label": "Config Storage"
  },
  "/tv": {
    "description": "Choose a Share for Sonarr media library Eg: create a Share called Sonarr-library for this purpose alone. You can also assign other media Shares on the system after installation.",
    "label": "Media Library"
  },
  "/downloads": {
    "description": "Choose a Share for Sonarr downloads. Eg: create a Share called Sonarr-downloads for this purpose alone.",
    "label": "Download Storage"
  }
```
This advice shows as tooltips and advises, in turn, creating the following shares:
sonarr-config
Sonarr-library
Sonarr-downloads
This is crucial advice and, if not followed on older testing code, can lead to catastrophes if the then-offered ‘root’ is selected. Apologies for our slow progress in releasing ISOs with this fix included, but we are hoping to resurrect the testing channel once we re-establish ourselves on openSUSE and have built up more resources. A fresh image based ISO is also planned for the openSUSE installs when they are ready; it will of course have the latest code on release.
There is also the Rock-ons (Docker Plugins) doc section; you may not have followed the advice there, either, to create a share specifically for the Rock-ons system component.
There is much usability work for us to do on our side, but I would invite you to read the Rock-on install wizard tooltips as presented, peruse the doc section linked, and then re-install (as your system is likely toast) and give it another go.
Let us know how it goes if you do end up giving us another trial. Thanks in any case, as your input here has helped to reinforce a usability shortfall in those tooltips: they are easily missed, and if one doesn’t look at them it can lead to a less desirable state, especially with our older code releases.
Hope that helps and good luck in your NAS experiments going forward.