Questions about system options

Where does Rockstor store the information about “pools”? (fstab does not show any volumes/subvolumes beside root and home)
Right now Rockstor seems to show only the metadata raid level, and it looks to me like it only knows about raid levels that handle data and metadata equally (I would be happy to contribute here if necessary). Any plans to fix this?
Here is some CLI output:
[root@rock barrow]# btrfs fi df .
Data, RAID0: total=4.00GiB, used=2.54GiB
System, RAID1: total=8.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=3.17MiB
GlobalReserve, single: total=16.00MiB, used=0.00B

Why does Rockstor not show all mount options of the root volume, and how can I change this? (fstab has noatime and compress=lzo, but it shows neither.)
Right now I am trying to “co-manage” the btrfs volumes via ssh, and some of that info would be really nice.
Or is there a source for that info, which I missed?

What I also miss are balance filters (I could help here too).

And another little hope: btrfs English instead of ZFS English (volumes instead of pools, etc.).

@Karlodun Welcome to the Rockstor community forum.

Wow that’s a lot of questions in one. I’ll take a shot at what I can:

Between refreshes of the disk and pool scans, that information is stored in our Django database.
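For a rough sense of what that looks like, here is a minimal Django model sketch of a pool record; the field names are illustrative only and not copied from Rockstor’s actual schema:

# Illustrative sketch only; not Rockstor's actual model definition.
from django.db import models

class Pool(models.Model):
    """One btrfs pool (filesystem) as tracked between disk/pool scans."""
    name = models.CharField(max_length=255, unique=True)
    uuid = models.CharField(max_length=100, null=True)
    raid = models.CharField(max_length=10)    # e.g. 'single', 'raid1'
    size = models.BigIntegerField(default=0)  # refreshed on each pool scan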

Rockstor does all the mounting of all managed pools (I'm a little weak on the exact pool mechanisms but am due to work more in that area soon).

Which raid level is ‘seen’ depends on the following code:

Note however that we still have some dead (but mostly harmless) code in there (i.e. testing against ‘DUP’ after .lower()); see my code review just prior to that commit being merged, along with some PEP8 style comments which were later addressed by forum member @sfranzen in #1460:

Given we simply sit on top of the btrfs commands, Rockstor does on occasion fail to notice disparities in raid level, mostly when the data and metadata levels are managed separately, as there is the assumption, for simplicity of code and presentation, that both are the same. However we have recently had a departure from that in our single profile creation via pr:

and it’s addition to add_pool(pool, disks):

which addressed:
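Purely to illustrate the data-versus-metadata point (this is not Rockstor’s actual code, which is exercised by the test_btrfs unit tests mentioned further down), a parser that reports each block group’s profile separately might look something like this:

import re

# Illustrative only: report the data and metadata raid profiles separately
# from 'btrfs fi df <mnt>' output, rather than assuming they are equal.
def raid_levels_from_fi_df(fi_df_output):
    levels = {}
    for line in fi_df_output.splitlines():
        # Lines look like: 'Data, RAID0: total=4.00GiB, used=2.54GiB'
        match = re.match(r'(Data|System|Metadata|GlobalReserve), (\S+):', line.strip())
        if match:
            block_group, profile = match.groups()
            levels[block_group.lower()] = profile.lower()
    return levels

# With the output quoted at the top of this thread this would give:
# {'data': 'raid0', 'system': 'raid1', 'metadata': 'raid1', 'globalreserve': 'single'}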

Cool, and as to plans: I suspect there are now :slight_smile: (hopefully), assuming you are happy to submit an issue detailing your findings and work against that issue in a pr. Any help would be much appreciated and most welcome. Please see
Contributing to Rockstor - Overview and more specifically the Developers subsection.

Not sure, but it may be down to a root re-mount that was added quite a while ago and that I think is probably due for removal: it adds quite a bit to boot-up time and, in my opinion, additional complexity for limited gain. I'm just not keen on the idea of a root remount myself, especially given the maturity of systemd, and our CentOS base's use of an old systemd at that. The remount was added in part to address:

which was handled by commit:

However it has been noted since that the indication of ssd or otherwise (i.e. rotational or not) is not always that reliable; another twist to our thread.
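For reference, that hint generally comes from the kernel’s rotational flag in sysfs; a minimal sketch of reading it (the unreliability mentioned above is in the flag itself, e.g. behind USB bridges or on virtual disks):

# Minimal sketch: read the kernel's rotational hint for a base block device
# name such as 'sda'. As noted above, this flag is not always trustworthy.
def is_rotational(dev_name):
    with open('/sys/block/{}/queue/rotational'.format(dev_name)) as f:
        return f.read().strip() == '1'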

Re above question sub-section:

You would have to follow the code through and find where this is getting dropped. (Sorry, but I don't have the time for that exercise right now and don't know off the top of my head.)

[quote=“Karlodun, post:1, topic:3231”]
… and some info would be really nice.
[/quote]

Yes, more UI info is often nice, but there is also great utility in presenting only what is virtually always wanted and hiding most of what is mostly not wanted. I currently see this simplified presentation as one of Rockstor's greatest strengths, and it is a challenge indeed to accommodate all requests for all features of all filesystems in one UI. So a focus on what is used most of the time is definitely one I think we should try to maintain, i.e. the complexity/flexibility/usability balance. Otherwise we become like virtually all other NAS solutions: everything and the kitchen sink, and a real pain to navigate / understand, especially for the non-technical. Ultimately the command line is always going to be more flexible and, with that, more complex to use (intertwined concepts).
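That said, for the ssh co-management case nothing is hidden from the command line: the kernel already exposes the effective mount options. A quick illustrative helper (not an existing Rockstor function):

# Illustrative only: the effective mount options for a given mount point,
# read straight from the kernel; handy when co-managing over ssh.
def mount_options(mnt_pt):
    with open('/proc/mounts') as f:
        for line in f:
            device, mount_point, fs_type, options = line.split()[:4]
            if mount_point == mnt_pt:
                return options.split(',')
    return []

# e.g. mount_options('/') would include 'noatime' and 'compress=lzo' if active.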

Yes, that would be a nice addition, and again if you are game to have a go then that would be great. In the interests of the aforementioned complexity / clutter / usability elements it would be nice to maintain a ‘Sensible Default’, which we actually currently don't have; i.e. we currently balance ‘full on’, which is rather over the top.
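To give a sense of what a ‘Sensible Default’ could look like versus a ‘full on’ balance, the btrfs usage filters only relocate block groups below a given fill percentage. A hedged sketch of wrapping that (the helper name is made up, not an existing Rockstor function):

from subprocess import check_call

# Illustrative only: a filtered balance that only relocates data/metadata
# block groups which are less than 'usage' percent full, rather than
# rewriting everything as an unfiltered ('full on') balance does.
def start_filtered_balance(mnt_pt, usage=75):
    check_call(['btrfs', 'balance', 'start',
                '-dusage={}'.format(usage),  # data block group filter
                '-musage={}'.format(usage),  # metadata block group filter
                mnt_pt])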

Yes, I think the boat has sailed on that one. Try doing a “grep -R ‘pool’” on the source tree: it is pretty deeply embedded in file names / structure / variables / database fields / names etc. Oh, and I quite like the pool-of-drives concept myself; a volume just doesn't do the same for me. And if we changed now, the disparity between the UI and the underpinnings would be more trouble / confusion than I think it is worth.

Your multiple offers to help are most generous. I would say pick an existing issue or create a nice clean new one, and dive in with a pull request.

If you end up looking into the raid level reporting, you should find the current unit test case for it of interest:

That is, you could add your observed failing “btrfs fi df” output to the test cases and make your changes against the associated method, guided by the test's failure or otherwise.
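To give a rough shape of what such an addition might look like (the names below are illustrative; in practice you would extend the existing BTRFSTests case and its mocked command output rather than copy this), reusing the parser sketched earlier in this post:

import unittest

# Rough shape only; mirror the structure of the existing
# test_get_pool_raid_levels_identification test rather than this sketch.
class MixedRaidLevelExample(unittest.TestCase):
    FI_DF_MIXED = '\n'.join([
        'Data, RAID0: total=4.00GiB, used=2.54GiB',
        'System, RAID1: total=8.00MiB, used=16.00KiB',
        'Metadata, RAID1: total=1.00GiB, used=3.17MiB',
        'GlobalReserve, single: total=16.00MiB, used=0.00B',
    ])

    def test_mixed_data_and_metadata_levels(self):
        levels = raid_levels_from_fi_df(self.FI_DF_MIXED)
        self.assertEqual(levels['data'], 'raid0')
        self.assertEqual(levels['metadata'], 'raid1')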

To run the tests you should be able to do a:

cd /opt/rockstor
./bin/test --settings=test-settings -v 2 -p test_btrfs*
test_balance_status_cancel_requested (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_in_progress (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_pause_requested (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_paused (fs.tests.test_btrfs.BTRFSTests)
Test to see if balance_status() correctly identifies a Paused balance ... ok
test_get_pool_raid_levels_identification (fs.tests.test_btrfs.BTRFSTests)
Presents the raid identification function with example data and compares ... ok
test_is_subvol_exists (fs.tests.test_btrfs.BTRFSTests) ... ok
test_is_subvol_nonexistent (fs.tests.test_btrfs.BTRFSTests) ... ok
test_share_id (fs.tests.test_btrfs.BTRFSTests) ... ok
test_volume_usage (fs.tests.test_btrfs.BTRFSTests) ... ok

----------------------------------------------------------------------
Ran 9 tests in 0.022s

OK

We have a way to go on test coverage, but at least we have one for the raid level reporting :slight_smile: and there is a will among multiple developers to expand this coverage, so hopefully all in good time.

All current Rockstor developers are on this forum and all, as you might have imagined, are quite well occupied.

But do keep in mind that Rockstor is appliance orientated and so usability is a key goal.

Hope that helps and that you have fun with your chosen task / issue. Also note the wiki section of this forum, which is intended as an aid to onboarding developers and a place for those with domain-specific knowledge to contribute on appropriate methods and the like.

Thanks for the comprehensive answer, I'll dig into it!