Usage data on pools and shares not updated

Thank you Suman. The pool size is now correctly reported. But I’m afraid the share usage is still wrong.
Pool size shows 589 GB used:


But share usage is reported as 28 KB in the “Shares” view:

Whereas the Dashboard widget reports 153 MB share usage.

Pool usage is correct now.
Share usage is still incorrect.

Keep up the good work!

@rockstoruser Your readings do look a bit odd; specifically, your root share in the share summary screen shows only 153 MB when it should be 1.3 to 2.2 GB, i.e. compare @rgettler's second screen grab and @herbert's first screen grab.

Did a little test here (version 3.8-11.03) and I'm afraid I can't reproduce the inaccurate space reporting, with the proviso that it was necessary to refresh the browser for the figures to update, including on the dashboard. To continue the theme of this being a highly graphical thread, I'll post my findings in case I've missed something obvious.

Test environment is before and after a single OS X Time Machine backup (of about 1.5 GB of new data); it's the same arrangement used to create the AFP docs page, as it happens. Real hardware with three 160 GB drives in a RAID1 pool.
Note that the MacBackups share is not used in this experiment.

Notice the change in the Backups share on the time_machine_pool.

Shares summary before:-

Shares summary after:-

Pools summary before:-

Pools summary after:-

And finally the Usage and Top Shares widgets on the dash after the above:-

Minor discrepancies, but nothing as major as a 153 MB root. Not sure if the above is of any help, but posting it here as an example of what I see.

I wonder if this is related to share ownership or something? @rockstoruser, who owns your shares? Long shot, but I can't think of anything else currently.


Thanks @phillxnet!

@rockstoruser, perhaps it was an issue of browser refresh? Pool usage is updated instantly when the table is displayed, but Share usage is updated asynchronously by the backend. So if you revisit the Shares table (no browser refresh needed), the usage should update eventually, within, say, 1-2 minutes. The reason Share usage is updated asynchronously is that there can be many more Shares than Pools, and updating them all synchronously on every visit to the table could make the response very slow. The number of Pools, on the other hand, is usually small on any system, so that update tends to be quick. Anyway, please report back your findings.
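For illustration, the sync-vs-async split described above can be sketched roughly like this (hypothetical names; this is a minimal toy model, not the actual Rockstor code):

```python
# Toy model of the design above: pool usage is computed on demand, while
# share usage is served from a cache that a background pass refreshes.

class UsageCache:
    def __init__(self):
        self._share_usage = {}  # share name -> bytes, refreshed asynchronously

    def pool_usage(self, pool_sizes):
        # Pools are few, so summing them synchronously per request is cheap.
        return sum(pool_sizes.values())

    def share_usage(self, share):
        # May be stale (or missing) until the background refresh has run.
        return self._share_usage.get(share, 0)

    def refresh_shares(self, measure):
        # Background task: walk all shares and update the cache. With many
        # shares this is too slow to do inline on every page load.
        for share, fn in measure.items():
            self._share_usage[share] = fn()

cache = UsageCache()
print(cache.share_usage("TimeMachine"))  # prints 0: stale until the refresh runs
cache.refresh_shares({"TimeMachine": lambda: 587 * 2**30})
print(cache.share_usage("TimeMachine"))  # now reflects the measured usage
```

The symptom reported above would then correspond to the background refresh never running (or failing), so the cache stays at its stale value indefinitely.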

Even after waiting for hours, the share usage doesn’t change.
The TimeMachine backup is 587 GB according to "du" on the command line, but the Web UI still shows 0 bytes used.

[root@rockstor mnt2]# du -hs TimeMachine/
587G    TimeMachine/
[root@rockstor mnt2]# 

Usage of the “root” share is also wrong in the WebUI. Maybe it’s easier to reinstall. What do you think?

Hello, I'm on 3.8-12.08 - just updated today and I'm seeing the same issue. My shares show the right info, but my Pool shows nothing as being stored there.

It’s not working for me…

See this YouTube video I created; I mention the issue at 11m 11s.


I've had this issue ever since I've been running Rockstor as well (at least 6 months, possibly longer). Am running stable at the moment. Currently got 12 drives, each 4 TB, set up in raid6.

Rockstor version: 3.8-13
Linux: 4.4.5-1.el7.elrepo.x86_64

btrfs fi show output:

Label: 'rockstor_rockstor'  uuid: 316c1ade-0b12-4dfd-8768-361792c73894
        Total devices 1 FS bytes used 2.31GiB
        devid 1 size 224.52GiB used 33.02GiB path /dev/sde3

Label: 'storage'  uuid: 3944e3a0-3cd7-445e-83da-f78aa096df59
        Total devices 12 FS bytes used 18.49TiB
        devid 1 size 3.64TiB used 1.86TiB path /dev/sdc
        devid 2 size 3.64TiB used 1.86TiB path /dev/sdd
        devid 3 size 3.64TiB used 1.86TiB path /dev/sdf
        devid 4 size 3.64TiB used 1.86TiB path /dev/sdg
        devid 5 size 3.64TiB used 1.86TiB path /dev/sdh
        devid 6 size 3.64TiB used 1.86TiB path /dev/sdi
        devid 7 size 3.64TiB used 1.86TiB path /dev/sda
        devid 8 size 3.64TiB used 1.86TiB path /dev/sdb
        devid 9 size 3.64TiB used 1.86TiB path /dev/sdl
        devid 10 size 3.64TiB used 1.86TiB path /dev/sdm
        devid 11 size 3.64TiB used 1.86TiB path /dev/sdk
        devid 12 size 3.64TiB used 1.86TiB path /dev/sdj

df -h on a server that has the storage share mounted;

Filesystem Size Used Avail Use% Mounted on
192.168.1.201:/export/storageshare 44T 19T 22T 47% /storage
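For context, a back-of-envelope check of those df figures (an editorial illustration, assuming equal-sized drives and ignoring metadata and system chunks, so real numbers will differ slightly):

```python
# In a btrfs raid6 pool, df over NFS reports the raw pool size, but two
# drives' worth of raw capacity goes to parity and is unavailable for data.

def raid6_usable_tib(num_drives, drive_tib):
    # raid6 keeps two parity stripes per full stripe, so roughly
    # (num_drives - 2) drives' worth of capacity holds actual data.
    return (num_drives - 2) * drive_tib

raw_tib = 12 * 3.64                        # matches df's ~44T "Size" column
usable_tib = raid6_usable_tib(12, 3.64)    # what is actually available for data
print(f"raw: {raw_tib:.2f} TiB, usable for data: {usable_tib:.2f} TiB")
```

This gap between the raw size df reports and the real usable capacity is one reason the various views disagree.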

Screenshot of disks page, pools page, shares page and total capacity widget: http://imgur.com/a/jkeC7

Most of the time mine's completely inaccurate; I'd put it down to either RAID5/6 or snapshots.

I’m running RAID 1 and I have the same issue. Rockstor Dashboard shows zero usage on a share that has over 1TB of data in it. I don’t know what the problem is. It makes it difficult for me to tell when to add new drives to the array, I have to really keep track. I’m running the most current build, not using stable updates yet.

Some of the problems in this thread are unfortunately inherent to the use of btrfs. For example, output from the system commands df and du will generally differ from that of the btrfs-progs equivalents, btrfs filesystem df/du. The reason is that filesystem metadata, RAID levels and subvolumes all make the concept of "free space" much more complicated, also depending on whether you look at the whole filesystem or an individual share. The btrfs developers themselves explain this better in their FAQ section.
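One concrete source of disagreement is extent sharing between a share and its snapshots. A toy illustration with made-up numbers (not real btrfs accounting, just the counting principle):

```python
# Hypothetical extents referenced by a share and by one of its snapshots.
# Extents "a" and "b" are shared: stored once, but reachable from both trees.
share_extents    = {"a": 4096, "b": 4096, "c": 4096}
snapshot_extents = {"a": 4096, "b": 4096, "d": 4096}

# A du-style walk visits both trees and counts shared extents twice.
du_total = sum(share_extents.values()) + sum(snapshot_extents.values())

# The filesystem itself stores each distinct extent only once.
unique_extents = {**share_extents, **snapshot_extents}
fs_total = sum(unique_extents.values())

print(du_total)  # 24576: shared extents counted twice by the tree walk
print(fs_total)  # 16384: only four distinct extents actually on disk
```

So neither figure is "wrong"; they simply answer different questions about the same data.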

There is another issue with “zero usage” showing after making changes to a btrfs pool; this particular one can be resolved by performing btrfs quota rescan /path/to/pool.

At the moment, the usage reported for pools should be the most accurate. I've started an attempt to bring a bit more sense to the share usage as well, with some changes to the shares list view in the most recent (3.8-14.10) version. Notably though, I haven't touched the dashboard view yet. I'm keeping a list of current problems in this github issue:

Feel free to comment or contribute there as well.


Good enough! As long as there is somewhere I can find information about my space remaining, I’ll be able to get by. Share data doesn’t matter that much because there is no share size enforcement anyhow, so all I need to know is when to add more drives.

Is there any way to get the correctly reported size from the Pool page to show in the dashboard? That would be super helpful, even if it is only a temporary solution.


That’s a good idea and shouldn’t be too hard, I’ll look into it.

Actually, it should be possible to implement finer-grained size enforcement by making each share a subvolume and using btrfs's quota groups, but that would require some more work.
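A hedged sketch of that qgroup idea (paths and the size limit are examples, and it assumes shares map to btrfs subvolumes; the commands are only assembled here, not executed):

```python
# Per-share size enforcement via btrfs quota groups: enable quotas on the
# pool once, then set a limit on each share's subvolume. Level-0 qgroups
# are created automatically for subvolumes when quotas are enabled.

def quota_enable_cmd(pool_mount):
    # e.g. pool_mount = "/mnt2/time_machine_pool"
    return ["btrfs", "quota", "enable", pool_mount]

def qgroup_limit_cmd(limit, subvol_path):
    # e.g. limit = "100G" caps the subvolume's referenced space at 100 GiB
    return ["btrfs", "qgroup", "limit", limit, subvol_path]

print(quota_enable_cmd("/mnt2/time_machine_pool"))
print(qgroup_limit_cmd("100G", "/mnt2/time_machine_pool/Backups"))
```

In a real implementation these argument lists would be handed to something like subprocess.run; the extra work mentioned above would be wiring the limits into the share UI and handling the over-quota errors.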

Fantastic! Thanks. Please check in if that change works its way into a future patch.

I’ve submitted a pull request with the new dashboard widget:

If you want to test the changes even before they get accepted, you can download the two files to the proper location on your system and then run:

# cd /opt/rockstor
# bin/buildout install collectstatic

That’s awesome! Thanks so much. This is a simple, but very helpful change until BTRFS is 100% functional on the reporting side of things.

Hi @sfranzen,
had a quick test on your pool widget and it seems ok, but here is my question:

What do you think about merging the pool & share widgets together? IMHO it could be better.

Flyer/Mirko

Thanks for the zero usage fix.

This has made my dashboard show real / realistic values again :slight_smile:

Hey Mirko,

Well, I think several parts of the dashboard could do with some touch-ups; I just copy/pasted this thing together quickly to provide at least something. I’m currently looking at a suggestion from another user to do the dashboard in Patternfly. If that’s going to work, all widgets will have to be rewritten anyway.

Hi @sfranzen, can I assume you won't move to Patternfly in the short term?
I think it could be quite easy (Patternfly is built on Bootstrap, which is already running in Rockstor), but meanwhile I'm working on Dashboard widgets and this

The current dashboard crashes are caused by the D3.js library (not the library itself, but our use of it with 1-second dynamic data updates on graphs), so I'm running some tests on moving from d3.js SVG to chart.js canvas (nice post about SVG vs canvas & performance).

To check it, try this: https://jsfiddle.net/u5aanta8/29/ (a little jsfiddle I made to emulate Rockstor graphs; you can leave it open a long time without performance issues :slight_smile: )

Flyer


@sfranzen Just chipping in here:

There is also an outstanding dashboard patch / enhancement by way of this pr:


Not sure if it's really relevant if re-writes are planned, but popping in here just in case.
I submitted it to partially repair some aesthetics I broke in a previous pr; there's some discussion in there as well.
The approach there is still a little fragile, especially with long serial numbers, so it might be best if that whole widget were re-thought anyway; i.e. after the patch it only shows the top 3 or 5 entries, depending on the current widget size.

@Flyer Name-dropping you as I know you are already aware of this pr and, as you indicated, are working in this area.

I know that the dashboard 'as is' takes a larger amount of client CPU than I would have thought necessary, but I'm pretty green on js / web programming, so that might be a red herring.