Repeated qgroup error in Rockstor Log

Although I’m not having any issues with my Rockstor 4, I decided to play admin and have a quick nose in the logs this morning while waiting for my toaster to do its thing.
I came across this repeated ‘error’:

[08/Dec/2020 08:01:44] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/sbin/btrfs', 'qgroup', 'show', '/mnt2/ROOT/rockon/btrfs/subvolumes/cc4846ad408f378b2311d854fca88fcfc6b05547c1b20cca2515cf7fe00ca230']. output: [''] error: ["ERROR: can't list qgroups: quotas not enabled", '']

This error is logged repeatedly, every few seconds.
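For context, the 'error' above is just btrfs refusing to list qgroups while quotas are off, not a real failure. A minimal sketch of how a caller might tell this expected condition apart from a genuine qgroup error (the function name and marker string are my assumptions based on the log line above, not Rockstor's actual code):

```python
# Sketch: distinguish the expected "quotas not enabled" response from a
# genuine qgroup failure. Names here are illustrative, not Rockstor's code.

QUOTAS_DISABLED_MARKER = "ERROR: can't list qgroups: quotas not enabled"


def quotas_disabled(return_code: int, stderr_lines: list[str]) -> bool:
    """True when a non-zero `btrfs qgroup show` exit simply means
    quotas are disabled on the pool, rather than a real error."""
    return return_code != 0 and any(
        QUOTAS_DISABLED_MARKER in line for line in stderr_lines
    )


# With the exact output from the log entry above:
err = ["ERROR: can't list qgroups: quotas not enabled", ""]
print(quotas_disabled(1, err))  # True: expected, not worth an ERROR log
```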

Do I need to be concerned about this? Should I spend time troubleshooting, or not worry?

EDIT: So, looking through the log, it is basically reporting this ‘error’ for everything under the ‘rockon’ share on the ‘ROOT’ pool, for which quotas are disabled.
It doesn’t report the same for the ‘home’ share on the same pool though.
The Rock-ons service is disabled, after I experimented with a few of them.



Hello again, @GeoffA ,
I unfortunately do not have time to go into details at the moment, but wanted to briefly answer your questions below:

No need to worry, as this is expected behavior when quotas are disabled. You see it repeatedly in the logs because this share scan is part of the procedure that fetches share sizes (I believe), which runs at regular intervals.

Sorry for the very brief and rushed answer, but know that there’s nothing to worry about here.


Thanks @Flox I appreciate your response. I shall not worry further about it :slight_smile:

That’s my slight programming OCD from the old days I guess: Errors in my logs? NEVER!




Yeah, I understand and agree… There might be room for improvement here to limit this kind of thing. Maybe we shouldn’t throw an error like that when qgroups can’t be scanned because quotas are disabled. As long as we still catch and report genuine quota scan errors if/when they happen, reducing this kind of noise can only help.


@GeoffA and @Flox Re:

Yes, me too. This one is down to me I’m afraid. Way back, Rockstor considered quotas disabled a show stopper. In the interim we have become compliant/functional with that state, but haven’t quite moved away from logging wildly, and it’s probably time we moved this log entry to a debug-only one. It can still be useful, but spamming the logs with errors is not a good show really. Sorry folks, all in good time; I’m still not entirely happy with our quotas-disabled capabilities, mainly in how we re-create our quota groups when quotas are re-enabled. So such messages still have a place, but yes, they are probably due to be moved to debug.
@GeoffA Thanks for yet another prompt. If you fancy making a GitHub issue for this it may well catch someone’s eye, and given it’s a cosmetic nice-to-have, bar the excessive log writes, it could be a nice beginner patch, i.e. move such messages from error to debug.
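The suggested patch could look something like the following sketch: log the quotas-disabled case at debug level and keep genuine failures at error level. The function and message formats here are illustrative assumptions, not Rockstor's actual code:

```python
# Illustrative sketch of the proposed change: demote the expected
# "quotas not enabled" case to debug, keep real failures at error.
import logging

logger = logging.getLogger("system.osi")


def log_qgroup_result(cmd, rc, out, err):
    """Hypothetical logging helper; not Rockstor's actual function."""
    quotas_off = any("quotas not enabled" in line for line in err)
    if rc != 0 and quotas_off:
        # Expected whenever quotas are disabled: debug only.
        logger.debug("quotas disabled, skipping qgroup scan for %s", cmd)
    elif rc != 0:
        # Anything else is still a genuine error worth surfacing.
        logger.error(
            "non-zero code(%d) returned by command: %s. output: %s error: %s",
            rc, cmd, out, err,
        )
```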

I’d also really like a Web-UI way to initiate debug message capability but that would of course be another issue :).

Exactly. And missing them because they are hidden behind debug mode makes for slower discovery. Hence the earlier mention of wanting to enable debug logging from the Web-UI. So many things, but all in good time hopefully.




Although I’m not quite sure what made me think ROOT quotas are disabled by default :slight_smile:

Yes, all I could think of was the docker-ce thing. Also, the default snapper system depends on quotas to do its thing, and we run with an only slightly modified setting of this boot-to-snapshot setup. See:

for how we tweak the snapper config to hopefully require less space than the default. openSUSE had quite some comeback on root pools running out of space, hence dropping the defaults just a tad. We should keep an eye on this however, and there may well be ramifications of having no quotas enabled on the system pool as a result. I’m not that up on snapper behaviour with quotas disabled, so hopefully folks will chip in as things are discovered/uncovered.
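For anyone curious about the snapper tie-in mentioned above: snapper's space-aware snapshot cleanup leans on btrfs qgroups, configured via keys like these in `/etc/snapper/configs/root` (the values shown are illustrative upstream-style defaults, not Rockstor's actual tweak):

```
# Illustrative snapper config keys (values are NOT Rockstor's settings):
QGROUP="1/0"          # qgroup used for space-aware cleanup; unset when quotas are off
SPACE_LIMIT="0.5"     # max fraction of the filesystem snapshots may occupy
NUMBER_LIMIT="2-10"   # min-max number of number-cleanup snapshots to keep
```

With quotas disabled, the `QGROUP`-based space accounting has nothing to work from, which is one possible ramification of the quotas-off system pool discussed here.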




I’ve now prepared a pull request to address the excessive quotas-disabled logging discussed in this thread and the GitHub issue you opened. Post review, we should be able to pop this one in, as it simply removes the explicit logging of the original btrfs ‘error’ messages, which it turns out we already mostly catch anyway. At least that is my impression having just looked at it.



@phillxnet sounds good to me. I realise it was just a minor issue, but good to get it nailed nevertheless. Nice one :+1: