Something's broken the mounts?

Not sure what, maybe an update gone bad, but I've noticed all my shares are showing as empty in Rockstor.

Looks like /mnt2/sharename hasn't mounted for some reason.

Thankfully the pool itself looks OK, e.g. /mnt2/poolname/sharename has the expected data.

Any ideas?

Ah, it looks like disabling quotas (which really speeds up a rebalance) breaks Rockstor:

[24/Oct/2016 13:24:54] ERROR [storageadmin.middleware:36] Error running a command. cmd = ['/sbin/btrfs', 'qgroup', 'show', '/mnt2/Pool1']. rc = 1. stdout = ['']. stderr = ["ERROR: can't perform the search - No such file or directory", "ERROR: can't list qgroups: No such file or directory", '']
Traceback (most recent call last):
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/core/handlers/", line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/views/decorators/", line 57, in wrapped_view
    return view_func(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/views/generic/", line 69, in view
    return self.dispatch(request, *args, **kwargs)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/", line 452, in dispatch
    response = self.handle_exception(exc)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/", line 449, in dispatch
    response = handler(request, *args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/", line 371, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 263, in post
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 144, in import_snapshots
    rusage, eusage = share_usage(share.pool, snaps_d[s][0])
  File "/opt/rockstor/src/rockstor/fs/", line 635, in share_usage
    out, err, rc = run_command(cmd, log=True)
  File "/opt/rockstor/src/rockstor/system/", line 98, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = ['/sbin/btrfs', 'qgroup', 'show', '/mnt2/Pool1']. rc = 1. stdout = ['']. stderr = ["ERROR: can't perform the search - No such file or directory", "ERROR: can't list qgroups: No such file or directory", '']
[24/Oct/2016 13:24:54] ERROR [smart_manager.data_collector:620] Failed to update snapshot state... exception: ['Internal Server Error: No JSON object could be decoded']
[root@BN-NAS1 log]#

Edit: Yep, re-enabling quotas fixes it.
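For anyone hitting the same error: the qgroup failure in the log above comes from quotas having been disabled on the pool, and they can be turned back on from the command line. Run as root; /mnt2/Pool1 matches the pool in the log, substitute your own pool path.

```shell
# With quotas disabled, `btrfs qgroup show` fails with
# "can't list qgroups" and Rockstor's share accounting breaks.
# Re-enable quotas on the mounted pool:
btrfs quota enable /mnt2/Pool1

# Verify that qgroups are listed again (this is the exact command
# Rockstor was running when it failed):
btrfs qgroup show /mnt2/Pool1
```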


Looks like the update to 3.8.15-0 has broken something else, as the shares have gone missing again.

Edit: Weird. I enabled quotas in case they'd somehow been disabled, then restarted the Rockstor service, and nothing... rebooted the entire VM and the shares are back.


We really need to look at the current status of the quota code and improve things in the next cycle. There are improvements reported in the 4.8 kernel (should be out in testing soon, via elrepo), and I see a new patch even as of last week, so I'm not sure what action we can take yet. But I plan to test the behaviour on 4.8 and encourage others (certainly @phillxnet @Flyer @sfranzen :slight_smile: ) to do so as well.

Where do I re-enable quotas?

btrfs quota enable /mnt2/PoolName

You will probably find the pool itself is the only thing still mounted.

Could we enable over-provisioning on quotas? Right now on one server at work I've ended up with a setup where I created the pool with one share and filled it to 50%, then went on to create another share... I could not assign a quota higher than the current free space, which to my mind is a bit of a risky setup. For now everything works OK, but when the first share is emptied, the second share grows to 80% of the pool size, and quotas are reintroduced, this server might "experience unexpected critical behaviour" (or colloquially, blow up).
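As a sketch of what over-provisioning would mean here (the pool path, share name, and sizes below are hypothetical examples, not taken from this thread): Rockstor enforces share sizes via btrfs qgroup limits on subvolumes, and the request is to allow a limit larger than the pool's current free space.

```shell
# Hypothetical layout: a 1 TB pool already 50% full, where we'd
# like a second share allowed to grow to 800 GiB even though only
# ~500 GiB is currently free:
btrfs qgroup limit 800G /mnt2/Pool1/share2

# A qgroup limit only caps the referenced usage of that subvolume;
# nothing stops the pool itself filling up first, which is exactly
# the risk described above.
```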

@suman Also, can we have Rockstor deal with quotas being disabled entirely?

On my machine (a couple of cores of a C2750), having quotas enabled has a massive performance impact on any operation that involves moving/deleting a lot of data.

For instance, a rebalance with quotas on takes weeks, but with them off it takes a matter of hours.


Thanks @Dragon2611, this is very helpful. Let's see if the 4.8 kernel is better, and also what we can do about cleanly and easily disabling quotas if the user wants to.

@suman It's btrfs-transaction eating CPU cycles (probably recalculating quota usage); the problem is that it blocks other I/O while doing so, which leads to connections timing out.

It's probably less of a problem on beefier CPUs, but individual C2750 cores are quite weak (around 500 PassMark points) and the work appears to be single-threaded.

I think that's why my previous rebalance from raid5 to raid1 took weeks, and why my raid1 rebalance after adding a disk was taking days. If I recall, it was about 3 days for the first ~50%; after disabling quotas at the suggestion of someone here, the last 50% took under 8 hours. This was with a couple of TB of data present.
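The quota-off rebalance workflow described in this thread amounts to something like the following. Run as root; the pool path and raid profiles are examples matching the posts above, not a recipe to copy blindly.

```shell
# Turn quotas off so btrfs-transaction isn't constantly redoing
# qgroup accounting during the balance (note this breaks Rockstor's
# share display until quotas are re-enabled):
btrfs quota disable /mnt2/Pool1

# The raid5 -> raid1 conversion mentioned above, converting both
# data and metadata chunks:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/Pool1

# Re-enable quotas once the balance completes, so Rockstor's
# share accounting works again:
btrfs quota enable /mnt2/Pool1
```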