Disk quota exceeded (122) with share at 45% usage

I have a share that shows the following in the web UI:

Space free - 851.15 GB
Space used - 684.85 GB
Size 1.50 TB

But when I try to write any more files I get a “Disk quota exceeded (122)” error… If I increase the size then it will let me write more to the share, but something is wrong… I have completely reinstalled Rockstor and the behavior is the same…

$ btrfs fi show /mnt2/old/
Label: 'mirrored_pool'  uuid: 96f6a581-fa4d-47c2-8fc1-a4ff60b7a46b
        Total devices 2 FS bytes used 685.76GiB
        devid    1 size 3.64TiB used 688.03GiB path /dev/sda
        devid    2 size 3.64TiB used 688.01GiB path /dev/sdb


$ btrfs fi df /mnt2/old/
Data, RAID1: total=686.00GiB, used=684.85GiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=8.00MiB, used=112.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=2.00GiB, used=939.19MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=320.00MiB, used=0.00B


$ btrfs qgroup show -pcre /mnt2/old/
qgroupid         rfer         excl     max_rfer     max_excl parent  child
--------         ----         ----     --------     -------- ------  -----
0/5          16.00KiB     16.00KiB         none         none ---     ---
0/258        16.00KiB     16.00KiB         none         none 2015/1  ---
0/259       684.85GiB    684.85GiB         none         none 2015/2  ---
2015/1       16.00KiB     16.00KiB      1.00TiB         none ---     0/258
2015/2      684.85GiB    684.85GiB      1.50TiB         none ---     0/259
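
What works around it for me, beyond the web UI, seems to be raising the limit on the qgroup by hand (which, as far as I can tell, is all that increasing the Share size does anyway). This is only a sketch, assuming 2015/2 in the output above is the qgroup backing this Share, and Rockstor may re-apply its own limit later:

$ btrfs qgroup limit 2T 2015/2 /mnt2/old/       # raise the referenced-size cap to 2 TiB
$ btrfs qgroup limit none 2015/2 /mnt2/old/     # or clear the cap entirely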

@jaredrsowers Thanks for clearly describing the behavior. I have observed the exact same behavior a few times, though I was not able to reproduce it on demand or consistently. Increasing the size of the Share does temporarily fix the problem. Some of you may have noticed that our forum was down for a bit earlier today and it was because of this problem.

It’s been a couple of kernel versions since I tried to reproduce this on demand. So, I’ll try again with the latest kernel tomorrow. On a related note, I saw that a few qgroup related patches were submitted recently. If they make it into 4.2, it would be nice.

Thanks for the quick reply. This is just for my home NAS that I just built, and I can just increase the size for now, but it would be nice to have it behave as expected. If you need any more from me, let me know.

@suman @jaredrsowers

I will start my test server and will let you know the outcome.

Edit: yep, @jaredrsowers is using a RAID configuration. Perhaps he can confirm which one, but judging by his output it looks like RAID 1. With RAID 1 the data is written twice, hence the loss of space. I also did some testing with multiple disks in a pool and no RAID, and then the 'loss' is quite a bit smaller, but it still grows: the more you store, the more you lose. Space is also needed for metadata, and it looks like some is reserved for checksums. All in all this seems to be correct behaviour for the btrfs filesystem. Don't forget that with any RAID config there is a (sometimes huge) overhead, because the data is written multiple times; depending on the RAID level it can cost you a lot of space. Anyway, I have no more time to test any further, but perhaps someone can add more test results with their RAID configuration (a rough recipe for repeating this kind of test is sketched after the numbers below).

2GB -> 1.72GB
5GB -> 3.48GB
10GB -> 8.08GB
20GB -> 17.6GB
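
For anyone who wants to repeat this kind of measurement without spare disks, here is a minimal sketch using loop devices (the file and mount paths are just examples, adjust to taste):

$ truncate -s 10G /tmp/d1.img /tmp/d2.img
$ DEV1=$(losetup -f --show /tmp/d1.img)
$ DEV2=$(losetup -f --show /tmp/d2.img)
$ mkfs.btrfs -d raid1 -m raid1 $DEV1 $DEV2
$ mkdir -p /mnt/test && mount $DEV1 /mnt/test
$ dd if=/dev/zero of=/mnt/test/fill bs=1M       # keeps writing until ENOSPC
$ btrfs fi df /mnt/test                         # compare usable data against the 2 x 10G raw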

Some more info: This is because RAID-1 stores two copies of every byte written to it, so to store the 14.65 GiB of data in this filesystem, the 15 GiB of data allocation actually takes up 30 GiB of space on the disks. Taking account of this, we find that the total allocation is: (15.00 + 0.008 + 1.00) * 2, which is 32.02 GiB. The GlobalReserve can be ignored in this calculation.
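
For reference, the same arithmetic lines up with the output earlier in this thread:

(686.00 GiB data + 2.00 GiB metadata + ~0.01 GiB system) x 2 ≈ 1376 GiB of raw allocation,
i.e. roughly 688 GiB used on each of the two devices, which is what btrfs fi show reports,
even though the share itself only holds 684.85 GiB of data.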

Found this article which explains it a bit better than I do.


Yes, I’m using RAID1, and that is what I suspected…

From the article:

Why is free space so complicated?

You might think, “My whole disk is RAID-1, so why can’t you just divide everything by 2 and give me a sensible value in df?”.
If everything is RAID-1 (or RAID-0, or in general all the same RAID level), then yes, we could give a sane and consistent value from df. However, we have plans to allow per-subvolume and per-file RAID levels. In this case, it becomes impossible to give a sensible estimate as to how much space there is left.
For example, if you have one subvolume as “single”, and one as RAID-1, then the first subvolume will consume raw storage at the rate of one byte for each byte of data written. The second subvolume will take two bytes of raw data for each byte of data written. So, if we have 30GiB of raw space available, we could store 30GiB of data on the first subvolume, or 15GiB of data on the second, and there is no way of knowing which it will be until the user writes that data.
So, in general, it is impossible to give an accurate estimate of the amount of free space on any btrfs filesystem. Yes, this sucks. If you have a really good idea for how to make it simple for users to understand how much space they’ve got left, please do let us know, but also please be aware that the finest minds in btrfs development have been thinking about this problem for at least a couple of years, and we haven’t found a simple solution yet.
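
As a small aside (not from the article, and only if your btrfs-progs is recent enough to have it), btrfs filesystem usage prints a per-profile breakdown and an estimated free space figure, which can make the RAID overhead a little easier to see:

$ btrfs fi usage /mnt2/old/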


@jaredrsowers, yeah, the same would happen with a hardware RAID; the only difference is that the RAID controller shows the available space after the array is initialised, whereas with a btrfs RAID in Rockstor you need to do your own calculation depending on the RAID level (for RAID 1, I assume, twice the space you really want). Hence I prefer my hardware RAID, presented as a single disk. But I can imagine that if you have 5 different hard disks it is nice to just create one big pool and let btrfs do the work. Obviously that's not what I want, though. :wink:

Thank you @jaredrsowers and @TheRavenKing for all your input. I've tested and found that while we won't be able to show the exact usage at this time (for all the reasons explained on the btrfs wiki mentioned earlier in the thread), we can improve the reporting logic in Rockstor. Here's the issue for it. It's labeled critical, so we'll get to it very soon.

I’m having the same issue, “disk quota exceeded”, and I can’t upload to the NAS btrfs partition. What can I do until you guys fix it?

There’s no one simple answer to this; it depends on the specifics of your situation. One obvious check I’d recommend is systemctl status -l rockstor-bootstrap. If that shows it running with no errors, you can try restarting it with systemctl restart rockstor-bootstrap. That will adjust qgroups if needed (90% of the time this is not necessary, but it doesn’t hurt).

Beyond that, you’d have to troubleshoot a bit deeper with btrfs.
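
If it comes to that, a couple of checks worth trying (just a sketch, not a guaranteed fix; substitute your pool's mount point under /mnt2 for <your-pool>):

$ btrfs qgroup show -pcre /mnt2/<your-pool>/    # compare rfer against max_rfer for the share's qgroup
$ btrfs quota rescan /mnt2/<your-pool>/         # ask btrfs to recompute the quota accounting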

Thanks @suman. After a reboot things work again, even though systemctl reports this:
● rockstor-bootstrap.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)