I have a share that shows the following in the web UI:
Space free - 851.15 GB
Space used - 684.85 GB
Size 1.50 TB
But when I try to write any more files I get a “Disk quota exceeded (122)” error… If I increase the size then it will let me write more to the share, but something is wrong… I have completely reinstalled Rockstor and the behavior is the same…
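For anyone else hitting this, the limit the share is actually running into can be inspected with btrfs's quota tooling. This is only a sketch: the mount point below is an example, and it assumes quotas are enabled on the pool (Rockstor enables them for its shares):

# Show the qgroups that affect the share, including their limits
# (-r / -e print the max referenced/exclusive sizes, -F includes the
#  qgroup of the given path itself; the path is illustrative)
btrfs qgroup show -reF /mnt2/mypool/myshare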
@jaredrsowers Thanks for clearly describing the behavior. I have observed the exact same behavior a few times, though I was not able to reproduce it on demand or consistently. Increasing the size of the Share does temporarily fix the problem. Some of you may have noticed that our forum was down for a bit earlier today and it was because of this problem.
It’s been a couple of kernel versions since I tried to reproduce this on demand. So, I’ll try again with the latest kernel tomorrow. On a related note, I saw that a few qgroup related patches were submitted recently. If they make it into 4.2, it would be nice.
Thanks for the quick reply. This is for a home NAS I just built, so I can simply increase the size for now, but it would be nice to have it behave as expected. If you need anything else from me, let me know.
I will start my test server and will let you know the outcome.
Edit: yep, @jaredrsowers is using a RAID configuration; perhaps he can confirm which one, but judging by the space he is losing it looks like RAID 1. Data is written twice there, hence the loss of space. I also did some testing with multiple disks in a pool and no RAID, and the ‘loss’ is quite a bit smaller, but it still grows: the more you store, the more you lose. The filesystem needs space for metadata, and it looks like some space is reserved for checksums as well. All in all it seems to be correct behaviour for btrfs. Don’t forget that with any RAID config there is a (sometimes huge) overhead, because the data is written multiple times, and depending on the RAID configuration that can cost you a lot of space. Anyway, I have no more time to test further, but perhaps someone can add more test results with their RAID configuration.
Some more info: this is because RAID-1 stores two copies of every byte written to it, so to store the 14.65 GiB of data in this filesystem, the 15 GiB of data allocation actually takes up 30 GiB of space on the disks. Taking account of this, we find that the total allocation is (15.00 + 0.008 + 1.00) * 2, which is 32.02 GiB. The GlobalReserve can be ignored in this calculation.
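If you want to see the same breakdown on your own pool, the per-profile allocation can be inspected directly; the mount point here is only an example:

# Per-profile allocation (Data/Metadata/System and their RAID levels)
btrfs filesystem df /mnt2/mypool

# Raw usage per device plus an estimated free figure that takes the
# RAID profile into account (needs a reasonably recent btrfs-progs)
btrfs filesystem usage /mnt2/mypool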
Yes, I’m using RAID1, and that is what I suspected…
From the article:
Why is free space so complicated?
You might think, “My whole disk is RAID-1, so why can’t you just divide everything by 2 and give me a sensible value in df?”.
If everything is RAID-1 (or RAID-0, or in general all the same RAID level), then yes, we could give a sane and consistent value from df. However, we have plans to allow per-subvolume and per-file RAID levels. In this case, it becomes impossible to give a sensible estimate as to how much space there is left.
For example, if you have one subvolume as “single”, and one as RAID-1, then the first subvolume will consume raw storage at the rate of one byte for each byte of data written. The second subvolume will take two bytes of raw data for each byte of data written. So, if we have 30GiB of raw space available, we could store 30GiB of data on the first subvolume, or 15GiB of data on the second, and there is no way of knowing which it will be until the user writes that data.
So, in general, it is impossible to give an accurate estimate of the amount of free space on any btrfs filesystem. Yes, this sucks. If you have a really good idea for how to make it simple for users to understand how much space they’ve got left, please do let us know, but also please be aware that the finest minds in btrfs development have been thinking about this problem for at least a couple of years, and we haven’t found a simple solution yet.
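As a quick back-of-the-envelope illustration of that last example, assuming 30 GiB of raw space and ignoring metadata and the GlobalReserve entirely:

# Rough usable-data estimate per profile for the same raw pool
# (illustrative only; real btrfs also allocates metadata chunks)
raw_gib=30
echo "single: up to ${raw_gib} GiB of data"        # 1 raw byte per data byte
echo "raid1 : up to $((raw_gib / 2)) GiB of data"  # 2 raw bytes per data byte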
@jaredrsowers, yeah, the same would happen with a hardware RAID; the only difference is that the RAID controller shows the available space after the array is initialised, whereas with Rockstor and a RAID pool you need to do your own calculation depending on the RAID level (for RAID 1, assume twice the space you really want). Hence I still prefer my hardware RAID, presented to the system as a single disk. But I can imagine that if you have 5 different hard disks it is nice to create just one big pool and let btrfs do the work. That’s just not what I want.
Thank you @jaredrsowers and @TheRavenKing for all your input. I’ve tested and found that while we won’t be able to show the exact usage at this time (for all the reasons explained on the btrfs wiki mentioned earlier in the thread), we can improve the reporting logic in Rockstor. Here’s the issue for it. It’s labeled critical, so we’ll get to it very soon.
There’s no one simple answer to this. It depends on the specifics of your situation. One obvious check I’d recommend is systemctl status -l rockstor-bootstrap. If that shows running with no errors, you can try restarting it with systemctl restart rockstor-bootstrap. That will adjust qgroups if needed (90% of the time this is not necessary, but it doesn’t hurt).
Beyond that, you’d have to troubleshoot a bit deeper with btrfs.
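Putting those checks together in one place (the service name is from the post above; the pool path is just an example):

# Check whether the bootstrap service is healthy
systemctl status -l rockstor-bootstrap

# Restart it so it can re-apply the qgroup limits for the shares
systemctl restart rockstor-bootstrap

# Optionally inspect the resulting qgroup limits on the pool
btrfs qgroup show -re /mnt2/mypool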
Thanks suman. After a reboot things work again, even though systemctl reports this:
● rockstor-bootstrap.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)