Syncthing or Transmission: no access to web interface

Nice find, thank you! I guess I haven't been using Syncthing heavily; my logs are only 9.6k, so I wonder why you have so much log data. Run "docker logs syncthing" and see if there is any useful information. You can also open that json.log file for more clues.

Here's the temporary workaround: echo "" > 94c3198928ec48c7007b27e9c2e44164fc45e30fee66c0e7975b261fee48ad4d-json.log. That will truncate the log file. You might want to turn the Rock-on off first, though it looks like it crashed anyway.
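For anyone wanting to try this safely first, here is a minimal sketch of the same truncation on a throwaway file (the mktemp file stands in for the container's real json log, which lives under /var/lib/docker/containers/<container-id>/):

```shell
# Stand-in for the container's json log file
# (the real one is under /var/lib/docker/containers/<container-id>/).
logfile=$(mktemp)
printf 'line1\nline2\n' > "$logfile"

# Truncate it in place. The file stays open for the writer (dockerd),
# so logging simply continues into the now-empty file.
: > "$logfile"

wc -c < "$logfile"   # 0 bytes remain
```

`: > file` is the shell-builtin equivalent of the echo "" redirection above, with the small difference that it leaves the file at exactly zero bytes.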

A more permanent fix is coming in the next update; see https://github.com/rockstor/rockstor-core/issues/723

I knew it was only a matter of time before users ran into this issue. First, let me give you the fix. Execute this:

export DJANGO_SETTINGS_MODULE="settings"; /opt/rockstor/bin/qgroup-clean

It will list all qgroups not in use and delete them, which could free up a lot of space depending on your scenario. That, plus truncating the json log file (from my previous comment), should fix your problems.
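If you would rather inspect before deleting, the manual equivalent can be sketched as below. The pool path /mnt2/mypool and the id lists are examples, not taken from any real system; only the comparison step at the bottom actually runs here:

```shell
# Manual equivalent of qgroup-clean (path and ids are examples):
#   btrfs subvolume list /mnt2/mypool        # ids of live subvolumes
#   btrfs qgroup show /mnt2/mypool           # all 0/<id> qgroups
#   btrfs qgroup destroy 0/1440 /mnt2/mypool # drop a stale one
#
# The comparison itself, demonstrated on captured id lists:
live_ids="1092 1433"                 # from 'btrfs subvolume list'
qgroup_ids="1092 1433 1440 1446"     # from 'btrfs qgroup show'
stale=$(for id in $qgroup_ids; do
  case " $live_ids " in *" $id "*) ;; *) echo "$id";; esac
done)
echo "$stale"    # 1440 and 1446 have no live subvolume
```

A level-0 qgroup with no matching subvolume id is exactly what qgroup-clean targets: leftover accounting from a deleted subvolume or snapshot that still pins quota.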

I have a proper fix in the works, but it will take some time. I may choose to wait for 4.2 kernel as there could be very useful qgroup fixes in it. Here’s the issue for your reference: https://github.com/rockstor/rockstor-core/issues/687

I tried this and it made no difference.

I then rebooted my Rockstor server and still same result.

If I upload logs, can you please help me diagnose this problem? I don't see why a 250GB subvolume should be receiving these errors when it has only ~6MB used.

What commands can I use to view the used quota amounts?

Could it be another volume that has the quota exceeded?

What logs can I upload in order to have you assist me to diagnose this issue?

Thanks

Chris

I just noticed when I click on my pool I get this error

@sirhcjw Happy to help, but I request that you provide as much data as you can in your comments. When you ran the qgroup-clean, what was the output? You can run it again and report back.

Also, did you truncate the json log file of syncthing container?

Yes, this is a known issue. Your pool is there, just a silly bug, will get fixed in the next update. https://github.com/rockstor/rockstor-core/issues/720

It listed a bunch of numbers, e.g. 1440, and said they were being deleted as not in use.

Yes I have been truncating the large log.

I prefer to cat /dev/null > logfilename.log
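The difference between the two is tiny but real: echo "" still writes a newline, so the file ends up at 1 byte, while cat /dev/null leaves it at exactly 0 bytes. A quick check on a throwaway file:

```shell
f=$(mktemp)

echo "" > "$f"          # writes a single newline
wc -c < "$f"            # 1

cat /dev/null > "$f"    # truly empty
wc -c < "$f"            # 0
```

Either works fine for this purpose; the writer keeps the file handle open, so logging continues into the emptied file in both cases.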

Here is the output of btrfs qgroup show run against the subvolume that is reporting quota exceeded.

[root@backup ~]# btrfs qgroup show /mnt2/chris-workstation
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid rfer excl
0/5 16.00KiB 16.00KiB
0/1092 525.52GiB 525.52GiB
0/1433 59.09GiB 59.09GiB
0/1434 16.00KiB 16.00KiB
0/1440 5.28MiB 5.28MiB
0/1446 119.30MiB 119.30MiB
0/1940 909.63MiB 909.63MiB
0/1941 63.88MiB 80.00KiB
0/1942 63.88MiB 80.00KiB
0/1943 83.80MiB 1.90MiB
0/1944 141.29MiB 1.40MiB
0/1945 178.88MiB 80.00KiB
0/1946 178.88MiB 80.00KiB
0/1947 197.11MiB 416.00KiB
0/1948 225.54MiB 80.00KiB
0/1949 225.54MiB 80.00KiB
0/1950 225.54MiB 80.00KiB
0/1951 225.54MiB 80.00KiB
0/1952 225.54MiB 80.00KiB
0/1953 225.54MiB 80.00KiB
0/1954 225.54MiB 80.00KiB
0/1955 225.54MiB 80.00KiB
0/1956 225.54MiB 80.00KiB
0/1957 225.54MiB 80.00KiB
0/1958 227.38MiB 192.00KiB
0/1959 227.41MiB 112.00KiB
0/1960 233.12MiB 80.00KiB
0/1961 233.12MiB 80.00KiB
0/1962 233.12MiB 80.00KiB
0/1963 233.12MiB 80.00KiB
0/1964 233.12MiB 80.00KiB
0/1965 233.12MiB 80.00KiB
0/1966 233.12MiB 80.00KiB
0/1971 233.12MiB 112.00KiB
0/1972 233.12MiB 496.00KiB
[root@backup ~]#
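Two follow-ups that may help interpret this. First, btrfs qgroup show accepts -r and -e flags, which add max_rfer/max_excl columns so you can see each qgroup's configured limit next to its usage (e.g. btrfs qgroup show -re /mnt2/chris-workstation). Second, the big consumers are easy to pick out of the output above; a sketch that keeps only rows whose exclusive usage is in the GiB range (run here against a few captured lines from the paste):

```shell
# Filter captured 'btrfs qgroup show' output down to the rows whose
# exclusive (excl, column 3) usage is measured in GiB:
big=$(awk '$3 ~ /GiB$/ {print $1, $3}' <<'EOF'
0/5 16.00KiB 16.00KiB
0/1092 525.52GiB 525.52GiB
0/1433 59.09GiB 59.09GiB
0/1440 5.28MiB 5.28MiB
EOF
)
echo "$big"    # 0/1092 and 0/1433 hold nearly all the exclusive data
```

Given the WARNING about inconsistent qgroup data, a rescan (btrfs quota rescan <mount>) is also worth running before trusting these numbers.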