Syncthing or Transmission web interface not accessible

@sirhcjw I see that SickBeard is on our list. It is going to get easier and easier to add more Rock-ons soon. We’d love for you to contribute apps that you like. For now though, I suggest you play with docker separately from Rockstor. We plan to add documentation soon about how to add new Rock-ons.

Here’s the update I’ve been waiting to post. With the 3.8-2 update, the Rock-on system is significantly improved. @sirhcjw please uninstall your inaccessible Syncthing and Transmission Rock-ons. Then click the update button to pull new metadata. Then install your Rock-ons again. Please read the excellent documentation contributed by @phillxnet:

http://rockstor.com/docs/rockons.html

Can someone point me to a manual Rock-on cleanup procedure? I have Syncthing in a broken state. Even though I have blown away the shares for Rock-ons and the Syncthing config, when I re-enable Rock-ons it thinks Syncthing is still installed and sits there indefinitely trying to start it.

Can you provide a screenshot?

[root@backup ~]# ls -al /mnt2/rockons/
total 12
drwxr-xr-x 1 root root 138 Jul 8 19:34 .
drwxr-xr-x 1 root root 186 Jul 8 22:15 ..
drwx------ 1 root root 0 Jul 8 19:34 btrfs
drwx------ 1 root root 0 Jul 8 19:34 containers
drwx------ 1 root root 0 Jul 8 19:34 graph
drwx------ 1 root root 32 Jul 8 19:34 init
-rw-r--r-- 1 root root 5120 Jul 8 19:34 linkgraph.db
-rw------- 1 root root 19 Jul 8 19:34 repositories-btrfs
drwx------ 1 root root 0 Jul 8 19:34 tmp
drwx------ 1 root root 0 Jul 8 19:34 trust
drwx------ 1 root root 0 Jul 8 19:34 volumes

[root@backup ~]# ls -al /mnt2/rockons/containers/
total 0
drwx------ 1 root root 0 Jul 8 19:34 .
drwxr-xr-x 1 root root 138 Jul 8 19:34 ..
[root@backup ~]#

There must be something in the database.

I had an HDD problem which turned out to be a faulty cable, and I think something got corrupted with Syncthing.

I could not remove Syncthing, so I blew away all the Rock-on and Syncthing shares and then recreated them to remove all the config, but the problem persists. I guess worst case I could do a fresh install and import the pool, but that would be a last resort in my opinion.

Try running this script: https://gist.github.com/schakrava/f1d313e3552215eb810c

It will remove all your Rock-on metadata. Then click the update button and Syncthing should show up in the “All” tab.
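If it helps, here is a quick way to pull the script down and look it over before running it as root (the /raw URL suffix and the local filename are my own choices, not from the gist page):

wget https://gist.github.com/schakrava/f1d313e3552215eb810c/raw -O rockon-clean.py
less rockon-clean.py

Always worth a read before running anything as root.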

If you have manually altered the Rock-on root share, I suggest you start with a new one. Please read the documentation to help clarify things. http://rockstor.com/docs/docker-based-rock-ons/overview.html

Let me know.

Thanks that fixed it.

I switched off Rock-ons
changed the default Rock-on share to home
deleted the share I was using for Rock-ons
ran your script
recreated the Rock-on share
changed the default Rock-on share to the new share I had just created
switched on Rock-ons and it was fixed.

Thanks again.

Cool. Be sure to share your Syncthing experience, tips, and expertise with others using the Syncthing Rock-on.

Bad news: now Syncthing has crashed.

Syncthing

Continuous File Synchronization

Current status: exitcode: 1 error:

So the log file for the Syncthing Docker container is 7.8GB.

[root@backup 94c3198928ec48c7007b27e9c2e44164fc45e30fee66c0e7975b261fee48ad4d]# ls -lah
total 7.8G
drwx------ 1 root root 278 Jul 9 17:20 .
drwx------ 1 root root 128 Jul 9 13:46 ..
-rw------- 1 root root 7.8G Jul 9 15:11 94c3198928ec48c7007b27e9c2e44164fc45e30fee66c0e7975b261fee48ad4d-json.log
-rw-r--r-- 1 root root 3.2K Jul 9 15:11 config.json
-rw-r--r-- 1 root root 930 Jul 9 15:11 hostconfig.json
-rw-r--r-- 1 root root 13 Jul 9 13:46 hostname
-rw-r--r-- 1 root root 174 Jul 9 13:46 hosts
-rw-r--r-- 1 root root 71 Jul 9 13:46 resolv.conf
-rw------- 1 root root 71 Jul 9 13:46 resolv.conf.hash

Not sure if that is causing my issues or not?

OK, so I have worked out that the logging for the Syncthing Rock-on is extremely verbose.

It is generating about 160MB of logs every minute.
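At that rate it works out to nearly 10GB of logs an hour.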

So it crashed because I only had a 10GB share for it.

Is it possible to fix this? Anyone who tries to use it will have it work for a while and then crash like mine did.

And now the other problem: even though I have mapped a 200GB share to Syncthing to hold the files I am trying to sync, all I get is “: disk quota exceeded” for every file it tries to create.

Nice find, thank you! I guess I haven’t been using Syncthing heavily. My logs are only 9.6k. I wonder why you have so much log data. Do a “docker logs syncthing” and see if there is any useful information. You can also open that json.log file for more clues.
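With a log that size, something like docker logs --tail 100 syncthing is much more manageable than dumping the whole thing; --tail limits output to the last N lines.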

Here’s the temporary workaround: echo "" > 94c3198928ec48c7007b27e9c2e44164fc45e30fee66c0e7975b261fee48ad4d-json.log. That will truncate the log file. You might want to turn the Rock-on off first, but it looks like it crashed anyway.
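Truncating in place (rather than deleting) matters here: Docker keeps the log file open, so an rm would not actually free the space until the container is removed. truncate -s 0 on that file does the same job as the echo.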

A more permanent fix is coming in the next update. See https://github.com/rockstor/rockstor-core/issues/723
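For reference, and not necessarily what that fix will look like: newer Docker releases can cap the json-file log driver per container, along the lines of docker run --log-opt max-size=10m --log-opt max-file=3 ..., which rotates the log instead of letting it grow without bound.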

I knew it was about time users ran into this issue. First, let me give out the fix. Execute this script:

export DJANGO_SETTINGS_MODULE="settings"; /opt/rockstor/bin/qgroup-clean

It will list all qgroups not in use and delete them. It could free up a LOT of space depending on your scenario. I think that + truncating the json log file (from my previous comment) should fix your problems.
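For the curious, the manual equivalent is to list the qgroups and destroy the stale ones by hand, roughly like this (the pool mount point is a placeholder and 0/1440 is just an example ID; check yours with the show command first):

btrfs qgroup show /mnt2/&lt;pool&gt;
btrfs qgroup destroy 0/1440 /mnt2/&lt;pool&gt;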

I have a proper fix in the works, but it will take some time. I may choose to wait for the 4.2 kernel, as there could be very useful qgroup fixes in it. Here’s the issue for your reference: https://github.com/rockstor/rockstor-core/issues/687

I tried this and it made no difference.

I then rebooted my Rockstor server and still same result.

If I upload logs, can you please help me diagnose this problem? I don't see why a 250GB subvolume should be receiving these errors when it has ~6MB used.

What commands can I use to view the used quota amounts?

Could it be another volume that has the quota exceeded?

What logs can I upload so you can help me diagnose this issue?

Thanks

Chris

I just noticed when I click on my pool I get this error

@sirhcjw Happy to help, but I request that you provide as much data as you can in your comments. When you ran the qgroup-clean, what was the output? You can run it again and report back.

Also, did you truncate the json log file of syncthing container?

Yes, this is a known issue. Your pool is there; it's just a silly bug that will get fixed in the next update. https://github.com/rockstor/rockstor-core/issues/720

It listed a bunch of numbers (e.g. 1440) and said it was deleting the ones not in use.

Yes I have been truncating the large log.

I prefer to cat /dev/null > logfilename.log

Here is the output of btrfs qgroup show run against the subvolume that is reporting quota exceeded.

[root@backup ~]# btrfs qgroup show /mnt2/chris-workstation
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid rfer excl
-------- ---- ----
0/5 16.00KiB 16.00KiB
0/1092 525.52GiB 525.52GiB
0/1433 59.09GiB 59.09GiB
0/1434 16.00KiB 16.00KiB
0/1440 5.28MiB 5.28MiB
0/1446 119.30MiB 119.30MiB
0/1940 909.63MiB 909.63MiB
0/1941 63.88MiB 80.00KiB
0/1942 63.88MiB 80.00KiB
0/1943 83.80MiB 1.90MiB
0/1944 141.29MiB 1.40MiB
0/1945 178.88MiB 80.00KiB
0/1946 178.88MiB 80.00KiB
0/1947 197.11MiB 416.00KiB
0/1948 225.54MiB 80.00KiB
0/1949 225.54MiB 80.00KiB
0/1950 225.54MiB 80.00KiB
0/1951 225.54MiB 80.00KiB
0/1952 225.54MiB 80.00KiB
0/1953 225.54MiB 80.00KiB
0/1954 225.54MiB 80.00KiB
0/1955 225.54MiB 80.00KiB
0/1956 225.54MiB 80.00KiB
0/1957 225.54MiB 80.00KiB
0/1958 227.38MiB 192.00KiB
0/1959 227.41MiB 112.00KiB
0/1960 233.12MiB 80.00KiB
0/1961 233.12MiB 80.00KiB
0/1962 233.12MiB 80.00KiB
0/1963 233.12MiB 80.00KiB
0/1964 233.12MiB 80.00KiB
0/1965 233.12MiB 80.00KiB
0/1966 233.12MiB 80.00KiB
0/1971 233.12MiB 112.00KiB
0/1972 233.12MiB 496.00KiB
[root@backup ~]#
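Given the “rescan recommended” warning at the top, I assume the next step is something like:

btrfs quota rescan /mnt2/chris-workstation
btrfs quota rescan -s /mnt2/chris-workstation

(the second command just reports the rescan status).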