Error installing Syncthing, Plex and Transmission rock-ons

Hi,

I got a newly installed server and am trying to install all the rock-ons to try them out. The only one that works at the moment is btsync.
Syncthing is the main one I want, so I'm focusing on that.
The image gets downloaded and installed, I can start it and enter it with "docker exec -it syncthing bash", and when I then run ps aux I get this:
root@25fa6348bff5:/home/syncthing# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.2 0.0 20044 2776 ? Ss 12:34 0:00 /bin/bash /start.sh
root 8 45.6 0.0 12760 2004 ? R 12:34 0:05 chown -R syncthing:syncthing /home/syncthing
root 9 1.3 0.0 20216 3024 ? Ss 12:34 0:00 bash
root 13 0.0 0.0 17492 2068 ? R+ 12:34 0:00 ps aux

After a few seconds I get kicked out and then the container stops.
I'm kind of new to docker, but I work in IT daily - please help me troubleshoot.

This is what "docker ps" shows when run every 2 seconds:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Up 26 seconds 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:3369->3369/udp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Up 28 seconds 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:3369->3369/udp, 0.0.0.0:8888->8888/tcp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Restarting (1) 1 seconds ago 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:3369->3369/udp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Restarting (1) 4 seconds ago 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:3369->3369/udp, 0.0.0.0:8888->8888/tcp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Up Less than a second 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:3369->3369/udp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Up 2 seconds 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:3369->3369/udp, 0.0.0.0:8888->8888/tcp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Up 4 seconds 0.0.0.0:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21025->21025/udp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:3369->3369/udp btsync
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fa6348bff5 istepanov/syncthing "/start.sh" About an hour ago Up 6 seconds 0.0.0.0:8384->8384/tcp, 0.0.0.0:21025->21025/udp, 0.0.0.0:22000->22000/tcp syncthing
513ed88aaf86 aostanin/btsync "/start.sh" 17 hours ago Up 17 hours 0.0.0.0:3369->3369/tcp, 0.0.0.0:8888->8888/tcp, 0.0.0.0:3369->3369/udp btsync

Scratch that… it looks like there's something wrong with the startup script:

This: $HOME/syncthing -generate="$CONFIG_FOLDER"
is treated as:
/root/syncthing -generate="$CONFIG_FOLDER"

And
exec su - syncthing -c $HOME/syncthing
is sometimes executed as:
exec su - syncthing -c /root/syncthing

Presumably because $HOME is an environment variable that gets expanded by the calling shell?
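The quoting behaviour can be demonstrated outside the container; a minimal sketch (the paths are just examples, not the actual start.sh):

```shell
#!/bin/sh
# In the root shell HOME is /root, so an unquoted $HOME is substituted
# immediately, before su ever runs. Single quotes pass the literal string
# through, so the target user's login shell expands it later.
HOME=/root

cmd_unquoted="$HOME/syncthing"   # expanded now by this shell
cmd_quoted='$HOME/syncthing'     # stays literal until another shell sees it

echo "$cmd_unquoted"   # -> /root/syncthing
echo "$cmd_quoted"     # -> $HOME/syncthing
```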

Think I found it…
The start.sh should be updated from:
#!/bin/bash

set -e

HOME=$(eval echo ~syncthing)
CONFIG_FOLDER="$HOME/.config/syncthing"
CONFIG_FILE="$CONFIG_FOLDER/config.xml"

if [ ! -f "$CONFIG_FILE" ]; then
    $HOME/syncthing -generate="$CONFIG_FOLDER"
fi

xmlstarlet ed -L -u "/configuration/gui/address" -v "0.0.0.0:8384" "$CONFIG_FILE"
chown -R syncthing:syncthing "$HOME"

exec su - syncthing -c $HOME/syncthing

to:
#!/bin/bash

set -e

HOME=$(eval echo ~syncthing)
CONFIG_FOLDER="$HOME/.config/syncthing"
CONFIG_FILE="$CONFIG_FOLDER/config.xml"

if [ ! -f "$CONFIG_FILE" ]; then
    $HOME/syncthing -generate="$CONFIG_FOLDER"
fi

xmlstarlet ed -L -u "/configuration/gui/address" -v "0.0.0.0:8384" "$CONFIG_FILE"
chown -R syncthing:syncthing "$HOME"

exec su - syncthing -c '$HOME/syncthing'

(Note the single quotes around $HOME/syncthing on the last line.)

Removed all snapshots and all rock-ons to try to redo it all.
Now I get this when trying to reinstall syncthing:
==> /opt/rockstor/var/log/rockstor.log <==
[13/Nov/2015 16:39:48] DEBUG [storageadmin.views.rockon_helpers:81] Attempted to remove a container(syncthing). out: [''] err: ['Error response from daemon: no such id: syncthing', 'Error: failed to remove containers: [syncthing]', ''] rc: 1.
[13/Nov/2015 16:39:51] DEBUG [storageadmin.views.rockon_helpers:129] exception while installing the rockon
[13/Nov/2015 16:39:51] ERROR [storageadmin.views.rockon_helpers:130] Error running a command. cmd = ['/usr/bin/docker', 'run', '--log-driver=syslog', '-d', '--restart=on-failure:5', '--name', u'syncthing', '-v', u'/mnt2/syncthing:/home/syncthing/Sync', '-v', u'/mnt2/syncthing:/home/syncthing/.config/syncthing', '-p', u'22000:22000/tcp', '-p', u'8384:8384/tcp', '-p', u'21025:21025/udp', u'istepanov/syncthing']. rc = 1. stdout = ['']. stderr = ['Error response from daemon: stat /mnt2/rockons-ssd/btrfs/subvolumes/2a21accd3df962e4327bc4238363e790d7c38e4459fd311792a62d0710c88a9a: no such file or directory', '']
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 127, in install
globals()['%s_install' % rockon.name.lower()](rockon)
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 236, in syncthing_install
return generic_install(rockon)
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 191, in generic_install
run_command(cmd)
File "/opt/rockstor/src/rockstor/system/osi.py", line 84, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = ['/usr/bin/docker', 'run', '--log-driver=syslog', '-d', '--restart=on-failure:5', '--name', u'syncthing', '-v', u'/mnt2/syncthing:/home/syncthing/Sync', '-v', u'/mnt2/syncthing:/home/syncthing/.config/syncthing', '-p', u'22000:22000/tcp', '-p', u'8384:8384/tcp', '-p', u'21025:21025/udp', u'istepanov/syncthing']. rc = 1. stdout = ['']. stderr = ['Error response from daemon: stat /mnt2/rockons-ssd/btrfs/subvolumes/2a21accd3df962e4327bc4238363e790d7c38e4459fd311792a62d0710c88a9a: no such file or directory', '']

Same with all other rock-ons that had a snapshot.

Ok… so here's what I did:
Installed syncthing, btsync and plex.

Syncthing and plex failed to start.

Edited start.sh (with vi) in syncthing's btrfs root.

Stopped and removed all rock-ons.

Removed all snapshots from the web UI.

Now trying to install the rock-ons again, but that fails with the error above (previous entry).

When trying to remove everything related to docker I get:
[root@alma ~]# for id in $(docker ps -aq); do docker rm $id; done
Error response from daemon: Driver btrfs failed to remove root filesystem 5e6bd82712b48455a1d0c57a5838963c81c1c065434bef2e5cea5bfe94c0e309: stat /mnt2/rockons-ssd/btrfs/subvolumes/5e6bd82712b48455a1d0c57a5838963c81c1c065434bef2e5cea5bfe94c0e309: no such file or directory
Error: failed to remove containers: [5e6bd82712b4]
Error response from daemon: Driver btrfs failed to remove root filesystem 945667178a12d7067ea026a1758d9429cd5b1fdf1318c97894b0dc961ee82376: stat /mnt2/rockons-ssd/btrfs/subvolumes/945667178a12d7067ea026a1758d9429cd5b1fdf1318c97894b0dc961ee82376: no such file or directory
Error: failed to remove containers: [945667178a12]
Error response from daemon: Driver btrfs failed to remove root filesystem 74f899afbef0e40e8d5fde03fbf7b326124054d9ec280a58f689bbfb14d8171c: stat /mnt2/rockons-ssd/btrfs/subvolumes/74f899afbef0e40e8d5fde03fbf7b326124054d9ec280a58f689bbfb14d8171c: no such file or directory
Error: failed to remove containers: [74f899afbef0]
Error response from daemon: Driver btrfs failed to remove root filesystem 523490cd1e35641ba339105fc0207637a005ef97b367d856ac29afeb50637238: stat /mnt2/rockons-ssd/btrfs/subvolumes/523490cd1e35641ba339105fc0207637a005ef97b367d856ac29afeb50637238: no such file or directory
Error: failed to remove containers: [523490cd1e35]
Error response from daemon: Driver btrfs failed to remove root filesystem 5a0e02be3f6b201718614cec01841044f17996a803302304290f9b09c12f5c69: stat /mnt2/rockons-ssd/btrfs/subvolumes/5a0e02be3f6b201718614cec01841044f17996a803302304290f9b09c12f5c69: no such file or directory
Error: failed to remove containers: [5a0e02be3f6b]
Error response from daemon: Driver btrfs failed to remove root filesystem 9ce2ef7933bf281b149c65aadb181169a1c9acf07b15e929fc5328f73d4b520d: stat /mnt2/rockons-ssd/btrfs/subvolumes/9ce2ef7933bf281b149c65aadb181169a1c9acf07b15e929fc5328f73d4b520d: no such file or directory
Error: failed to remove containers: [9ce2ef7933bf]
Error response from daemon: Driver btrfs failed to remove root filesystem 4ebd764b7dd07a335e62c502a45f507bafc6a85260985e1bf4f65e91c21aa22a: stat /mnt2/rockons-ssd/btrfs/subvolumes/4ebd764b7dd07a335e62c502a45f507bafc6a85260985e1bf4f65e91c21aa22a: no such file or directory
Error: failed to remove containers: [4ebd764b7dd0]
Error response from daemon: Driver btrfs failed to remove root filesystem 55d431a96381544f816096a317e7b61d1a5df4b7863fe95a133be7c3efdebd43: stat /mnt2/rockons-ssd/btrfs/subvolumes/55d431a96381544f816096a317e7b61d1a5df4b7863fe95a133be7c3efdebd43: no such file or directory
Error: failed to remove containers: [55d431a96381]
Error response from daemon: Driver btrfs failed to remove root filesystem 57ae50f2a6b54e396791ac58601d8c4561108fa7d64bec259be7f1e48400e34c: stat /mnt2/rockons-ssd/btrfs/subvolumes/57ae50f2a6b54e396791ac58601d8c4561108fa7d64bec259be7f1e48400e34c: no such file or directory
Error: failed to remove containers: [57ae50f2a6b5]
Error response from daemon: Driver btrfs failed to remove root filesystem c4a4fb813ba22013f2635373c8c73f7f31d4ec6ec1b4603f07fb90fafd6c7f3d: stat /mnt2/rockons-ssd/btrfs/subvolumes/c4a4fb813ba22013f2635373c8c73f7f31d4ec6ec1b4603f07fb90fafd6c7f3d: no such file or directory
Error: failed to remove containers: [c4a4fb813ba2]

Ok, so the issue with the subvolumes was solved by:
docker ps -a (to list all containers)
docker rm <container-id>
→ this complains about a nonexistent subvolume and prints its full path.
btrfs subvolume create <missing-subvolume-path>
btrfs subvolume create <missing-subvolume-path>-init
docker rm <container-id>

Repeat above for all containers.
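The two create commands can be wrapped in a small helper; this is only a sketch, and fix_missing_subvol is a made-up name. It takes the subvolume path that docker's error message complains about:

```shell
#!/bin/sh
# Hypothetical helper: recreate the stub subvolumes that docker expects,
# so that `docker rm` can delete the container afterwards.
# Usage: fix_missing_subvol /mnt2/rockons-ssd/btrfs/subvolumes/<id>
fix_missing_subvol() {
    btrfs subvolume create "$1"
    btrfs subvolume create "$1-init"
}
```

After running it with the path from the error message, the failing docker rm should go through.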

Now I got syncthing working! (woohoo)… But it seems I can't have more than one rock-on at a time - feature or bug?

Hardware is:
MacBook Pro, 8-core i7, 8 GB RAM, 128 GB SSD, 2 TB USB3 WD external drive.


I assumed I was having the same problem as you, because I also deleted the rock-on share manually and couldn't install any new ones. But in my case it was something different: I had a share/pool that I couldn't delete from the database because of a bug, and after reading your post I managed to find the log files, which said:

tail /opt/rockstor/var/log/rockstor.log: CommandException: Error running a command. cmd = ['/sbin/btrfs', 'subvolume', 'list', '-u', '-p', '-q', u'/mnt2/redundant_data']. rc = 1. stdout = ['']. stderr = ["ERROR: can't access '/mnt2/redundant_data'", '']

Since I recently found a workaround for the pool-deletion bug, I managed to remove the offending pool and voilà, I can install rock-ons again.

It’s a mystery why it wouldn’t work though, because it worked for quite a while even with the offending pool in the database. And, to my knowledge, the rock-on didn’t have any business accessing that pool.

Thanks for pushing me in the right direction!

PS: Interesting choice of hardware to run a NAS distribution on…

Hi Joachim,
thank you very much for sharing your resolution! Could you please explain what you mean by "btrfs subvolume create -init"?
There does not seem to be a -init argument.

Just figured it out.
If you try to remove a container and docker complains:

[root@sigurd]# docker ps --filter status=dead --filter status=exited -aq | xargs docker rm -v
Error response from daemon: Driver btrfs failed to remove root filesystem ecfa71dec9721e5644dc9a74168fcc141f58273f2821657108646e48b3471604: stat /mnt2/rockstor_addons/btrfs/subvolumes/ecfa71dec9721e5644dc9a74168fcc141f58273f2821657108646e48b3471604: no such file or directory

Then you have to do this:

btrfs subvolume create /mnt2/rockstor_addons/btrfs/subvolumes/ecfa71dec9721e5644dc9a74168fcc141f58273f2821657108646e48b3471604
btrfs subvolume create /mnt2/rockstor_addons/btrfs/subvolumes/ecfa71dec9721e5644dc9a74168fcc141f58273f2821657108646e48b3471604-init

And now the removal command works.
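For a pile of dead containers the whole dance could be scripted. This is only a sketch: cleanup_dead_containers and the sed pattern are mine, and it assumes docker's error text has exactly the format shown above.

```shell
#!/bin/sh
# Hypothetical automation of the manual steps above: try to remove each
# dead/exited container; if docker complains about a missing btrfs
# subvolume, create the stub subvolumes it expects and retry once.
cleanup_dead_containers() {
    for id in $(docker ps --filter status=dead --filter status=exited -aq); do
        err=$(docker rm -v "$id" 2>&1) && continue
        # Pull the subvolume path out of the
        # "stat <path>: no such file or directory" error message.
        subvol=$(printf '%s\n' "$err" \
            | sed -n 's/.*stat \(.*\): no such file or directory.*/\1/p')
        if [ -n "$subvol" ]; then
            btrfs subvolume create "$subvol"
            btrfs subvolume create "${subvol}-init"
            docker rm -v "$id"
        fi
    done
}
```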
