Rock-ons not starting or stopping


After upgrading to x.10 and restarting my server (which I did to resolve the issue), my rock-ons can neither be started nor stopped. I have been stuck at this screen for hours. Turning the rock-on service off and on doesn’t fix it either.

Thanks for helping.

Same here. The way I fixed it was to uninstall Plex, shut down the rock-ons service, turn it back on, then reinstall the Plex server. Everything went back to normal… I did not lose any data…

OK, I can give it a try, but deleting something with data in it makes me a little nervous :smiley:

I’m having the same issue as well. I tried uninstalling Plex, but it just says it’s starting. I tried turning off rock-ons and rebooting, but when I turn rock-ons back on it still says Plex is starting. Any other ideas?

Try the Chrome browser; there are issues with Firefox and other browsers. I had an issue where the dashboard would not work in Firefox after the upgrade. I tried it with Chrome and it worked fine.
I cleared the cache in Firefox and everything went back to normal…

Did it work for you?

Nope. I’m using Chrome. I even cleared out all the cache from Chrome. Plex and Transmission still say they are starting.

Same here: rock-ons have been stuck stopping and starting for days, and a reboot has not changed their state either.

Sorry guys, I am working on this. Any chance you could e-mail logs to @herbert?

One thing I am noticing consistently is that the rock-on/docker service itself sometimes takes a long time to start, and while it’s in a pending state, starting it again further confuses it. For now, if you get an unknown internal error when starting the rock-on service, just wait and monitor the System -> Services page; the service should turn on eventually. You can also check with systemctl status docker.
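As an illustrative sketch (the `wait_for` helper below is not part of Rockstor, just an example), you could poll from a shell instead of repeatedly clicking start in the UI:

```shell
# Example only: poll a command until it succeeds or the retry budget runs out.
# Intended use here: wait_for "systemctl is-active --quiet docker"
wait_for() {
    local cmd="$1" tries="${2:-60}" interval="${3:-5}"
    local i=0
    until eval "$cmd"; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep "$interval"
    done
    return 0
}
```

With the defaults this waits up to five minutes before giving up.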

Rock-on improvements are coming, appreciate your patience.

Any other ideas on how to fix this? I am stuck and the services are also not running.

I tried to stop docker from the command line and got this error, but df -h shows enough space. Much more confused now :smiley:

[root@Homeserver ~]# systemctl stop docker
Error: No space left on device
[root@Homeserver ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  369M  3.6G  10% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sde3       104G   20G   83G  20% /
tmpfs           3.9G  8.0K  3.9G   1% /tmp
/dev/sde3       104G   20G   83G  20% /home
/dev/sde1       477M  180M  268M  41% /boot
/dev/sdd        8.7T  4.6T  1.9T  71% /mnt2/p1_r1
/dev/sde3       104G   20G   83G  20% /mnt2/rockstor_rockstor
/dev/sde3       104G   20G   83G  20% /mnt2/plex_config
/dev/sdd        8.7T  4.6T  1.9T  71% /mnt2/btsync_data
/dev/sdd        8.7T  4.6T  1.9T  71% /mnt2/data
/dev/sdd        8.7T  4.6T  1.9T  71% /mnt2/media
/dev/sde3       104G   20G   83G  20% /mnt2/rock-ons
tmpfs           799M     0  799M   0% /run/user/0

Hey @herbert! Assuming your pool’s name is p1_r1, can you provide the output of…

  1. btrfs fi show p1_r1
  2. btrfs fi df /mnt2/p1_r1
  3. btrfs fi usage /mnt2/p1_r1
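Some background on why these matter: on btrfs, `df -h` can show plenty of free space while the filesystem has run out of Metadata chunk space, which still surfaces as “No space left on device”. As a rough sketch, assuming the usual `btrfs fi df` output format (the 90% threshold here is an arbitrary example, not an official limit):

```shell
# Reads `btrfs fi df <mount>` output on stdin and warns when the Metadata
# allocation is nearly exhausted. Assumes total and used on the Metadata
# line carry the same unit suffix (e.g. both GiB), as in typical output.
check_metadata() {
    awk -F'[=,]' '/^Metadata/ {
        total = $3 + 0; used = $5 + 0      # awk drops the GiB/MiB suffix
        if (total > 0 && used / total > 0.9)
            print "metadata " int(100 * used / total) "% full - consider a btrfs balance"
        else
            print "metadata ok"
    }'
}
# Usage: btrfs fi df /mnt2/p1_r1 | check_metadata
```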

I deleted all snapshots created on the root pool /mnt2/rockstor_rockstor and now everything works fine, which means no more errors when starting and stopping the docker service.
Still, all my docker containers are now stuck in 3 different states :smiley: stopping, starting and uninstalling :smiley:

Another curious detail: the latest auto-created snapshot on the /mnt2/rockstor_rockstor pool was from September 2015. No new one has appeared since, and as far as I understood, they should be created quite often automatically.

btrfs fi show p1_r1
Label: 'p1_r1'  uuid: f7ed3e88-c3fc-404a-bc58-e1c862bbb9ec
        Total devices 4 FS bytes used 4.52TiB
        devid    1 size 2.73TiB used 2.22TiB path /dev/sdd
        devid    2 size 2.73TiB used 2.22TiB path /dev/sdc
        devid    3 size 465.76GiB used 464.76GiB path /dev/sda
        devid    4 size 2.73TiB used 2.22TiB path /dev/sdb

btrfs fi df /mnt2/p1_r1/
Data, RAID5: total=4.89TiB, used=4.51TiB
System, RAID5: total=96.00MiB, used=464.00KiB
Metadata, RAID5: total=8.06GiB, used=5.61GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

btrfs fi usage /mnt2/p1_r1
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Device size:                   8.64TiB
Device allocated:                0.00B
Device unallocated:            8.64TiB
Device missing:                  0.00B
Used:                            0.00B
Free (estimated):                0.00B      (min: 8.00EiB)
Data ratio:                       0.00
Metadata ratio:                   0.00
Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID5: Size:4.89TiB, Used:4.51TiB
   /dev/sda      464.04GiB
   /dev/sdb        2.22TiB
   /dev/sdc        2.22TiB
   /dev/sdd        2.22TiB

Metadata,RAID5: Size:8.06GiB, Used:5.61GiB
   /dev/sda      704.00MiB
   /dev/sdb        3.69GiB
   /dev/sdc        3.69GiB
   /dev/sdd        3.69GiB

System,RAID5: Size:96.00MiB, Used:464.00KiB
   /dev/sda       32.00MiB
   /dev/sdb       32.00MiB
   /dev/sdc       32.00MiB
   /dev/sdd       32.00MiB

Unallocated:
   /dev/sda        1.00GiB
   /dev/sdb      517.76GiB
   /dev/sdc      517.76GiB
   /dev/sdd      517.76GiB

Hope this helps :slight_smile: anyhow.

Hi, I updated to 3.8.11 and rock-ons are still not working, but they give me an error instead :slightly_smiling: a little progress already :smiley:

But I do not understand what to do now.


BitTorrent Sync

Current status: exitcode: 137 error: Error getting container 2f7545ce0d96f9a68176d7b1ae6471349616f965872073268d3b3b5ba7ad14c7 from driver btrfs: stat /mnt2/rock-ons/btrfs/subvolumes/2f7545ce0d96f9a68176d7b1ae6471349616f965872073268d3b3b5ba7ad14c7: no such file or directory


Plex media server

Current status: exitcode: 128 error: Error getting container 6e6e3ac25fec4e9dea2f03b276efba9c7ef475f98b97c655e08cce3c360d47e4 from driver btrfs: stat /mnt2/rock-ons/btrfs/subvolumes/6e6e3ac25fec4e9dea2f03b276efba9c7ef475f98b97c655e08cce3c360d47e4: no such file or directory

I need help because those rock-ons have now been offline for weeks and I need them. Otherwise I have to move on to something else :frowning:

I also tried to uninstall the Plex rock-on and install it again; the outcome is another error:

[23/Jan/2016 11:39:20] DEBUG [storageadmin.views.rockon_helpers:70] Attempted to remove a container(plex). out: [''] err: ['Error response from daemon: no such id: plex', 'Error: failed to remove containers: [plex]', ''] rc: 1.
[23/Jan/2016 11:39:20] DEBUG [storageadmin.views.rockon_helpers:127] exception while installing the Rockon(3)
[23/Jan/2016 11:39:20] ERROR [storageadmin.views.rockon_helpers:128] Error running a command. cmd = ['/usr/bin/docker', 'run', '--log-driver=syslog', '-d', '--restart=on-failure:5', '--name', u'plex', '-v', u'/mnt2/plex_config:/config', '-v', u'/mnt2/data:/data', '-p', u'32400:32400/tcp', u'--net=host', u'timhaak/plex']. rc = 1. stdout = ['']. stderr = ['Error response from daemon: could not find image: no such id: be5ded807b71da5754d164ec46bfbfc48280f821a5a5aaf64e7478b2e0bbf662', '']
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/", line 125, in install
globals().get('%s_install' %, generic_install)(rockon)
File "/opt/rockstor/src/rockstor/storageadmin/views/", line 214, in generic_install
File "/opt/rockstor/src/rockstor/system/", line 89, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = ['/usr/bin/docker', 'run', '--log-driver=syslog', '-d', '--restart=on-failure:5', '--name', u'plex', '-v', u'/mnt2/plex_config:/config', '-v', u'/mnt2/data:/data', '-p', u'32400:32400/tcp', u'--net=host', u'timhaak/plex']. rc = 1. stdout = ['']. stderr = ['Error response from daemon: could not find image: no such id: be5ded807b71da5754d164ec46bfbfc48280f821a5a5aaf64e7478b2e0bbf662', '']

To be honest, I am a little unsatisfied with what Rockstor provides here. The rock-ons are far from stable and up to date, and this was the main reason I started to pay for Rockstor, yet it still feels like a second-class citizen.

Hey @herbert, I applaud you for trying a few variations and reporting back very useful information.

The fact that you are not seeing the eternal spinning gear is indeed a good sign :smile:

The “could not find image” error indicates that something went wrong in the Rock-On root share. Maybe some snapshots got deleted? Since all the application data of your Rock-Ons is mapped via Shares, there is no harm in just starting over with a new root share. All your app data should still be there (you may want to start fresh with config shares, however… this is app specific and not really in our control).

These are the steps I’d follow (also read this doc for general tips about the root share, best practices, etc.):

  1. Uninstall all Rock-Ons.
  2. Turn off the Rock-On service.
  3. Create a new Share to use as the Rock-On root share.
  4. Configure the service to use the new root share.
  5. Delete the old root share (you’ll need to delete all the snapshots inside it first).
  6. Start the service.
  7. Click Update so all app profiles are updated.
  8. Install your Rock-Ons. Use the same Share mappings to retain app data.

Regarding your last comment, your frustration is understandable. There is volatility in the Rock-Ons framework, which we addressed quite a bit in the .11 cycle. One of the key changes that I think is directly related to your problem is how we ship the docker binary. It is, after all, third-party software and rapidly evolving. Previously we used to deploy/redeploy from upstream. Now we regulate it just like we do the elrepo kernel: any updates to docker go through the testing channel first. The other thing to note is that each Rock-On is backed by third-party docker images. The core team tests as many of these as possible, but it is a lot of work, it’s not the core of Rockstor, and we shouldn’t be assumed to be the maintainers of individual Docker images. Our goal is to provide a robust and generic way to host and manage these apps via the Rock-On framework. Anyway, thanks for your patience, and report back how it goes!


Thanks for helping, this fixed it! btsync is up and running :slightly_smiling:
Plex is on its way to starting as well :smile:

One issue I do have: I cannot delete the old rock-ons share. I deleted all snapshots, but the WebUI says it’s not empty. I deleted the folders in the directory via the CLI and got stuck here:

rm: cannot remove ‘btrfs/subvolumes/428b411c28f0c33e561a95400a729552db578aee0553f87053b96fc0008cca6a’: Operation not permitted
rm: cannot remove ‘btrfs/subvolumes/d3a1f33e8a5a513092f01bb7eb1c2abf4d711e5105390a3fe1ae2248cfde1391’: Operation not permitted
rm: cannot remove ‘btrfs/subvolumes/cf2616975b4a3cba083ca99bc3f0bf25f5f528c3c52be1596b30f60b0b1c37ff’: Operation not permitted
rm: cannot remove ‘btrfs/subvolumes/843e2bded49837e4846422f3a82a67be3ccc46c3e636e03d8d946c57564468ba’: Operation not permitted

Nevertheless, I am still waiting for an easier way to integrate standard docker images into the Rockstor architecture, which would make it easier to move away from the “outdated” rock-ons.

Which brings me to the next part: I know rock-ons are not your core thing, but the feature is there, and this means it should also be handled properly, especially because you now have paying customers.
The btsync rock-on, for example, is so outdated that I cannot use any new features or change some settings which are important for my setup.

I know I could update the docker image on my own, but this is not what I expect from the system. I still love the idea of CentOS with btrfs and a great WebUI, so I am going to stick with Rockstor at least for another while to see what happens. Nevertheless, I already went to the unRAID homepage to see what they are doing and thought about moving back :frowning:

But not yet. I am too curious to see what you are going to accomplish.

Right, I do know this issue. You can delete these subvolumes with btrfs subvolume delete /mnt2/[your_pool]/[rock-on_root_share]/btrfs/subvolumes/* (you get the idea). After that, you should be able to delete the Share. I’ve also created an issue to improve support for scenarios like these, so stay tuned.
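To make that concrete, here is a hedged sketch; the pool and share names are placeholders, not your actual paths. List the docker-created layer subvolumes first, review them, and only then pipe the list to `btrfs subvolume delete`:

```shell
# Example helper: print the subvolume paths that docker's btrfs driver
# created under the old Rock-On root share. Plain `rm -rf` cannot remove
# these, hence the "Operation not permitted" errors above.
list_docker_subvols() {
    local share="$1"
    for sv in "$share"/btrfs/subvolumes/*/; do
        [ -d "$sv" ] && printf '%s\n' "${sv%/}"
    done
}
# Review first:
#   list_docker_subvols /mnt2/your_pool/old-rock-ons
# Then delete (destructive; needs root):
#   list_docker_subvols /mnt2/your_pool/old-rock-ons | xargs -n1 btrfs subvolume delete
```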

I am really not interested in just putting a GUI over Docker. We are aiming for something higher level and more solution-oriented. Take a look at the rockon-registry repo. Your feedback has been very useful so far, and I encourage you to continue to engage and help us improve swiftly.

I’ve tried to engage the BitTorrent team on this, and they showed some interest, but I don’t see them actively maintaining the image, and what they currently produced for Rockstor is limited and closed. Perhaps you can voice your concerns on their forum; if there’s enough support, it may just push them enough.

Haha, thanks! Could you let us know a list of things you think Rockstor should do better vs unRAID? Perhaps in a separate forum topic?

Getting rid of the subvolumes helped and I could delete the share. Thanks!

I like the idea of having something more high-level than just a docker GUI, but this also means somebody has to create the necessary files and make the needed adaptations and updates for your approach, which more or less every other NAS system tries to solve as well. The outcome is having outdated docker images running that nobody can easily change or update.

Having a system in place which allows running standard docker images “as well” would make it much easier for users to get the features they need on Rockstor. There are already a lot of management tools out there; why not integrate one of them as the community “unstable” approach, while Rock-ons could still exist as the validated and stable solution for Rockstor?
Most people I know choose their home NAS based on the available plugins.

About unRAID vs Rockstor :wink: well, I’ll start a discussion about it :wink:


I talked to the btsync team and they said there is already an official docker image. Is it now possible to use it on our Rockstor systems?


So do I. But as a bit of a stopgap in the meantime, you could consider installing dockerui:

docker run -d -p 9000:9000 --privileged \
    --name=dockerui -v /var/run/docker.sock:/var/run/docker.sock \
    dockerui/dockerui

This will expose on port 9000 a nice UI for examining the state of your Docker containers and performing administrative tasks. It is not secured, though, so use it only in trusted environments behind a firewall.