I’ve been using the stable branch of Rockstor for a number of months and everything was working perfectly until yesterday. The Rock-on service appears to have stopped and I’m unable to restart it. When I click the toggle to start the service, the UI flickers / appears to refresh, but the service remains switched off. I have tried rebooting the system a number of times and even updated to the latest development update to see if that would help, but neither worked.
Any advice would be well received and greatly appreciated.
Hi @Hendricks,
Not sure I can help fix whatever needs to be fixed, but maybe you could paste here what the logs say when you try to turn the Rock-on service on. I would expect rockstor.log (/opt/rockstor/var/log/rockstor.log) and journalctl -xe to indicate what is failing when you do that, and help narrow down the issue.
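If useful, a minimal sketch of how to capture both right after toggling the service (the line count is arbitrary):

```bash
# Last 50 lines of the Rockstor application log
tail -n 50 /opt/rockstor/var/log/rockstor.log

# Recent journal entries with explanatory text (-x), jumped to the end (-e)
journalctl -xe
```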
From your original post, it seems you didn’t make any change to your system (hardware or software) that could have coincided with this problem, am I correct?
```
Aug 08 06:52:16 barlinas systemd[1]: docker.service failed.
Aug 08 06:52:36 barlinas kernel: BTRFS info (device sdb): found 126 extents
Aug 08 06:52:36 barlinas kernel: BTRFS info (device sdb): relocating block group 1770629627904 flags data|raid1
Aug 08 06:52:44 barlinas systemd[1]: Created slice user-1000.slice.
-- Subject: Unit user-1000.slice has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-1000.slice has finished starting up.
--
-- The start-up result is done.
Aug 08 06:52:44 barlinas systemd[1]: Starting user-1000.slice.
-- Subject: Unit user-1000.slice has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-1000.slice has begun starting up.
Aug 08 06:52:44 barlinas systemd-logind[2582]: New session 17 of user Hendricks.
-- Subject: A new session 17 has been created for user Hendricks
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 17 has been created for the user Hendricks.
--
-- The leading process of the session is 30422.
Aug 08 06:52:44 barlinas systemd[1]: Started Session 17 of user Hendricks.
-- Subject: Unit session-17.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-17.scope has finished starting up.
--
-- The start-up result is done.
Aug 08 06:52:44 barlinas systemd[1]: Starting Session 17 of user Hendricks.
-- Subject: Unit session-17.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-17.scope has begun starting up.
Aug 08 06:52:44 barlinas login[30422]: pam_unix(remote:session): session opened for user Hendricks by SHELLINABOX(uid=0)
Aug 08 06:52:44 barlinas login[30422]: LOGIN ON pts/0 BY Hendricks FROM 127.0.0.1
Aug 08 06:52:49 barlinas su[30528]: (to root) Hendricks on pts/0
Aug 08 06:52:49 barlinas su[30528]: pam_unix(su:session): session opened for user root by Hendricks(uid=1000)
```
Thanks for the logs, they may help the experts around here find out the cause of the issue.
To confirm, did you capture them right after trying to turn on the Rock-on service?
My knowledge is limited, so the only route I can think of for trying to fix your issue would be to delete your rockons-root share and create it again, then re-configure the Rock-on service with this newly-created share and try turning it on again. That may mean you have to reinstall Plex, however, so maybe someone else can provide a better solution in the meantime.
*Disclaimer: I’m not a developer, just a Rockstor user with some Linux knowledge.*
The rockstor log shows that the docker (Rock-on) service is not running; however, it doesn’t appear to show anything about when it attempted to start the docker service, only when it tried to use it (when attempting to query the status of containers).
Your journalctl output appears to begin too late to capture the failure.
Please open two shells on the system simultaneously. In the first, run (and capture the output of):

```
journalctl -fu docker.service
```

Then, in the second, attempt to restart docker manually with:

```
systemctl restart docker.service
```
Provide the logs from the journalctl command for analysis.
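If it helps, one way to capture that output for posting (the file name here is just an example) would be to pipe it through tee:

```bash
# Follow the docker unit's journal live and also save a copy to a file
journalctl -fu docker.service | tee docker-journal.log
```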
```
[root@nas ~]# sudo systemctl start docker
Failed to start docker.service: Unit not found
```
Stop and status commands are recognised. The steps to reinstall docker don’t work, perhaps because CentOS is end-of-life now (CentOS Linux: 4.12.4-1). Is there any point in trying to fix this, or should I make the jump to the latest SUSE release candidate?
Definitely. We can no longer build rpms for the legacy CentOS variant anyway, and although it’s taken way longer than hoped to release our next Stable, we are pretty much there now. Make yourself a new installer via:
and then at least any reports you make can help fix current releases, rather than having to hand-patch older installs, which is all that can be done for our CentOS-based release. Plus, now that we have the new DIY installer, a custom install is only an edit away. I.e. it’s entirely possible to build an installer with your custom Rock-on pre-installed, assuming of course we can’t merge it into the official rock-ons repo.
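For reference, and assuming the rockstor-installer repo’s kiwi-ng based procedure (the profile name and target directory below are illustrative only; check that repo’s README for the currently supported values), the build boils down to:

```bash
# Clone the DIY installer recipe and build from within it (as root)
git clone https://github.com/rockstor/rockstor-installer.git
cd rockstor-installer

# Profile and target dir are examples - see the repo's README
kiwi-ng --profile Leap15.2.x86_64 --type oem system build \
    --description ./ --target-dir /home/kiwi-images/
```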
Anything in the /root directory within the installer repo will override / add to the equivalent directory within the resulting install. Not required, but another string to our bow, as it were, for those wishing to speed up their own bare-metal recovery scenarios and run custom rock-ons that haven’t been merged into:
Do report your findings, in a fresh forum post, with regard to the installer build process, as we are super keen to have it well known and as accessible as possible. It’s always handy, with ever-changing hardware, to be able to boot from the latest upstream kernel, for example, to aid installer compatibility. Plus any DIY installer created via that repo has all pending updates pre-installed, which can save a ton of time on getting a system set up and updated.
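To illustrate the /root overlay mentioned above: assuming kiwi’s usual overlay convention, anything under that directory in the repo checkout is copied verbatim into the resulting install, so a custom rock-on definition could be pre-seeded along these lines (the file name is hypothetical; /opt/rockstor/rockons-metastore/ is where Rockstor picks up local rock-on definitions):

```bash
# From within the installer repo checkout, before building the image
mkdir -p root/opt/rockstor/rockons-metastore
cp my-custom-rockon.json root/opt/rockstor/rockons-metastore/
```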