You can safely delete that hdparm service file. If that is the trouble, all you will lose is your drive spin-down settings, which you can just re-enter for the remaining drives. We definitely have a clean-up issue to address here.
Let us know if deleting that rockstor hdparm systemd service file does the trick.
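Something like the following should do it; this is just a sketch, and assumes the unit file lives directly in /etc/systemd/system (check where it actually sits on your install first):

systemctl stop rockstor-hdparm.service
rm /etc/systemd/system/rockstor-hdparm.service
systemctl daemon-reload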
Thanks for replying. It’s been taking me a while to get back up to speed since Christmas.
I deleted rockstor-hdparm.service and the rockstor service seems to start, but now nginx is failing on startup.
systemctl status -l rockstor.service
Jan 30 17:13:26 rocky supervisord[3518]: 2021-01-30 17:13:26,002 INFO spawned: 'nginx' with pid 3687
Jan 30 17:13:26 rocky supervisord[3518]: 2021-01-30 17:13:26,010 INFO exited: nginx (exit status 1; not expected)
Jan 30 17:13:28 rocky supervisord[3518]: 2021-01-30 17:13:28,013 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Jan 30 17:13:29 rocky supervisord[3518]: 2021-01-30 17:13:29,016 INFO spawned: 'nginx' with pid 3688
Jan 30 17:13:29 rocky supervisord[3518]: 2021-01-30 17:13:29,024 INFO exited: nginx (exit status 1; not expected)
Jan 30 17:13:30 rocky supervisord[3518]: 2021-01-30 17:13:30,025 INFO gave up: nginx entered FATAL state, too many start retries too quickly
I looked at the nginx error.log:
2021/01/30 17:13:22 [emerg] 3535#0: no "events" section in configuration
2021/01/30 17:13:23 [emerg] 3593#0: no "events" section in configuration
2021/01/30 17:13:26 [emerg] 3687#0: no "events" section in configuration
2021/01/30 17:13:29 [emerg] 3688#0: no "events" section in configuration
The nginx config in /etc/nginx/nginx.conf does have an events section:
...
events {
    worker_connections 1024;
}
...
So it looks ok. Is nginx being launched with a config located somewhere else?
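I could try something like this to see which config the nginx processes are actually given (the supervisord.conf path below is only a guess on my part; I don’t know where Rockstor keeps it):

# show the full command line of any running nginx process, including any -c config flag
ps axww | grep [n]ginx
# ask nginx to test and dump the configuration it would load by default
nginx -T | head
# see how supervisord launches nginx (path assumed)
grep -A 2 'program:nginx' /opt/rockstor/etc/supervisord.conf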
There is a link to the rockstor-hdparm.service file in /etc/systemd/system/basic.target.wants and suspend.target.wants, should I remove those too?
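I am guessing they are now dangling symlinks after deleting the unit file, so something like this (paths as above):

# confirm the symlinks are dangling before removing them
ls -l /etc/systemd/system/basic.target.wants/rockstor-hdparm.service /etc/systemd/system/suspend.target.wants/rockstor-hdparm.service
# remove them and let systemd pick up the change
rm /etc/systemd/system/basic.target.wants/rockstor-hdparm.service /etc/systemd/system/suspend.target.wants/rockstor-hdparm.service
systemctl daemon-reload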
@HarryHUK Thanks for the update, and glad you’re back up and running.
And do add to that issue if you manage to track down why this happened in your case. It’s very strange. I suspect we have a race condition of some sort, where with very specific timing we end up not updating that file and leaving it blank! Not ideal.