Can't access WebUI all of a sudden

Had to take Rockstor down for some power-related reconfiguration. Since rebooting, everything else is working fine (SSH, RockOns, SMB, I can access it all), but I can’t access the web UI at all. I’ve done a port scan and 443 is not open. I don’t know what has gone wrong, but is there any way I can fix this from the CLI?
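In case it helps, a couple of quick checks from the box itself (assuming a systemd-based install; `ss` comes with iproute2):

```shell
# Confirm whether anything is listening on 443 locally
# (a closed port in an external scan could also be a firewall issue,
# so checking from the machine itself rules that out):
ss -tlnp | grep ':443' || echo "nothing listening on 443"

# See whether the rockstor service itself is up:
systemctl status rockstor --no-pager
```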

Thanks

Here’s the output of systemctl status rockstor; not sure if this has anything to do with it:

● rockstor.service - RockStor startup script
Loaded: loaded (/etc/systemd/system/rockstor.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-06-15 09:15:00 BST; 4min 16s ago
Main PID: 5460 (supervisord)
CGroup: /system.slice/rockstor.service
├─5460 /usr/bin/python /opt/rockstor/bin/supervisord -c /opt/rockstor/etc/supervisord.conf
├─5472 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/…
├─5473 /usr/bin/python /opt/rockstor/bin/data-collector
├─5474 /usr/bin/python2.7 /opt/rockstor/bin/django ztaskd --noreload --replayfailed -f /opt/rockstor/var/log/ztask.log
├─5489 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/…
└─5490 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/…

Jun 15 09:15:03 cedarave-nas supervisord[5460]: 2018-06-15 09:15:03,151 INFO spawned: 'nginx' with pid 5499
Jun 15 09:15:03 cedarave-nas supervisord[5460]: 2018-06-15 09:15:03,177 INFO exited: nginx (exit status 1; not expected)
Jun 15 09:15:04 cedarave-nas supervisord[5460]: 2018-06-15 09:15:04,178 INFO success: data-collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Jun 15 09:15:04 cedarave-nas supervisord[5460]: 2018-06-15 09:15:04,178 INFO success: ztask-daemon entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Jun 15 09:15:05 cedarave-nas supervisord[5460]: 2018-06-15 09:15:05,182 INFO spawned: 'nginx' with pid 5524
Jun 15 09:15:05 cedarave-nas supervisord[5460]: 2018-06-15 09:15:05,208 INFO exited: nginx (exit status 1; not expected)
Jun 15 09:15:07 cedarave-nas supervisord[5460]: 2018-06-15 09:15:07,211 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Jun 15 09:15:08 cedarave-nas supervisord[5460]: 2018-06-15 09:15:08,214 INFO spawned: 'nginx' with pid 5693
Jun 15 09:15:08 cedarave-nas supervisord[5460]: 2018-06-15 09:15:08,238 INFO exited: nginx (exit status 1; not expected)
Jun 15 09:15:09 cedarave-nas supervisord[5460]: 2018-06-15 09:15:09,240 INFO gave up: nginx entered FATAL state, too many start retries too quickly

Well, I did a bit more searching and managed to fix it using the workaround in this post:

The nginx.conf file was empty. Why would this have been?
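For anyone else hitting this, checking for the empty file and getting nginx’s own error message is straightforward. The config path below is my assumption based on the /opt/rockstor layout in the status output above; adjust it if yours differs:

```shell
# Assumed path, based on the /opt/rockstor layout shown in systemctl status:
CONF=/opt/rockstor/etc/nginx/nginx.conf

if [ ! -s "$CONF" ]; then
    # -s is true only for a file that exists and is non-empty
    echo "$CONF is missing or empty"
else
    # nginx -t prints the actual parse error that the supervisord log hides
    nginx -t -c "$CONF"
fi
```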

Thanks

@Technoholic Glad you got it sorted and found the post helpful.

Still haven’t gotten to the bottom of this, as I’ve only seen it very rarely and have no reproducer to investigate. It’s a bit of a strange one; it may be something to do with how we create the file from that template. There are a few other changes in the works that may end up fixing this as a side effect, but it’s definitely ‘a thing’.
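For the curious, one way a zero-length config file can appear in general: if a template is rendered directly into the final file, an interruption mid-write (a crash, or power loss like the reconfiguration mentioned above) can leave it truncated or empty. Writing to a temp file and renaming it into place avoids that, since rename is atomic on the same filesystem. A sketch only, where render_template is a stand-in and not our actual generation code:

```shell
# Stand-in for whatever actually renders the config from its template:
render_template() { echo 'server { listen 443; }'; }

CONF=nginx.conf
# Write to a temp file first, then rename into place; readers never
# see a partially written or empty config this way.
render_template > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"
```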

Thanks for reporting your findings and further confirming the workaround.