Rockstor WebUI Connection Refused

Hi everybody,

I'm having an issue accessing the WebUI. I have a bookmark for it that worked previously:

https://192.168.1.121/#home

However, Google Chrome on Windows 10 refuses to connect to this address. All the Rock-ons work, and the Samba shares seem to be working, as does the SSH connection.

192.168.1.121 refused to connect.

Any ideas? I have already looked through quite a few WebUI connection threads.

Has the system been restarted or updated at all since it was working (and before it broke)?

If you’re familiar with SSH, I would suggest connecting to the system via SSH and restarting the UI service with:

systemctl restart rockstor

Failing that, post the results of:

netstat -tulpn | grep ":443"
systemctl status rockstor
journalctl -n 100 --no-pager
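
If netstat isn't available (I don't believe the base install ships net-tools), ss from iproute2 shows the same listening sockets:

ss -tulpn | grep ":443"

Either way, what we want to see is whether anything is actually bound to port 443.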

Cheers,

Haioken


It has definitely been manually restarted a bunch. Sometimes it takes two restarts to get the WebUI (or something else) working. Rockstor as a whole seems to not love being shut down even via webui shutdown command. As for updates, I have the stable update subscription with auto-update enabled; I'm unsure whether a recent update has failed and I missed it somehow.

Tried that; same response. I installed net-tools to run the checks as requested:

netstat -tulpn | grep ":443"

returned blank

systemctl status rockstor

returned active (running), but when I reviewed the full output using the -l switch, I got:

[root@warehouse13 ~]# systemctl status rockstor -l
● rockstor.service - RockStor startup script
Loaded: loaded (/etc/systemd/system/rockstor.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2017-11-29 16:06:05 CST; 21min ago
Main PID: 31554 (supervisord)
CGroup: /system.slice/rockstor.service
├─31554 /usr/bin/python /opt/rockstor/bin/supervisord -c /opt/rockstor/etc/supervisord.conf
├─31567 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/opt/rockstor/src/rockstor --timeout=120 --graceful-timeout=120 wsgi:application
├─31568 /usr/bin/python /opt/rockstor/bin/data-collector
├─31569 /usr/bin/python2.7 /opt/rockstor/bin/django ztaskd --noreload --replayfailed -f /opt/rockstor/var/log/ztask.log
├─31582 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/opt/rockstor/src/rockstor --timeout=120 --graceful-timeout=120 wsgi:application
└─31589 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/opt/rockstor/src/rockstor --timeout=120 --graceful-timeout=120 wsgi:application

Nov 29 16:06:07 warehouse13 supervisord[31554]: 2017-11-29 16:06:07,918 INFO spawned: 'nginx' with pid 31602
Nov 29 16:06:07 warehouse13 supervisord[31554]: 2017-11-29 16:06:07,928 INFO exited: nginx (exit status 1; not expected)
Nov 29 16:06:08 warehouse13 supervisord[31554]: 2017-11-29 16:06:08,929 INFO success: data-collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Nov 29 16:06:08 warehouse13 supervisord[31554]: 2017-11-29 16:06:08,929 INFO success: ztask-daemon entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Nov 29 16:06:09 warehouse13 supervisord[31554]: 2017-11-29 16:06:09,931 INFO spawned: 'nginx' with pid 31606
Nov 29 16:06:09 warehouse13 supervisord[31554]: 2017-11-29 16:06:09,940 INFO exited: nginx (exit status 1; not expected)
Nov 29 16:06:11 warehouse13 supervisord[31554]: 2017-11-29 16:06:11,943 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Nov 29 16:06:12 warehouse13 supervisord[31554]: 2017-11-29 16:06:12,946 INFO spawned: 'nginx' with pid 31629
Nov 29 16:06:12 warehouse13 supervisord[31554]: 2017-11-29 16:06:12,957 INFO exited: nginx (exit status 1; not expected)
Nov 29 16:06:13 warehouse13 supervisord[31554]: 2017-11-29 16:06:13,958 INFO gave up: nginx entered FATAL state, too many start retries too quickly

I'm using the Dropbox Rock-on and it tends to clog my journalctl output, so I didn't see anything useful there other than Dropbox uploads.

@coleberhorst

Rockstor as a whole seems to not love being shut down even via webui shutdown command

I've not experienced any problem like this; mine restarts quite happily every time.
I suggest that when the chance arises, you temporarily disable the Rock-ons service, restart, and run:

journalctl -b

to find out what’s going on there.
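
If the Rock-on noise makes that hard to read, journalctl can narrow things down for you, e.g. just the rockstor unit since boot, or only error-level messages and above:

journalctl -b -u rockstor
journalctl -b -p err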

Meanwhile, it looks like nginx is not starting. (Way to state the obvious, right?)
Instinctively, I think it's probably an nginx config issue. Unfortunately, as nginx doesn't appear to be managed by systemd directly here, systemctl isn't giving us the detail we need.

You can test the config with:

nginx -t -c /opt/rockstor/etc/nginx/nginx.conf

There might be some hints in

/opt/rockstor/var/log/gunicorn.log
/opt/rockstor/var/log/rockstor.log
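
A tail -n 40 on each is probably enough to catch anything recent:

tail -n 40 /opt/rockstor/var/log/gunicorn.log /opt/rockstor/var/log/rockstor.log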

[root@warehouse13 ~]# nginx -t -c /opt/rockstor/etc/nginx/nginx.conf
nginx: the configuration file /opt/rockstor/etc/nginx/nginx.conf syntax is ok
nginx: [emerg] no "events" section in configuration
nginx: configuration file /opt/rockstor/etc/nginx/nginx.conf test failed

I tried stopping the docker service and it did make the logs easier to read.

Gunicorn seemed fine:

[2017-11-29 16:06:05 +0000] [2920] [INFO] Shutting down: Master
[2017-11-29 16:06:07 +0000] [31567] [INFO] Starting gunicorn 19.7.1
[2017-11-29 16:06:07 +0000] [31567] [INFO] Listening at: http://127.0.0.1:8000 (31567)
[2017-11-29 16:06:07 +0000] [31567] [INFO] Using worker: sync
[2017-11-29 16:06:07 +0000] [31582] [INFO] Booting worker with pid: 31582
[2017-11-29 16:06:07 +0000] [31589] [INFO] Booting worker with pid: 31589
[2017-11-30 16:34:11 +0000] [31589] [INFO] Worker exiting (pid: 31589)
[2017-11-30 16:34:11 +0000] [31582] [INFO] Worker exiting (pid: 31582)
[2017-11-30 16:34:11 +0000] [31567] [INFO] Handling signal: term
[2017-11-30 16:34:11 +0000] [31567] [INFO] Shutting down: Master
[2017-11-30 16:34:54 +0000] [2921] [INFO] Starting gunicorn 19.7.1
[2017-11-30 16:34:54 +0000] [2921] [INFO] Listening at: http://127.0.0.1:8000 (2921)
[2017-11-30 16:34:54 +0000] [2921] [INFO] Using worker: sync
[2017-11-30 16:34:54 +0000] [2936] [INFO] Booting worker with pid: 2936
[2017-11-30 16:34:54 +0000] [2937] [INFO] Booting worker with pid: 2937

rockstor.log showed a recent error, but I think that was just it complaining about me shutting the system down via SSH without access to the web GUI:

[29/Jun/2017 15:30:36] ERROR [storageadmin.util:44] exception: Share(proj) cannot be deleted as it has snapshots. Delete snapshots and try again
Traceback (most recent call last):
File "/opt/rockstor/eggs/gunicorn-0.16.1-py2.7.egg/gunicorn/workers/sync.py", line 34, in run
client, addr = self.socket.accept()
File "/usr/lib64/python2.7/socket.py", line 202, in accept
sock, addr = self._sock.accept()
error: [Errno 11] Resource temporarily unavailable
[29/Jun/2017 15:30:36] DEBUG [storageadmin.util:45] Current Rockstor version: 3.9.0-0
Package upgrade
Package upgrade
Package upgrade
[28/Nov/2017 02:40:42] ERROR [storageadmin.middleware:32] Exception occured while processing a request. Path: /api/commands/refresh-snapshot-state method: POST
[28/Nov/2017 02:40:42] ERROR [storageadmin.util:44] exception: Failed to shutdown the system due to a low level error: Error running a command. cmd = /usr/sbin/shutdown -h now. rc = -15. stdout = ['']. stderr = ['']
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 245, in post
system_shutdown(delay)
File "/opt/rockstor/src/rockstor/system/osi.py", line 1139, in system_shutdown
return run_command([SHUTDOWN, '-h', delta])
File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/sbin/shutdown -h now. rc = -15. stdout = ['']. stderr = ['']
[28/Nov/2017 02:40:42] ERROR [storageadmin.middleware:33] Error running a command. cmd = /sbin/btrfs property get /mnt2/media/rockon-service/btrfs/subvolumes/a8c5da35b35b087a7bd657e450cee3e0f110a171ba3afb8f9b3de1aaeb14b8df. rc = -15. stdout = ['']. stderr = ['']
Traceback (most recent call last):
File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/handlers/base.py", line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)
File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 452, in dispatch
response = self.handle_exception(exc)
File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 449, in dispatch
response = handler(request, *args, **kwargs)
File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
return func(*args, **kwargs)
File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 323, in post
import_snapshots(share)
File "/opt/rockstor/src/rockstor/storageadmin/views/share_helpers.py", line 157, in import_snapshots
share.name)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 518, in snaps_info
snap_name, writable = parse_snap_details(mnt_pt, fields)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 480, in parse_snap_details
'%s/%s' % (mnt_pt, fields[-1])])
File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /sbin/btrfs property get /mnt2/media/rockon-service/btrfs/subvolumes/a8c5da35b35b087a7bd657e450cee3e0f110a171ba3afb8f9b3de1aaeb14b8df. rc = -15. stdout = ['']. stderr = ['']

@coleberhorst

You’re right, nothing useful there.
It looks like supervisord is used for managing the nginx process.
Could you please try restarting the rockstor service again and posting the recent contents of:

/opt/rockstor/var/log/supervisord_nginx_stderr.log
/opt/rockstor/var/log/supervisord_nginx_stdout.log
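
You can also ask supervisord itself what it thinks of the nginx program. Assuming Rockstor's buildout generated a supervisorctl script alongside supervisord in /opt/rockstor/bin (I haven't checked that on a Rockstor box), something like this should report its state and last exit status:

/opt/rockstor/bin/supervisorctl -c /opt/rockstor/etc/supervisord.conf status nginx

If that script isn't there, the stderr log above should tell the same story.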

Cheers,

Haioken

OK, I stopped the docker service and then restarted the rockstor service.

Here’s the cat of stderr:

[root@warehouse13 log]# cat supervisord_nginx_stderr.log
Package upgrade
Package upgrade
Package upgrade
Package upgrade
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration
nginx: [emerg] no "events" section in configuration

Here’s the cat of stdout:

[root@warehouse13 log]# cat supervisord_nginx_stdout.log
Package upgrade
Package upgrade
Package upgrade
Package upgrade

Also, here's supervisord.log, which might hold some clues; it definitely shows the nginx failure:

2017-11-30 16:34:11,838 WARN received SIGTERM indicating exit request
2017-11-30 16:34:11,838 INFO waiting for data-collector, gunicorn, ztask-daemon to die
2017-11-30 16:34:11,841 INFO exited: data-collector (terminated by SIGTERM; not expected)
2017-11-30 16:34:11,846 INFO stopped: ztask-daemon (terminated by SIGTERM)
2017-11-30 16:34:11,896 INFO exited: gunicorn (exit status 0; expected)
2017-11-30 16:34:52,895 CRIT Supervisor running as root (no user in config file)
2017-11-30 16:34:52,904 INFO RPC interface 'supervisor' initialized
2017-11-30 16:34:52,904 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-11-30 16:34:52,904 INFO supervisord started with pid 2908
2017-11-30 16:34:53,906 INFO spawned: 'nginx' with pid 2920
2017-11-30 16:34:53,907 INFO spawned: 'gunicorn' with pid 2921
2017-11-30 16:34:53,908 INFO spawned: 'data-collector' with pid 2922
2017-11-30 16:34:53,909 INFO spawned: 'ztask-daemon' with pid 2923
2017-11-30 16:34:53,917 INFO exited: nginx (exit status 1; not expected)
2017-11-30 16:34:54,919 INFO spawned: 'nginx' with pid 2957
2017-11-30 16:34:54,932 INFO exited: nginx (exit status 1; not expected)
2017-11-30 16:34:55,933 INFO success: data-collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2017-11-30 16:34:55,933 INFO success: ztask-daemon entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2017-11-30 16:34:56,936 INFO spawned: 'nginx' with pid 3049
2017-11-30 16:34:56,949 INFO exited: nginx (exit status 1; not expected)
2017-11-30 16:34:58,951 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2017-11-30 16:34:59,954 INFO spawned: 'nginx' with pid 3050
2017-11-30 16:34:59,967 INFO exited: nginx (exit status 1; not expected)
2017-11-30 16:35:00,968 INFO gave up: nginx entered FATAL state, too many start retries too quickly
2017-12-01 00:19:03,080 WARN received SIGTERM indicating exit request
2017-12-01 00:19:03,080 INFO waiting for data-collector, gunicorn, ztask-daemon to die
2017-12-01 00:19:03,082 INFO exited: data-collector (terminated by SIGTERM; not expected)
2017-12-01 00:19:03,082 INFO exited: ztask-daemon (terminated by SIGTERM; not expected)
2017-12-01 00:19:03,134 INFO exited: gunicorn (exit status 0; expected)
2017-12-01 00:19:03,296 CRIT Supervisor running as root (no user in config file)
2017-12-01 00:19:03,303 INFO RPC interface 'supervisor' initialized
2017-12-01 00:19:03,304 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-12-01 00:19:03,304 INFO supervisord started with pid 18593
2017-12-01 00:19:04,306 INFO spawned: 'nginx' with pid 18605
2017-12-01 00:19:04,308 INFO spawned: 'gunicorn' with pid 18606
2017-12-01 00:19:04,310 INFO spawned: 'data-collector' with pid 18607
2017-12-01 00:19:04,312 INFO spawned: 'ztask-daemon' with pid 18608
2017-12-01 00:19:04,319 INFO exited: nginx (exit status 1; not expected)
2017-12-01 00:19:05,322 INFO spawned: 'nginx' with pid 18641
2017-12-01 00:19:05,333 INFO exited: nginx (exit status 1; not expected)
2017-12-01 00:19:06,335 INFO success: data-collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2017-12-01 00:19:06,335 INFO success: ztask-daemon entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2017-12-01 00:19:07,337 INFO spawned: 'nginx' with pid 18746
2017-12-01 00:19:07,349 INFO exited: nginx (exit status 1; not expected)
2017-12-01 00:19:09,352 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2017-12-01 00:19:10,354 INFO spawned: 'nginx' with pid 19067
2017-12-01 00:19:10,369 INFO exited: nginx (exit status 1; not expected)
2017-12-01 00:19:11,370 INFO gave up: nginx entered FATAL state, too many start retries too quickly

The other logs seemed to have nothing useful.

Looks like your nginx config is missing or broken.
The easiest fix is probably to reinstall the rockstor package:

yum reinstall rockstor

Not sure if that might bork the stored UI data, though (shares, pools, Rock-on status, etc.). Personally, it's what I'd do at that point if I didn't have a config backup.
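
If you do go that route, it might be worth stashing the current (possibly empty) config first, so you can compare it with whatever the reinstall lays down. Something like this, with the destination path just an example:

cp -a /opt/rockstor/etc/nginx/nginx.conf /root/nginx.conf.broken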

@Haioken

Could @coleberhorst's report be another instance of the rather rare blank nginx config? I.e.:

https://github.com/rockstor/rockstor-core/issues/1833

In which case there is a shorter and simpler fix that simply re-creates, via a sed command, the following file:

/opt/rockstor/etc/nginx/nginx.conf

Looks like it to me, given @coleberhorst's supplied log entries (the repeated nginx: [emerg] no "events" section in configuration).

I still don’t know how this comes about but at least that is worth a try.

@coleberhorst if you fancy looking in that file to see if it's blank, you may very well find that the confirmed workaround, via the first sed command (27 Sep) in that issue, gets you up and running again. Let us know how it goes.
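
A quick way to check without opening an editor is to look at the file's size:

wc -c /opt/rockstor/etc/nginx/nginx.conf

A zero (or near-zero) byte count would confirm the blank-config case; nginx refuses to start without at least an events { } section, which squares with the [emerg] error in the logs above.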

Linking to a possibly related forum thread by @petermc that led to the referenced issue:

Visiting that thread should also yield a copy of said sed along with an explanation of its function :slight_smile:.

Hope that helps.

Ah hell, I forgot the templates.

Yeah, do what Phil said ;o)


As usual, @phillxnet sweeps in to save my bacon. Thanks to @Haioken and @phillxnet, the blank nginx fix worked well!

If there's anything I can provide to help with the bug fix, let me know!


@coleberhorst Glad you're sorted now, and well done for persevering, but @Haioken pretty much sorted this one via good diagnostic method and subsequent command suggestions.

I’d just like to know how this happens in the first place, probably some race in how we manage / edit / create that file. Oh well, all in good time.

Thanks for the update and thanks to @Haioken for an exemplary diagnosis.


@phillxnet No doubt I won't experience the issue; however, I've started an auditd watch on my own box to see what might be writing to it.

auditctl -w /opt/rockstor/etc/nginx/nginx.conf -p w -k nginx_watch

The options I’m using are as follows:

-w <PATH>     - Monitor a filesystem object at <PATH>
-p [r|w|x|a]  - Monitor for (read, write, execute, attribute)
-k <KEY>      - Log events under <KEY>

From this we can determine whether changes are made to the file by periodically checking:

ausearch -k nginx_watch

Might be something to consider adding to the systemd units for a few core files if this issue continues to present itself. auditctl is fantastic, but it can seriously bog things down if used too much (or on anything that triggers the monitored action too frequently), as it blocks the monitored action until it has finished logging the previous one.
I.e. don't set a read watch on something that's read every few microseconds.

Here is the output I have from ausearch after creating the watch and touching the file:

----
time->Sat Dec  2 14:58:49 2017
type=CONFIG_CHANGE msg=audit(1512187129.270:242908): auid=0 ses=29900 op=add_rule key="nginx_watch" list=4 res=1
----
time->Sat Dec  2 15:08:30 2017
type=PROCTITLE msg=audit(1512187710.273:243099): proctitle=746F756368002F6F70742F726F636B73746F722F6574632F6E67696E782F6E67696E782E636F6E66
type=PATH msg=audit(1512187710.273:243099): item=1 name="/opt/rockstor/etc/nginx/nginx.conf" inode=303331 dev=00:27 mode=0100600 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
type=PATH msg=audit(1512187710.273:243099): item=0 name="/opt/rockstor/etc/nginx/" inode=49961 dev=00:27 mode=040755 ouid=0 ogid=0 rdev=00:00 nametype=PARENT
type=CWD msg=audit(1512187710.273:243099): cwd="/root"
type=SYSCALL msg=audit(1512187710.273:243099): arch=c000003e syscall=2 success=yes exit=3 a0=7ffe09bf3861 a1=941 a2=1b6 a3=7ffe09bf29c0 items=2 ppid=9624 pid=20397 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=29900 comm="touch" exe="/usr/bin/touch" key="nginx_watch"

If anybody experiences this multiple times, I suggest they try this.
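
One caveat: rules added with auditctl at runtime don't survive a reboot. Anyone wanting the watch to persist could drop the same rule into /etc/audit/rules.d/ and reload; a sketch, assuming the stock auditd packaging (the rules filename is just an example):

echo '-w /opt/rockstor/etc/nginx/nginx.conf -p w -k nginx_watch' > /etc/audit/rules.d/nginx_watch.rules
augenrules --load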
