Fatal error: Too many clients error when connecting with https after a while

TL;DR: why is the number of connections to the web UI / status page so limited?

Hi, I just installed Rockstor and purchased the 5-year support. Fun to play with this. I'm running inside a VirtualBox VM with 4 GB RAM, 2 CPUs, and 1 physical disk mapped. My host has 16 GB physical RAM and a 4-core i7 with hyperthreading and VT-d. I'm running 3.8.10 and am logged in with my non-root account.

I connect to the web interface on my local network and everything works (https://10.1.2.3 or whatever). The problem is that I connect a few times a day from different systems, and after a while I noticed I started getting error messages and the UI wouldn't load. I love the ability to get that error.tar file, by the way.

I don't think I'm opening even 30 different connections. It seems strangely fragile that I can't open, say, 5 or 10 different connections.

In the error file, I see variations of: FATAL: sorry, too many clients already. I see this on any of the pages on my local network once the problem starts. Restarting the VM fixes it. Here's the top of one of the tracebacks:

[11/Jan/2016 09:10:40] ERROR [storageadmin.middleware:35] Exception occured while processing a request. Path: / method: GET
[11/Jan/2016 09:10:40] ERROR [storageadmin.middleware:36] FATAL: sorry, too many clients already
Traceback (most recent call last):
File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/core/handlers/base.py", line 112, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/rockstor/src/rockstor/storageadmin/views/home.py", line 59, in home
setup = Setup.objects.all()[0]
File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/query.py", line 132, in __getitem__


Hello,

I am experiencing the same issue. I am not running several connections; maybe there were two tabs open in Chrome. I do have the impression that connections are not being properly terminated.

Greetings,
Hendrik

Thanks @Nick for reporting this issue and @henfri for reviving it; it almost fell through the cracks. I have also noticed this occasionally lately, and I think it's due to our APIWrapper implementation. DB connections are supposed to close, but there seems to be a slow creep of unclosed connections. I've tried to reproduce it, but no luck so far. This is a serious issue, though, and we should fix it soon. Here's the GitHub issue for it.
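For anyone curious what "a slow creep of unclosed connections" looks like in practice, here is a toy sketch with a stand-in connection class (hypothetical; this is not Rockstor's or Django's actual API). Each call path that skips close() permanently consumes one of the server's connection slots, which is exactly what produces "FATAL: sorry, too many clients already" once PostgreSQL's limit is reached:

```python
class FakeConnection:
    """Stand-in for a psycopg2/Django DB connection (illustrative only)."""
    open_count = 0  # class-wide counter, mimicking the server-side limit

    def __init__(self):
        FakeConnection.open_count += 1
        self.closed = False

    def close(self):
        if not self.closed:
            self.closed = True
            FakeConnection.open_count -= 1


def leaky_request():
    conn = FakeConnection()
    # ... run queries ...
    # bug: no conn.close() -- the connection lingers until process exit


def fixed_request():
    conn = FakeConnection()
    try:
        pass  # ... run queries ...
    finally:
        conn.close()  # always returned to the server


for _ in range(5):
    leaky_request()
print(FakeConnection.open_count)  # -> 5: every leaky call left one open

for _ in range(5):
    fixed_request()
print(FakeConnection.open_count)  # -> 5: the fixed path leaks nothing new
```

With a real backend the leaked connections count against the server's max_connections, so a few leaked connections per hour is enough to exhaust the limit in a day or two, matching the "after a while" symptom above.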

In the meantime, the workaround is to restart the Rockstor service with the systemctl restart rockstor command.
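When diagnosing this, it can help to know the server-side cap the error is hitting. PostgreSQL's max_connections defaults to 100 when unset. A minimal sketch (assuming a standard postgresql.conf layout; the sample text below is illustrative, not copied from a Rockstor install) that reads the configured value from the file's text:

```python
import re


def max_connections(conf_text, default=100):
    """Return max_connections from postgresql.conf text.

    PostgreSQL falls back to 100 when the setting is absent or
    commented out, so we do the same.
    """
    for line in conf_text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop trailing comments
        m = re.match(r'max_connections\s*=\s*(\d+)', line)
        if m:
            return int(m.group(1))
    return default


sample_conf = """
# - Connection Settings -
max_connections = 100   # (change requires restart)
"""
print(max_connections(sample_conf))  # -> 100
```

Comparing that number against the live count (e.g. SELECT count(*) FROM pg_stat_activity; in psql) shows how close the server is to rejecting new clients.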

Glad to see this is a known issue - I just installed rockstor last night and am also experiencing this issue after leaving the web UI open in a single browser window overnight. I got the error after waking my computer up this morning and trying to follow a link in the open web UI.

Hi all, this is my first post and I hope it will help with the "too many clients" error: it seems to be a problem with DB connections that are not closed or not properly managed (e.g. during cron execution).

Here is the error with snapshots scheduled every 1 minute. (It would be nice to have a text input instead of a select with 30 min / 1 min options; I would like 10-minute snapshots for one share, 30 minutes for another, and so on.)

Thanks for Rockstor. I'm planning to move from nas4free/FreeNAS on Proxmox to Rockstor on Proxmox (the "too many clients" issue aside, I've already seen better Samba + AD integration and Samba read/write performance: 160 MB/s read/write on gigabit, like a plain Samba on CentOS/Debian on bare metal :smiley: )

Traceback (most recent call last):
  File "/opt/rockstor/bin/st-snapshot", line 43, in <module>
    sys.exit(scripts.scheduled_tasks.snapshot.main())
  File "/opt/rockstor/src/rockstor/scripts/scheduled_tasks/snapshot.py", line 57, in main
    tdo = TaskDefinition.objects.get(id=tid)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/manager.py", line 151, in get
    return self.get_queryset().get(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/query.py", line 304, in get
    num = len(clone)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/query.py", line 77, in __len__
    self._fetch_all()
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/query.py", line 857, in _fetch_all
    self._result_cache = list(self.iterator())
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/query.py", line 220, in iterator
    for row in compiler.results_iter():
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/sql/compiler.py", line 713, in results_iter
    for rows in self.execute_sql(MULTI):
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/models/sql/compiler.py", line 785, in execute_sql
    cursor = self.connection.cursor()
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/backends/__init__.py", line 162, in cursor
    cursor = util.CursorWrapper(self._cursor(), self)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/backends/__init__.py", line 132, in _cursor
    self.ensure_connection()
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/backends/__init__.py", line 127, in ensure_connection
    self.connect()
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/utils.py", line 99, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/backends/__init__.py", line 127, in ensure_connection
    self.connect()
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/backends/__init__.py", line 115, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/backends/postgresql_psycopg2/base.py", line 115, in get_new_connection
    return Database.connect(**conn_params)
  File "/opt/rockstor/eggs/psycopg2-2.6-py2.7-linux-x86_64.egg/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: FATAL: sorry, too many clients already
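The traceback shows the cron script dying outright when connect() fails, which is what kills the schedule. Purely as an illustrative pattern (the task callable and ConnectionError below are stand-ins for the real DB call and django.db.utils.OperationalError; this is not Rockstor's actual code), one could retry transient connection failures instead of letting the whole run die:

```python
import time


def run_with_retry(task, attempts=3, delay=2.0):
    """Run `task` (a callable doing DB work), retrying on transient
    connection errors instead of letting the whole cron run die.

    ConnectionError stands in for django.db.utils.OperationalError.
    """
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of retries: surface the real error
            time.sleep(delay)


# Demo: a task that fails twice, then succeeds on the third attempt.
calls = []


def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError('FATAL: sorry, too many clients already')
    return 'ok'


print(run_with_retry(flaky, attempts=3, delay=0.0))  # -> ok
```

Retries only paper over a brief spike, of course; if connections are genuinely leaking, the pool still fills up and the real fix is closing them.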

IMPORTANT - LOST SNAPSHOT SCHEDULE

I've been playing with this issue and found the following: I had a 1-minute snapshot task scheduled, and when this error hits, the cron stops. You can restart Rockstor via systemctl, but you get the same issue again.

What I did: updated via yum update, which pulled in 2 postgresql library updates. After that:

  1. no more "too many clients" error

  2. but I got errors on the scheduled task, although the snapshots seem ok

In the end it seems to solve "too many clients", but it's not fully compatible with the Rockstor scripts (a task scheduled with a 20-snapshot limit now has 39 snapshots).

Please note this happened as soon as a file was added to the share via Samba.

Hi @suman, this is what I get / understand after a fresh new install (I had to reinstall because of the yum update vs. Rockstor APIs conflict):

New installation, again on Proxmox (1 VirtIO root disk, 2 VirtIO storage disks, 1 SATA "fake disk" so I could install directly on the VirtIO root)
Created a pool
Joined AD
Created a share with root/Domain Users
Got my scheduled task for a snapshot every minute working fine (except for the qgroup assign, but I've read about that in btrfs.py :slight_smile: and the snapshots all seem ok)

No more "too many clients" errors, and I think this is the reason: I'm no longer pressing F5 to refresh the snapshots/scheduled tasks/other pages, so probably each F5 leaves some pending DB connections behind.
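If the leak really is per-request connections that never get reclaimed, it is worth noting that Django 1.6 (the version in the tracebacks above) controls connection lifetime with the CONN_MAX_AGE database setting: 0 closes the connection at the end of every request, while a positive value keeps it alive that many seconds. A hypothetical settings.py fragment (the engine and database name are illustrative guesses, not taken from Rockstor's actual configuration):

```python
# Hypothetical Django settings fragment; CONN_MAX_AGE is Django's own
# setting name, the ENGINE/NAME values here are only illustrative.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'storageadmin',
        'CONN_MAX_AGE': 0,  # close the DB connection after each request
    }
}
```

Note that CONN_MAX_AGE only governs request-driven connections; connections opened by standalone scripts (like the st-snapshot cron job above) still have to be closed by the script itself.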

Just one thing: the scheduled task is set for 25 turns, but I still get 26 snapshots; is that right?
