Assuming you are on 3.8.16-12 now, could you capture the entire output of yum update rockstor? That output will have clues if the problem appears again.
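Something along these lines would keep a full copy of everything yum prints (just a sketch; the log path is only a suggestion):

yum update rockstor 2>&1 | tee /root/rockstor-update-$(date +%F).log   # capture stdout and stderr to a dated log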
Update from 3.8.16-12 to 3.8.16-13 is OK.
Sorry @suman, I did not capture the update output, as I only just read your post. The update did look smooth, with no unusual messages, and I did not get the error message after reboot.
Previously I updated from 3.8.15 to 3.8.16-13 directly, that caused the error.
It seems there is some step in the 3.8.16-12 release that fixes the database, but it is not included in 3.8.16-13.
Zoltan
Hi.
I just had the same issue as zgyivi, with a similar upgrade path (3.8.15 -> 3.8.16-14). Downgrading to -12 and then upgrading to -14 again worked for me.
@suman I’m going to set up a similar server next week and will try to capture the output from the 3.8.15 -> 3.8.16-14 update. Let me know whether I need to capture anything extra.
I’ve been having the same issue too. I just updated to 3.8.16-15 and hit the same error for the column; now I have the following stack trace:
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/user.py", line 106, in get_queryset
    return combined_users()
  File "/opt/rockstor/src/rockstor/storageadmin/views/ug_helpers.py", line 69, in combined_users
    temp_uo.pincard_allowed, temp_uo.has_pincard = pincard_states(temp_uo)  # noqa E501
  File "/opt/rockstor/src/rockstor/system/pinmanager.py", line 146, in pincard_states
    pincard_present = has_pincard(user)
  File "/opt/rockstor/src/rockstor/system/pinmanager.py", line 129, in has_pincard
    pins = Pincard.objects.filter(user=int(uid_field)).count()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 318, in count
    return self.query.get_count(using=self.db)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/query.py", line 466, in get_count
    number = obj.get_aggregation(using, ['__count'])['__count']
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/query.py", line 447, in get_aggregation
    result = compiler.execute_sql(SINGLE)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 840, in execute_sql
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
ProgrammingError: relation "storageadmin_pincard" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "storageadmin_pincard" WHE...
I’m trying yum downgrade rockstor right now and capturing the output. Just want to help fix this bug.
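If it helps, one way to confirm the missing relation directly is to ask Postgres; a sketch, assuming the storageadmin database name and rocky role from Rockstor’s default settings (both assumptions on my part), and you may need to run it as the postgres system user:

psql -U rocky -d storageadmin -c '\d storageadmin_pincard'   # a "Did not find any relation" reply confirms the table is absent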
@gkadillak Hello again.
Are you sure your report is the same error as in the issue we have here:
whereas in your log we have:
which looks more like:
and
which were caused by a db migration tool change that meant no direct updates from Rockstor versions prior to 3.8.15, i.e. you had to install 3.8.15 as an interim step. However, this newer issue has been seen (though rarely) when updating from rockstor-3.8.16-10.x86_64 to rockstor-3.8.16-12.x86_64.
That is in:
EDIT: corrected to link to 3.8.16-11 (not -10)
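For anyone still on a pre-3.8.15 install, that interim step would look roughly like this (a sketch; it assumes the 3.8.15 package is still available in the configured repo):

yum update rockstor-3.8.15   # land on the interim 3.8.15 release first
yum update rockstor          # then continue on to the current release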
@Flyer added the issue-specific db field in:
via:
commit 70ce761574f71be98b1a3df8f2f7a048ffc453b2
Share Usage reporting right sizes by MFlyer · Pull Request #1625 · rockstor/rockstor-core · GitHub
I’m currently looking into this, but it initially seems that this is the db migration that fails to apply in some cases, resulting in the report that is the subject of this thread.
I may have this all wrong, so do share any findings/corrections, as it would be great to get this one sorted.
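For anyone wanting to check the migration state on an affected box, something like the following should work; a sketch, using the bundled Django management wrapper that shows up in the process listings later in this thread (Django 1.8 provides the showmigrations command):

/opt/rockstor/bin/django showmigrations storageadmin   # [X] = applied, [ ] = pending; a pending entry would explain the missing column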
Sorry folks, linked to the wrong release version. Have now edited that last post to point to:
for when pqgroup_eusage and pqgroup_rusage were added.
Hi @suman. There are no warnings or anything strange in the output of the update. I can upload the log files somewhere if you want, but I don’t think there is anything valuable in them, as they don’t show any kind of problem.
Just dropping by to confirm that this issue still exists. I did an update from 3.8.15 to 3.8.16-16 and ended up with the error described above. I rolled back one version at a time using yum downgrade rockstor, and the error resolved itself when I got to 3.8.16-12.
So glad this thread exists! Thanks guys.
I had this occur on my stable machine when it auto-updated to 3.9.0 this morning.
It appears to be working after a yum downgrade rockstor (which put it on 3.8.16) and then an update + reboot from the GUI.
Awesome! Fixed my problem! Thank you!
Just want to chime in: I had the same problem going from 3.8.15-9 to 3.8.16-16.
yum upgrade rockstor-3.8.16-16 ### upgrade from 3.8.15-9. Post-Install RPM Failure!!!
systemctl restart rockstor-pre rockstor rockstor-bootstrap ### FAILURE!!!
I downgraded to 3.8.16-12 (from 3.8.16-16) first, restarted Rockstor, then upgraded back to 3.8.16-16. That seemed to work this time.
yum downgrade rockstor-3.8.16-12
systemctl restart rockstor-pre rockstor rockstor-bootstrap
yum upgrade rockstor-3.8.16-16
systemctl restart rockstor-pre rockstor rockstor-bootstrap
Hi all, this is a copy & paste message across multiple forum threads:
checking over a GitHub ad hoc issue - any news on this after the last Rockstor update?
On my side: some checks are still required on Rock-ons share usage (probably docker related), but no blocking errors like missing db fields, etc.
Thanks
I have this problem too after auto-updating to 3.9.0, and am trying to downgrade. But has any solution been found? I really don’t like being on old releases.
The root cause of this issue is a bug in our schema migration logic. I fixed it in https://github.com/rockstor/rockstor-core/pull/1694 which went into 3.9.0-4
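To confirm the fixed package actually landed, a quick check (sketch):

rpm -q rockstor   # should report 3.9.0-4 or later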
Hi @suman - I am on 3.9.0-5 and still have the issue listed:
Houston, we’ve had a problem.
column storageadmin_share.pqgroup_rusage does not exist LINE 1: ...n_share"."rusage", "storageadmin_share"."eusage", "storagead... ^
I have been a Stable Update Channel user, but after experiencing this issue I switched to the Testing Updates channel to access the 3.9.0-4 release. This actually provided me with the 3.9.0-5 release, but the error persists.
Upgraded from 3.8 to 3.9 (stable) here and had this problem. Tried:
yum downgrade rockstor-3.8.16-12
but it errored with "No package rockstor-3.8.16-12 available"
Then:
yum downgrade rockstor
systemctl restart rockstor-pre rockstor rockstor-bootstrap
But it errored:
Job for rockstor-bootstrap.service failed because the control process exited with error code. See "systemctl status rockstor-bootstrap.service" … for details
Bootstrap service won’t start:
[root@nas ~]# systemctl status rockstor-bootstrap.service
● rockstor-bootstrap.service - Rockstor bootstrapping tasks
Loaded: loaded (/etc/systemd/system/rockstor-bootstrap.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-04-17 23:04:23 BST; 1min 12s ago
Process: 17144 ExecStart=/opt/rockstor/bin/bootstrap (code=exited, status=1/FAILURE)
Main PID: 17144 (code=exited, status=1/FAILURE)
Then I upgraded back to 3.9 using the webui, which got rid of the GUI errors. However the rockstor-bootstrap service still won’t start, and my main NFS share couldn’t be found (Samba was fine). The NFS service was started (I tried restarting it). The webui showed:
Unknown internal error doing a POST to /api/sm/services/rockstor-bootstrap/start
Rebooted, same state (except the clock now showed the correct timezone). I tried to recreate the NFS share I needed, but pressing the trash icon did nothing. So I just created a new NFS export for the same share, but with * for the host and NFS client instead of the FQDN of the host that needs it. Now I can access this share. But it was not a smooth upgrade experience, and there are still glitches:
- Can’t delete NFS shares
- The bootstrap service is stopped - should this be started? If so, how?
- Should I temporarily change to testing updates to get to -05, then back onto stable?
Could you reply with the output of the systemctl status -l rockstor-pre command?
Really sorry this nasty db migration bug got into stable. The easiest way to fix it is to update to -05, but also make sure the rockstor-pre service has restarted. After the update, please capture the output of systemctl status -l rockstor-pre, and if it has not restarted/started recently, go ahead and restart it with systemctl restart rockstor-pre. If you are still having problems, reply with the output of systemctl status -l rockstor-pre rockstor rockstor-bootstrap.
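Put together, the recovery sequence looks roughly like this (a sketch; it assumes your update channel already offers the -05 package):

yum update rockstor                    # move to 3.9.0-5, which contains the migration fix
systemctl status -l rockstor-pre       # check it (re)started after the update
systemctl restart rockstor-pre         # only needed if it did not
systemctl status -l rockstor-pre rockstor rockstor-bootstrap   # gather output if problems persist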
Upgraded to -05. Can delete NFS shares now. systemctl status -l rockstor-pre says:
● rockstor-pre.service - Tasks required prior to starting Rockstor
Loaded: loaded (/etc/systemd/system/rockstor-pre.service; enabled; vendor preset: disabled)
Active: active (exited) since Tue 2017-04-18 09:24:45 BST; 1h 40min ago
Main PID: 3702 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/rockstor-pre.service
Apr 18 09:24:35 nas initrock[3702]: 2017-04-18 03:24:35,888: Running app database migrations...
Apr 18 09:24:44 nas initrock[3702]: 2017-04-18 03:24:44,185: Done
Apr 18 09:24:44 nas initrock[3702]: 2017-04-18 03:24:44,186: Running prepdb...
Apr 18 09:24:45 nas initrock[3702]: 2017-04-18 03:24:45,211: Done
Apr 18 09:24:45 nas initrock[3702]: 2017-04-18 03:24:45,212: stopping firewalld...
Apr 18 09:24:45 nas initrock[3702]: 2017-04-18 03:24:45,306: firewalld stopped and disabled
Apr 18 09:24:45 nas initrock[3702]: 2017-04-18 03:24:45,829: rockstor service looks correct. Not updating.
Apr 18 09:24:45 nas initrock[3702]: 2017-04-18 03:24:45,829: rockstor-bootstrap.service looks correct. Not updating.
Apr 18 09:24:45 nas initrock[3702]: default_if = nas03_team
Apr 18 09:24:45 nas initrock[3702]: ipaddr = #.#.#.#
Does “active (exited)” mean it has started successfully? Which genius Linux kernel dev dreamed up such an unintuitive status!? My guess is it’s started, because it has green text, not red. In any case, the “bootstrap” service in the webui still shows stopped:
Other statuses, rockstor-bootstrap:
● rockstor-bootstrap.service - Rockstor bootstrapping tasks
Loaded: loaded (/etc/systemd/system/rockstor-bootstrap.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-04-18 11:04:48 BST; 12min ago
Process: 20213 ExecStart=/opt/rockstor/bin/bootstrap (code=exited, status=1/FAILURE)
Main PID: 20213 (code=exited, status=1/FAILURE)
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
Apr 18 11:04:48 nas bootstrap[20213]: Max attempts(15) reached. Connection errors persist. Failed to bootstrap. Error: ['Internal Server Error: No JSON object could be decoded']
and rockstor:
● rockstor.service - RockStor startup script
Loaded: loaded (/etc/systemd/system/rockstor.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2017-04-18 09:24:45 BST; 1h 52min ago
Main PID: 9367 (supervisord)
CGroup: /system.slice/rockstor.service
├─ 9367 /usr/bin/python /opt/rockstor/bin/supervisord -c /opt/rockstor/etc/supervisord.conf
├─10157 nginx: master process /usr/sbin/nginx -c /opt/rockstor/etc/nginx/nginx.con
├─10158 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/opt/rockstor/src/rockstor --settings=settings --timeout=120 --graceful-timeout=120 wsgi:application
├─10159 /usr/bin/python /opt/rockstor/bin/data-collector
├─10160 /usr/bin/python2.7 /opt/rockstor/bin/django ztaskd --noreload --replayfailed -f /opt/rockstor/var/log/ztask.log
├─10161 nginx: worker process
├─10162 nginx: worker process
├─10226 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/opt/rockstor/src/rockstor --settings=settings --timeout=120 --graceful-timeout=120 wsgi:application
└─10227 /usr/bin/python /opt/rockstor/bin/gunicorn --bind=127.0.0.1:8000 --pid=/run/gunicorn.pid --workers=2 --log-file=/opt/rockstor/var/log/gunicorn.log --pythonpath=/opt/rockstor/src/rockstor --settings=settings --timeout=120 --graceful-timeout=120 wsgi:application
Apr 18 09:24:46 nas supervisord[9367]: 2017-04-18 09:24:46,405 CRIT Server 'unix_http_server' running without any HTTP authentication checking
Apr 18 09:24:46 nas supervisord[9367]: 2017-04-18 09:24:46,405 INFO supervisord started with pid 9367
Apr 18 09:24:47 nas supervisord[9367]: 2017-04-18 09:24:47,408 INFO spawned: 'nginx' with pid 10157
Apr 18 09:24:47 nas supervisord[9367]: 2017-04-18 09:24:47,410 INFO spawned: 'gunicorn' with pid 10158
Apr 18 09:24:47 nas supervisord[9367]: 2017-04-18 09:24:47,414 INFO spawned: 'data-collector' with pid 10159
Apr 18 09:24:47 nas supervisord[9367]: 2017-04-18 09:24:47,424 INFO spawned: 'ztask-daemon' with pid 10160
Apr 18 09:24:49 nas supervisord[9367]: 2017-04-18 09:24:49,432 INFO success: data-collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Apr 18 09:24:49 nas supervisord[9367]: 2017-04-18 09:24:49,432 INFO success: ztask-daemon entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Apr 18 09:24:52 nas supervisord[9367]: 2017-04-18 09:24:52,436 INFO success: nginx entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Apr 18 09:24:52 nas supervisord[9367]: 2017-04-18 09:24:52,436 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Maybe I can just ignore bootstrap and assume this issue is benign; no other problems now.