Null value in column "pqgroup_eusage" violates not-null constraint. DETAIL: Failing row contains (6, 4, 0/259, data1, null, 4831838208, root, root, 755, 2017-02-14 17:12:34.201143+00, data1, f, no, 0, 0, 2015/4, null, null)

[Please complete the below template with details of the problem reported on your Web-UI. Be as detailed as possible. Community members, including developers, shall try and help. Thanks for your time in reporting this issue! We recommend purchasing commercial support for expedited support directly from the developers.]

Brief description of the problem

Error while creating share

Detailed step by step instructions to reproduce the problem

I made a pool of 10 disks, one (1) TB each, with raid 10 = 4.55 TB of space
I made a share the same size as the pool.
Clicking “Submit” throws the error

Web-UI screenshot

[Drag and drop the image here]

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 178, in post
    s.save()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 734, in save
    force_update=force_update, update_fields=update_fields)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 762, in save_base
    updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 846, in _save_table
    result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 885, in _do_insert
    using=using, raw=raw)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 920, in _insert
    return query.get_compiler(using=using).execute_sql(return_id)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 974, in execute_sql
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
IntegrityError: null value in column "pqgroup_eusage" violates not-null constraint
DETAIL: Failing row contains (6, 4, 0/259, data1, null, 4831838208, root, root, 755, 2017-02-14 17:12:34.201143+00, data1, f, no, 0, 0, 2015/4, null, null).

Hi @holmkvist and welcome to Rockstor!

I suppose you updated to the latest Rockstor version.
Question: did you perform a reboot?

No -> Reboot and try again (share usage updates had some db mods)
Yes -> Damn, will check it asap

Mirko

Hi @Flyer. A db constraint violation most likely indicates not an update problem but a software bug. You might want to go over your latest changes again to see whether pqgroup_eusage, and possibly the other field you’ve added, can ever be null. These values default to 0 in the model, but if you explicitly supply None when you create/save the model, this problem occurs.
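To illustrate the distinction with a minimal sketch (plain sqlite3 here as a stand-in for Rockstor’s Postgres/Django stack, and a toy `share` table, not the real schema): a column default only applies when the column is omitted from the INSERT; explicitly supplying NULL bypasses the default and trips the not-null constraint.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE share (name TEXT, pqgroup_eusage INTEGER NOT NULL DEFAULT 0)"
)

# Omitting the column lets the DEFAULT apply -- this succeeds.
con.execute("INSERT INTO share (name) VALUES ('data1')")

# Explicitly supplying NULL bypasses the default -- this fails, just like
# Django saving a model instance with the field explicitly set to None.
try:
    con.execute("INSERT INTO share (name, pqgroup_eusage) VALUES ('data2', NULL)")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: share.pqgroup_eusage
```

So the model-level `default=0` is no protection if the view code ever passes an explicit None through to save().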

Checking over the share.py view, but that had no big mods:

Import of the new volume_usage func, plus using it on the PUT request (share resizing).
While saving new shares we had no eusage/rusage (both defaulting to 0, like pqgroup_rusage/eusage) and no mods to that.
Most of the code is in the btrfs.py lib and share_helpers (used by data_collector to update share usage every 60 sec), so I can’t understand this; trying with a fresh new production env to check it.

@suman, @phillxnet and @holmkvist going crazy on this

Adding some cross references to @zgyivi and @KarstenV (from other forum threads), here are my steps trying to reproduce all your issues:

  • Fresh new Rockstor install from 3.8.16-1 ISO image
  • Environment with 4 disks + 1 Rockstor disk
  • Had my installation with my UTC and my locale (looking for any potential issue)
  • First boot went fine, had my hostname, my user, my password, etc etc
  • Rockstor 3.8.16-1 : had a new Pool with 2 disks, one share, all fine
  • Update to 3.8.16-12 : it took some time (Rockstor update with migrations + yum updates) <- Please remember this, I think something failed there
  • New Rockstor 3.8.16-12 : had different Pools ( Single - Raid 1 with 2 disks, 3 disks, 4 disks - Raid 0 with 2 disks, 3 disks, 4 disks - Raid 10 like @holmkvist with 4 disks )
  • On every Pool round had one share and all got created without issue
  • Last check had Rock-ons (@KarstenV) and uTorrent Rock-on installed, plus some random snapshots

Ran those tests again after a reboot; no issues again.

:confused:

My idea:
while performing the Rockstor update from 3.8.16-1 to 3.8.16-12 I saw it was a long task, so I had a coffee and started playing via shell with cat /opt/rockstor/src/rockstor/storageadmin/views/share.py. I noticed it took some time to update (with a 5 min counter, that file got updated when the counter was at 1 min or less), so the Rockstor update was probably queued after all the yum updates.
I think your update got corrupted (e.g. a page refresh or reboot while the update was still running, or similar)

Really not able to reproduce your issues, sorry guys :cry:

Mirko


Ok, let’s go with Rockstor - A New Hope, we need a brave user:

Log in to the Rockstor shell with the root account

  • yum install -y patch
  • curl https://patch-diff.githubusercontent.com/raw/rockstor/rockstor-core/pull/1625.patch > 1625.patch
  • patch -d /opt/rockstor/ -p1 < 1625.patch
  • reboot and check

Note: not on a system with important data
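For anyone hesitant, `patch` supports a dry run that reports what would change without touching any files, so the PR patch can be sanity-checked first. A sketch, using the same paths as the steps above:

```shell
# Fetch the patch as above, then check it applies cleanly without modifying anything
curl https://patch-diff.githubusercontent.com/raw/rockstor/rockstor-core/pull/1625.patch > 1625.patch
patch -d /opt/rockstor/ -p1 --dry-run < 1625.patch

# Only if the dry run reports no failed hunks, apply it for real
patch -d /opt/rockstor/ -p1 < 1625.patch
```

If the dry run reports failed hunks, the installed code likely differs from what the patch expects, and applying it anyway would leave a half-patched tree.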

Mirko

@Flyer and @holmkvist

Whilst looking into the related:

I have managed to reproduce the issue reported in this thread.

  • Install from the 3.8.16-1 ISO
  • Activate testing channel updates (not sure if this interim version is relevant, but anyway)
  • yum update rockstor-3.8.16-10 (accept GPG key)
  • Reboot: no errors
  • yum update rockstor-3.8.16-11
  • Confirmed via rpm console that the db migration associated with this version was successful; also:

SELECT pqgroup_rusage FROM storageadmin_share;
 pqgroup_rusage
----------------
              0
              0
(2 rows)

So we have the new db entries.

then when attempting to create a share in rockstor_rockstor pool:

IntegrityError: null value in column "pqgroup_eusage" violates not-null constraint
DETAIL:  Failing row contains (12, 1, 0/263, test-share, null, 7025459, root, root, 755, 2017-03-07 16:57:44.890215+00, test-share, f, null, 0, 0, 2015/12, null, null).

But the good news is that after a reboot all subsequent share creations function as expected. Note also that the initial share creation that triggered the error took effect.

So it looks like this one was not a missing db entry (via a failed db migration) but rather a result of stale code following the 3.8.16-11 update.

Popping in this thread so that we know how this might be reproduced.


Whelp, I’m so sorry that I’m one of the ones that got caught in this position without a single solitary way (that I can see) to get out.

First off, something happened a while back: I was remotely copying my ssh keys over to the server when, in the middle of the copy, my connection died (thanks CenturyLink). I got back online and found I was unable to ssh in, since I had set my config to only allow logins from my key set. So, that sucks. I thought this would be okay for a little while since RockStor was running okay… but the “system shell” link doesn’t work (displays page not found), and I hadn’t updated to the latest release because I was seeing some posts about how it was breaking a couple of installs and felt it might need some ironing out.

Anyway, now the second part of the problem.

This is all remote, by the way; the server is at my mother’s office a couple of states away, serving as her cloud backup.

I have an admin user set up as the only user able to log into the GUI (even remotely, with a 256-character pass), and that user cannot change root’s password so that I could have someone log in via the physical console and fix my keys. So I thought that performing a distro update would fix the issue with not being able to admin users etc. and give me the system shell back on the main menu, but alas, nope.

So then I tried the boot-to-the-ISO method and rescue shell, and it can’t seem to find my /mnt/sysimage instance, so it’s unable to let me shell in and reset root that way either.

Now I’m just kinda stuck in a position where I am unable to perform the yum downgrade actions since it just sits here on the GUI:

Houston, we've had a problem.

column storageadmin_share.pqgroup_rusage does not exist LINE 1: ...n_share"."rusage", "storageadmin_share"."eusage", "storagead... ^

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 100, in get_queryset
    reverse=reverse)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 162, in __iter__
    self._fetch_all()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 965, in _fetch_all
    self._result_cache = list(self.iterator())
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 238, in iterator
    results = compiler.execute_sql()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 840, in execute_sql
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
ProgrammingError: column storageadmin_share.pqgroup_rusage does not exist
LINE 1: ...n_share"."rusage", "storageadmin_share"."eusage", "storagead...
                                                             ^

Any thoughts? @phillxnet @Flyer @suman lol

Thank you soooo much in advance for even entertaining yourselves with this post!

eXe

Hi @exelan,
some recent posts made me think of other issues:
can you check / try the same tests starting from this post? → Unable to access web ui, and shares not mounted to file system after reboot - #12 by Flyer

First round we check for Pools & Shares, then we check for disks

Mirko


Any news on this?

Checking over an ad hoc GitHub issue: any news on this after the last Rockstor update?

On my side: some checks are required for Rock-ons share usage (probably docker related), but no blocking errors like missing db fields, etc.

Thanks
Mirko


@Flyer @phillxnet

Ahhh, so sorry for the delay! Things have been crazy around here, got laid off and had to travel to Atlanta to fix some servers before I left, anyway… to get back to this existing issue haha:

The main problem is that I can’t get CLI access to the server at the moment… I know, it’s insane. I’m not on-site where the server is located anymore; I moved to Seattle for the moment. I made a huge mistake and used my connect.py script on my work laptop to connect to my RockStor instance, which automatically copies my local SSH keys if they’re not present on the target machine… a huge mistake, because I disable password authentication in my sshd_config. Since all that happened, I’m currently limited to either giving the root password to my brother, who happens to be on site where the server is, and hoping he can log in with its 256 characters (sigh) and then run my fix_keys script… or using the admin rights I have as my username via the RockStor GUI, which isn’t giving me very much to fix Docker with.

I ran the latest update and rebooted the machine and this is the new Docker error:

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/smart_manager/views/docker_service.py", line 40, in _validate_root
    return Share.objects.get(name=root)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 328, in get
    num = len(clone)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 144, in __len__
    self._fetch_all()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 965, in _fetch_all
    self._result_cache = list(self.iterator())
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 238, in iterator
    results = compiler.execute_sql()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 840, in execute_sql
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
ProgrammingError: column storageadmin_share.pqgroup_rusage does not exist
LINE 1: ...n_share"."rusage", "storageadmin_share"."eusage", "storagead...

Thanks so much for the replies!!

Exe

Hi @exelan,
after the latest updates every db-related issue should be ok.
One well-known issue remains: wrong usage reporting because Rock-ons (docker images) usage is not evaluated; that is covered in a different github issue.

Mirko

@flyer Thanks for the reply!

Do you happen to have a link to the github issue related to this? I assume there’s nothing I can do from the GUI itself and would need CLI.

If I could get my UI Web Shell instance to work, that would solve some issues for sure… but alas, another problem for another day!

EDIT: Yeah, I realized that I pasted the wrong Py debug output, sorry! I’ve updated the code, and I also realized that it’s a definite faux pas to ask about another issue within an issue, so I removed my Web Shell issue as well.

Exe

@exelan Hello again. What you reported:

is actually an instance of the issue I referenced in this thread as related but not the same.

ie this thread covers the error (quoting @phillxnet, post 7 of this topic):

null value in column “pqgroup_eusage” violates not-null constraint

But you have pasted:

which is covered in the related issue I linked before:

and later in that thread (where a few report downgrading to set versions and then upgrading again) as a fix, to give the db another shot at the migration. However, later on (it’s an old thread now) @suman details when he released a fix for the db migration issue that caused the “storageadmin_share.pqgroup_rusage does not exist” reports in the first place. That is, the new code expects a database field that never got added, due to the failed database migration.
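The mechanics of that second error can be sketched outside Django entirely (sqlite3 stand-in here; Rockstor itself uses Postgres, where the same situation surfaces as the ProgrammingError above): the ORM generates SQL naming a column the failed migration never created, so the database rejects the query itself.

```python
import sqlite3

# Table as it would exist if the migration adding pqgroup_rusage never ran
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE storageadmin_share (id INTEGER PRIMARY KEY, rusage INTEGER)"
)

# The new code still asks for the new column, so the SELECT fails outright
try:
    con.execute("SELECT rusage, pqgroup_rusage FROM storageadmin_share")
except sqlite3.OperationalError as e:
    print(e)  # no such column: pqgroup_rusage
```

Which is why re-running the migration (by downgrade/upgrade, or via the later fix plus a reboot) resolves it: once the column exists, the very same query succeeds.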

And so given your rather constrained options currently and your inevitable (and evidently inconvenient) requirement to re-establish ssh access I thought I’d report my findings on just upgrading 2 very different machines from:

3.8.16-10 to 3.9.0-7

where they both then came up on the dashboard (and some other pages) showing the:

... column storageadmin_share.pqgroup_rusage does not exist LINE 1:

In both cases, after a subsequent reboot (something you can still do remotely, as I understand it) and a little pause for the new db migrations to take place, both machines are now back to normal function.

It very much depends on whether your machine got the new fix by @suman in 3.9.0-4, but if so, you may well already have a fix in place and just need to reboot. Or, if your version is not new enough, you may still have enough UI working to upgrade beyond that fix and be on the home run again. You didn’t specify which versions you upgraded from and to, though.

So in short, previously you had only the option to downgrade to try and trigger the db migration attempt again:

Whereas now you may be able to receive a fix via an upgrade, which the UI can do (if it allows it, given the reported issue), and if you haven’t already received it, of course.

Hope that helps.