[storageadmin.util:44] Error?


Lately I’ve been getting error messages like this:

[25/May/2017 05:00:07] ERROR [storageadmin.util:44] exception: value too long for type character varying(256)
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 320, in post
    return self._update_disk_state()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 268, in _update_disk_state
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 734, in save
    force_update=force_update, update_fields=update_fields)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 762, in save_base
    updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 827, in _save_table
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 877, in _do_update
    return filtered._update(values) > 0
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 580, in _update
    return query.get_compiler(self.db).execute_sql(CURSOR)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 1062, in execute_sql
    cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 840, in execute_sql
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
DataError: value too long for type character varying(256)

I don't really know when it started, but looking in the logs I can see that the oldest entry is this same error from the 3rd of May this year.

I hope someone can help me shed some light on this problem and hopefully fix it.

@HBDK Hello again.

Yes, I’m afraid that one is my fault. Sorry.

The only trigger for this I've seen in the wild so far was related to a good few partitions on an existing device. I can't know if that is your cause as well, but we could probably tell from the output of:

ls -la /dev/disk/by-id/

Good news is, in an attempt to redeem myself, I have submitted a fix: Please see issue:

There, forum member @wellman helped in the linked thread with output from the same command as above. That issue also covers some as-yet-unreported possible causes of the same value overflow.

And the fix is awaiting code review via pull request:

Also see the now closed-as-duplicate report by GitHub user tomvancutsem, who was able to confirm that the partition count was also the cause of this error in his case:

Hope that helps.

The short of it is that if my pull request passes muster, a fix should be available shortly in a future update. In the meantime the only workaround is to reduce the partition count on the problem drive or drives, which may not be an option given it can involve data loss. The output of the above command should indicate which drive is causing this. 'As is', this bug will block Rockstor's ability to update drive state, which is not good, but it has only been observed in more unusual partitioning cases, so it would be good to know if that is also the case in your setup.



I have 4 physical disks in the system: two for shares (raid1) and two in an MD raid that the system runs on.

I ran the command you mentioned and got this result:

[root@thesharanator ~]# ls -la /dev/disk/by-id/
total 0
drwxr-xr-x 2 root root 340 May 25 12:48 .
drwxr-xr-x 6 root root 120 May 25 12:48 ..
lrwxrwxrwx 1 root root 9 May 25 12:48 ata-Corsair_Force_LS_SSD_16088024000104781293 -> ../../sda
lrwxrwxrwx 1 root root 10 May 25 12:48 ata-Corsair_Force_LS_SSD_16088024000104781293-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May 25 12:48 ata-Corsair_Force_LS_SSD_16088024000104781293-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 25 12:48 ata-Corsair_Force_LS_SSD_16088024000104781293-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 May 25 12:48 ata-PS3109S9 -> ../../sdb
lrwxrwxrwx 1 root root 9 May 25 12:48 ata-ST2000VN0001-1SF174_Z4H05RLS -> ../../sdc
lrwxrwxrwx 1 root root 9 May 25 12:48 ata-ST2000VN0001-1SF174_Z4H063NT -> ../../sdd
lrwxrwxrwx 1 root root 11 May 25 12:48 md-name-localhost:boot -> ../../md126
lrwxrwxrwx 1 root root 11 May 25 12:48 md-name-localhost:root -> ../../md125
lrwxrwxrwx 1 root root 11 May 25 12:48 md-name-localhost:swap -> ../../md127
lrwxrwxrwx 1 root root 11 May 25 12:48 md-uuid-3bb7089c:a4e40093:575d9f34:1cb9d9c4 -> ../../md125
lrwxrwxrwx 1 root root 11 May 25 12:48 md-uuid-98d5a238:88c70fb4:d36b2c90:3c517624 -> ../../md126
lrwxrwxrwx 1 root root 11 May 25 12:48 md-uuid-9d60cbcf:aaa8c066:83d3d1ef:b12799d7 -> ../../md127
lrwxrwxrwx 1 root root 9 May 25 12:48 wwn-0x5000c500908220d1 -> ../../sdc
lrwxrwxrwx 1 root root 9 May 25 12:48 wwn-0x5000c500908227d5 -> ../../sdd

@HBDK Thanks for the output. You do have a long device name there, i.e. the Corsair SSD, which is 51 characters, 1 longer than my longest seen so far, so thanks; I've updated my spreadsheet of failure points as a result. But 3 partitions shouldn't overflow, as the db entry would, I think, only be 207 characters without a redirect role applied, at least for the partitions role alone. But 4 would.
Could you give the output of:


As I'm curious what is overflowing this role field; I suspect a combination of the partitions role and another.


Sure, anything that helps troubleshoot and/or improve the project.

NAME="sdd" MODEL="ST2000VN0001-1SF" SERIAL="Z4H063NT" SIZE="1.8T" TRAN="sata" VENDOR="ATA " HCTL="3:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="Main" UUID="3937ce25-3a3e-4e1e-94fd-b72298edb0a7"
NAME="sdb" MODEL="PS3109S9 " SERIAL="" SIZE="20M" TRAN="sata" VENDOR="ATA " HCTL="1:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sdc" MODEL="ST2000VN0001-1SF" SERIAL="Z4H05RLS" SIZE="1.8T" TRAN="sata" VENDOR="ATA " HCTL="2:0:0:0" TYPE="disk" FSTYPE="btrfs" LABEL="Main" UUID="3937ce25-3a3e-4e1e-94fd-b72298edb0a7"
NAME="sda" MODEL="Corsair Force LS" SERIAL="16088024000104781293" SIZE="55.9G" TRAN="sata" VENDOR="ATA " HCTL="0:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sda2" MODEL="" SERIAL="" SIZE="1.9G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:boot" UUID="98d5a238-88c7-0fb4-d36b-2c903c517624"
NAME="md126" MODEL="" SERIAL="" SIZE="1.9G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="ext4" LABEL="" UUID="bc8f0987-a25c-4eae-baf1-a56629392647"
NAME="sda3" MODEL="" SERIAL="" SIZE="50.3G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:root" UUID="3bb7089c-a4e4-0093-575d-9f341cb9d9c4"
NAME="md125" MODEL="" SERIAL="" SIZE="50.3G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="59598c97-9a35-45b5-b3d5-0e431931b9a9"
NAME="sda1" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="linux_raid_member" LABEL="localhost:swap" UUID="9d60cbcf-aaa8-c066-83d3-d1efb12799d7"
NAME="md127" MODEL="" SERIAL="" SIZE="3.7G" TRAN="" VENDOR="" HCTL="" TYPE="raid1" FSTYPE="swap" LABEL="" UUID="24bace0a-a6a2-422f-8046-17594713c6b1"
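An aside for anyone reading along: the KEY="value" lines above are lsblk's pair-style output, which is straightforward to parse mechanically. A minimal sketch in Python (illustrative only, not Rockstor's actual parsing code):

```python
import shlex

# One line of lsblk pair-style output, taken from the listing above.
line = ('NAME="sda1" MODEL="" SERIAL="" SIZE="3.7G" TYPE="part" '
        'FSTYPE="linux_raid_member" LABEL="localhost:swap"')

# shlex.split honours the double quotes, yielding KEY=value tokens;
# splitting each token on the first "=" gives a dict of the fields.
fields = dict(token.split("=", 1) for token in shlex.split(line))

print(fields["NAME"], fields["FSTYPE"])  # sda1 linux_raid_member
```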

@HBDK Thanks for the info.

Yes, this has helped, and it has identified a further oversight on my part.

I have updated the above issue with the proof and the reason I missed this initially and referenced your contribution.

Essentially I had assumed that only 2 of the 3 partitions would be listed, and that was my finding on my test mdraid install here; your report led me to look a little more closely at the db field contents on this test system:

{"mdraid": "linux_raid_member", "partitions": {"ata-QEMU_HARDDISK_QM00005-part3": "linux_raid_member", "ata-QEMU_HARDDISK_QM00005-part1": "linux_raid_member"}}

So: an 'mdraid' role and the associated partitions. Even with your long names, that should still not overflow the 256 chars. Anyway, I had wrongly attributed the absence of the part2 listing to it being a type of swap, which we ignore; however, for these mdraid installs all partitions have the rather long fs type of "linux_raid_member", as swap sits a layer higher up. On investigation, the missing part2 in my test mdraid setup is down to size, not type, as we also exclude everything that is less than 1G, which my test mdraid install's swap partition is (SIZE="954M"). Silly me. I was attempting to use the minimum spec of an 8GB system disk and was inadvertently covering up this issue (at least for mdraid system disk installs).
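To make that exclusion concrete, here is a minimal sketch of size-based partition filtering (the names and the exact cut-off are my illustration, not Rockstor's actual code):

```python
# Illustrative sketch of the size-based exclusion described above:
# partitions smaller than 1G are skipped, which is why a 954M swap
# partition never made it into the db role field on the test install.
GIB = 1024 ** 3

def tracked_partitions(partitions, min_bytes=GIB):
    """Return only the partitions at least min_bytes in size."""
    return {name: size for name, size in partitions.items() if size >= min_bytes}

parts = {
    "vda1": 954 * 1024 ** 2,  # 954M swap-sized partition: excluded
    "vda2": 2 * GIB,          # 2G partition: included
}
print(sorted(tracked_partitions(parts)))  # ['vda2']
```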

So thanks for reporting. The explanation of your db field overflow is that the system is attempting to store the following:

{"mdraid": "linux_raid_member", "partitions": {"ata-Corsair_Force_LS_SSD_16088024000104781293-part1": "linux_raid_member", "ata-Corsair_Force_LS_SSD_16088024000104781293-part2": "linux_raid_member", "ata-Corsair_Force_LS_SSD_16088024000104781293-part3": "linux_raid_member"}}

Which is 275 chars (19 chars over). Hence the breakage.
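The arithmetic can be checked by rebuilding the value with Python's default JSON serialisation (a sketch; Rockstor's own serialisation may differ in detail):

```python
import json

# Rebuild the role value quoted above and measure it against the
# varchar(256) column limit.
name = "ata-Corsair_Force_LS_SSD_16088024000104781293"
role = {
    "mdraid": "linux_raid_member",
    "partitions": {"%s-part%d" % (name, n): "linux_raid_member" for n in (1, 2, 3)},
}
serialized = json.dumps(role)
print(len(serialized), len(serialized) - 256)  # 275 19
```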

I am reluctant to provide a 'hack' workaround given the pending fix in the referenced pr (which quadruples the max field length to 1024 chars). The only fairly non-invasive workaround I can think of saves just 17 characters, and given your long dev names that is simply not enough (though it would be if they were only 50 chars long!).

Apologies for the inconvenience and thanks again for assisting with improving things going forward.

My expectation is that once the pending fix is released your system should then function as expected.

Thanks, I will wait patiently.

And replace the disk I hadn't realized was dead until now… :S

@HBDK Great. Also, if you set up email notifications within Rockstor, you should have received an email about the missing mdraid member, via the default root email being redirected to the notification address.

I had it set up to send from a Gmail account, but it stopped working at some point and I never got around to finding a solution or another SMTP server.

But I guess that's one of the things I should get fixed soon.

@HBDK Hello again.

Just a notification that a fix for the issue you reported in this thread has now been released in testing channel updates, version 3.9.0-8.
So you should be able to upgrade (if you are on the testing channel) and, once you have rebooted thereafter, or executed "systemctl restart rockstor-pre" to enable the then pending model/db field migration to its longer variant, you should be sorted.

Thanks for your patience and for helping to root out another trigger for this model/db field overflow issue.

Linking to the relevant release post for context:


Cool, I installed it and I haven't seen any of those errors since.

Now I just need to figure out why I can't install new Rock-ons… :smiley: