phillxnet (Philip Guyton)
May 25, 2017, 6:50pm
@HBDK Hello again.
Yes, I’m afraid that one is my fault. Sorry.
The only trigger for this I’ve seen in the wild so far was related to a good few partitions on an existing device. I can’t know if this is also your cause, but we could probably tell from the output of:
```
ls -la /dev/disk/by-id/
```
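By way of illustration only (these device names are invented), a drive likely to trigger this shows one long by-id base name fanned out across many -partN entries:
```
ata-WDC_WD10EZRX-00A8LB0_WD-WCC4EXAMPLE1 -> ../../sda
ata-WDC_WD10EZRX-00A8LB0_WD-WCC4EXAMPLE1-part1 -> ../../sda1
ata-WDC_WD10EZRX-00A8LB0_WD-WCC4EXAMPLE1-part2 -> ../../sda2
ata-WDC_WD10EZRX-00A8LB0_WD-WCC4EXAMPLE1-part3 -> ../../sda3
ata-WDC_WD10EZRX-00A8LB0_WD-WCC4EXAMPLE1-part4 -> ../../sda4
```
(ls -la’s permission and date columns are omitted here for brevity)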
The good news is that, in an attempt to redeem myself, I have submitted a fix. Please see the issue:
https://github.com/rockstor/rockstor-core/issues/1709 (opened 06 May 17, closed 30 May 17, labelled bug)
Thanks to forum members wellman and scanlom for reporting and confirming this issue. In the linked forum post there is evidence of a Disk model field being insufficiently long:
```
DataError: value too long for type character varying(256)
```
Given that the only field in our Disk model that is 256 characters long is Disk.role:
```
role = models.CharField(max_length=256, null=True)
```
it is the most obvious candidate.
Currently the suspicion is that a combination of many partitions on a device with a long by-id name results in an overly lengthy 'partitions' role value.
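To make that concrete, here is a minimal back-of-envelope sketch. It assumes, purely for illustration, that the 'partitions' role is serialised as a JSON map from partition device names to filesystem types; the device name and format below are invented, not our actual serialisation:
```
import json

# Hypothetical ~50 character by-id base name (the longest observed
# device name length) with "-part" appended, ready for a partition number.
base = "ata-SAMSUNG_MZMTE032HMGR-00000_S1A9NSAF834168-part"

# Length of an illustrative 'partitions' role value for n partitions,
# to compare against varchar(256) (current) or varchar(1024) (proposed).
for n in (2, 5, 14):
    role = {"partitions": {base + str(i): "ntfs" for i in range(1, n + 1)}}
    print(n, "partitions ->", len(json.dumps(role)), "characters")
```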
Linking to the forum thread for context:
https://forum.rockstor.com/t/drive-discovery/3053
(ongoing discussion in thread to confirm the cause of the db field overflow)
It is in the following thread that forum member @wellman helped with output from the same command given above. That issue also covers some as-yet-unreported possible causes for the same value overflow.
Hi, I’ve just installed Rockstor on my self-built NAS (MSI J1800I Mini-ITX + 2x WD Green 1TB + SAMSUNG 32GB mSATA SSD). After the first run, when it asks for hostname and admin credentials, I saw the first bug:
```
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 320, in post
    return self._update_disk_state()
  File "/opt/rockstor/eggs/…
```
The fix is awaiting code review via the following pull request:
rockstor:master ← phillxnet:1709_overflow_of_disk.role_db_field (opened 23 May 17)
This pr extends the disk.role model and consequent db field from its prior length of 256 characters to 1024 characters. The reported examples of overflow were related to the partitions role and were triggered by previously used disks containing an unusual, but not rare, number of partitions. Calculations available in issue #1709 detail the current and future limit. Essentially, before this pr we have a failure point in the 4-5 existing partition range for a reported case, or in the 2-3 partition count range if using the longest observed device name, depending on whether an additional redirect role is required. Post pr our failure point would be 14-15 prior existing partitions, assuming the longest observed device name of 50 characters and the requirement or otherwise of an additional redirect role.
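For reference, the model-side change is simply the widened max_length on the field quoted earlier:
```
role = models.CharField(max_length=1024, null=True)
```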
The procedure used to create the migration file was as per the documentation at:
http://rockstor.com/docs/contribute.html#database-migrations
The output of the commands executed was as follows:
```
/opt/rockstor-dev/bin/django makemigrations storageadmin
Migrations for 'storageadmin':
0004_auto_20170523_1140.py:
- Alter field role on disk
```
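For anyone unfamiliar with Django’s migration files, the auto-generated 0004_auto_20170523_1140.py presumably boils down to a single AlterField operation along these lines (the dependency name below is a stand-in; consult the generated file itself):
```
from django.db import migrations, models


class Migration(migrations.Migration):

    # Name of the preceding storageadmin migration; stand-in only.
    dependencies = [
        ('storageadmin', '0003_previous_migration'),
    ]

    operations = [
        migrations.AlterField(
            model_name='disk',
            name='role',
            field=models.CharField(max_length=1024, null=True),
        ),
    ]
```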
and when the created migration file was applied:
```
/opt/rockstor-dev/bin/django migrate storageadmin
Operations to perform:
Apply all migrations: storageadmin
Running migrations:
Rendering model states... DONE
Applying storageadmin.0004_auto_20170523_1140... OK
The following content types are stale and need to be deleted:
storageadmin | networkinterface
storageadmin | poolstatistic
storageadmin | sharestatistic
Any objects related to these content types by a foreign key will also
be deleted. Are you sure you want to delete these content types?
If you're unsure, answer 'no'.
Type 'yes' to continue, or 'no' to cancel: no
```
@schakrava I am a little concerned with the latter command's output re confirmation of stale content type removal. As can be seen, I answered 'no' here. Noting this in case it results in an error that may be interpreted by controlling code as a failure to migrate.
```
echo $?
0
```
However, this was with my 'no' answer.
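If the prompt does prove problematic for controlling code, one possible mitigation, a suggestion on my part rather than something this pr does, would be Django's standard --noinput flag, which suppresses such prompts (effectively answering 'no' on our behalf):
```
/opt/rockstor-dev/bin/django migrate storageadmin --noinput
```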
Prior to the generated migration file's application we have:
```
SELECT column_name, data_type, character_maximum_length FROM information_schema.columns WHERE table_name = 'storageadmin_disk' AND column_name = 'role';
column_name | data_type | character_maximum_length
-------------+-------------------+--------------------------
role | character varying | 256
(1 row)
```
Post application we have:
```
SELECT column_name, data_type, character_maximum_length FROM information_schema.columns WHERE table_name = 'storageadmin_disk' AND column_name = 'role';
column_name | data_type | character_maximum_length
-------------+-------------------+--------------------------
role | character varying | 1024
(1 row)
```
Fixes #1709
Ready for review.
Also see the now closed-as-duplicate report by GitHub user tomvancutsem, who was able to confirm that the partition count was also the cause of this error in his case:
https://github.com/rockstor/rockstor-core/issues/1715
Hope that helps.
So the short of it is that if my pull request passes muster, a fix should be available shortly in a future update. In the meantime the only workaround is to reduce the partition count on the problem drive or drives, which may not be an option given it can involve data loss. The output of the above command should indicate the drive that is causing this. 'As is' it will block Rockstor's ability to update drive state, which is not good, but this has only been observed in more unusual partitioning cases, so it would be good to know if this is also the case in your setup.
Thanks.