Drive discovery

Hi, I've just installed Rockstor on my self-built NAS (MSI J1800I Mini-ITX + 2x WD Green 1TB + Samsung 32GB mSATA SSD). On first run, when it asked for a hostname and admin credentials, I hit the first bug:

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 320, in post
    return self._update_disk_state()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 268, in _update_disk_state
    dob.save()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 734, in save
    force_update=force_update, update_fields=update_fields)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 762, in save_base
    updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 846, in _save_table
    result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/base.py", line 885, in _do_insert
    using=using, raw=raw)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 920, in _insert
    return query.get_compiler(using=using).execute_sql(return_id)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/models/sql/compiler.py", line 974, in execute_sql
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
DataError: value too long for type character varying(256)

But the settings were saved, and the error was gone after a page refresh. So I went further and… it seems Rockstor does not see any drives. Storage -> Disks is empty, and clicking "Rescan" gives me the same traceback as above, again ending in:

DataError: value too long for type character varying(256)

Upgrading to the newest testing version does not make any difference. :frowning:

Same issue here on the latest testing version… has anyone had any luck tracking down the cause?

@wellman and @scanlom Welcome to the Rockstor community.

I had somehow managed to miss this post, so my apologies, and thank you for reporting this issue. @scanlom's addition to it belatedly drew my attention.

I don't currently know what is causing this error, but it seems that our role field is being overloaded.

That is, we currently have a rather new 'slot' in our database to store role information about the disks attached to Rockstor. My guess is that you have a disk with more partitions than I have accounted for (along with long by-id names), and that this is overflowing the 256-character database field.

The fix is straightforward enough (we extend the Disk.role field), but I would first like to confirm the cause, as we have only had these two reports of this happening.
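For the curious, the change would be along these lines (a hypothetical sketch only, not the actual patch; the migration name, dependency, and new length are placeholders):

    # Hypothetical Django migration sketch -- not the actual Rockstor patch.
    from django.db import migrations, models

    class Migration(migrations.Migration):

        dependencies = [
            ('storageadmin', '00XX_previous_migration'),  # placeholder name
        ]

        operations = [
            migrations.AlterField(
                model_name='disk',
                name='role',
                field=models.CharField(max_length=1024, null=True),  # widened from 256
            ),
        ]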

Could you both paste the output of the following command, entered at a local console or via ssh on the Rockstor machine:

ls -la /dev/disk/by-id/

Long device names along with many partitions on a single device could (I think) trigger this failure. At least that is all I can think of for the time being.

Thanks again for reporting this; it would be a good one to get sorted.

Upon receiving the output from the above command I should be able to confirm whether my suspicion is correct. If so, a workaround at your end would be to remove the partitions on the proposed data drives by hand. Obviously this will wipe all data from those drives, but that would have to be done prior to their use within Rockstor anyway, unless they are already btrfs pool members that you wish to import. Anyway, if you could post the command's output, we can then cross that bridge.
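If it does come to that, something along these lines would clear a drive (/dev/sdX is a placeholder here, and this is destructive, so triple-check the target device first):

    lsblk                 # confirm which device is the data drive
    wipefs -a /dev/sdX    # remove all partition-table and filesystem signatures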

Sorry about this failure, by the way; it is down to an oversight on my part, I'm afraid. I suspect we can sort this one out quite quickly, but I first have to confirm exactly what is causing this field overflow.

Linking to the following issue opened as a result of this forum thread:

@phillxnet thanks very much for your reply. Here’s the output I get:

[root@xps-rockstor-11 ~]# ls -la /dev/disk/by-id/
total 0
drwxr-xr-x 2 root root 500 May 6 22:15 .
drwxr-xr-x 6 root root 120 May 6 22:15 ..
lrwxrwxrwx 1 root root 9 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T -> ../../sda
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 May 6 22:15 ata-TOSHIBA_MK6461GSY_31DBT7P0T-part8 -> ../../sda8
lrwxrwxrwx 1 root root 9 May 6 22:15 ata-TSSTcorp_DVD+_-RW_TS-L633J_R8126GIB334278 -> ../../sr0
lrwxrwxrwx 1 root root 9 May 6 22:15 usb-SanDisk_Ultra_4C530001080226121593-0:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 May 6 22:15 usb-SanDisk_Ultra_4C530001080226121593-0:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 6 22:15 usb-SanDisk_Ultra_4C530001080226121593-0:0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 May 6 22:15 usb-SanDisk_Ultra_4C530001080226121593-0:0-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 9 May 6 22:15 wwn-0x5000039321503a66 -> ../../sda
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 May 6 22:15 wwn-0x5000039321503a66-part8 -> ../../sda8

@phillxnet Quick update: I wiped the drive, removing all the old partitions using parted, and that appears to have resolved this; I was able to create a share and export it through NFS.


@scanlom Thanks for the command output and the update; it helped to confirm what was happening. I've updated the referenced issue, and yes, it was the partition count plus dev name length overflowing our role database entry. Hopefully we can get this sorted soon so that future users with a similar arrangement don't get tripped up in the same way.

By the way, why 8 partitions, and what filesystem(s) did those partitions have? And what was the prior use of this drive, appliance-wise (if applicable)? It might be nice to know, as it would make this issue easier to identify in the future, rather than having to ask for ls -la outputs etc.; not everyone is happy following command-line instructions.

Looks like with your hardware we failed on the 5th partition, as there was room only for the first 4. Partition support is rather a new thing for Rockstor, intended primarily to cater for external drives that are new 'off the shelf', i.e. with only a single partition; but given your input we should be able to sort things out, at least for the circumstance you, and presumably @wellman, reported.
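For anyone curious about the arithmetic, here is a rough sketch of why the 5th partition tips things over (the JSON layout below is my assumption for illustration, not necessarily the exact role serialization):

    # Rough illustration only -- assumes the role field stores a JSON mapping of
    # partition by-id names to filesystem types; the exact format may differ.
    import json

    base = "ata-TOSHIBA_MK6461GSY_31DBT7P0T"  # the 31-character by-id name above

    for count in range(1, 9):
        partitions = {"%s-part%d" % (base, n): "ntfs" for n in range(1, count + 1)}
        role_value = json.dumps({"partitions": partitions})
        # 4 partitions -> 212 chars; 5 -> 261, past the varchar(256) limit
        print("%d partitions -> %d chars" % (count, len(role_value)))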

I've opened an issue in our rockstor-doc repo to give future users further guidance on removing prior-use partitions, in an effort to avoid such issues going forward:

Glad you're up and running now, and thanks for helping out on this one.

@phillxnet This is very much a home tinkering project. The drive is from an old laptop; it had Windows 7 (NTFS), two installations of Linux Mint (ext4), a Linux swap partition, and a data partition (NTFS)… the other three I don't actually recall.

@scanlom Thanks; kind of a one-off then. I was just wondering if there was a storage appliance out there that did this kind of partitioning that we should know about, in case others brought their disks over from it.

Cheers for the info; it looks like you got your money's worth from that drive. Let's hope it has a nice, easy, and long retirement in Rockstor. Does all the S.M.A.R.T. info show up OK in the Rockstor UI?

Cheers.

@phillxnet Yes, looking good

@wellman

and @scanlom (although I know you are now sorted via the partition-wipe workaround)

Just a notification that, as of testing channel release 3.9.0-8, this issue should now be fixed; please see:

Thanks again for your patience and help with reporting and diagnosing this issue.