Corner case where drive name change breaks mount points.
While looking into reports of samba not starting even after pull request:-
https://github.com/rockstor/rockstor-core/pull/1032
(which dealt with a bootstrap failure)
I have reproduced a very low-frequency issue of mount point failures caused by a drive name change.
In my instance samba still started, however the share was empty. This does not match the report by @kimbl on the forum of 3.8-9.07 still having a failed samba start:-
http://forum.rockstor.com/t/pool-not-mounted-on-reboot/731/4
but it may be related, as in both cases the pool is not mounted as expected on boot.
Example of this issue:-
btrfs fi show
Label: 'rockstor_rockstor'  uuid: 0faa780f-a339-43b7-a49f-59cfa98548b5
	Total devices 1 FS bytes used 1.74GiB
	devid    1 size 25.47GiB used 3.27GiB path /dev/sda3

Label: 'time_machine_pool'  uuid: 8f363c7d-2546-4655-b81b-744e06336b07
	Total devices 3 FS bytes used 29.92GiB
	devid    1 size 149.05GiB used 19.00GiB path /dev/sdb
	devid    2 size 153.38GiB used 24.01GiB path /dev/sdc
	devid    3 size 149.05GiB used 19.01GiB path /dev/sdd
btrfs-progs v4.2.1
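For reference, the current label-to-device mapping can always be recovered from output like the above. A minimal Python sketch of that idea follows (the helper name is hypothetical, not existing Rockstor code): it parses btrfs fi show at runtime, so pool membership reflects this boot's device names rather than names recorded earlier.

import re
import subprocess
from collections import defaultdict

def current_pool_devices():
    # Ask btrfs for the live view of every pool; needs btrfs-progs and
    # sufficient privileges, just like the output pasted above.
    out = subprocess.check_output(['btrfs', 'fi', 'show']).decode()
    pools = defaultdict(list)
    label = None
    for line in out.splitlines():
        header = re.match(r"Label: '(.+)'\s+uuid: (\S+)", line.strip())
        if header:
            label = header.group(1)
            continue
        member = re.search(r'path (\S+)$', line.strip())
        if member and label is not None:
            pools[label].append(member.group(1))
    return dict(pools)

On the system above this would return {'rockstor_rockstor': ['/dev/sda3'], 'time_machine_pool': ['/dev/sdb', '/dev/sdc', '/dev/sdd']}, i.e. no mention of a bare /dev/sda for either pool.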
In the above the only pool other than rockstor_rockstor is time_machine_pool, consisting of sdb, sdc, and sdd.
However Rockstor has attempted to mount our samba share (which is in the time_machine_pool) via a reference to sda; as can be seen, sda is the system drive and so carries no whole-disk btrfs, only the standard partitioning (with btrfs on sda3 only).
This results in many mount errors for the various mount points, e.g.:-
[04/Dec/2015 15:38:34] ERROR [storageadmin.views.command:80] Exception while mounting a share(samba_share) during bootstrap: Error running a command. cmd = ['/bin/mount', '-t', 'btrfs', '-o', u'subvol=samba_share', u'/dev/sda', u'/mnt2/samba_share']. rc = 32. stdout = ['']. stderr = ['mount: /dev/sda is already mounted or /mnt2/samba_share busy', '']
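Purely as an illustration of one way around this (a sketch only, not how the current code works): the same mount can be expressed against the filesystem label via the udev-maintained /dev/disk/by-label/ symlink, so a rename from sda to sdb/sdc/sdd between boots would not matter to the mount command itself.

import subprocess

def mount_share_by_label(label, subvol, mount_point):
    # Mount a btrfs subvolume by pool label rather than a stored /dev/sdX
    # name; /dev/disk/by-label/<label> is kept current by udev and, for a
    # multi-device pool, points at one present member, which is normally
    # enough for btrfs to mount the whole pool once all members have been
    # scanned.
    device = '/dev/disk/by-label/%s' % label
    cmd = ['/bin/mount', '-t', 'btrfs', '-o', 'subvol=%s' % subvol,
           device, mount_point]
    subprocess.check_call(cmd)

# e.g. mount_share_by_label('time_machine_pool', 'samba_share', '/mnt2/samba_share')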
This is soon thereafter followed by:-
[04/Dec/2015 15:38:35] DEBUG [storageadmin.views.command:126] Bootstrap operations completed
[04/Dec/2015 16:19:53] DEBUG [smart_manager.data_collector:276] Sysinfo has been initialized
[04/Dec/2015 16:19:53] DEBUG [smart_manager.data_collector:280] Sysinfo has connected
[04/Dec/2015 16:19:53] DEBUG [smart_manager.data_collector:131] network stats connected
[04/Dec/2015 16:19:54] DEBUG [smart_manager.data_collector:352] Disk state updated successfully
[04/Dec/2015 16:19:55] ERROR [storageadmin.middleware:35] Exception occured while processing a request. Path: /api/commands/refresh-pool-state method: POST
[04/Dec/2015 16:19:55] ERROR [storageadmin.middleware:36] Error running a command. cmd = ['/sbin/btrfs', 'fi', 'show', u'/dev/sda']. rc = 1. stdout = ['']. stderr = ['ERROR: No btrfs on /dev/sda', '']
As can be seen, the refresh-pool-state API POST is asking btrfs fi show to work on a non-btrfs device (sda), which is currently our boot drive with traditional partitioning where only the third partition (sda3) is btrfs; see the sketch after the traceback below.
Traceback (most recent call last):
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/core/handlers/base.py", line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/views/decorators/csrf.py", line 57, in wrapped_view
    return view_func(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/views/generic/base.py", line 69, in view
    return self.dispatch(request, *args, **kwargs)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 452, in dispatch
    response = self.handle_exception(exc)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 449, in dispatch
    response = handler(request, *args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.6.11-py2.7.egg/django/db/transaction.py", line 371, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 232, in post
    pool_info = get_pool_info(fd.name)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 70, in get_pool_info
    o, e, rc = run_command(cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 89, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = ['/sbin/btrfs', 'fi', 'show', u'/dev/sda']. rc = 1. stdout = ['']. stderr = ['ERROR: No btrfs on /dev/sda', '']
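A related sketch (again hypothetical, not a proposed patch): btrfs fi show accepts a label or uuid as its argument, so a lookup of the kind get_pool_info performs could be keyed on the pool label instead of a device name remembered from an earlier boot, and the /dev/sda failure above would not arise.

import subprocess

def pool_info_by_label(label):
    # Query the pool by its label; the label survives device renames
    # (sda -> sdb etc.) whereas a recorded /dev/sdX path may not.
    cmd = ['/sbin/btrfs', 'fi', 'show', label]
    return subprocess.check_output(cmd).decode()

# pool_info_by_label('time_machine_pool') returns the same block as in the
# btrfs fi show output above, regardless of the current member device names.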
[04/Dec/2015 16:19:55] ERROR [smart_manager.data_collector:354] Failed to update pool state.. exception: Internal Server Error: No JSON object could be decoded
[04/Dec/2015 16:19:56] DEBUG [smart_manager.data_collector:352] Share state updated successfully
[04/Dec/2015 16:19:56] DEBUG [smart_manager.data_collector:352] Snapshot state updated successfully
[04/Dec/2015 16:19:57] DEBUG [smart_manager.data_collector:331] Updated Rock-on metadata.
[04/Dec/2015 16:19:59] DEBUG [smart_manager.data_collector:104] disconnect received
[04/Dec/2015 16:19:59] DEBUG [smart_manager.data_collector:136] network stats disconnected
Note the "Failed to update pool state.. exception: Internal Server Error: No JSON object could be decoded.
I think this issue is the root cause of a few sporadic mount / service problems.
This may relate to #897, which suspects previous drive names of disturbing subsequent "on boot" mounts.