Failed to mount Pool(fullPool) due to an unknown reason

Brief description of the problem

I’m trying to migrate to Rockstor 4.1 (openSUSE). That’s not working (it freezes during install on an HP ProLiant MicroServer N54L, AMD Turion), and adding the nomodeset kernel option has made no difference.

Before prodding around some more (and no doubt needing help for that) I thought I’d reinstate the previous install.

This hasn’t worked.

The system boots up and sees the data discs (but SMART is not supported - I believe it used to be), but the pool is not mounted. The shares are visible but empty. (Of course if I could get the pool mounted in the V4.1 system, this wouldn’t be a problem.)

Detailed step by step instructions to reproduce the problem

  • Attempt to install Rockstor V4.1 (with data discs unplugged)
  • Give up after several install attempts, e.g. Ctrl+Alt+Del before the locale-selection prompt
  • Revert to previous boot disc 3.9.1-16
  • System fails to see pool - discs are there but the pool fails to mount.

Note: although the data discs were unplugged before any attempt to install, they were plugged in when booting into a version of grub which did not enter the installer, i.e. grub found nothing to pass control to.
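Before blaming the pool, it’s worth confirming the kernel can actually see the discs at all (which, as it turned out below, it couldn’t in my case). A minimal sketch; the `BYLABEL` variable and `check_label` helper are my own illustration, not anything Rockstor provides, and the real directory is `/dev/disk/by-label`:

```shell
# Check whether a btrfs pool label is visible to the kernel.
# BYLABEL is overridable so the check can be pointed at any directory.
BYLABEL="${BYLABEL:-/dev/disk/by-label}"

check_label() {
    # Prints a status line; returns 0 if the label's device node exists.
    label="$1"
    if [ -e "$BYLABEL/$label" ]; then
        echo "label '$label' present"
    else
        echo "label '$label' missing - are the discs actually connected?"
        return 1
    fi
}

check_label "fullPool" || true   # 'fullPool' is the pool from this thread
```

`lsblk` or `btrfs filesystem show` (run as root) gives a fuller picture of what block devices and btrfs labels the kernel knows about.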

Web-UI screenshot

(Screenshot of the Disks page was attached here in the original post, showing the disks renamed detached-*.)

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/", line 41, in _handle_exception
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 47, in get_queryset
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 72, in _balance_status
    cur_status = balance_status(pool)
  File "/opt/rockstor/src/rockstor/fs/", line 1064, in balance_status
    mnt_pt = mount_root(pool)
  File "/opt/rockstor/src/rockstor/fs/", line 283, in mount_root
    'Command used %s' % (, mnt_cmd))

Exception: Failed to mount Pool(fullPool) due to an unknown reason. Command used ['/bin/mount', u'/dev/disk/by-label/fullPool', u'/mnt2/fullPool']
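The exception usefully includes the exact command Rockstor ran, so it can be replayed by hand to get mount’s own error text rather than the generic "unknown reason". A hedged sketch; the `try_mount` helper is hypothetical, but the device and mount-point paths are copied verbatim from the exception message:

```shell
# Replay the failing mount from the traceback to see mount's real error.
try_mount() {
    dev="$1"; mnt="$2"
    if [ ! -e "$dev" ]; then
        # This turned out to be my actual problem: no device node at all.
        echo "no device node: $dev"
        return 1
    fi
    /bin/mount "$dev" "$mnt"   # same invocation as in the exception
}

# Paths copied from the exception message:
try_mount /dev/disk/by-label/fullPool /mnt2/fullPool \
    || echo "mount failed - check 'dmesg | tail' and 'btrfs filesystem show'"
```

If the device node exists but the mount still fails, the kernel log (`dmesg`) usually names the real cause, e.g. a missing member device in a multi-disc pool.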


I’d failed to refit the discs properly, so they weren’t present in the system. Dohhh!

I suppose Rockstor might have been able to tell me this more obviously, e.g. by not displaying the ghosts of discs it has known in the past but which aren’t there now; or maybe I just didn’t look in quite the right place, or quite hard enough. Still, I’d have thought loading the system with no discs would have led to a more obvious error.

Sorry to have troubled you.

No doubt I’ll be posting about not being able to install Rockstor 4.1 tomorrow (unless anyone can put me out of my misery straightaway - please).


@ajk Hello again, long time no hear.

Glad you got this transition sorted in the end.

Our ‘way’ here is to show the disks as detached. It’s visible in your pic actually, where all disks are renamed detached-*. Agreed, we could pop up a red flashing warning in the header as we do for missing disks in a pool, but if all disks are missing it’s not necessarily an error. It may just be a detached pool, i.e. imagine a pool that you only connect for backups, say. So we have to be cautious/flexible about the border between error and intended/possible use.

Thanks for the input on this. All good stuff.

Hope that helps.
