Subvolumes not mounted but pools mount

I'm in a situation where my root mounts and all three pools mount, but two of the pools (the two that run all my Rock-ons and collections) don't mount their subvolumes. I've run a scrub and am now running a btrfs check (I'm too afraid to run --repair). The check hits a memory error, so I'm using btrfs check --mode lowmem /dev/sdc (one of the RAID10 disks) and it's running... very slowly.

Reboots haven't helped. I think it may be a qgroup issue; I tried /opt/rockstor/bin/qgroup-clean but that didn't change much.

rockstor-bootstrap fails, which is strange: the root volume mounts, and a single-disk pool called backup mounts, but for my two big pools (tv, movies) only the top-level pool mounts. What's very interesting is that I can still run a Docker container for Plex (downtime there is a nightmare for my little kids) by creating a container under a new name and using /mnt2/movies/radarr-movies:/radarr-movies instead of /mnt2/radarr-movies:/radarr-movies, for example.

I really don't want to reinstall Rockstor, considering I don't see what's wrong with it. I was on experimental just fine, so possibly -13 affected it? I downgraded to -12 and -11 and then went back to stable, but all attempts resulted in the same issue.

From journalctl when manually running rockstor-bootstrap:

Sep 21 08:44:23 rocky.gamull.com bootstrap[4125]: BTRFS device scan complete
Sep 21 08:44:23 rocky.gamull.com bootstrap[4125]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['Internal Server Error: No JSON object could be decoded']
[the same message is logged repeatedly until the retry limit is reached]
Sep 21 08:44:23 rocky.gamull.com bootstrap[4125]: Max attempts(15) reached. Connection errors persist. Failed to bootstrap. Error: ['Internal Server Error: No JSON object could be decoded']
Sep 21 08:44:23 rocky.gamull.com systemd[1]: rockstor-bootstrap.service: main process exited, code=exited, status=1/FAILURE
Sep 21 08:44:23 rocky.gamull.com systemd[1]: Failed to start Rockstor bootstrapping tasks.
Sep 21 08:44:23 rocky.gamull.com systemd[1]: Unit rockstor-bootstrap.service entered failed state.
Sep 21 08:44:23 rocky.gamull.com systemd[1]: rockstor-bootstrap.service failed.

Also from journalctl, when starting bootstrap:

Sep 21 08:42:28 rocky.gamull.com systemd[1]: Starting Rockstor bootstrapping tasks...
Sep 21 08:42:28 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:28,870 CRIT Supervisor running as root (no user in config file)
Sep 21 08:42:28 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:28,885 INFO RPC interface 'supervisor' initialized
Sep 21 08:42:28 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:28,886 CRIT Server 'unix_http_server' running without any HTTP authentication checking
Sep 21 08:42:28 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:28,886 INFO supervisord started with pid 4124
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label tv devid 1 transid 26786 /dev/sdc
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label tv devid 2 transid 26786 /dev/sdi
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label tv devid 3 transid 26786 /dev/sdh
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label tv devid 4 transid 26786 /dev/sdb
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label movies devid 6 transid 112693 /dev/sdj
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label movies devid 1 transid 112693 /dev/sdf
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label movies devid 2 transid 112693 /dev/sdd
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label movies devid 3 transid 112693 /dev/sdg
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label movies devid 5 transid 112693 /dev/sde
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label movies devid 4 transid 112693 /dev/sdk
Sep 21 08:42:29 rocky.gamull.com kernel: BTRFS: device label backup devid 1 transid 3257 /dev/sdm
Sep 21 08:42:29 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:29,889 INFO spawned: 'nginx' with pid 4136
Sep 21 08:42:29 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:29,891 INFO spawned: 'gunicorn' with pid 4137
Sep 21 08:42:29 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:29,893 INFO spawned: 'data-collector' with pid 4138
Sep 21 08:42:29 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:29,895 INFO spawned: 'ztask-daemon' with pid 4139
Sep 21 08:42:31 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:31,910 INFO success: data-collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Sep 21 08:42:31 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:31,910 INFO success: ztask-daemon entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Sep 21 08:42:32 rocky.gamull.com kernel: device-mapper: uevent: version 1.0.3
Sep 21 08:42:32 rocky.gamull.com kernel: device-mapper: ioctl: 4.35.0-ioctl (2016-06-23) initialised: dm-devel@redhat.com
Sep 21 08:42:33 rocky.gamull.com kernel: BTRFS info (device sdm): use lzo compression
Sep 21 08:42:33 rocky.gamull.com kernel: BTRFS info (device sdm): disk space caching is enabled
Sep 21 08:42:33 rocky.gamull.com kernel: BTRFS info (device sdm): has skinny extents
Sep 21 08:42:34 rocky.gamull.com kernel: BTRFS info (device sdk): use no compression
Sep 21 08:42:34 rocky.gamull.com kernel: BTRFS info (device sdk): disk space caching is enabled
Sep 21 08:42:34 rocky.gamull.com kernel: BTRFS info (device sdk): has skinny extents
Sep 21 08:42:34 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:34,914 INFO success: nginx entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Sep 21 08:42:34 rocky.gamull.com supervisord[4124]: 2017-09-21 08:42:34,914 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
Sep 21 08:42:46 rocky.gamull.com kernel: BTRFS info (device sdb): use no compression
Sep 21 08:42:46 rocky.gamull.com kernel: BTRFS info (device sdb): disk space caching is enabled
Sep 21 08:42:46 rocky.gamull.com kernel: BTRFS info (device sdb): has skinny extents
Sep 21 08:42:51 rocky.gamull.com kernel: BTRFS error (device sdb): qgroup generation mismatch, marked as inconsistent
Sep 21 08:42:51 rocky.gamull.com kernel: BTRFS info (device sdb): checking UUID tree

btrfs fi show

[root@rocky log]# btrfs fi show
Label: 'rockstor_rockstor'  uuid: 3e3b17e7-2490-484f-9c8a-f402b2a51517
	Total devices 1 FS bytes used 3.57GiB
	devid    1 size 122.66GiB used 7.06GiB path /dev/md126

Label: 'backup'  uuid: 8ac79908-cb09-4568-bfc8-b0fd377dcf15
	Total devices 1 FS bytes used 67.91GiB
	devid    1 size 3.64TiB used 71.02GiB path /dev/sdm

Label: 'movies'  uuid: c77c9722-7a5d-458c-bb1f-c077a950771d
	Total devices 6 FS bytes used 3.51TiB
	devid    1 size 1.82TiB used 1.35TiB path /dev/sdf
	devid    2 size 1.82TiB used 1.35TiB path /dev/sdd
	devid    3 size 1.82TiB used 1.35TiB path /dev/sdg
	devid    4 size 1.82TiB used 1.35TiB path /dev/sdk
	devid    5 size 1.82TiB used 1.35TiB path /dev/sde
	devid    6 size 1.82TiB used 1.35TiB path /dev/sdj

Label: 'tv'  uuid: 76b16cb2-0f13-401d-8395-c408cfc0fdfe
	Total devices 4 FS bytes used 3.72TiB
	devid    1 size 2.73TiB used 1.86TiB path /dev/sdc
	devid    2 size 2.73TiB used 1.86TiB path /dev/sdi
	devid    3 size 2.73TiB used 1.86TiB path /dev/sdh
	devid    4 size 2.73TiB used 1.86TiB path /dev/sdb

The only other caveat is that the Rockstor root is on mdraid, following instructions from the documentation. Again, this wouldn't be fun to reinstall.

@magicalyak Hello again.

Not sure quite what's going on, but re the Rockstor version: there was a known 'corner case' behaviour, also relating to quotas, that existed in the last stable release and was fixed in the following one:

Which included the following:

https://github.com/rockstor/rockstor-core/issues/1769

and that bug could cause some shares to fail to mount.

Try looking at:

journalctl -xe

for clues, as something is tripping up the rockstor-bootstrap service.

I'm going to try remaking the Rock-on share, but running bootstrap still gives the same error.

So I can manually mount subvolumes just fine. Something seems broken in bootstrap. Nothing other than the JSON error appears in the logs…

@magicalyak You could try and turn on debug logging to see if anything else surfaces:

/opt/rockstor/bin/debug-mode on

If used with no parameters it lists the available options.

Let's hope that surfaces something; it would be nice to get to the bottom of this one. What's your current Rockstor version? We assume latest testing unless otherwise stated.

It could be that we have another corner case like the one cited, re rogue db values or the like.

3.9.1-13
latest testing
In rockstor.log I see this:

[21/Sep/2017 17:33:01] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:01] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/6136ecd9a967d823aceccc8ef28dc45c05cff026a438a4f61a1d899af325bbc2
[21/Sep/2017 17:33:03] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:03] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root
[21/Sep/2017 17:33:03] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:03] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/root00
[21/Sep/2017 17:33:03] ERROR [storageadmin.middleware:32] Exception occured while processing a request. Path: /api/commands/bootstrap method: POST
[21/Sep/2017 17:33:03] ERROR [storageadmin.middleware:33] 'unicode' object has no attribute 'name'
Traceback (most recent call last):
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/views/decorators/csrf.py", line 58, in wrapped_view
    return view_func(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 452, in dispatch
    response = self.handle_exception(exc)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 449, in dispatch
    response = handler(request, *args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 95, in post
    import_shares(p, request)
  File "/opt/rockstor/src/rockstor/storageadmin/views/share_helpers.py", line 110, in import_shares
    cshare.pool.name))
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 412, in shares_info
    mnt_pt = mount_root(pool)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 229, in mount_root
    root_pool_mnt = DEFAULT_MNT_DIR + pool.name
AttributeError: 'unicode' object has no attribute 'name'
[21/Sep/2017 17:33:09] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:09] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root
[21/Sep/2017 17:33:09] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:09] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/root00
[21/Sep/2017 17:33:09] ERROR [storageadmin.middleware:32] Exception occured while processing a request. Path: /api/commands/bootstrap method: POST
[21/Sep/2017 17:33:09] ERROR [storageadmin.middleware:33] 'unicode' object has no attribute 'name'
[traceback identical to the one above]
[21/Sep/2017 17:33:15] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:15] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root
[21/Sep/2017 17:33:15] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor

Note that the root volume does mount, though, except for the recurring error.
I also see this from earlier, which starts out looking normal:

[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/0bb151a2891c62767d9eaf00f3210ebcc6da042689b0807cde6da236d3777188
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/f8f9bf711b07f23665023085b9120001a8967823a9077efdb4d9f8038015ddbc
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/bc0924784a6f05d22a55fa87d81b30609ffdb807a74947404aac0d7eaaecbc6e-init
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/e4933b6dae09c2145da39233c901cf381cd4eff60e3586af24efecd608b36adf
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/11bb74e691c7f6509ab16f30825d019ec46bf50b2a2e3cc821eb72ef582066c9
[21/Sep/2017 17:32:46] ERROR [storageadmin.middleware:32] Exception occured while processing a request. Path: /api/commands/bootstrap method: POST
[21/Sep/2017 17:32:46] ERROR [storageadmin.middleware:33] 'unicode' object has no attribute 'name'
[traceback identical to the ones above]
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/0a13f9afab06fb838255e584038be5c07183092fd3a6aaa9c1537dbd87820019-init
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/tv
[21/Sep/2017 17:32:46] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/tv/rockon/btrfs/subvolumes/a44d274a007dd87d76a7ae614c8b08dc4212abf1acab4f2312723abbcf8a2c94
..........
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/57143aed02bda3332d3a526e69198c18e48f1e4b3b097843a18536b6f5d31a35
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/4738d44b05e3f0d117943999e3b6687bc665eb2321a28867009cd0abd36b10bd
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/d21d8db47999dab018db5c47374f65d57b562b7675b60de6d15775e4160dffec
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/7edb346af423e5b5771b3edcc806f0e6f225d4e77d01f58aab21c324edc2fba6-init
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/d8c2793af8715d982496111ecf260fe9bc81ade5179edc071f661e3cbab05bc3
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs subvolume list /mnt2/rockstor_rockstor
[21/Sep/2017 17:33:00] DEBUG [system.osi:104] Running command: /sbin/btrfs qgroup show /mnt2/rockstor_rockstor/rockon-root/btrfs/subvolumes/d387f93340fe03ca2c288a4c72cc58cb7634f4fe201f3cdaca38b48b3a901e65

@magicalyak

Great, thanks; at least we have some info to go on now. I'm not yet able to reproduce this, but I'm still looking.

I really want to solve this. I've had similar issues before, but I was too impatient to wait, so I'd reinstall. I'm running docker manually with remapped shares for the time being. Just let me know if I should give up on this; till then I'm happy to run tests and send logs.

I did notice that when I tried to create a new Rock-on share on the root system, it also disappeared after a reboot. I also noticed I can manually mount subvolumes, but I don't remember all my mount options, so I only tried this with the Rock-on share and then didn't do it again. My new containers are all duplicates of the existing ones (not the same names). There is a tool someone wrote called rekcod that parses container info, and I've used that to duplicate what I couldn't remember from the JSON. I have a bunch of new Rock-ons I was hoping to upload, but I'll have to test them on a VM at this point.

Anyway let me know if there is more I can try.

Is there anything I can do to fix any qgroup issues without messing up usage information? I recreated about 50 shares and reinstalled just to get usage back a few weeks ago, so I'd hate to go back to 0 usage info again (which is what I always get when importing existing pools on a new install).

@magicalyak
OK, that's a load of questions. Sorry, but I'm still working on some of your subvolumes not mounting. I have now found a programming anomaly that I'm looking into; once I've pinned down exactly where that anomaly surfaces I can create an issue and address it.
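For reference, the traceback above points at mount_root() in fs/btrfs.py being handed a plain (unicode) string, i.e. a pool's name, where a Pool object is expected, so the pool.name lookup fails. A minimal sketch of that kind of mismatch (Pool and mount_root here are simplified stand-ins for illustration, not the actual Rockstor code):

```python
DEFAULT_MNT_DIR = "/mnt2/"


class Pool(object):
    """Simplified stand-in for Rockstor's Pool model."""

    def __init__(self, name):
        self.name = name


def mount_root(pool):
    # Expects a Pool object; a bare string has no .name attribute.
    return DEFAULT_MNT_DIR + pool.name


print(mount_root(Pool("tv")))  # works: /mnt2/tv

try:
    mount_root(u"tv")  # a bare pool *name* instead of a Pool object
except AttributeError as err:
    # Under Python 2 this reads: 'unicode' object has no attribute 'name'
    print(err)
```

That would explain why the bootstrap API call dies with an Internal Server Error while manual mounts work fine: the data being passed around is wrong, not the filesystem.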

As for the 'usage 0' problem: yes, that is a pain and may even be related (fancy that), but I must follow through on this issue first to see if that is the case. Do you fancy opening a GitHub issue on the 0 usage problem, along with steps to reproduce? I don't think we have a clear reproducer for it in the current issues.

Thanks. I think from your recent DEBUG log I have what I need, and it has led to turning up a 'strange' bit of code which could have quite significant effects, so I have to be especially methodical.

As for trialling a fix (hopefully shortly), that would not be appropriate on a production install such as you seem to indicate you have, so I'm not quite sure where we can go with that one. I will report back here on progress and the eventual issue.

Let's get that zero-usage-info issue opened, and I'll keep investigating this code anomaly, as I really need to look at the history of how it came about before I alter it. I'm pretty sure I'm close to a fix for your share issue, and possibly the usage thing too, which would be nice; however, no promises, as I've only just started looking.

Cheers.

@phillxnet I just figured it out!!! I have no idea why, but there was a duplicate-named subvolume on my backup pool. Both subvolumes were called virtexport, and Rockstor referenced the one on my tv pool. I hadn't realized this before, but I mounted the tv one manually and deleted it from the Rockstor UI. Then I crossed my fingers and ran systemctl start rockstor-bootstrap while I tailed rockstor.log:
no errors
success!!!

So in my case: double-check all pools for subvolumes with duplicate names.
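One way to do that double-check is to collect the subvolume names from every pool and flag any name that appears more than once. A hedged sketch of the idea, parsing btrfs subvolume list output; the sample listings here are illustrative, and in practice you would feed in the real output of btrfs subvolume list /mnt2/<pool> for each pool:

```python
from collections import Counter


def subvol_names(listing):
    """Extract subvolume names from 'btrfs subvolume list' output lines."""
    names = []
    for line in listing.splitlines():
        # Typical line: "ID 258 gen 26786 top level 5 path virtexport"
        parts = line.split()
        if "path" in parts:
            names.append(parts[parts.index("path") + 1])
    return names


# Illustrative listings for two pools (not real output from this system).
tv_listing = (
    "ID 258 gen 26786 top level 5 path tv-shows\n"
    "ID 259 gen 26786 top level 5 path virtexport"
)
backup_listing = "ID 260 gen 3257 top level 5 path virtexport"

all_names = subvol_names(tv_listing) + subvol_names(backup_listing)
dupes = [name for name, count in Counter(all_names).items() if count > 1]
print(dupes)  # ['virtexport']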

Is it possible to add a workaround in case that happens, though, or are we stuck with names instead of subvolid?

@magicalyak

Thanks, this also confirms the code area I'm currently looking at. We should catch this and warn about it, and we have an existing exception for it, but a bug nearby is upsetting things. That info definitely helps, and I'm just in the process of referencing you and this forum thread in an issue to address the core 'bug'.

There is an ongoing effort to make at least the internal references more flexible, and this should lead to greater user-level flexibility in the short to near term. But it's a large change and has already had a few casualties along the way, so we have to move forward carefully.

Thanks for the ongoing input on this one, and well done on pinning down your issue. As it goes, I'm pretty sure Rockstor shouldn't have let you do that; the bug I'm working on acted as a short circuit to the error you would otherwise have received. Oh well, I'm still working this one out, so will have more understanding / fixes soon.

@magicalyak In the interest of easing the diagnosis of the various elements you bring up in this thread, do you fancy starting a new thread for this new performance element? I think it's best we keep this thread to the subvols issue you initially brought up.

By the way I have now opened the following issue as a result of this thread:

Yes: let's keep up the effort to tease out individual issues into separate forum threads and GitHub issues, as that makes it all the easier to actually get them fixed.