Unable to update Rock-Ons

Hi All,

I have installed Rockstor on my old Intel PC and I am able to use all features except Rock-Ons. When I try to update the Rock-Ons list, I get an error window with no information. You can see this in the snapshot below.

Could anyone help me resolve this?

Thanks,
Raju

@raju_ga153 welcome to the Rockstor community.
We have seen this in the past a few times on older versions (and platforms), e.g. here:

And more specifically

The “POST” part here indicates that the error occurred during an update of the database.

This is quite curious. Does your rockstor_rockstor pool still look to be rw, or has it gone read-only? That could potentially cause this same issue, as the POST (db write) would be unable to complete. Or your system drive could be low on space, but again, that would have other knock-on effects.
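A quick way to check both of these from the command line (a generic Linux sketch; the exact pool/device layout will differ per system):

```shell
# Show the mount options of the root filesystem; an "ro" flag here would
# explain why the POST (db write) cannot complete.
grep ' / ' /proc/mounts

# Show free space on the system drive; a nearly full disk has similar knock-on effects.
df -h /
```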

Can you check the logs to see whether more details on the message are available? You can get to them via the WebUI, or at the command line; the file is located here:

/opt/rockstor/var/logs/rockstor.log

2 Likes

Hi @Hooverdan ,

I am pasting the log info below. Could you please go through it and let me know the root cause?

[30/Jul/2023 21:20:50] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:20:50] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/smart_manager/views/base_service.py”, line 64, in _get_status
o, e, rc = service_status(service.name, config)
File “/opt/rockstor/src/rockstor/system/services.py”, line 186, in service_status
return superctl(service_name, “status”)
File “/opt/rockstor/src/rockstor/system/services.py”, line 125, in superctl
out, err, rc = run_command([SUPERCTL_BIN, switch, service])
File “/opt/rockstor/src/rockstor/system/osi.py”, line 227, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:25:21] INFO [scripts.initrock:278] Normalising on shellinaboxd service file
[30/Jul/2023 21:25:21] INFO [scripts.initrock:282] - shellinaboxd.service already exists
[30/Jul/2023 21:25:21] INFO [scripts.initrock:297] Establishing Rockstor nginx service override file
[30/Jul/2023 21:25:21] INFO [scripts.initrock:407] /etc/systemd/system/nginx.service.d/30-rockstor-nginx-override.conf up-to-date.
[30/Jul/2023 21:25:21] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-pre.service up-to-date.
[30/Jul/2023 21:25:21] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor.service up-to-date.
[30/Jul/2023 21:25:21] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-bootstrap.service up-to-date.
[30/Jul/2023 21:26:18] ERROR [storageadmin.util:45] Exception: insert or update on table “storageadmin_networkdevice” violates foreign key constraint “connection_id_refs_id_1db23ec5”
DETAIL: Key (connection_id)=(11) is not present in table “storageadmin_networkconnection”.
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py”, line 41, in _handle_exception
yield
File “/opt/rockstor/src/rockstor/storageadmin/views/network.py”, line 236, in get_queryset
self._refresh_devices()
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/utils/decorators.py”, line 185, in inner
return func(*args, **kwargs)
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/transaction.py”, line 223, in __exit__
connection.commit()
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/backends/base/base.py”, line 262, in commit
self._commit()
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/backends/base/base.py”, line 236, in _commit
return self.connection.commit()
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/utils.py”, line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/backends/base/base.py”, line 236, in _commit
return self.connection.commit()
IntegrityError: insert or update on table “storageadmin_networkdevice” violates foreign key constraint “connection_id_refs_id_1db23ec5”
DETAIL: Key (connection_id)=(11) is not present in table “storageadmin_networkconnection”.

[30/Jul/2023 21:31:24] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:31:24] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/smart_manager/views/base_service.py”, line 64, in _get_status
o, e, rc = service_status(service.name, config)
File “/opt/rockstor/src/rockstor/system/services.py”, line 186, in service_status
return superctl(service_name, “status”)
File “/opt/rockstor/src/rockstor/system/services.py”, line 125, in superctl
out, err, rc = run_command([SUPERCTL_BIN, switch, service])
File “/opt/rockstor/src/rockstor/system/osi.py”, line 227, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:31:55] ERROR [system.osi:225] non-zero code(7) returned by command: ['/usr/bin/zypper', '--non-interactive', '-q', 'list-updates']. output: [''] error: ['System management is locked by the application with pid 16259 (/usr/bin/zypper).', 'Close this application before trying again.', '']
[30/Jul/2023 21:31:57] ERROR [system.osi:225] non-zero code(7) returned by command: ['/usr/bin/zypper', '--non-interactive', '-q', 'list-updates']. output: [''] error: ['System management is locked by the application with pid 16259 (/usr/bin/zypper).', 'Close this application before trying again.', '']
[30/Jul/2023 21:31:59] ERROR [system.osi:225] non-zero code(7) returned by command: ['/usr/bin/zypper', '--non-interactive', '-q', 'list-updates']. output: [''] error: ['System management is locked by the application with pid 16259 (/usr/bin/zypper).', 'Close this application before trying again.', '']
[30/Jul/2023 21:32:12] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:32:12] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/smart_manager/views/base_service.py”, line 64, in _get_status
o, e, rc = service_status(service.name, config)
File “/opt/rockstor/src/rockstor/system/services.py”, line 186, in service_status
return superctl(service_name, “status”)
File “/opt/rockstor/src/rockstor/system/services.py”, line 125, in superctl
out, err, rc = run_command([SUPERCTL_BIN, switch, service])
File “/opt/rockstor/src/rockstor/system/osi.py”, line 227, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:33:29] INFO [scripts.initrock:278] Normalising on shellinaboxd service file
[30/Jul/2023 21:33:29] INFO [scripts.initrock:282] - shellinaboxd.service already exists
[30/Jul/2023 21:33:29] INFO [scripts.initrock:297] Establishing Rockstor nginx service override file
[30/Jul/2023 21:33:29] INFO [scripts.initrock:407] /etc/systemd/system/nginx.service.d/30-rockstor-nginx-override.conf up-to-date.
[30/Jul/2023 21:33:29] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-pre.service up-to-date.
[30/Jul/2023 21:33:29] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor.service up-to-date.
[30/Jul/2023 21:33:29] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-bootstrap.service up-to-date.
[30/Jul/2023 21:34:16] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:34:16] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/smart_manager/views/base_service.py”, line 64, in _get_status
o, e, rc = service_status(service.name, config)
File “/opt/rockstor/src/rockstor/system/services.py”, line 186, in service_status
return superctl(service_name, “status”)
File “/opt/rockstor/src/rockstor/system/services.py”, line 125, in superctl
out, err, rc = run_command([SUPERCTL_BIN, switch, service])
File “/opt/rockstor/src/rockstor/system/osi.py”, line 227, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:34:29] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:34:29] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/smart_manager/views/base_service.py”, line 64, in _get_status
o, e, rc = service_status(service.name, config)
File “/opt/rockstor/src/rockstor/system/services.py”, line 186, in service_status
return superctl(service_name, “status”)
File “/opt/rockstor/src/rockstor/system/services.py”, line 125, in superctl
out, err, rc = run_command([SUPERCTL_BIN, switch, service])
File “/opt/rockstor/src/rockstor/system/osi.py”, line 227, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = [‘replication STOPPED Not started’, ‘’]. stderr = [‘’]
[30/Jul/2023 21:34:42] ERROR [storageadmin.views.network:213] NetworkConnection matching query does not exist.
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/storageadmin/views/network.py”, line 207, in update_connection
name=dconfig[“connection”]
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/models/manager.py”, line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/models/query.py”, line 380, in get
self.model._meta.object_name
DoesNotExist: NetworkConnection matching query does not exist.
[30/Jul/2023 21:51:42] INFO [scripts.initrock:278] Normalising on shellinaboxd service file
[30/Jul/2023 21:51:42] INFO [scripts.initrock:282] - shellinaboxd.service already exists
[30/Jul/2023 21:51:42] INFO [scripts.initrock:297] Establishing Rockstor nginx service override file
[30/Jul/2023 21:51:42] INFO [scripts.initrock:407] /etc/systemd/system/nginx.service.d/30-rockstor-nginx-override.conf up-to-date.
[30/Jul/2023 21:51:42] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-pre.service up-to-date.
[30/Jul/2023 21:51:42] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor.service up-to-date.
[30/Jul/2023 21:51:42] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-bootstrap.service up-to-date.
[30/Jul/2023 21:51:52] ERROR [system.osi:225] non-zero code(7) returned by command: ['/usr/bin/zypper', '--non-interactive', '-q', 'list-updates']. output: [''] error: ['System management is locked by the application with pid 13427 (/usr/bin/zypper).', 'Close this application before trying again.', '']
[31/Jul/2023 07:55:25] INFO [scripts.initrock:278] Normalising on shellinaboxd service file
[31/Jul/2023 07:55:25] INFO [scripts.initrock:282] - shellinaboxd.service already exists
[31/Jul/2023 07:55:25] INFO [scripts.initrock:297] Establishing Rockstor nginx service override file
[31/Jul/2023 07:55:25] INFO [scripts.initrock:407] /etc/systemd/system/nginx.service.d/30-rockstor-nginx-override.conf up-to-date.
[31/Jul/2023 07:55:25] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-pre.service up-to-date.
[31/Jul/2023 07:55:25] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor.service up-to-date.
[31/Jul/2023 07:55:25] INFO [scripts.initrock:407] /usr/lib/systemd/system/rockstor-bootstrap.service up-to-date.

Thanks,
Raju

@raju_ga153
mhm, I can’t identify anything related to the Rockon problem you’re having. There is an error related to the replication service (have you enabled/tried the replication service between 2 Rockstor instances recently?) and an error related to writing a network connection definition to the database. I don’t think either of these two is causing your Rockon POST failure.

I assume, the drive where you installed the Rockstor package is mounted in read/write mode and not, for some reason, (e.g. lack of space) mounted as read-only?

I hope @Flox might have some other suggestion on what to check?

Hi @Hooverdan ,

I suspect two possible causes:

  1. I have only one disk (500GB), and it is used for both ROOT and Home, with Home also used as the rockons-root. You can refer to the snapshots below. Can you confirm whether this is an issue?

  2. I am seeing an issue with the “docker” network connection and IPv6 tables. Please cross-check this.

Thanks,
Raju

On the docker network connection, I am not sure what you are referring to with the IPv6 tables. Looking at the screenshot, I have the same attributes.

Usually, the recommendation is to separate the OS and everything else onto separate drives (also to make restores/reinstalls more stable). I am not sure that this setup, while not recommended, is causing your Rockon refresh problem; your physical disk shows only low usage. However, you would want to create a separate share that is only for the Rockon service, and not intermingled with any other files. So, the single-disk comment from earlier aside, you want to create a separate share representing the Rockon root.

2 Likes

Hi @Hooverdan ,

I have attached a 250GB USB disk and mapped rockons-root to it. Even then I have no luck, and the issue is the same. Have a look at the snapshot below.

I am seeing the following info from the kernel (dmesg). Does this help to debug my issue?

Aug 01 18:50:15 RazNAS kernel: r8169 0000:02:00.0: Direct firmware load for rtl_nic/rtl8168e-3.fw failed with error -2
Aug 01 18:50:15 RazNAS kernel: r8169 0000:02:00.0: Unable to load firmware rtl_nic/rtl8168e-3.fw (-2)
Aug 01 18:50:15 RazNAS kernel: RTL8211E Gigabit Ethernet r8169-0-200:00: attached PHY driver (mii_bus:phy_addr=r8169-0-200:00, irq=MAC)
Aug 01 18:50:15 RazNAS kernel: r8169 0000:02:00.0 eth0: Link is Down
Aug 01 18:50:18 RazNAS kernel: r8169 0000:02:00.0 eth0: Link is Up - 1Gbps/Full - flow control off
Aug 01 18:50:18 RazNAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 01 18:50:18 RazNAS kernel: NET: Registered PF_PACKET protocol family
Aug 01 18:50:49 RazNAS kernel: device-mapper: uevent: version 1.0.3
Aug 01 18:50:49 RazNAS kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 01 18:50:51 RazNAS kernel: BTRFS info (device sdb): use no compression
Aug 01 18:50:51 RazNAS kernel: BTRFS info (device sdb): disk space caching is enabled
Aug 01 18:50:51 RazNAS kernel: BTRFS info (device sdb): has skinny extents
Aug 01 18:50:55 RazNAS kernel: audit: type=1400 audit(1690896055.953:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=13652 comm="apparmor_parser"
Aug 01 18:50:56 RazNAS kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 01 18:50:56 RazNAS kernel: Bridge firewalling registered
Aug 01 18:50:56 RazNAS kernel: bpfilter: Loaded bpfilter_umh pid 13657
Aug 01 18:50:56 RazNAS unknown: Started bpfilter
Aug 01 18:50:56 RazNAS kernel: Initializing XFRM netlink socket

Not sure that this will address the Rockon issue; however, it seems that your board is using Realtek hardware for the network device, for which the firmware is not included in the Rockstor install at this time.

Based on this:

you can try to install them from the openSUSE repo using:
zypper in kernel-firmware-realtek

or follow @Flox's instructions further down in the thread to get the very latest ones; however, that is a little bit more involved.
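As a rough check after installing the package and rebooting, you could verify that the firmware file the kernel complained about is now present (the /lib/firmware path is assumed from the dmesg error above):

```shell
# List any rtl8168 firmware files; prints a fallback message if none are found.
ls /lib/firmware/rtl_nic/ 2>/dev/null | grep rtl8168 || echo "firmware files not found"
```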

2 Likes

By following the given solution, I was able to resolve the issue with the Realtek network device. But still no luck with Rock-Ons.
I am also seeing the below messages from the kernel (dmesg).

Aug 01 21:29:12 RazNAS kernel: audit: type=1400 audit(1690905552.226:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=13537 comm="apparmor_parser"
Aug 01 21:29:12 RazNAS kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 01 21:29:12 RazNAS kernel: Bridge firewalling registered
Aug 01 21:29:12 RazNAS kernel: bpfilter: Loaded bpfilter_umh pid 13543
Aug 01 21:29:12 RazNAS unknown: Started bpfilter
Aug 01 21:29:12 RazNAS kernel: Initializing XFRM netlink socket
2 Likes

I am fresh out of ideas right now, but will continue to think about it.
I hope, @Flox might have a few minutes to provide some additional pointers on what to look for, but I know he’s very busy right now.

1 Like

Hi @Flox ,

I know you are very busy, but if you can find some time to go through my issue and give some suggestions on how to debug it further, that would be great.

Thanks,
Raju

@raju_ga153 in the terminal, can you try
curl https://rockstor.com/rockons/root.json
to see whether you’re getting back a list of the Rockons in JSON format? Just want to make sure that you don’t (for some weird reason) have a connectivity issue. If you do get the list, then at least we have confirmed that the connection “works”.
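To go one step further, you could also confirm that the response is well-formed JSON rather than, say, an HTML error page, by piping it through a JSON parser (a sketch assuming python3 is available on the box):

```shell
# Fetch the Rock-ons index and check that it parses as JSON.
if curl -s --max-time 10 https://rockstor.com/rockons/root.json | python3 -m json.tool > /dev/null 2>&1; then
    echo "root.json fetched and parsed OK"
else
    echo "fetch or parse failed - check connectivity/DNS"
fi
```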

2 Likes

I am able to get the list with the given command.

1 Like

ok, again out of ideas from my side. I assume that somewhere here, something goes wrong and does not return a clear error/exception message.

@phillxnet, @Flox would this suggest temporarily turning on Django's debug mode to get better insights into where this might go wrong (e.g. during the save to the database)?

Thank you so much @Hooverdan for all your time and help there! This is a rather curious one indeed.

Could be worth it, indeed; good idea! I’m not sure it’ll show anything else directly related to the update of available Rock-Ons itself, unfortunately, because I believe we already have a wide range of error catching in place in this area, but it’s worth trying indeed. It may also reveal a related issue in a completely different part of the code.

@raju_ga153, to implement @Hooverdan’s idea:

cd /opt/rockstor
poetry run debug-mode ON

A quick question: you never had a successful update of the list of available Rock-Ons on this machine, correct? If so, it may be helpful to have a look at the state of the storageadmin_rockons table in the database. I wonder whether the process fails halfway through the update, or before it. If the former, you would see some entries there; if the latter, you wouldn't see anything.
I’m currently traveling for work, so I can’t test for sure, but I’ll see if I can set something up real quick to let you know how to do that. It would basically consist of running a psql command.

1 Like

I forgot to mention that after turning debug-mode ON, you can go ahead and refresh your browser. Then open a terminal and monitor the log:

tail -f /opt/rockstor/var/logs/rockstor.log

You can then click the Update button again in the Rock-Ons page and see what the log says. You should now see some logger DEBUG lines.

With regards to checking your database content, you can do the following:

psql -U rocky -d storageadmin -c "SELECT * FROM storageadmin_rockons"

Note that I am still not in a position to verify that command, so it may need some adjustments if I got a few names wrong there.

2 Likes

It should actually be (just dropping the s at the end of the table name):

psql -U rocky -d storageadmin -c "SELECT * FROM storageadmin_rockon"

The password is: rocky (same as the user).

same thing, just one s too many :slight_smile:

tail -f /opt/rockstor/var/log/rockstor.log

And for good measure, I assume, to turn off the debugger again, you would use?

cd /opt/rockstor
poetry run debug-mode OFF
2 Likes

Correct across the board… Thank you for making all of this right :+1:

1 Like

RazNAS:~ # cd /opt/rockstor/
RazNAS:/opt/rockstor # poetry run debug-mode ON
DEBUG flag is now set to True
RazNAS:/opt/rockstor # clear
RazNAS:/opt/rockstor # tail -f /opt/rockstor/var/log/rockstor.log
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/models/sql/compiler.py”, line 1204, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File “/opt/rockstor/.venv/lib/python2.7/site-packages/django/db/models/sql/compiler.py”, line 899, in execute_sql
raise original_exception
OperationalError: deadlock detected
DETAIL: Process 13502 waits for ShareLock on transaction 1853; blocked by process 13467.
Process 13467 waits for ShareLock on transaction 1856; blocked by process 13502.
HINT: See server log for query details.
CONTEXT: while updating tuple (0,27) in relation “storageadmin_pool”

[03/Aug/2023 17:50:48] DEBUG [system.osi:208] Running command: /usr/bin/systemctl enable docker
[03/Aug/2023 17:50:48] DEBUG [system.osi:208] Running command: /usr/bin/systemctl start docker
[03/Aug/2023 17:50:51] DEBUG [storageadmin.views.home:67] context={‘setup_user’: True, ‘current_appliance’: <Appliance: Appliance object>, ‘request’: <WSGIRequest: GET ‘/home’>, ‘page_size’: 15, ‘update_channel’: ‘Testing’}
[03/Aug/2023 17:50:51] DEBUG [storageadmin.views.home:69] ABOUT TO RENDER INDEX
[03/Aug/2023 17:50:52] DEBUG [storageadmin.views.rockon:74] HUEY.pending() []
[03/Aug/2023 17:50:52] DEBUG [storageadmin.views.rockon:83] PENDING TASK ID’S []
[03/Aug/2023 17:50:52] DEBUG [storageadmin.views.rockon:90] PENDING ROCKON_ID’S []
[03/Aug/2023 17:50:52] DEBUG [storageadmin.views.rockon:74] HUEY.pending() []
[03/Aug/2023 17:50:52] DEBUG [storageadmin.views.rockon:83] PENDING TASK ID’S []
[03/Aug/2023 17:50:52] DEBUG [storageadmin.views.rockon:90] PENDING ROCKON_ID’S []
[03/Aug/2023 17:50:52] DEBUG [system.osi:495] — Inheriting base_root_disk info —
[03/Aug/2023 17:50:53] DEBUG [storageadmin.views.share_helpers:111] ---- Share name = RockOns.
[03/Aug/2023 17:50:53] DEBUG [storageadmin.views.share_helpers:113] Updating pre-existing same pool db share entry.
[03/Aug/2023 17:50:53] DEBUG [system.osi:208] Running command: /usr/sbin/btrfs subvolume list /mnt2/Maxtor
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(@).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(.snapshots).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(.snapshots/1/snapshot).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(opt).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(root).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(srv).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(tmp).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(var).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(usr/local).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(boot/grub2/i386-pc).
[03/Aug/2023 17:50:53] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(boot/grub2/x86_64-efi).
[03/Aug/2023 17:50:53] DEBUG [storageadmin.views.share_helpers:111] ---- Share name = home.
[03/Aug/2023 17:50:53] DEBUG [storageadmin.views.share_helpers:113] Updating pre-existing same pool db share entry.
[03/Aug/2023 17:50:53] DEBUG [system.osi:208] Running command: /usr/sbin/btrfs subvolume list /mnt2/ROOT
[03/Aug/2023 17:51:12] DEBUG [storageadmin.views.rockon:131] Update Rock-ons info in database
[03/Aug/2023 17:51:53] DEBUG [system.osi:495] — Inheriting base_root_disk info —
[03/Aug/2023 17:51:54] DEBUG [storageadmin.views.share_helpers:111] ---- Share name = RockOns.
[03/Aug/2023 17:51:54] DEBUG [storageadmin.views.share_helpers:113] Updating pre-existing same pool db share entry.
[03/Aug/2023 17:51:54] DEBUG [system.osi:208] Running command: /usr/sbin/btrfs subvolume list /mnt2/Maxtor
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(@).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(.snapshots).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(.snapshots/1/snapshot).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(opt).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(root).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(srv).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(tmp).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(var).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(usr/local).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(boot/grub2/i386-pc).
[03/Aug/2023 17:51:54] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(boot/grub2/x86_64-efi).
[03/Aug/2023 17:51:54] DEBUG [storageadmin.views.share_helpers:111] ---- Share name = home.
[03/Aug/2023 17:51:54] DEBUG [storageadmin.views.share_helpers:113] Updating pre-existing same pool db share entry.
[03/Aug/2023 17:51:54] DEBUG [system.osi:208] Running command: /usr/sbin/btrfs subvolume list /mnt2/ROOT
[03/Aug/2023 17:52:55] DEBUG [system.osi:495] — Inheriting base_root_disk info —
[03/Aug/2023 17:52:55] DEBUG [storageadmin.views.share_helpers:111] ---- Share name = RockOns.
[03/Aug/2023 17:52:55] DEBUG [storageadmin.views.share_helpers:113] Updating pre-existing same pool db share entry.
[03/Aug/2023 17:52:55] DEBUG [system.osi:208] Running command: /usr/sbin/btrfs subvolume list /mnt2/Maxtor
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(@).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(.snapshots).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(.snapshots/1/snapshot).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(opt).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(root).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(srv).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(tmp).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(var).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(usr/local).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(boot/grub2/i386-pc).
[03/Aug/2023 17:52:55] DEBUG [fs.btrfs:845] Skipping excluded subvol: name=(boot/grub2/x86_64-efi).
[03/Aug/2023 17:52:55] DEBUG [storageadmin.views.share_helpers:111] ---- Share name = home.
[03/Aug/2023 17:52:55] DEBUG [storageadmin.views.share_helpers:113] Updating pre-existing same pool db share entry.
[03/Aug/2023 17:52:55] DEBUG [system.osi:208] Running command: /usr/sbin/btrfs subvolume list /mnt2/ROOT

1 Like