After updating to 4.6.0-0 and switching to Stable, shares are not mounted

Hi,

Strange thing: after I updated to 4.6.0-0 and then switched to the Stable repo, I rebooted, and now all my shares appear as “Not Mounted” in red… I could mount some of them manually from the shell, but I'm not sure what is going on… I saw this in the logs, but I'm not sure if it is related:

[02/Jun/2023 15:01:59] ERROR [system.osi:244] non-zero code(1) returned by command: ['/usr/bin/gzip', '/opt/rockstor/static/config-backups/backup-2022-07-14-211044.json.gz']. output: [''] error: ['gzip: /opt/rockstor/static/config-backups/backup-2022-07-14-211044.json.gz: No such file or directory', '']
[02/Jun/2023 15:01:59] ERROR [storageadmin.middleware:33] Exception occurred while processing a request. Path: /api/config-backup method: GET
[02/Jun/2023 15:01:59] ERROR [storageadmin.middleware:34] ConfigBackup object can't be deleted because its id attribute is set to None.
Traceback (most recent call last):
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
    return view_func(*args, **kwargs)
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
    return self.dispatch(request, *args, **kwargs)
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 495, in dispatch
    response = self.handle_exception(exc)
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 455, in handle_exception
    self.raise_uncaught_exception(exc)
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/rest_framework/views.py", line 466, in raise_uncaught_exception
    raise exc
AssertionError: ConfigBackup object can't be deleted because its id attribute is set to None.
[02/Jun/2023 15:03:31] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = ['replication                      STOPPED   Not started', '']. stderr = ['']
[02/Jun/2023 15:03:31] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = ['replication                      STOPPED   Not started', '']. stderr = ['']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/smart_manager/views/base_service.py", line 64, in _get_status
    o, e, rc = service_status(service.name, config)
  File "/opt/rockstor/src/rockstor/system/services.py", line 192, in service_status
    return superctl(service_name, "status")
  File "/opt/rockstor/src/rockstor/system/services.py", line 142, in superctl
    out, err, rc = run_command([SUPERCTL_BIN, switch, service])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 246, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = ['replication                      STOPPED   Not started', '']. stderr = ['']
[02/Jun/2023 15:03:43] ERROR [smart_manager.views.base_service:73] Exception while querying status of service(replication): Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = ['replication                      STOPPED   Not started', '']. stderr = ['']
[02/Jun/2023 15:03:43] ERROR [smart_manager.views.base_service:74] Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = ['replication                      STOPPED   Not started', '']. stderr = ['']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/smart_manager/views/base_service.py", line 64, in _get_status
    o, e, rc = service_status(service.name, config)
  File "/opt/rockstor/src/rockstor/system/services.py", line 192, in service_status
    return superctl(service_name, "status")
  File "/opt/rockstor/src/rockstor/system/services.py", line 142, in superctl
    out, err, rc = run_command([SUPERCTL_BIN, switch, service])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 246, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /opt/rockstor/.venv/bin/supervisorctl status replication. rc = 3. stdout = ['replication                      STOPPED   Not started', '']. stderr = ['']

Has anybody had this problem before???

thanks :slight_smile:

@khamon, a couple of questions for clarification:
Did you “just” run the upgrade from 4.1 to 4.6 by switching from Stable to Testing, upgrading, and then switching back to Stable, or did you perform a fresh install of 4.6? The error message indicates that something went wrong while restoring a configuration backup.

4 Likes

Actually, I went from 4.1 to 4.6 (Testing to Testing), and then I changed from Testing to Stable (I only found out about the Collective today, so I purchased the membership)… So I went for it… maybe I should not have done it… :thinking:

But no, it wasn’t a clean install… and if there is something wrong with the settings backup, I'm not sure what I can do now…

1 Like

I am surprised that the settings backup would have any role here (maybe @Flox or @phillxnet can enlighten us), but during a straight upgrade I would not have expected it to come into play.

1 Like

Hi @khamon ,

I’m short on time, but I still wanted to chip in before I can contribute in more detail.

I agree with @Hooverdan here:

Could you clarify whether you attempted a config backup restore after the update? You shouldn’t need to. The error you listed above would occur if you attempted to upload a config backup from your local machine, for instance.
Either way, I don’t see it as being the cause of your shares failing to mount.

Would you mind having a look at your rockstor-bootstrap logs, in particular for the most recent boot?

journalctl -u rockstor-bootstrap

This is where Rockstor attempts to mount all your pools, shares, etc., so it may give us more information about what is happening.
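
To limit the output to the most recent boot, you can add journalctl's standard -b flag, e.g.:

journalctl -u rockstor-bootstrap -b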

Sorry again for my very brief message here.

2 Likes

Hi @Flox,

well it’s interesting:

 journalctl -u rockstor-bootstrap
-- Logs begin at Sat 2023-06-03 01:38:57 CEST, end at Sat 2023-06-03 01:40:37 CEST. --
-- No entries --

Also interesting: the shares I mounted manually are now being mounted automatically after the reboot :thinking: … Today I had quite a busy day and was a bit distracted, but I think I did take a config backup after going to 4.6; it just did not show me any file to download … :thinking:

I checked journalctl after rebooting, and the only error I found is related to docker (which is also not starting now):

Jun 03 01:39:05 corrinvm dockerd[531]: chmod /mnt2/rockon: operation not permitted
Jun 03 01:39:05 corrinvm systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 03 01:39:05 corrinvm systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 03 01:39:05 corrinvm systemd[1]: Failed to start Docker Application Container Engine.
Jun 03 01:39:05 corrinvm systemd[1]: Started PostgreSQL database server.
Jun 03 01:39:05 corrinvm systemd[1]: Starting Tasks required prior to starting Rockstor...
Jun 03 01:39:05 corrinvm systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Jun 03 01:39:05 corrinvm systemd[1]: Stopped Docker Application Container Engine.
Jun 03 01:39:05 corrinvm systemd[1]: Starting Docker Application Container Engine...

but I think it is related to this line I saw:

Jun 03 01:39:05 corrinvm dockerd[531]: chmod /mnt2/rockon: operation not permitted

Definitely something happened… but I'm not sure what… it's like my configuration got deleted or something like that…

I have rebooted again, but there is still some weird stuff: it mounts just a few shares, not all of them… I could take a config backup now, but it seems I do not have any from before the update, as if they had been deleted … :thinking: maybe something is not mounting correctly …

At this point I'm not sure whether reinstalling from scratch would be easier, as long as I can somehow “import” the shares… :thinking:

@khamon Thanks for the testing report here, by the way.

What base OS are you on? My apologies if you have already mentioned this.
Our current base is Leap 15.4; if you are on 15.3, which is now EOL, that may be causing some issues. If you are, then a re-install using one of our installers based on 15.4 would likely be easier/faster than zypper dup-ing your existing install. We still build rpms for 15.3, but there are now known issues there due to a postgresql user change; the downloads page indicates this.
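
If in doubt about the base OS, it can be confirmed from the command line (a standard openSUSE check, nothing Rockstor-specific):

cat /etc/os-release

The VERSION_ID line there shows e.g. 15.3 or 15.4.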

Note that all will be lost on the system disk if you re-use the same disk, that is. But your pool (if healthy) should still import OK. If you use a fresh disk you can always revert to the current one if need be.

Pool import should be done first; it is done via the Disks menu, from any of the prior pool members:
https://rockstor.com/docs/interface/storage/disks.html#import-btrfs-pool

Again, if you are still on a 15.3 base, we have the following howto for in-place updates:
https://rockstor.com/docs/howtos/15-3_to_15-4.html
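
For orientation only, that in-place move follows the usual openSUSE pattern of re-pointing the repos and then doing a distribution upgrade; the steps and repo URLs in the howto above are the authoritative ones:

# re-point repo definitions from 15.3 to 15.4 (generic pattern; see the howto for Rockstor's own repos)
sed -i 's/15\.3/15.4/g' /etc/zypp/repos.d/*.repo
zypper refresh
zypper dup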

Without a saved config you would have to re-apply your configuration by hand. All Pool and Share entries should be imported, but nothing else, such as the exports etc. The same goes for the Rock-ons, but they can pick up where they left off if you re-select the exact same dedicated shares as before during their re-install, assuming you didn't put any of them on the system drive (other than the Rock-ons-root, which is just the downloaded docker images).

Apologies for the slightly rushed response here.

Our Pool import code has been largely unchanged for quite a while now, so that should work as before. But your report is a little strange - hence checking on the Leap version underlying your Rockstor version. We originally released the last stable, 4.1.0-0, on 15.3, but that was a while ago, so we have some stable folks still using that now End-Of-Life version. Hence the referenced HowTo.

That bit is currently normal by the way.

Don’t rush if you are not pressed for access, as there may be more input on this thread: I just wanted to address the import question. You can also check your pool via a btrfs scrub at the command line if there are doubts about its health.
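
Something along these lines, assuming the pool is mounted under /mnt2/&lt;pool-name&gt; as is usual on Rockstor:

btrfs scrub start /mnt2/<pool-name>
# then, to follow progress / see the result:
btrfs scrub status /mnt2/<pool-name>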

Our installers currently have 4.5.8-0 pre-installed; we're hoping to build new ones soon, but that won't be for a bit, unfortunately.

Hope that helps.

3 Likes

Hi @phillxnet ,

Well, actually, thanks to you and the community for helping :slight_smile: :slight_smile:

And it seems you were right, I had 15.3!! So I updated, but after updating it, it seems that nginx stopped working :smiley: :smiley:

Jun 04 00:26:38 corrinvm systemd[1]: Failed to start The nginx HTTP and reverse proxy server - 30-rockstor-nginx-override.conf.
░░ Subject: A start job for unit nginx.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit nginx.service has finished with a failure.
░░

Long story short, I think something changed the config after the update to 15.4 :smiley: :smiley: so I decided to do a clean reinstall and I will import the pool members :smiley: :smiley: hopefully it will work :smiley: haha, at least I know that the data should be OK :slight_smile:

3 Likes