No GUI after updates 4.1.x

I ran the updates in the GUI and rebooted. After this the GUI will no longer load. When I SSH into the system I run:

systemctl list-units --failed
  UNIT                      LOAD   ACTIVE SUB    DESCRIPTION
● dmraid-activation.service loaded failed failed Activation of DM RAID sets
● docker.service            loaded failed failed Docker Application Container Engine
● rockstor-pre.service      loaded failed failed Tasks required prior to starting Rockstor

then I run:

systemctl status rockstor-pre.service
● rockstor-pre.service - Tasks required prior to starting Rockstor
     Loaded: loaded (/etc/systemd/system/rockstor-pre.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Sat 2023-01-14 11:13:14 EST; 25min ago
    Process: 13303 ExecStart=/opt/rockstor/bin/initrock (code=exited, status=1/FAILURE)
   Main PID: 13303 (code=exited, status=1/FAILURE)

Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/bin/initrock", line 40, in <module>
Jan 14 11:13:14 Balboa.local initrock[13303]:     sys.exit(scripts.initrock.main())
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/src/rockstor/scripts/initrock.py", line 476, in main
Jan 14 11:13:14 Balboa.local initrock[13303]:     run_command(fake_initial_migration_cmd + ["--database=default", "contenttypes"])
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
Jan 14 11:13:14 Balboa.local initrock[13303]:     raise CommandException(cmd, out, err, rc)
Jan 14 11:13:14 Balboa.local initrock[13303]: system.exceptions.CommandException: Error running a command. cmd = /opt/rockstor/bin/django migrate --noinput --fake-initial>
Jan 14 11:13:14 Balboa.local systemd[1]: rockstor-pre.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 11:13:14 Balboa.local systemd[1]: rockstor-pre.service: Failed with result 'exit-code'.
Jan 14 11:13:14 Balboa.local systemd[1]: Failed to start Tasks required prior to starting Rockstor.

then I run:

/opt/rockstor/bin/django migrate --noinput --fake-initial
Operations to perform:
  Synchronize unmigrated apps: staticfiles, rest_framework, pipeline, djhuey, messages
  Apply all migrations: oauth2_provider, sessions, admin, sites, auth, contenttypes, smart_manager, storageadmin
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying oauth2_provider.0002_08_updates...Traceback (most recent call last):
  File "/opt/rockstor/bin/django", line 44, in <module>
    sys.exit(djangorecipe.manage.main('rockstor.settings'))
  File "/opt/rockstor/eggs/djangorecipe-1.9-py2.7.egg/djangorecipe/manage.py", line 9, in main
    management.execute_from_command_line(sys.argv)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/__init__.py", line 354, in execute_from_command_line
    utility.execute()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/__init__.py", line 346, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/base.py", line 394, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/base.py", line 445, in execute
    output = self.handle(*args, **options)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/commands/migrate.py", line 222, in handle
    executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/executor.py", line 110, in migrate
    self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/executor.py", line 148, in apply_migration
    state = migration.apply(state, schema_editor)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/migration.py", line 115, in apply
    operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/operations/fields.py", line 62, in database_forwards
    field,
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/postgresql_psycopg2/schema.py", line 18, in add_field
    super(DatabaseSchemaEditor, self).add_field(model, field)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/schema.py", line 398, in add_field
    self.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/schema.py", line 111, in execute
    cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: column "skip_authorization" of relation "oauth2_provider_application" already exists

Also of note:

  • after reboot the postgresql service is not running; I had to start it manually.
  • postgresql 13 is being used; 10 is also installed.
  • the docker service shows a failed state as well; not sure if this gets started by the Rockstor init or something.

any help is greatly appreciated.

Hi @mercbot7,

Thanks for the detailed report; it makes it clear exactly what is failing. It does look like something related to the system rather than your pools, etc… so the latter are probably still in good shape.

To see if I can reproduce that, could you confirm the Rockstor version you are currently using? You mention 4.1 in your title, but do you have an update channel activated?
I also presume you mean that you ran the system updates in the GUI (by clicking on the blinking RSS-looking icon in the top right corner), is that correct?

My guess is that the Docker failure follows from the Rockstor service failing to start: the pool in which your Rock-Ons root is located was never mounted, so the Docker service fails to start as it looks for its data there.

2 Likes

@Flox, Thank you for responding so quickly. Yes I am talking about the RSS icon for updates.

I am sure I am on at least 4.1 as I recall looking at that prior to updating via the blinking RSS icon.
Is there a command I can run to check what version when no GUI is running?

Yes, I apologize, I did not mention that the BTRFS filesystems are not mounted, and I am not seeing them in the /etc/fstab file; not sure if they are supposed to be in there or not.

when I run:

btrfs filesystem show
Label: 'ROOT'  uuid: 90ae303d-95b4-4884-b3b5-b9e707228bac
        Total devices 1 FS bytes used 6.29GiB
        devid    1 size 109.72GiB used 7.80GiB path /dev/sde4

Label: 'Adult_Swim'  uuid: b0794371-93cc-4c61-8d8e-20354e8ccb5d
        Total devices 4 FS bytes used 1001.09GiB
        devid    1 size 1.82TiB used 505.03GiB path /dev/sdf
        devid    2 size 1.82TiB used 505.03GiB path /dev/sda
        devid    3 size 1.82TiB used 505.03GiB path /dev/sdd
        devid    4 size 1.82TiB used 505.03GiB path /dev/sdb

Label: 'backup'  uuid: 7223bdd8-82da-40d7-a67c-591910c8e041
        Total devices 1 FS bytes used 3.22TiB
        devid    1 size 3.64TiB used 3.23TiB path /dev/sdc

and when I run:

cat /etc/fstab
LABEL=SWAP swap swap defaults 0 0
LABEL=ROOT / btrfs noatime 0 0
LABEL=ROOT /.snapshots btrfs noatime,subvol=@/.snapshots 0 0
LABEL=ROOT /home btrfs noatime,subvol=@/home 0 0
LABEL=ROOT /opt btrfs noatime,subvol=@/opt 0 0
LABEL=ROOT /root btrfs noatime,subvol=@/root 0 0
LABEL=ROOT /srv btrfs noatime,subvol=@/srv 0 0
LABEL=ROOT /tmp btrfs noatime,subvol=@/tmp 0 0
LABEL=ROOT /var btrfs noatime,subvol=@/var 0 0
LABEL=EFI /boot/efi vfat defaults 0 0
LABEL=ROOT /usr/local btrfs noatime,subvol=@/usr/local 0 0
LABEL=ROOT /boot/grub2/i386-pc btrfs noatime,subvol=@/boot/grub2/i386-pc 0 0
LABEL=ROOT /boot/grub2/x86_64-efi btrfs noatime,subvol=@/boot/grub2/x86_64-efi 0 0

Thanks!

1 Like

@Flox

When I run:

cat /opt/rockstor/.installed.cfg | grep "buildout-directory ="
buildout-directory = /var/lib/buildbot/worker/Build-on-Leap15-3/build/BUILDROOT/rockstor-4.1.0-0.x86_64/opt/rockstor
buildout-directory = /var/lib/buildbot/worker/Build-on-Leap15-3/build/BUILDROOT/rockstor-4.1.0-0.x86_64/opt/rockstor
buildout-directory = /var/lib/buildbot/worker/Build-on-Leap15-3/build/BUILDROOT/rockstor-4.1.0-0.x86_64/opt/rockstor
buildout-directory = /var/lib/buildbot/worker/Build-on-Leap15-3/build/BUILDROOT/rockstor-4.1.0-0.x86_64/opt/rockstor
buildout-directory = /var/lib/buildbot/worker/Build-on-Leap15-3/build/BUILDROOT/rockstor-4.1.0-0.x86_64/opt/rockstor
1 Like

@mercbot7 I can chip in a little here:
Re:

PostgreSQL 13 did bring some changes, and we deal with that better in Rockstor version 4.5.0-0 (testing channel); PostgreSQL 13 is actually a dependency of that rpm, whereas 4.1.0-0 depended on PostgreSQL 10. And from your log it looks like the Django version used at that time was from 4.1.0-0; Rockstor 4.5.0-0 now uses Django 1.11.29:

Re:

Yes, we do our own mounting, so it's normal not to see the data pools in fstab.
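To make that concrete: the pools show up in the kernel mount table instead. A minimal sketch, assuming Rockstor's usual /mnt2/<pool-name> mount location and using a made-up /proc/mounts excerpt shaped like the output above:

```shell
# Hypothetical /proc/mounts excerpt: Rockstor typically mounts data pools
# under /mnt2/<pool-name> itself, which is why they are absent from /etc/fstab.
mounts='/dev/sde4 / btrfs rw,noatime 0 0
/dev/sdf /mnt2/Adult_Swim btrfs rw,relatime 0 0
/dev/sdc /mnt2/backup btrfs rw,relatime 0 0'
# Print the mount point of every btrfs filesystem:
printf '%s\n' "$mounts" | awk '$3 == "btrfs" { print $2 }'
```

On a live system, `awk '$3 == "btrfs" { print $2 }' /proc/mounts` (or `findmnt -t btrfs`) shows the real picture.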

The following command should help here (run as the root user):

zypper info rockstor

Paste the output back into this thread; it should give others here on the forum useful context.

Hope that helps.

1 Like

@phillxnet Thanks for responding!

Here is the output from:

zypper info rockstor
Loading repository data...
Reading installed packages...


Information for package rockstor:
---------------------------------
Repository     : Rockstor-Stable
Name           : rockstor
Version        : 4.1.0-0
Arch           : x86_64
Vendor         : YewTreeApps
Installed Size : 73.3 MiB
Installed      : Yes
Status         : up-to-date
Source package : rockstor-4.1.0-0.src
Upstream URL   : http://rockstor.com/
Summary        : Btrfs Network Attached Storage (NAS) Appliance.
Description    :
    Software raid, snapshot capable NAS solution with built-in file integrity protection.
    Allows for file sharing between network attached devices.
1 Like

Thanks for the additional details and confirmation, @mercbot7.

I just tried applying all system updates on a Rockstor system installed from the latest ISO (so running 4.1 as well), and everything went well, including the reboot… Maybe your system updates were somehow interrupted? It's hard to tell, but we need to get you back up and running anyway.

As you identified, the rockstor-pre service is failing due to the initrock.py script failing to apply required migrations. The error you see…

django.db.utils.ProgrammingError: column "skip_authorization" of relation "oauth2_provider_application" already exists

… seems to indicate that Django is trying to apply database structure changes (a migration) that have already been applied (the column on that relation already exists). I've personally seen this happen when Django somehow failed to record a migration as already applied. We've actually seen a very similar problem after updating our Django version; see below for details:

While I don't yet see why you are experiencing a similar problem, maybe we can manually follow a similar approach to resolve your situation.
It would thus be helpful to see which migrations are currently recorded as applied on your system. Could you run the following command and paste its output here?

/opt/rockstor/bin/django showmigrations

Also, maybe the full logs may show something out of the ordinary:

journalctl -b0 -u rockstor-pre

Let’s hope we can figure that one out.

2 Likes

@Flox
When I run:

 /opt/rockstor/bin/django showmigrations
admin
 [X] 0001_initial
auth
 [X] 0001_initial
 [X] 0002_alter_permission_name_max_length
 [X] 0003_alter_user_email_max_length
 [X] 0004_alter_user_username_opts
 [X] 0005_alter_user_last_login_null
 [X] 0006_require_contenttypes_0002
contenttypes
 [X] 0001_initial
 [X] 0002_remove_content_type_name
oauth2_provider
 [X] 0001_initial
 [ ] 0002_08_updates
sessions
 [ ] 0001_initial
sites
 [ ] 0001_initial
smart_manager
 [ ] 0001_initial
 [ ] 0002_auto_20170216_1212
storageadmin
 [X] 0001_initial
 [X] 0002_auto_20161125_0051
 [X] 0003_auto_20170114_1332
 [X] 0004_auto_20170523_1140
 [X] 0005_auto_20180913_0923
 [X] 0006_dcontainerargs
 [X] 0007_auto_20181210_0740
 [X] 0008_auto_20190115_1637
 [X] 0009_auto_20200210_1948
 [X] 0010_sambashare_time_machine
 [X] 0011_auto_20200314_1207
 [X] 0012_auto_20200429_1428
 [X] 0013_auto_20200815_2004
 [X] 0014_rockon_taskid

and then:

journalctl -b0 -u rockstor-pre
-- Logs begin at Sat 2023-01-14 11:11:24 EST, end at Sat 2023-01-14 15:15:42 EST. --
Jan 14 11:11:30 Balboa systemd[1]: Starting Tasks required prior to starting Rockstor...
Jan 14 11:11:31 Balboa initrock[685]: 2023-01-14 11:11:31,549: Checking for flash and Running flash optimizations if appropriate.
Jan 14 11:11:32 Balboa initrock[685]: 2023-01-14 11:11:32,157: Updating the timezone from the system
Jan 14 11:11:32 Balboa initrock[685]: 2023-01-14 11:11:32,158: system timezone = America/New_York
Jan 14 11:11:32 Balboa initrock[685]: 2023-01-14 11:11:32,159: Updating sshd_config
Jan 14 11:11:32 Balboa initrock[685]: 2023-01-14 11:11:32,159: SSHD_CONFIG Customization
Jan 14 11:11:32 Balboa initrock[685]: 2023-01-14 11:11:32,160: sshd_config already has the updates. Leaving it unchanged.
Jan 14 11:11:32 Balboa initrock[685]: 2023-01-14 11:11:32,160: Running app database migrations...
Jan 14 11:11:32 Balboa initrock[685]: Traceback (most recent call last):
Jan 14 11:11:32 Balboa initrock[685]:   File "/opt/rockstor/bin/initrock", line 40, in <module>
Jan 14 11:11:32 Balboa initrock[685]:     sys.exit(scripts.initrock.main())
Jan 14 11:11:32 Balboa initrock[685]:   File "/opt/rockstor/src/rockstor/scripts/initrock.py", line 476, in main
Jan 14 11:11:32 Balboa initrock[685]:     run_command(fake_initial_migration_cmd + ["--database=default", "contenttypes"])
Jan 14 11:11:32 Balboa initrock[685]:   File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
Jan 14 11:11:32 Balboa initrock[685]:     raise CommandException(cmd, out, err, rc)
Jan 14 11:11:32 Balboa initrock[685]: system.exceptions.CommandException: Error running a command. cmd = /opt/rockstor/bin/django migrate --noinput --fake-initial --datab>
Jan 14 11:11:32 Balboa systemd[1]: rockstor-pre.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 11:11:32 Balboa systemd[1]: rockstor-pre.service: Failed with result 'exit-code'.
Jan 14 11:11:32 Balboa systemd[1]: Failed to start Tasks required prior to starting Rockstor.
Jan 14 11:13:13 Balboa.local systemd[1]: Starting Tasks required prior to starting Rockstor...
Jan 14 11:13:13 Balboa.local initrock[13303]: 2023-01-14 11:13:13,720: Checking for flash and Running flash optimizations if appropriate.
Jan 14 11:13:14 Balboa.local initrock[13303]: 2023-01-14 11:13:14,322: Updating the timezone from the system
Jan 14 11:13:14 Balboa.local initrock[13303]: 2023-01-14 11:13:14,322: system timezone = America/New_York
Jan 14 11:13:14 Balboa.local initrock[13303]: 2023-01-14 11:13:14,323: Updating sshd_config
Jan 14 11:13:14 Balboa.local initrock[13303]: 2023-01-14 11:13:14,323: SSHD_CONFIG Customization
Jan 14 11:13:14 Balboa.local initrock[13303]: 2023-01-14 11:13:14,324: sshd_config already has the updates. Leaving it unchanged.
Jan 14 11:13:14 Balboa.local initrock[13303]: 2023-01-14 11:13:14,324: Running app database migrations...
Jan 14 11:13:14 Balboa.local initrock[13303]: Traceback (most recent call last):
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/bin/initrock", line 40, in <module>
Jan 14 11:13:14 Balboa.local initrock[13303]:     sys.exit(scripts.initrock.main())
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/src/rockstor/scripts/initrock.py", line 476, in main
Jan 14 11:13:14 Balboa.local initrock[13303]:     run_command(fake_initial_migration_cmd + ["--database=default", "contenttypes"])
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
Jan 14 11:13:14 Balboa.local initrock[13303]:     raise CommandException(cmd, out, err, rc)
Jan 14 11:13:14 Balboa.local initrock[13303]: system.exceptions.CommandException: Error running a command. cmd = /opt/rockstor/bin/django migrate --noinput --fake-initial>
Jan 14 11:13:14 Balboa.local systemd[1]: rockstor-pre.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 11:13:14 Balboa.local systemd[1]: rockstor-pre.service: Failed with result 'exit-code'.
Jan 14 11:13:14 Balboa.local systemd[1]: Failed to start Tasks required prior to starting Rockstor.
2 Likes

Hi @mercbot7,

Thanks for the additional details, and sorry for the delay in getting back; I’m unfortunately under a hard deadline at work and free time is difficult for me at the moment. My apologies. I’ll try to help nonetheless.

The only difference I can see between your migrations and those of a working install is the following:

admin
 [X] 0001_initial

This one isn’t applied on my working system:

admin
 [ ] 0001_initial

The admin app is not one we define ourselves (it is a Django "built-in" one), so I'm unsure why its migration is applied in your case. My only guess so far relates to the slight difference between the error you first saw and the manual migration you tried to apply. Indeed, the migration that failed in your original post was the following:

/opt/rockstor/bin/django migrate --noinput --fake-initial --database=default contenttypes

Note that I'm unsure of the exact semantics at the moment, but the idea is that we ask for a fake apply of the initial migrations for the contenttypes app against the default database.

In the migration command you manually ran, it was not specified:

/opt/rockstor/bin/django migrate --noinput --fake-initial 

… in which case it is believed to default to the default database (and to cover all apps). We may thus still need a --fake-initial apply scoped to contenttypes, but I'm not certain that's the cause here.
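If that does turn out to be the cause, a manual recovery could look like the following sketch. This is hedged and unverified: it only assumes Django's standard `--fake` behaviour (record a migration as applied without running its SQL); the paths, app label, and migration name are taken from the logs above.

```shell
# Hedged sketch, NOT a verified fix. Django's migrate command accepts --fake
# to record a migration as applied without running its SQL, which fits a
# "column ... already exists" error. The guard keeps this harmless on a
# machine without a Rockstor install.
DJANGO=/opt/rockstor/bin/django
if [ -x "$DJANGO" ]; then
    # Record the failing migration as applied (its schema change already exists):
    "$DJANGO" migrate --database=default --fake oauth2_provider 0002_08_updates
    # Then let the remaining migrations run normally:
    "$DJANGO" migrate --noinput --fake-initial
else
    echo "Rockstor django wrapper not found; skipping."
fi
```

Back up the database before trying anything like this.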

Would you have any idea here if you have a minute, @phillxnet?

@mercbot7, out of curiosity, was the Rockstor v4.1.0-0 install a fresh one (from the ISO), or a distribution upgrade from a previous version?
Also, could you verify the version of postgres that you are running, just in case?

postgres --version
1 Like

@Flox @phillxnet, you do not need to apologize by any means; you are helping me out, and on a Saturday at that. I understand and appreciate that you have other jobs, and that is not lost on me. I do not believe I have lost any data, so I am not super worried.

This was a fresh install of 4.0, into which I imported an export from 3.9.2, then upgraded to 4.1 through the automatic updates.

As I mentioned, both postgresql 10 and 13 are installed. alternatives is set to auto and is using postgresql 13.

postgres --version
postgres (PostgreSQL) 13.9

alternatives --config postgresql
There are 2 choices for the alternative postgresql (providing /usr/lib/postgresql).

  Selection    Path                   Priority   Status
------------------------------------------------------------
* 0            /usr/lib/postgresql13   130       auto mode
  1            /usr/lib/postgresql10   100       manual mode
  2            /usr/lib/postgresql13   130       manual mode

Press <enter> to keep the current choice[*], or type selection number:

As for the commands I used: I thought I had simply copy-pasted from the systemctl status rockstor-pre.service output, and now I am thinking there was more I missed there?

1 Like

OK, so here is a better output of the current status. Boy, is my face red.

systemctl status rockstor-pre.service > pre_out.txt
Balboa:~ # cat pre_out.txt
● rockstor-pre.service - Tasks required prior to starting Rockstor
     Loaded: loaded (/etc/systemd/system/rockstor-pre.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Sat 2023-01-14 11:13:14 EST; 2 days ago
    Process: 13303 ExecStart=/opt/rockstor/bin/initrock (code=exited, status=1/FAILURE)
   Main PID: 13303 (code=exited, status=1/FAILURE)

Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/bin/initrock", line 40, in <module>
Jan 14 11:13:14 Balboa.local initrock[13303]:     sys.exit(scripts.initrock.main())
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/src/rockstor/scripts/initrock.py", line 476, in main
Jan 14 11:13:14 Balboa.local initrock[13303]:     run_command(fake_initial_migration_cmd + ["--database=default", "contenttypes"])
Jan 14 11:13:14 Balboa.local initrock[13303]:   File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
Jan 14 11:13:14 Balboa.local initrock[13303]:     raise CommandException(cmd, out, err, rc)
Jan 14 11:13:14 Balboa.local initrock[13303]: system.exceptions.CommandException: Error running a command. cmd = /opt/rockstor/bin/django migrate --noinput --fake-initial --database=default contenttypes. rc = 1. stdout = ['']. stderr = ['Traceback (most recent call last):', '  File "/opt/rockstor/bin/django", line 44, in <module>', "    sys.exit(djangorecipe.manage.main('rockstor.settings'))", '  File "/opt/rockstor/eggs/djangorecipe-1.9-py2.7.egg/djangorecipe/manage.py", line 9, in main', '    management.execute_from_command_line(sys.argv)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/__init__.py", line 354, in execute_from_command_line', '    utility.execute()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/__init__.py", line 346, in execute', '    self.fetch_command(subcommand).run_from_argv(self.argv)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/base.py", line 394, in run_from_argv', '    self.execute(*args, **cmd_options)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/base.py", line 445, in execute', '    output = self.handle(*args, **options)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/management/commands/migrate.py", line 93, in handle', '    executor = MigrationExecutor(connection, self.migration_progress_callback)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/executor.py", line 19, in __init__', '    self.loader = MigrationLoader(self.connection)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/loader.py", line 47, in __init__', '    self.build_graph()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/loader.py", line 191, in build_graph', '    self.applied_migrations = recorder.applied_migrations()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/recorder.py", line 59, in applied_migrations', '    
self.ensure_schema()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/migrations/recorder.py", line 49, in ensure_schema', '    if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 164, in cursor', '    cursor = self.make_cursor(self._cursor())', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 135, in _cursor', '    self.ensure_connection()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 130, in ensure_connection', '    self.connect()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__', '    six.reraise(dj_exc_type, dj_exc_value, traceback)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 130, in ensure_connection', '    self.connect()', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 119, in connect', '    self.connection = self.get_new_connection(conn_params)', '  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection', '    connection = Database.connect(**conn_params)', '  File "/opt/rockstor/eggs/psycopg2-2.7.4-py2.7-linux-x86_64.egg/psycopg2/__init__.py", line 130, in connect', '    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)', 'django.db.utils.OperationalError: could not connect to server: No such file or directory', '\tIs the server running locally and accepting', '\tconnections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?', '', '']
Jan 14 11:13:14 Balboa.local systemd[1]: rockstor-pre.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 11:13:14 Balboa.local systemd[1]: rockstor-pre.service: Failed with result 'exit-code'.
Jan 14 11:13:14 Balboa.local systemd[1]: Failed to start Tasks required prior to starting Rockstor.
2 Likes

Oh, thanks for double-checking your log output; it actually shows something I didn't anticipate:

django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

There seems to be a problem with postgresql itself, then…
What does the following show?

systemctl status postgresql

After a reboot the postgresql server is not running, and I started it using: systemctl start postgresql

When I run:

systemctl status postgresql
● postgresql.service - PostgreSQL database server
     Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
     Active: active (running) since Sat 2023-01-14 11:20:38 EST; 2 days ago
    Process: 13632 ExecStart=/usr/share/postgresql/postgresql-script start (code=exited, status=0/SUCCESS)
   Main PID: 13650 (postgres)
      Tasks: 8
     CGroup: /system.slice/postgresql.service
             ├─13650 /usr/lib/postgresql10/bin/postgres -D /var/lib/pgsql/data
             ├─13651 postgres: logger process
             ├─13653 postgres: checkpointer process
             ├─13654 postgres: writer process
             ├─13655 postgres: wal writer process
             ├─13656 postgres: autovacuum launcher process
             ├─13657 postgres: stats collector process
             └─13658 postgres: bgworker: logical replication launcher

Jan 14 11:20:38 Balboa.local systemd[1]: Starting PostgreSQL database server...
Jan 14 11:20:38 Balboa.local postgresql-script[13632]:  Your database files were created by PostgreSQL version 10.
Jan 14 11:20:38 Balboa.local postgresql-script[13632]:  Using the executables in /usr/lib/postgresql10/bin.
Jan 14 11:20:38 Balboa.local postgresql-script[13650]: 2023-01-14 08:20:38.412 PST [13650] LOG:  listening on IPv4 address "127.0.0.1", port 5432
Jan 14 11:20:38 Balboa.local postgresql-script[13650]: 2023-01-14 08:20:38.414 PST [13650] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Jan 14 11:20:38 Balboa.local postgresql-script[13650]: 2023-01-14 08:20:38.418 PST [13650] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
Jan 14 11:20:38 Balboa.local postgresql-script[13650]: 2023-01-14 08:20:38.423 PST [13650] LOG:  redirecting log output to logging collector process
Jan 14 11:20:38 Balboa.local postgresql-script[13650]: 2023-01-14 08:20:38.423 PST [13650] HINT:  Future log output will appear in directory "log".
Jan 14 11:20:38 Balboa.local systemd[1]: Started PostgreSQL database server.

That may be a very good clue here. We can see in your output that the service is disabled, which explains why it is not running after you reboot.
I would try to enable it:

systemctl enable postgresql

and then either reboot the machine (to make sure it all works), or simply restart the rockstor service:

systemctl restart rockstor

Should it be running postgresql 10 or 13?

Mine is running on 10, like yours:

$ systemctl status postgresql
● postgresql.service - PostgreSQL database server
     Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2023-01-16 09:15:08 EST; 3h 9min ago
   Main PID: 692 (postgres)
      Tasks: 8
     CGroup: /system.slice/postgresql.service
             ├─692 /usr/lib/postgresql10/bin/postgres -D /var/lib/pgsql/data
             ├─738 postgres: logger process
             ├─764 postgres: checkpointer process
             ├─765 postgres: writer process
             ├─766 postgres: wal writer process
             ├─767 postgres: autovacuum launcher process
             ├─768 postgres: stats collector process
             └─769 postgres: bgworker: logical replication launcher

Jan 16 09:15:08 rockdevstable systemd[1]: Starting PostgreSQL database server...
Jan 16 09:15:08 rockdevstable postgresql-script[625]:  Your database files were created by PostgreSQL version 10.
Jan 16 09:15:08 rockdevstable postgresql-script[625]:  Using the executables in /usr/lib/postgresql10/bin.
Jan 16 09:15:08 rockdevstable postgresql-script[692]: 2023-01-16 06:15:08.422 PST [692] LOG:  listening on IPv4 address "127.0.0.1", port 5432
Jan 16 09:15:08 rockdevstable postgresql-script[692]: 2023-01-16 06:15:08.439 PST [692] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Jan 16 09:15:08 rockdevstable postgresql-script[692]: 2023-01-16 06:15:08.467 PST [692] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
Jan 16 09:15:08 rockdevstable postgresql-script[692]: 2023-01-16 06:15:08.489 PST [692] LOG:  redirecting log output to logging collector process
Jan 16 09:15:08 rockdevstable postgresql-script[692]: 2023-01-16 06:15:08.489 PST [692] HINT:  Future log output will appear in directory "log".
Jan 16 09:15:08 rockdevstable systemd[1]: Started PostgreSQL database server.
1 Like

@Flox and Boom!! It is running.

So, it sounds like the postgresql service got disabled somehow during an upgrade? I did not do anything to the system other than running the updates via the RSS icon before the GUI and Rockstor stopped starting.

2 Likes

I am wondering if a quick script folks could run at the command line to gather this data in an event like this might be helpful? Kind of like gathering a support bundle: check for services and service configuration, versions, main basic config settings, etc.?
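To sketch what such a support-bundle gatherer might look like (a hypothetical script, not an existing Rockstor tool; the command list and output path are illustrative):

```shell
#!/bin/sh
# Hedged sketch of a diagnostics gatherer along the lines discussed above.
# Each command's stderr is captured too, and failures (e.g. a missing
# command) don't abort the rest of the collection.
out="/tmp/rockstor-support-$(date +%Y%m%d-%H%M%S).txt"
{
    echo "== rockstor package ==";  zypper --no-refresh info rockstor 2>&1 || true
    echo "== failed units ==";      systemctl list-units --failed 2>&1 || true
    echo "== key services ==";      systemctl --no-pager status postgresql rockstor-pre rockstor 2>&1 || true
    echo "== postgres version =="; postgres --version 2>&1 || true
    echo "== btrfs pools ==";       btrfs filesystem show 2>&1 || true
    echo "== rockstor-pre log =="; journalctl -b0 -u rockstor-pre --no-pager 2>&1 || true
} > "$out"
echo "Support bundle written to $out"
```

Run as root and attach the resulting file to a forum post; a webUI button could wrap the same commands.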

3 Likes

That is a very good idea. We should also have an easy way to trigger it from the webUI, in case the latter is still accessible.
Would you mind creating an issue on our GitHub repo describing what you just described, and linking to your post here for reference?

This would help us not forget about it and allow anyone who wants to tackle it to do so.

2 Likes

@Flox and @phillxnet, You guys rock (pun intended :-)). Thank you so much.

Per your request here is a github issue for consideration:

3 Likes