Restoring Config from 3.9-57 to 4.0.7 - most rock-ons failed

Hi,

I’ve taken the plunge: with the new 2242 NVMe SSD that arrived today (which I can screw down properly to the motherboard), I installed 4.0.7 from my kiwi-built ISO CD, having first disconnected my data drives.

The install seemed really smooth (although it looks like memtest86+ is not added to the ISO build by kiwi).

I then re-connected the data drives, imported the disks/pools/shares, and restored the config from the backup I’d made before I started. After at least an hour, only Plex seemed to have been re-installed (I had Nextcloud, Jellyfin, Subsonic, MariaDB and Syncthing installed as well).

I was only evaluating JellyFin and Subsonic, but I do need Nextcloud and Syncthing back.

I tried to download the logs from the GUI, but that’s not working, so I used scp.

rockstor.log indicates Plex was the only Rock-on restore that worked; the others seem to have been abandoned as having timed out. I can attach the logs if that would be useful.

Many thanks in advance for any help,

Best Wishes,

Matthew


Trying the downgrade of docker described by @sanderweel here:

With a zypper addlock docker, mentioned by @phillxnet, for good measure.
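Roughly the steps (a sketch; pick the actual version string from zypper’s own listing rather than my placeholder):

  zypper se -s docker                                 # list the docker versions the repos offer
  zypper install --oldpackage docker=<older-version>  # downgrade to the last known-good build
  zypper addlock docker                               # stop zypper upgrading it again
  systemctl restart docker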

EDIT

So far

  • HTTP to HTTPS redirect
  • Jellyfin
  • Nextcloud official
  • Plex
  • Subsonic
  • Syncthing

restored (Syncthing not yet started).

And the Rockstor GUI is a lot more responsive!

Started Syncthing manually, and the WebUI is up. There are some errors about missing folders that I need to check out.

Progress!


I think I might have had some symbolic links going on, which has foxed things?

Share: syncthing-data (/mnt2/syncthing-data)
is mapped to /home/syncthing/Sync for the Syncthing Rock-on.

My backup folders are actually in Share Backups (/mnt2/Backups).

There is an empty folder /mnt2/syncthing-data/Backups

Syncthing is looking for /home/syncthing/Sync/Backups/<my backup folders>
when they’re actually in /mnt2/Backups/<my backup folders>
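A quick way to see the mismatch from the host (the container is named syncthing, as in the logs further down):

  docker exec syncthing ls /home/syncthing/Sync/Backups  # empty: backed by /mnt2/syncthing-data/Backups on the host
  ls /mnt2/Backups                                       # the actual backup folders live here, on a different share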

I’ll mount the old drive in my desktop PC to try to find out what state it was in.
Should be fixable. In the morning. :slight_smile:

I’m still getting lots of errors in the logs, also one for the morning.

Hi again, @HarryHUK.

Thanks for the report, and glad you could fix most of it.

If you can, it’s usually rather helpful to post these errors here so that we can hopefully see what is really happening. I’ll try to provide some general information in case it triggers an “of course!” moment on your side, or guides you in the right direction.

If your Syncthing Rock-on is still installed, I would first have a look at its installation settings (and even post a screenshot here if you’d like) to see whether it looks the way it should. Theoretically, the settings should be exactly the same as they were when you took the config backup; so that would be the first thing to verify.
I would actually point you to our documentation on “Adding Storage to a Rock-on” as it includes a screenshot of said settings for the Syncthing Rock-on:
https://rockstor.com/docs/interface/overview.html#add-storage

As you can see in this screenshot, we have (in this example) the share syncthing_conf mapped to the /config path in the Docker container. As the syncthing_conf share exists on the Rockstor system at the /mnt2/syncthing_conf path, we essentially have /mnt2/syncthing_conf seen within the syncthing container as /config.

In case this wasn’t selected as the share for the /home/syncthing/Sync volume, you can use “Add storage volume” to add it to the container. You may have already done so, though, so please pardon my mistake if any. From your post, however, it seems you may have selected your share named syncthing-data as the volume for the /home/syncthing/Sync/ mapping. If your backup folders actually are in /mnt2/Backups/<my backup folders> as you describe, then you may need to select the share named Backups as the volume mapping for /home/syncthing/Sync/ (labeled Data Storage during the Rock-on installation wizard).
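For reference, the resulting bind mounts would look roughly like this (a sketch based on the docker run command Rockstor logs; the last -v line is what an extra “Add storage” mapping of the Backups share would produce):

  /usr/bin/docker run -d --restart=unless-stopped --name syncthing \
    -v /mnt2/syncthing-config:/config \
    -v /mnt2/syncthing-data:/home/syncthing/Sync \
    -v /mnt2/Backups:/home/syncthing/Sync/Backups \
    ...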

I hope this helps and that I didn’t confuse myself too much (and you in the process).


This one is curious, as we had a problem like that with our Logs Manager in an earlier Rockstor 4 version, but we fixed it in 4.0.7:
GitHub issue:
https://github.com/rockstor/rockstor-core/issues/2281

PR:
https://github.com/rockstor/rockstor-core/pull/2293

And Rockstor 4.0.7 Changelog:

I can’t try to replicate it at the moment, but feel free to create a new thread with more details on your Rockstor version, the logs in question, and the exact behavior you encounter…


Will try downloading the logs again. What happened was: I selected the logs to download, clicked the orange button to zip them up, clicked the download button, which said they were ready, and… nothing.
I think it worked before I rolled back docker. I’ll look through the logs as I post more.

I have separate shares for Backups, Music, Pictures, and Video, which appear under /mnt2 with content in them.
I don’t use Syncthing just for Backups, so I don’t want to make Backups the syncthing-data share.
I think what I had done, after installing the Rock-on (in 3.9.2), was to create links under syncthing-data (/home/syncthing/Sync), because Syncthing is configured as if everything is in syncthing-data (I’ll check the config file), but the content is elsewhere. Unfortunately I didn’t back up the content of syncthing-config, but I’ve no reason to suspect that has magically changed: it would either be there unchanged, or not there.

I suspect what I did to achieve this was outside the assumptions of the restore, so I have to re-do the symbolic links. Does that make sense?
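If it was a host-side symlink, that would also explain the dangling folder, since symlinks don’t survive the container boundary (a sketch, with the paths from above):

  # A symlink created on the host inside the data share:
  ln -s /mnt2/Backups /mnt2/syncthing-data/Backups
  # appears inside the container as /home/syncthing/Sync/Backups -> /mnt2/Backups,
  # but /mnt2/Backups is not bind-mounted into the container, so the link dangles there.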

(I looked at my old system disk, and naturally as it’s in a different PC now, the /mnt2/Backups dir is empty, because the share is still on the datapool on my rockstor NAS)

Many thanks again for following up.

Matthew


This seems to have done the trick! :smiley:
I’ll go back and do similar for Music, Pictures, and Video.

What is good is that this is in the UI and I don’t have to go around it to achieve what I want.
Could it be that additional storage mappings are either not backed up or not restored?
(Looking at the saved JSON config backup file, the mappings look saved, but I’m not an expert in the structure of the data; the mappings look tied to a container, rather than directly to the Rock-on?)

Many, many thanks! :smiley:


Should I raise an issue that the additional Storage Mappings weren’t restored?

Let’s try to get to the bottom of this here first, so that we know exactly what the problem was and can establish reproducers. I’ll try to reproduce it later tonight and see what I get.
The reason I’m saying this is that we did test this particular scenario and made sure that user-added storage was restored too.
I’ll get back to you later tonight (within the next 10 hrs) with more details.


Could it just be a consequence of the failure induced by the docker/IPv6 problem?

Hi @HarryHUK,

I unfortunately have not been able to find the time to try reproducing that behavior (very busy at work lately), but I can still try to give you additional pointers on what could have happened, in case you’re interested. I’ll thus go back to how the config backup & restore works, as that’s an important part.

First, here’s the link to our related documentation, for those who have not yet seen it:
https://rockstor.com/docs/interface/system/config_backup.html

The gist of the process is as follows:

  • Backup: export select parts of Rockstor’s database, which thus include the settings and configuration changes the user might have made to those parts.
  • Restore: use Rockstor’s API endpoints to re-instate these settings/configurations into Rockstor’s database, the same way as when the user originally made these changes using the web-UI.

This means that these select parts of the Rockstor database will be in the same state as they were at the time the backup was taken. The config backup is thus a snapshot of these select parts of Rockstor’s database.

Thus, if something seems to not have been returned to what it was, the first thing to do would be to verify it was indeed included in the backup. As you mentioned that you already looked into your backup, I would be curious to know whether you can find the “Added storage” that was problematic in your case in this backup.
The way these specific “added storage” are identified in Rockstor’s database is by the "uservol" boolean flag; see the section describing the DVolume model in our wiki post. As a result, you could look for the "uservol": true string in your config backup JSON and see if you can find the share in question there.
Note that these are defined at the container level rather than rock-on level, as this is how volumes are bound in Docker. You can find more details in our wiki post linked above on that if you’re interested.
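For instance, a quick way to check from the command line (assuming your backup file is named backup-config.json; adjust to your actual filename):

  # Count the user-added ("Add storage") volume mappings in the backup:
  grep -c '"uservol": true' backup-config.json
  # Pretty-print the backup to eyeball the storageadmin.dvolume entries:
  python -m json.tool backup-config.json | less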

I’m glad you could get your system back to the way you want it anyway! These additional storage mappings (the “Add Storage” option in a Rock-on’s post-install customization) are the best way to share data (Shares) between Rock-ons, and this is what fits the “Docker way”. I would thus recommend going this route if you want/need to share Shares between two or more Rock-ons.

Hope this helps!


Hi @Flox,

Many thanks for your efforts. Here are (I think) the relevant parts from the JSON config backup. They look to all be there (particularly Backups).

{"fields": {"launch_order": 1, "rockon": 34, "uid": null, "dimage": 36, "name": "syncthing"}, "model": "storageadmin.dcontainer", "pk": 36},

{"fields": {"repo": "na", "tag": "latest", "name": "linuxserver/syncthing"}, "model": "storageadmin.dimage", "pk": 36},

{"fields": {"container": 36, "description": "Choose a Share for configuration. Eg: create a Share called syncthing-config for this purpose alone.", "uservol": false, "share": 13, "label": "Config Storage", "min_size": 1073741824, "dest_dir": "/config"}, "model": "storageadmin.dvolume", "pk": 64},

{"fields": {"container": 36, "description": "Choose a Share for all incoming data. Eg: create a Share called syncthing-data for this purpose alone.", "uservol": false, "share": 14, "label": "Data Storage", "min_size": null, "dest_dir": "/home/syncthing/Sync"}, "model": "storageadmin.dvolume", "pk": 65},

// (No entry for Videos here, I don't think I had added it to Syncthing.)

{"fields": {"container": 36, "description": null, "uservol": true, "share": 4, "label": null, "min_size": null, "dest_dir": "/home/syncthing/Sync/Music"}, "model": "storageadmin.dvolume", "pk": 142},
{"fields": {"container": 36, "description": null, "uservol": true, "share": 6, "label": null, "min_size": null, "dest_dir": "/home/syncthing/Sync/Backups"}, "model": "storageadmin.dvolume", "pk": 143},
{"fields": {"container": 36, "description": null, "uservol": true, "share": 3, "label": null, "min_size": null, "dest_dir": "/home/syncthing/Sync/Pictures"}, "model": "storageadmin.dvolume", "pk": 144},

{"fields": {"pqgroup_rusage": 0, "group": "root", "name": "Pictures", "perms": "755", "pqgroup": "-1/-1", "eusage": 3307, "uuid": null, "pqgroup_eusage": 0, "compression_algo": "no", "owner": "root", "replica": false, "qgroup": "0/260", "toc": "2021-08-07T15:07:33.488Z", "subvol_name": "Pictures", "rusage": 46703575, "pool": 2, "size": 1073741824}, "model": "storageadmin.share", "pk": 3},
{"fields": {"pqgroup_rusage": 0, "group": "admin", "name": "Music", "perms": "775", "pqgroup": "-1/-1", "eusage": 288, "uuid": null, "pqgroup_eusage": 0, "compression_algo": "no", "owner": "root", "replica": false, "qgroup": "0/261", "toc": "2021-08-07T15:07:33.718Z", "subvol_name": "Music", "rusage": 34036776, "pool": 2, "size": 1073741824}, "model": "storageadmin.share", "pk": 4},
{"fields": {"pqgroup_rusage": 0, "group": "root", "name": "Videos", "perms": "755", "pqgroup": "2015/894", "eusage": 2703, "uuid": null, "pqgroup_eusage": 0, "compression_algo": "no", "owner": "root", "replica": false, "qgroup": "0/262", "toc": "2021-08-07T15:07:33.435Z", "subvol_name": "Videos", "rusage": 1524713390, "pool": 2, "size": 2147483648}, "model": "storageadmin.share", "pk": 5},
{"fields": {"pqgroup_rusage": 0, "group": "root", "name": "Backups", "perms": "755", "pqgroup": "-1/-1", "eusage": 12268339, "uuid": null, "pqgroup_eusage": 0, "compression_algo": "no", "owner": "root", "replica": false, "qgroup": "0/263", "toc": "2021-08-07T15:07:33.813Z", "subvol_name": "Backups", "rusage": 2888365506, "pool": 2, "size": 4294967296}, "model": "storageadmin.share", "pk": 6},

{"fields": {"pqgroup_rusage": 0, "group": "admin", "name": "syncthing-config", "perms": "755", "pqgroup": "-1/-1", "eusage": 771409, "uuid": null, "pqgroup_eusage": 0, "compression_algo": "no", "owner": "backer", "replica": false, "qgroup": "0/314", "toc": "2021-08-07T15:07:33.750Z", "subvol_name": "syncthing-config", "rusage": 771409, "pool": 2, "size": 20971520}, "model": "storageadmin.share", "pk": 13},

{"fields": {"pqgroup_rusage": 39720058, "group": "admin", "name": "syncthing-data", "perms": "775", "pqgroup": "2015/892", "eusage": 39720058, "uuid": null, "pqgroup_eusage": 39720058, "compression_algo": "no", "owner": "backer", "replica": false, "qgroup": "0/315", "toc": "2021-08-07T15:07:33.040Z", "subvol_name": "syncthing-data", "rusage": 39720058, "pool": 2, "size": 1073741824}, "model": "storageadmin.share", "pk": 14},

Here’s the part of rockstor.log where the restore failed the first time (due to the docker/IPv6 problem):

(I’ll look shortly for the logs from where I had downgraded to the previous working docker version and tried again.)

[07/Aug/2021 19:58:01] INFO [storageadmin.tasks:55] Now executing Huey task [restore_rockons], id: 861b954f-6341-4826-8dee-372d9502dd2d.
[07/Aug/2021 19:58:01] INFO [storageadmin.views.config_backup:250] Started restoring rock-ons.
[07/Aug/2021 19:58:14] INFO [storageadmin.views.config_backup:57] Successfully created resource: https://localhost/api/rockons/update. Payload: {}
[07/Aug/2021 19:58:14] INFO [storageadmin.views.config_backup:252] The following rock-ons will be restored: {34: {'rname': u'Syncthing', 'new_rid': 52}, 67: {'rname': u'Nextcloud-Official', 'new_rid': 28}, 9: {'rname': u'Subsonic', 'new_rid': 5}, 66: {'rname': u'Plex', 'new_rid': 8}, 52: {'rname': u'HTTP to HTTPS redirect', 'new_rid': 26}, 59: {'rname': u'Jellyfin', 'new_rid': 16}, 63: {'rname': u'MariaDB', 'new_rid': 34}}.
[07/Aug/2021 19:58:14] INFO [storageadmin.views.config_backup:303] Send install command to the rock-ons api for the following rock-on: Syncthing
[07/Aug/2021 19:58:15] INFO [storageadmin.views.config_backup:57] Successfully created resource: https://localhost/api/rockons/52/install. Payload: {'rname': u'Syncthing', 'cc': {}, 'devices': {}, 'new_rid': 52, 'environment': {u'PUID': u'1002', u'PGID': u'1000'}, 'shares': {u'syncthing-data': u'/home/syncthing/Sync', u'syncthing-config': u'/config'}, 'ports': {22000: 22000, 8384: 8384, 21027: 21027}, 'containers': [36]}
[07/Aug/2021 19:58:16] INFO [storageadmin.tasks:55] Now executing Huey task [install], id: 8b3c5deb-8851-477f-958f-8b4b2198c321.
[07/Aug/2021 19:59:03] ERROR [storageadmin.views.config_backup:295] Waited too long for the previous rock-on to install...Stop trying to install the rock-on (Syncthing)
[07/Aug/2021 19:59:03] INFO [storageadmin.views.config_backup:303] Send stop command to the rock-ons api for the following rock-on: Syncthing
[07/Aug/2021 19:59:03] INFO [storageadmin.views.config_backup:57] Successfully created resource: https://localhost/api/rockons/52/stop. Payload: {'rname': u'Syncthing', 'cc': {}, 'labels': {}, 'devices': {}, 'new_rid': 52, 'environment': {u'PUID': u'1002', u'PGID': u'1000'}, 'shares': {u'/home/syncthing/Sync/Pictures': u'Pictures', u'/home/syncthing/Sync/Backups': u'Backups', u'/home/syncthing/Sync/Music': u'Music'}, 'ports': {22000: 22000, 8384: 8384, 21027: 21027}, 'containers': [36]}
[07/Aug/2021 19:59:51] ERROR [storageadmin.views.config_backup:295] Waited too long for the previous rock-on to install...Stop trying to install the rock-on (Syncthing)
[07/Aug/2021 19:59:51] INFO [storageadmin.views.config_backup:303] Send update command to the rock-ons api for the following rock-on: Syncthing
[07/Aug/2021 19:59:51] ERROR [storageadmin.util:45] Exception: Another rock-on is in state transition. Multiple simultaneous Rock-on transitions are not supported. Please try again later.
Traceback (most recent call last):
  File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 68, in run_for_one
    self.accept(listener)
  File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 27, in accept
    client, addr = listener.accept()
  File "/usr/lib64/python2.7/socket.py", line 206, in accept
    sock, addr = self._sock.accept()
error: [Errno 11] Resource temporarily unavailable
[07/Aug/2021 19:59:51] ERROR [storageadmin.views.config_backup:63] Exception occurred while creating resource: https://localhost/api/rockons/52/update. Payload: {'rname': u'Syncthing', 'cc': {}, 'labels': {}, 'devices': {}, 'new_rid': 52, 'environment': {u'PUID': u'1002', u'PGID': u'1000'}, 'shares': {u'/home/syncthing/Sync/Pictures': u'Pictures', u'/home/syncthing/Sync/Backups': u'Backups', u'/home/syncthing/Sync/Music': u'Music'}, 'ports': {22000: 22000, 8384: 8384, 21027: 21027}, 'containers': [36]}. Exception: 500 Server Error: INTERNAL SERVER ERROR. Moving on.
[07/Aug/2021 20:00:35] ERROR [system.osi:199] non-zero code(125) returned by command: ['/usr/bin/docker', 'run', '-d', '--restart=unless-stopped', '--name', 'syncthing', '-v', '/mnt2/syncthing-config:/config', '-v', '/mnt2/syncthing-data:/home/syncthing/Sync', '-v', '/etc/localtime:/etc/localtime:ro', '-p', '22000:22000/tcp', '-p', '8384:8384/tcp', '-p', '21027:21027/udp', '-e', 'PUID=1002', '-e', 'PGID=1000', 'linuxserver/syncthing:latest']. output: ['8dd6ae5e88e07161e0136ef8291bf6e2b2762b95ff4c4bd5281e3d979ab27ab1', ''] error: ['docker: Error response from daemon: driver failed programming external connectivity on endpoint syncthing (e7bf7dc00df967c6d462a665504cb3099f272757af3cd08ef17512d283939190): Error starting userland proxy: listen tcp6 [::]:22000: socket: address family not supported by protocol.', '']
[07/Aug/2021 20:00:35] ERROR [storageadmin.views.rockon_helpers:207] Error running a command. cmd = /usr/bin/docker run -d --restart=unless-stopped --name syncthing -v /mnt2/syncthing-config:/config -v /mnt2/syncthing-data:/home/syncthing/Sync -v /etc/localtime:/etc/localtime:ro -p 22000:22000/tcp -p 8384:8384/tcp -p 21027:21027/udp -e PUID=1002 -e PGID=1000 linuxserver/syncthing:latest. rc = 125. stdout = ['8dd6ae5e88e07161e0136ef8291bf6e2b2762b95ff4c4bd5281e3d979ab27ab1', '']. stderr = ['docker: Error response from daemon: driver failed programming external connectivity on endpoint syncthing (e7bf7dc00df967c6d462a665504cb3099f272757af3cd08ef17512d283939190): Error starting userland proxy: listen tcp6 [::]:22000: socket: address family not supported by protocol.', '']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 204, in install
    globals().get("{}_install".format(rockon.name.lower()), generic_install)(rockon)
  File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 390, in generic_install
    run_command(cmd, log=True)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 201, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/docker run -d --restart=unless-stopped --name syncthing -v /mnt2/syncthing-config:/config -v /mnt2/syncthing-data:/home/syncthing/Sync -v /etc/localtime:/etc/localtime:ro -p 22000:22000/tcp -p 8384:8384/tcp -p 21027:21027/udp -e PUID=1002 -e PGID=1000 linuxserver/syncthing:latest. rc = 125. stdout = ['8dd6ae5e88e07161e0136ef8291bf6e2b2762b95ff4c4bd5281e3d979ab27ab1', '']. stderr = ['docker: Error response from daemon: driver failed programming external connectivity on endpoint syncthing (e7bf7dc00df967c6d462a665504cb3099f272757af3cd08ef17512d283939190): Error starting userland proxy: listen tcp6 [::]:22000: socket: address family not supported by protocol.', '']
[07/Aug/2021 20:00:35] INFO [storageadmin.tasks:63] Task [install] completed OK
[07/Aug/2021 20:00:35] INFO [storageadmin.tasks:55] Now executing Huey task [stop], id: baa7efb8-b793-4b68-8736-30a1083264e5.
[07/Aug/2021 20:00:36] INFO [storageadmin.tasks:63] Task [stop] completed OK