Poetry/Can't Delete Rock-on

I tried to delete Watchtower Official, and it’s stuck on deleting. I am on the 4.5.8 testing channel. I followed the new instructions for using poetry to force-delete, but poetry does not run at all. I can’t turn on any of my other rock-ons because it states that another is in transition.

How do I get poetry to work, or how do I scrub this out of the database to remove the bad uninstall?

@njmike73 Hello again.
Re:
We recently updated our docs on that front:
https://rockstor.com/docs/interface/overview.html#force-uninstall-of-a-rock-on

Running this script (as the ‘root’ user) without arguments will print its usage.

It may well be that you are using a pre-poetry version of Rockstor (for others reading this), or that you are not the root user.

The poetry build system was only adopted from Rockstor version 4.5.4-0 onwards.
The above doc link gives instructions on how to run the delete-rockon for pre and post this version.
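As a rough sketch (the 4.5.4-0 cutover is from the paragraph above; the version string is hard-coded here as an example, not auto-detected), one can compare the installed version against the cutover with `sort -V` to pick the invocation style:

```shell
# Pick the delete-rockon invocation style based on Rockstor version.
# Cutover is 4.5.4-0 per the docs; "4.5.8" below is an example, not detected.
ver="4.5.8"
cutover="4.5.4"
if [ "$(printf '%s\n' "$cutover" "$ver" | sort -V | head -n1)" = "$cutover" ]; then
  style="poetry"       # 4.5.4-0 or newer: cd /opt/rockstor && poetry run delete-rockon <name>
else
  style="pre-poetry"   # older: use the legacy invocation from the linked docs
fi
echo "$style"
```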

Try the same but when logged in as the root user.

Hope that helps.


I have tried this both ways… and I am running as ‘root’. I am using 4.5.8 at the moment, with Leap 15.4 installed, but it still doesn’t want to run poetry, saying the command is not found… and the old way doesn’t have the command either.


Hi @njmike73… that is curious.
Could you try the following and see if you do have everything where it should be?

rleap154:~ # whoami
root

rleap154:~ # which poetry
/root/.local/bin/poetry

rleap154:~ # ls -lah /root/.local/bin/
total 4.0K
drwxr-xr-x 1 root root 12 Jan 26 07:11 .
drwxr-xr-x 1 root root 16 Jan 26 07:11 ..
lrwxrwxrwx 1 root root 43 Jan 26 07:11 poetry -> /root/.local/share/pypoetry/venv/bin/poetry

rleap154:~ # ls -lah /root/.local/share/pypoetry/venv/bin/
total 80K
drwxr-xr-x 1 root root  276 Jan 26 07:11 .
drwxr-xr-x 1 root root   76 Jan 26 07:11 ..
-rw-r--r-- 1 root root 2.2K Jan 26 07:11 activate
-rw-r--r-- 1 root root 1.3K Jan 26 07:11 activate.csh
-rw-r--r-- 1 root root 2.4K Jan 26 07:11 activate.fish
-rwxr-xr-x 1 root root  243 Jan 26 07:11 doesitcache
-rwxr-xr-x 1 root root  258 Jan 26 07:11 easy_install
-rwxr-xr-x 1 root root  258 Jan 26 07:11 easy_install-3.6
-rwxr-xr-x 1 root root  237 Jan 26 07:11 keyring
-rwxr-xr-x 1 root root  271 Jan 26 07:11 normalizer
-rwxr-xr-x 1 root root  248 Jan 26 07:11 pip
-rwxr-xr-x 1 root root  248 Jan 26 07:11 pip3
-rwxr-xr-x 1 root root  248 Jan 26 07:11 pip3.6
-rwxr-xr-x 1 root root  245 Jan 26 07:11 pkginfo
-rwxr-xr-x 1 root root  240 Jan 26 07:11 poetry
-rwxr-xr-x 1 root root  11K Jan 26 07:11 python
-rwxr-xr-x 1 root root  11K Jan 26 07:11 python3
-rwxr-xr-x 1 root root  265 Jan 26 07:11 virtualenv

@njmike73, never mind… you should have all of these, as otherwise you wouldn’t have Rockstor running anyway.

I suspect you are using the “System shell/Shellinabox”, is that correct?
If you connect to your Rockstor box using SSH, you should be ok.

@phillxnet, we seem to have a discrepancy in PATH between SSH and Shellinabox sessions:

From Shellinabox:

rleap154:/home/admin # echo $PATH
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin                                                                                          

From SSH session:

rleap154:/opt/rockstor # echo $PATH
/sbin:/usr/sbin:/usr/local/sbin:/root/.local/bin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin

OK, I will give it a try in a little while and let you know!
Yes, I am using Shellinabox.
I will load up SSH in a bit on my PC.


@Flox Re:

Yes, shellinabox does have some personality of its own. But it is good to know how it’s affecting us in this case. We likely want to update our docs on this front. I think we should try to avoid reaching into shellinabox, at least for the time being.

There may be a difference between su-ing to root in shellinabox and using the following option:

Shell connection service: SSH

But it seems we have an as-yet unreported issue with this option failing to allow login. Likely some more sshd changes of late.

buildvm login: root
command-line line 0: Unsupported option “rhostsrsaauthentication”
command-line line 0: Unsupported option “rsaauthentication”
Session closed.

I think what is required here, if shellinabox is being used, is to request the target user’s environment:

Directory: /home/radmin
Fri 10 Mar 17:49:01 WET 2023
radmin@buildvm:~> su -
Password:
buildvm:~ # echo $PATH
/sbin:/usr/sbin:/usr/local/sbin:/root/.local/bin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin

Note the use of “-” after su to request the target user’s shell, rather than just their privileges:


– normal Web-UI user (radmin) login (shellinabox defaults), “su -” to the root user with root env –

And so in the above we then get our root user’s ‘special’ “/root/.local/bin” directory, and hence have access to the poetry binary.
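To make the PATH difference concrete, here is a small POSIX-sh sketch; the two PATH strings are copied verbatim from the Shellinabox and SSH sessions above, and the check simply asks whether /root/.local/bin (where the poetry symlink lives) is present in each:

```shell
# PATH strings copied from the two sessions shown earlier in this thread.
shellinabox_path="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
ssh_path="/sbin:/usr/sbin:/usr/local/sbin:/root/.local/bin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin"

# Report whether /root/.local/bin (home of the poetry symlink) is in a given PATH.
has_poetry_dir() {
  case ":$1:" in
    *":/root/.local/bin:"*) echo yes ;;
    *) echo no ;;
  esac
}

sib=$(has_poetry_dir "$shellinabox_path")   # the Shellinabox PATH lacks the directory
sshres=$(has_poetry_dir "$ssh_path")        # the SSH login-shell PATH has it
echo "shellinabox: $sib, ssh: $sshres"
```

This is why the same `poetry` command works over SSH (or after `su -`) but reports "command not found" in a plain Shellinabox session.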

[EDIT]
From man su (not on a Rockstor instance, as we are JeOS-based and manuals are not available!):

   -, -l, --login
          Start the shell as a login shell with an environment similar to a real login:

Hope that helps.


This is great information!
I did use SSH and was able to remove the failing rock-on… but alas I ran into something else strange. I had uninstalled Pi-Hole (the example here; I also did the same with OpenVPN) and re-installed using the same shares and same configs (ports, etc.). The system says it’s installing, but they never seem to reinstall, and come back up as available to install - even after about an hour of waiting.

I have refreshed the browser, and clicked refresh on Rock-Ons - what I have not done was reboot, or stop the Rock-On service (I was afraid to do that just yet)…

TO NOTE: I did do the same with Jellyfin and that one installed fine?


I’ll see if I can help figure out what is happening. Getting a good idea of whether a docker container (what Rock-Ons use) is actually running is not always straightforward, and that explains in part why we sometimes see a disconnect between what docker reports and what Rockstor sees (installed and running, or failed).

Let’s focus on Pi-Hole here then, so that we keep things as simple as possible.
First, let’s see what Docker is reporting. Could you log in as root (ssh is always preferred) and ask Docker which containers currently exist?

docker ps -a

If you see a line with the name pi-hole (each line is a different container), then let’s force-delete it to make sure we reset anything that could be confusing the system. You seem to have done that already (multiple times?), but just in case, that would be:

cd /opt/rockstor
poetry run delete-rockon "Pi-Hole"

Then, let’s refresh the Rock-On database: navigate to the Rock-Ons page, and click on the “Update” button in the top right part of the page. After a little while, all the information in the database should be refreshed and you should see the Pi-Hole Rock-On back as a Rock-On available to install.

In your terminal window, prepare to watch the logs. The idea is to look at them as soon as you click the Submit button of the Rock-on install wizard. Unfortunately, the container will not exist before you click that button, so what I usually do is prepare the command in the terminal, go through the Rock-On install wizard, click Submit, and then go back to the terminal and press Enter to watch the docker logs for the container(s) that was just created. In your case, it would be:

docker logs -f pi-hole

If all is working, you should see a lot of lines flowing as the container and the Pi-Hole application start.
If there is a problem, there will most likely be some error displayed. Once these logs are no longer giving you anything, you can stop “watching” them with Ctrl + C, and then have another look at what docker is reporting:

docker ps -a

Sorry for all the manual investigation here, but let’s hope it will help see what is happening.


It’s no problem really!
I did go a little further - it looks like it never creates the container? If I go to images, it has pihole/pihole in the list… When I run

```
docker logs -f pi-hole
```

it comes back with no such container.
I also ran

```
cd /opt/rockstor
poetry run delete-rockon "Pi-Hole"
```

which did seem to find it in the database and deleted it (I ran it again and it was not found, to make sure). It’s almost like it never creates the container?

Here is what I do have running, which I was able to successfully re-install…
Manage containers
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
6286c2421e2e nextcloud-official 0.01% 49.85MiB / 7.447GiB 0.65% 55.3kB / 2.94kB 0B / 0B 6
c8b1b1e0c385 jellyfinserver 0.01% 174.7MiB / 7.447GiB 2.29% 0B / 0B 0B / 0B 33
6771a71ae055 netdata_official 9.83% 181.2MiB / 7.447GiB 2.38% 0B / 0B 0B / 0B 75

and here is my `docker ps -a` output:

CONTAINER ID   IMAGE                         COMMAND                  CREATED             STATUS                    PORTS                                                                          NAMES
6286c2421e2e   nextcloud:latest              "/entrypoint.sh apac…"   About an hour ago   Up 18 minutes             0.0.0.0:8889->80/tcp, 0.0.0.0:8889->80/udp, :::8889->80/tcp, :::8889->80/udp   nextcloud-official
c8b1b1e0c385   linuxserver/jellyfin:latest   "/init"                  About an hour ago   Up 18 minutes                                                                                            jellyfinserver
6771a71ae055   netdata/netdata:latest        "/usr/sbin/run.sh"       22 hours ago        Up 18 minutes (healthy)                                                                                  netdata_official

When you check the rockstor.log (either in the WebUI or directly under /opt/rockstor/var/log/), do you find any errors related to the execution of a docker command? If the original docker command already fails (for some reason), then neither is a container pulled, nor are any docker logs available.

Usually you would see a note when something fails during the install: when you check back under the “All” Rock-on tab for that particular Rock-on, there is a note at the bottom of that entry saying the installation failed and to check the Rockstor logs.


Maybe this helps? I just ran it again…

[10/Mar/2023 19:13:36] INFO [storageadmin.tasks:56] Now executing Huey task [install], id: 8d861dcb-6d24-4bdf-890b-38b4b82cae9b.
[10/Mar/2023 19:13:36] ERROR [system.osi:225] non-zero code(1) returned by command: ['/usr/bin/docker', 'stop', 'pi-hole']. output: [''] error: ['Error response from daemon: No such container: pi-hole', '']
[10/Mar/2023 19:13:36] ERROR [system.osi:225] non-zero code(1) returned by command: ['/usr/bin/docker', 'rm', 'pi-hole']. output: [''] error: ['Error: No such container: pi-hole', '']
[10/Mar/2023 19:13:37] ERROR [system.osi:225] non-zero code(127) returned by command: ['/usr/bin/docker', 'run', '-d', '--restart=unless-stopped', '--name', 'pi-hole', '-v', '/mnt2/dnsmasq-config:/etc/dnsmasq.d', '-v', '/mnt2/pihole-config:/etc/pihole', '-v', '/etc/localtime:/etc/localtime:ro', '-p', '83:80/tcp', '-p', '83:80/udp', '--cap-add', 'NET_ADMIN', '--dns', '127.0.0.1', '--dns', '8.8.8.8', '--net', 'host', '-e', 'IPv6=False', '-e', 'FTLCONF_LOCAL_IPV4=192.168.10.60', '-e', 'WEB_PORT=83', '-e', 'WEBPASSWORD=shit888Thomas', 'pihole/pihole:latest']. output: [''] error: ['WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.', 'docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.', "See 'docker run --help'.", '']
[10/Mar/2023 19:13:37] ERROR [storageadmin.views.rockon_helpers:207] Error running a command. cmd = /usr/bin/docker run -d --restart=unless-stopped --name pi-hole -v /mnt2/dnsmasq-config:/etc/dnsmasq.d -v /mnt2/pihole-config:/etc/pihole -v /etc/localtime:/etc/localtime:ro -p 83:80/tcp -p 83:80/udp --cap-add NET_ADMIN --dns 127.0.0.1 --dns 8.8.8.8 --net host -e IPv6=False -e FTLCONF_LOCAL_IPV4=192.168.10.60 -e WEB_PORT=83 -e WEBPASSWORD=shit888Thomas pihole/pihole:latest. rc = 127. stdout = ['']. stderr = ['WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.', 'docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.', "See 'docker run --help'.", '']
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 204, in install
  globals().get("{}_install".format(rockon.name.lower()), generic_install)(rockon)
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 390, in generic_install
  run_command(cmd, log=True)
File "/opt/rockstor/src/rockstor/system/osi.py", line 227, in run_command
  raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/docker run -d --restart=unless-stopped --name pi-hole -v /mnt2/dnsmasq-config:/etc/dnsmasq.d -v /mnt2/pihole-config:/etc/pihole -v /etc/localtime:/etc/localtime:ro -p 83:80/tcp -p 83:80/udp --cap-add NET_ADMIN --dns 127.0.0.1 --dns 8.8.8.8 --net host -e IPv6=False -e FTLCONF_LOCAL_IPV4=192.168.10.60 -e WEB_PORT=83 -e WEBPASSWORD=shit888Thomas pihole/pihole:latest. rc = 127. stdout = ['']. stderr = ['WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.', 'docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.', "See 'docker run --help'.", '']
[10/Mar/2023 19:13:37] INFO [storageadmin.tasks:64] Task [install], id: 8d861dcb-6d24-4bdf-890b-38b4b82cae9b completed OK



I am so hoping I can fix this somehow - not sure what happened or why. While Pi-Hole was the important one to me, OpenVPN now seems to do the same thing.

I also looked into this directory below - and seem to have a TON of subvolumes… some say -init - not sure what that all means, but just trying to provide as much info as I can…

Thanks!
@Hooverdan was once again right on the spot.

That error is helpful, indeed. It seems to fail while trying to use one of the container’s layers… There might have been some quirk while pulling it previously, or some issue with previous attempts. I would thus try to delete the image and then install the Rock-On again; the image will be pulled anew.
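As an aside, the exact layer docker is tripping over can be pulled straight out of the error line with sed (the error string below is copied verbatim from your log):

```shell
# Extract the missing-layer path from docker's "stat ...: no such file" error.
# The err string is copied verbatim from the rockstor.log excerpt above.
err='docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.'
layer=$(printf '%s\n' "$err" | sed -n 's/.*stat \(.*\): no such file.*/\1/p')
echo "$layer"
```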

To delete the image, first let’s find its ID:

docker images

You should see the pihole image. Copy its ID.

Then, delete the image.

docker rmi <image id>

Make sure there is no Pi-Hole container first, of course - but in your case you shouldn’t have any (that’s the whole problem).

Then repeat the Rock-On installation procedure.


Ok so I have done that…

MERCURY:/mnt2/Rock-On/btrfs/subvolumes # docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
linuxserver/jellyfin latest ef697cc05e50 47 hours ago 863MB
netdata/netdata latest 04730617537a 2 days ago 355MB
nextcloud latest bf4b9cffee3f 9 days ago 993MB
pihole/pihole latest 0faa0df1f400 3 weeks ago 322MB
kosdk/wsdd latest 4eac5f0804d1 13 months ago 56.4MB
postgres 9.5 6d176851b77f 22 months ago 197MB
owncloud latest 327bd201c5fb 4 years ago 618MB
pschmitt/owncloud 8.2.1 a763bb0a065e 7 years ago 471MB
You have new mail in /var/spool/mail/root
MERCURY:/mnt2/Rock-On/btrfs/subvolumes # docker rmi 0faa0df1f400
Untagged: pihole/pihole:latest
Untagged: pihole/pihole@sha256:9abbf1c218f32a4084e614150a44714f046f73c40a9d2889a0e6edf01ff0a387
Deleted: sha256:0faa0df1f400dbdd85aea4d1a18630e8ab48b17e2d83648a27c4457a21b9542e
MERCURY:/mnt2/Rock-On/btrfs/subvolumes # docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
linuxserver/jellyfin latest ef697cc05e50 47 hours ago 863MB
netdata/netdata latest 04730617537a 2 days ago 355MB
nextcloud latest bf4b9cffee3f 9 days ago 993MB
kosdk/wsdd latest 4eac5f0804d1 13 months ago 56.4MB
postgres 9.5 6d176851b77f 22 months ago 197MB
owncloud latest 327bd201c5fb 4 years ago 618MB
pschmitt/owncloud 8.2.1 a763bb0a065e 7 years ago 471MB
MERCURY:/mnt2/Rock-On/btrfs/subvolumes #

So it was gone, and now here is the log:

[10/Mar/2023 20:41:28] INFO [storageadmin.tasks:56] Now executing Huey task [install], id: c0f8c5f1-b92d-4a68-b671-5ba06c8deba5.
[10/Mar/2023 20:41:28] ERROR [system.osi:225] non-zero code(1) returned by command: ['/usr/bin/docker', 'stop', 'pi-hole']. output: [''] error: ['Error response from daemon: No such container: pi-hole', '']
[10/Mar/2023 20:41:28] ERROR [system.osi:225] non-zero code(1) returned by command: ['/usr/bin/docker', 'rm', 'pi-hole']. output: [''] error: ['Error: No such container: pi-hole', '']
[10/Mar/2023 20:41:29] ERROR [system.osi:225] non-zero code(127) returned by command: ['/usr/bin/docker', 'run', '-d', '--restart=unless-stopped', '--name', 'pi-hole', '-v', '/mnt2/dnsmasq-config:/etc/dnsmasq.d', '-v', '/mnt2/pihole-config:/etc/pihole', '-v', '/etc/localtime:/etc/localtime:ro', '-p', '83:80/tcp', '-p', '83:80/udp', '--cap-add', 'NET_ADMIN', '--dns', '127.0.0.1', '--dns', '8.8.8.8', '--net', 'host', '-e', 'IPv6=False', '-e', 'FTLCONF_LOCAL_IPV4=192.168.10.60', '-e', 'WEB_PORT=83', '-e', 'WEBPASSWORD=shit888Thomas', 'pihole/pihole:latest']. output: [''] error: ['WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.', 'docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.', "See 'docker run --help'.", '']
[10/Mar/2023 20:41:29] ERROR [storageadmin.views.rockon_helpers:207] Error running a command. cmd = /usr/bin/docker run -d --restart=unless-stopped --name pi-hole -v /mnt2/dnsmasq-config:/etc/dnsmasq.d -v /mnt2/pihole-config:/etc/pihole -v /etc/localtime:/etc/localtime:ro -p 83:80/tcp -p 83:80/udp --cap-add NET_ADMIN --dns 127.0.0.1 --dns 8.8.8.8 --net host -e IPv6=False -e FTLCONF_LOCAL_IPV4=192.168.10.60 -e WEB_PORT=83 -e WEBPASSWORD=shit888Thomas pihole/pihole:latest. rc = 127. stdout = ['']. stderr = ['WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.', 'docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.', "See 'docker run --help'.", '']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 204, in install
    globals().get("{}_install".format(rockon.name.lower()), generic_install)(rockon)
  File "/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py", line 390, in generic_install
    run_command(cmd, log=True)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 227, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/docker run -d --restart=unless-stopped --name pi-hole -v /mnt2/dnsmasq-config:/etc/dnsmasq.d -v /mnt2/pihole-config:/etc/pihole -v /etc/localtime:/etc/localtime:ro -p 83:80/tcp -p 83:80/udp --cap-add NET_ADMIN --dns 127.0.0.1 --dns 8.8.8.8 --net host -e IPv6=False -e FTLCONF_LOCAL_IPV4=192.168.10.60 -e WEB_PORT=83 -e WEBPASSWORD=shit888Thomas pihole/pihole:latest. rc = 127. stdout = ['']. stderr = ['WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.', 'docker: Error response from daemon: stat /mnt2/Rock-On/btrfs/subvolumes/fd422dcdee506ecc596e426b19583f126219daf368769d54625441e44b509692: no such file or directory.', "See 'docker run --help'.", '']
[10/Mar/2023 20:41:29] INFO [storageadmin.tasks:64] Task [install], id: c0f8c5f1-b92d-4a68-b671-5ba06c8deba5 completed OK

It still doesn’t install, and I don’t see the container either. It’s very strange what’s going on.


@njmike73, sorry that wasn’t enough. It looks like we have to reset your rockons-root share and start anew. No worries, though, as you won’t lose any data. The idea is to delete your rockons-root share and then re-create it; docker will then no longer look for that troubling subvolume.

There are two options on how to do that.

  • option 1: re-installing the Rock-ons one by one
  • option 2: create a config backup so that Rockstor will re-install all your Rock-Ons for you

I would of course favor option 2. If something goes wrong with it, you can still finish what needs to be finished using option 1 thereafter anyway. Note, however, that restoring a config backup (option 2) will also restore other Rockstor settings (samba shares, etc.). See our docs for details on what is and isn’t included: https://rockstor.com/docs/interface/system/config_backup.html. This shouldn’t be a problem, though, as you would restore these database items right after creating the backup, which means nothing will have changed in the meantime. I just wanted to let you know.

The procedure for option 2 would be as follows:

  1. Turn OFF all Rock-Ons that you currently have installed and running. While this is overkill, it doesn’t hurt and will be required in the later steps anyway; you might as well do it now.
  2. Go to System > Config Backup and create a new backup.
  3. Go back to the Rock-Ons page and uninstall all Rock-Ons you have installed.
  4. Go to System > Services and turn OFF the Rock-Ons service.
  5. Go to Storage > Shares, and click on the share used as rockons-root. In the “Snapshots” tab, select all snapshots and delete them.
  6. Go to the main Storage > Shares page, and take a note of the name of the share used as rockons-root. This is important as you will need to re-create that share with the exact same name (and case, I believe?).
  7. Delete the share used as rockons-root. Remember to check the “force delete” checkbox as this is required for this share; Rockstor will remind you anyway if you forget.
  8. Once this share is deleted, create a new share with the exact same name as the previous one (see note on Step 6) and on the same Pool.
  9. Go to System > Services and configure the Rock-Ons services to use the rockons-root share created in Step 8.
  10. Start the Rock-Ons service.
  11. Go to System > Config backup and restore the config backup that you created in Step 2 above.

Wait for a little while as Rockstor refreshes the Rock-Ons database and then re-installs all Rock-Ons you had installed at the time the backup was created (Step 2 above). The more Rock-Ons you had installed, the longer it will take, of course.
I usually like to monitor the system logs while I wait so that I can see what is happening. We’d like to have this sort of feedback surfaced to the UI in the future, but in the meantime you can do so by running:

journalctl -f

and/or (in a separate terminal window)

tail -f /opt/rockstor/var/log/rockstor*

The system logs (journalctl) are usually the best way to see when your docker containers are done booting up, so I would favor that one if you only want to monitor one. Once that has settled, you should have all your previously installed Rock-Ons up and running. Then you can try installing the Pi-Hole Rock-On again.


Well, I ran into one more problem initially… When I go to config backups, I get this. I feel like I am being a pain here, and I apologize for that. That being said, I think I would be OK with just installing one by one without the config backup for now? I only have a few rock-ons…

No problem at all!

Yes, you definitely can reinstall them “by hand”.

I’m concerned about the error you got, however, on the config backup page… I’m worried there’s a bigger issue somewhere that would be the root cause of all your problems.

Can you confirm Rockstor doesn’t report any issue with your Pool(s)? Do you see any warning on the Pools page? Feel free to post a screenshot if you’d like. Same for the Shares page.


No, no errors on that page… Yes, I am concerned too, but I really don’t want to do a full re-install right at the moment.

And the only other errors I see in the Rockstor logs are:

CommandException: Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
[11/Mar/2023 13:53:52] ERROR [storageadmin.views.disk:479] Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 476, in _update_disk_state
    do.name, do.smart_options
  File "/opt/rockstor/src/rockstor/system/smart.py", line 338, in available
    [SMART, "--info"] + get_dev_options(device, custom_options)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 227, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
[11/Mar/2023 13:54:57] ERROR [storageadmin.views.disk:479] Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 476, in _update_disk_state
    do.name, do.smart_options
  File "/opt/rockstor/src/rockstor/system/smart.py", line 338, in available
    [SMART, "--info"] + get_dev_options(device, custom_options)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 227, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
[11/Mar/2023 13:56:02] ERROR [storageadmin.views.disk:479] Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 476, in _update_disk_state
    do.name, do.smart_options
  File "/opt/rockstor/src/rockstor/system/smart.py", line 338, in available
    [SMART, "--info"] + get_dev_options(device, custom_options)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 227, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/sbin/smartctl --info /dev/disk/by-id/usb-Seagate_Portable_NACA404L-0:0. rc = 2. stdout = ['smartctl 7.2 2021-09-14 r5237 [x86_64-linux-6.2.2-lp154.5.g3a7e162-default] (SUSE RPM)', 'Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org', '', 'Read Device Identity failed: scsi error unsupported field in scsi command', '', "A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.", '']. stderr = ['']
