@phillxnet thank you!
I have now updated to Rockstor 4.5.9-1 and rebooted the server.
All seems to be working well.
I still get the same error message when I click on “Update” Rock-ons.
Here is a view of the “All” Rock-ons page; you can clearly see the same Rock-on, Calibre.local, listed twice:
Interesting… I don’t recall seeing that issue in the past.
Ok, this will need a little look into the state of your db, then. I unfortunately can’t guide you on that right now but will try to as soon as possible. I think we will need to get the id of one of the two instances of that problematic rockon and then manually delete its record.
On the upside, you unearthed an unlikely but obviously possible scenario that we need to account for in the rockon-delete script.
Sorry again I can’t be of more help at this moment.
Yet another question from me: did you at any point have two json files in the ‘/opt/rockstor/rockons-metastore’ directory where the container name was the same, e.g. calibreserver?
Usually that would already lead to an error during refresh, when the official and custom Rock-on json files are evaluated, and the processing would not finish. So I just want to make sure there isn’t a new scenario (though there shouldn’t be) that would allow essentially duplicate data to be loaded into the database…
Edit –
The flip side of this: if you did not have the above situation, did you at any point have 2 json files with different container names (see above) but the same Rock-on name, Calibre.local?
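For anyone wanting to check for either situation themselves, here is a rough sketch of such a duplicate scan. It assumes the usual Rock-on json layout, where the top-level key is the Rock-on name and a “containers” object maps container names to their definitions; the function name and the default path handling are mine, not part of Rockstor:

```python
import json
from collections import Counter
from pathlib import Path

def find_duplicates(metastore="/opt/rockstor/rockons-metastore"):
    """Scan all json files in the metastore and report Rock-on names and
    container names that appear in more than one definition."""
    rockon_names, container_names = Counter(), Counter()
    for path in Path(metastore).glob("*.json"):
        with open(path) as fh:
            definition = json.load(fh)
        # Assumed layout: {"Rockon Name": {"containers": {"name": {...}}, ...}}
        for rockon, meta in definition.items():
            rockon_names[rockon] += 1
            for container in meta.get("containers", {}):
                container_names[container] += 1
    return {
        "rockons": [n for n, c in rockon_names.items() if c > 1],
        "containers": [n for n, c in container_names.items() if c > 1],
    }
```

Running it against a directory containing both the official and a custom copy of a Rock-on definition would flag any container name (or Rock-on name) shared between them.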
Since we will probably need to enhance the deletion script, I opened a corresponding GitHub issue, while we continue to explore how this scenario came about (and hopefully find a root cause).
Let’s see if we can get a better idea of what you have here. It’ll require some back-and-forth unfortunately, but I’d like to take that opportunity to have as much information as we can so that we can cover this situation in our delete-rockon script. Given it seems very hard to reproduce despite all of @Hooverdan’s efforts, I’d like to not miss anything.
We would first need to see what your relevant database looks like. As mentioned above, this would be via manual queries against your database. In the command below, the password requested is rocky.
# psql -U rocky -d storageadmin -c "SELECT id,name,state,status,taskid FROM storageadmin_rockon WHERE name = 'Calibre.local';"
Hopefully this should give us a coherent first answer.
I got the same today:
postgres@rockstor:~> psql -d storageadmin -c "SELECT id,name,state,status,taskid FROM storageadmin_rockon WHERE name = 'Emby';"
id | name | state | status | taskid
-----+------+-----------+---------+--------
104 | Emby | installed | started |
103 | Emby | available | stopped |
(2 rows)
I’m running my own json since, in the past, Rock-on pull requests took a long time to get merged.
Interesting. Another quick question from my side: the json file you’re using locally for Emby, is that the same as the official Rock-on one, or did you make any changes to it?
aymen@mynas:~> psql -U rocky -d storageadmin -c "SELECT id,name,state,status,taskid FROM storageadmin_rockon WHERE name = 'Calibre.local';"
Password for user rocky:
id | name | state | status | taskid
-----+---------------+-----------+-----------------------+--------
110 | Calibre.local | available | exitcode: 137 error: |
109 | Calibre.local | available | stopped |
(2 rows)
Thanks @aymenSo and @Jorma_Tuomainen for the reports… very curious to see two reports of that peculiar case, especially given we have not touched the Rock-Ons side of things lately…
Let’s see what you have in terms of containers, then. @aymenSo:
# psql -U rocky -d storageadmin -c "SELECT * FROM storageadmin_dcontainer WHERE rockon_id IN (109,110);"
Only one of your Rock-On entries has a related container, then… It thus matches my test environment, so you should be able to use the quick fix I verified to work in this case.
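To illustrate what that kind of fix boils down to, here is a simplified sketch (not the actual delete-rockon script): an in-memory SQLite stand-in for the two Postgres tables, with a reduced column set, showing that only the duplicate row with no related container record gets removed:

```python
import sqlite3

# Simplified stand-ins for storageadmin_rockon and storageadmin_dcontainer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE storageadmin_rockon (id INTEGER PRIMARY KEY, name TEXT, state TEXT);
CREATE TABLE storageadmin_dcontainer (id INTEGER PRIMARY KEY, rockon_id INTEGER, name TEXT);
INSERT INTO storageadmin_rockon VALUES (109, 'Calibre.local', 'available');
INSERT INTO storageadmin_rockon VALUES (110, 'Calibre.local', 'available');
INSERT INTO storageadmin_dcontainer VALUES (123, 110, 'calibre-vnc.local');
""")

# Delete only the duplicate row(s) that have no related container record.
conn.execute("""
DELETE FROM storageadmin_rockon
WHERE name = 'Calibre.local'
  AND id NOT IN (SELECT rockon_id FROM storageadmin_dcontainer)
""")
remaining = conn.execute("SELECT id FROM storageadmin_rockon").fetchall()
print(remaining)  # only id 110, the entry with a container, survives
```

On a real system you would of course confirm the ids with the SELECT queries above before deleting anything from the live database.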
I unfortunately cannot upload it right now but should be able to later today.
@Jorma_Tuomainen so the script worked for you then?
The UI suggestion could be a future option. I think @Flox (or all of us, collectively) wants to figure out first how this inconsistent state can occur in the first place, since in the past (at least as far back as I can remember) this duplication has not happened before.
@Hooverdan is correct: I really would like to understand how you could end up with 2 such instances to begin with, but I’m afraid this is something we won’t be able to figure out. What we can do is make the whole Rock-On framework more robust and resilient to such abnormalities.
We do have such plans and have had great feedback from users here on what to do, so we’ll have plenty to consider. This won’t happen in the immediate future, however, but it is clearly important to consider for the medium term, in my opinion.
Thanks again for your reports and follow-ups, and most of all for your understanding.
aymen@mynas:~> psql -U rocky -d storageadmin -c "SELECT * FROM storageadmin_dcontainer WHERE rockon_id IN (109,110);"
Password for user rocky:
id | rockon_id | dimage_id | name | launch_order | uid
-----+-----------+-----------+-------------------+--------------+-----
123 | 110 | 111 | calibre-vnc.local | 1 |
(1 row)
After updating the script (wget), I was able to delete the 2 Rock-on entries:
mynas:/opt/rockstor # poetry run delete-rockon Calibre.local
The metadata for the Rock-On named Calibre.local (id: 110) has been deleted from the database
The metadata for the Rock-On named Calibre.local (id: 109) has been deleted from the database
The Rock-ons list was also successfully updated via the Web UI using the “Update” button.