Internal error doing a POST to /api/rockons/update

Hello friends!

Below is my issue:

Brief description of the problem

Cannot update the Rockstor "Rock-ons".

Detailed step by step instructions to reproduce the problem

I was trying to update the Rockstor "Rock-ons".

Web-UI screenshot

Error Traceback provided on the Web-UI

##### Houston, we've had a problem.

Unknown internal error doing a POST to /api/rockons/update

Are there any issues with your server? This has been going on for about a week now, but I've only now decided to post about it on the forum. Everything else appears to be working on my end.


@k0nsl Welcome to the Rockstor community and thanks for the report.

So I’m assuming that all you did was press the “Update” button in the top right then?

The "POST" part here indicates that the error occurred during an update of the database. On occasion, when the server side has run out of connections, we get a timeout message, so I don't think this is that. We have also made changes in the more modern code to reduce the number of redundant checks we were making on the server side, so that should happen less often in the future.

This is quite curious. Does your rockstor_rockstor pool still look to be rw, or has it gone read-only? That could potentially cause this same issue, as the POST (a db write) would be unable to complete. Alternatively, your system drive could be low on space, but again that would have other knock-on effects.
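A quick way to check both possibilities from a terminal on the Rockstor machine might be something like the following (assuming the system pool is mounted at `/`):

```shell
# Check whether the root filesystem is still mounted read-write;
# an "ro" flag on this line would explain failed POSTs (db writes):
mount | grep ' on / '

# Check free space on the system drive:
df -h /
```

An `ro` in the mount options, or a near-100% "Use%" figure from `df`, would point at the read-only-pool or low-space theories respectively.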

There is a pending issue related to an upstream update to python-tornado that could be breaking our background task manager, which handles some of the Rock-ons functionality. There may be some clues in the following file:


The command-line tool "less" will help you view this file. So far, though, this issue has only been reported by @freaktechnik and is related to our "Build on openSUSE" variant:
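To skim that file for anything suspicious without paging through all of it in `less`, something like the following may help. Note the log path here is only a placeholder for illustration; substitute the file referenced above:

```shell
# Hypothetical log location -- adjust to the file referenced above:
LOG=/opt/rockstor/var/log/rockstor.log

# Show the most recent error/traceback lines, with some context,
# if the file exists:
if [ -f "$LOG" ]; then
    grep -n -B1 -A5 -E 'ERROR|Traceback' "$LOG" | tail -n 40
else
    echo "log not found at $LOG"
fi
```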

But it has also been seen on customised CentOS installs where additional repos have brought in newer versions of python-tornado. @maxhq tracked this one down quite some time ago:
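To see which tornado your install is actually running, you could try something like the following (the exact package name can vary by distro, so treat these as sketches):

```shell
# Query the packaged version on an RPM-based system (CentOS/openSUSE):
rpm -q python-tornado 2>/dev/null \
    || echo "python-tornado not installed via rpm"

# Or ask the interpreter directly:
python -c "import tornado; print(tornado.version)" 2>/dev/null \
    || echo "tornado not importable by 'python'"
```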

But these look different from your report currently.

Have you applied any upstream updates that coincide with this recent failure?
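On an RPM-based install you can line up recent package changes against when the failure started; a sketch, assuming `rpm` and `yum` are present:

```shell
# List the most recently installed/updated packages, newest first:
rpm -qa --last 2>/dev/null | head -n 20

# yum also keeps a dated transaction history:
yum history 2>/dev/null | head -n 15
```

Anything tornado- or database-related installed around a week ago would be a prime suspect.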

Hope that helps, and let us know if you find anything suspicious in the logs that coincides with this message appearing in the Web-UI.