I’ve just checked the rock-ons server and it looks to be fine. Earlier today I was changing some config on that server and needed to reload the service a few times. It may be that you were just unlucky and caught the time when the web server was re-configuring.
Give it another go and let us know if it succeeds.
Another possibility is that your Rockstor system is unable to reach the rock-ons web page; i.e. a gateway/router firewall or the like. To test this you could execute the following in a terminal on the Rockstor machine:
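Something along these lines should do it (the URL here is a stand-in on my part; substitute the actual rock-ons registry address):

```shell
# Time a single fetch of the rock-ons index. The URL below is an
# assumption; substitute the actual registry address.
time wget -O /dev/null https://rockstor.com/rockons/root.json \
    || echo "fetch failed"
```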
The terminal/console output should indicate whether the web get (wget) was successful.
This will also return how long the wget took to complete. Look at the “real” figure, as that is the total elapsed time. I suspect it will be longer than 10 seconds, which is how long the GUI waits for a response before timing out.
Yes, me neither. But we are in the process of moving our background services over to new servers, so maybe your network had cached the older IP. Bit of a long shot though, as for the rock-ons that move was a few days ago.
Oh well, at least it’s working for you now. Sorry not to have pinned this down further. But do please report back if you see this again.
@vesper1978 Thanks for your improved test on this one. It prompted the thought that what we could do with is a shell script that more closely mimics what Rockstor actually does in total: get the root.json file, then retrieve each of the rock-on json definition files mentioned in it, processing each (slightly) in turn. So we would need a tiny delay between each subsequent get of each individual json after the initial root.json get. The latest code that does this can be found here:
and then in turn:
So there is only really a quick contents tally between each json get. But at the server end I do see a bit of a range in how fast all of the files are retrieved in order, and I’ve put this down to client machine speed or internet link. Hence my only other thought here was a really slow machine that just couldn’t work its way through all the requests in turn before timing out. But the timeout applies to each json retrieval in turn, so that seems pretty unlikely.
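A sketch of such a script might look like the following. This is hypothetical only: the registry base URL is deliberately left as an argument (the exact locations are the links given above), and the filename-matching is a guess at how root.json names the per-rock-on files.

```shell
#!/bin/sh
# Hypothetical sketch: fetch root.json, then each rock-on definition file
# it names, with a small pause between gets, mimicking Rockstor's pattern.
# Usage: sh rockon-fetch-test.sh https://example.com/rockons

# root.json maps rock-on names to their definition files; pull out anything
# that looks like a .json filename rather than depending on jq being present.
list_jsons() {
    grep -o '[A-Za-z0-9][A-Za-z0-9_.-]*\.json' "$1" | sort -u
}

main() {
    BASE="${1:-}"
    if [ -z "$BASE" ]; then
        echo "usage: $0 BASE_URL"
        return 0
    fi
    if ! wget -q --timeout=10 -O /tmp/root.json "$BASE/root.json"; then
        echo "could not fetch root.json from $BASE"
        return 1
    fi
    for f in $(list_jsons /tmp/root.json); do
        if wget -q --timeout=10 -O /dev/null "$BASE/$f"; then
            echo "$f: OK"
        else
            echo "$f: FAILED"
        fi
        sleep 1   # the tiny delay between subsequent gets mentioned above
    done
}

main "$@"
```

Run on the Rockstor machine itself, this would show per-file success and leave a trail of which definition (if any) is the slow or failing one.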
Anyway, food for thought; such a script may help in the future with more properly testing this web retrieval in a way completely independent of the Rockstor code, which may in turn help with narrowing down where the problem lies. We may for example have to add some kind of delay between each request, or build in a backing-off mechanism if a ‘too many connections’ type situation arises. Different from what we see here, but related with regard to tuning our services.
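One possible shape for that backing-off idea (the function name and values here are invented for illustration): retry a single get a few times, doubling the pause after each failure, which is roughly how you would soften a ‘too many connections’ reply.

```shell
# Hypothetical back-off sketch: retry one get up to 3 times, doubling the
# pause between attempts. Names and values are illustrative only.
fetch_with_backoff() {
    url="$1"
    delay=1
    for attempt in 1 2 3; do
        if wget -q --timeout=10 -O /dev/null "$url"; then
            return 0
        fi
        if [ "$attempt" -lt 3 ]; then
            echo "attempt $attempt failed; retrying in ${delay}s" >&2
            sleep "$delay"
            delay=$((delay * 2))
        fi
    done
    return 1
}
```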
Hope this helps, and anyone interested is welcome to chip in with robustness measures for the future, as it’s a bad user experience when this arises and it can be a challenge to duplicate exactly what Rockstor actually does. I’m also considering enabling compression at the web server end for these files so that the requesting client can unzip them on the fly. All in good time hopefully.
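On the compression point: if the registry files are served by nginx (an assumption on my part; I don’t know what actually fronts them), it’s mostly a config toggle, since wget and the Python requests stack both handle gzip transparently:

```nginx
# Hypothetical nginx snippet for on-the-fly gzip of the registry json files.
gzip on;
gzip_types application/json;
gzip_min_length 1024;
```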
I work for a company that has to factor in worst case connection scenarios. I’d like to make some suggestions.
Would it be possible to get the “timeout” value increased from “10” to something like “30”? Or perhaps look at a CDN such as CloudFlare (there’s a free tier that would work) to cache rockstor.com and the Rock-On registry, so that anyone making a request would be pulling from an edge location close to them, instead of directly from the rockstor.com web server, 99% of the time? Or even both?
Increasing the timeout in the code would probably suffice on its own. Implementing CloudFlare would additionally give rockstor.com resiliency against too many incoming requests, as the CDN would do all the heavy lifting.
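For what it’s worth, the effect of the two values can be seen with wget’s own `--timeout` flag (illustration only; the actual 10-second limit lives in Rockstor’s code, and the URL is a stand-in):

```shell
# On a slow enough link the first form gives up where the second form
# still has 20 more seconds in which to succeed.
wget --timeout=10 --tries=1 -O /dev/null https://rockstor.com/ \
    || echo "gave up within 10s (or the link was down)"
wget --timeout=30 --tries=1 -O /dev/null https://rockstor.com/ \
    || echo "gave up within 30s (or the link was down)"
```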