This is a known issue where we fail to escape some of the API calls quoted in the error message and end up inadvertently triggering similar calls in the forum, when what we intend is to give an exact copy of the failed Rockstor API call as a forum title. This is covered by the following outstanding issue:
I isolated the characters that need escaping when I opened that issue, but we haven’t, as yet, had any takers to tackle that particular issue.
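As a rough illustration of the kind of fix involved (the helper name and URLs here are hypothetical, not Rockstor’s actual code), percent-encoding the failed API call before embedding it in the forum’s new-topic link keeps characters such as `?`, `&`, and `=` from being interpreted by the forum rather than shown verbatim in the title:

```python
from urllib.parse import quote

def forum_title_link(api_call: str, forum_base: str) -> str:
    """Build a 'new topic' link whose title is the exact failed API call.

    quote() with safe="" percent-encodes every reserved character
    ('/', '?', '=', '&', '#', ...) so the call text survives intact
    as a forum title instead of triggering forum URL handling.
    """
    return forum_base + "?title=" + quote(api_call, safe="")

link = forum_title_link("/api/commands/bootstrap?page=1",
                        "https://forum.example.com/new-topic")
# The '?' and '=' inside the API call are now encoded as %3F and %3D.
```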
Thanks for the report, and I’ve added this forum post as an additional reference in that issue.
To the main issue you report: it looks like the entered user is not known to the system. Did you add that user as a Rockstor user before entering it in the Samba form?
Quite possibly. We did have time-outs a while back for large systems, but I was under the impression that was sorted now. Also make sure the version of Rockstor you are running is actually what it states it is running. We had a deceptive bug a while ago for those folks moving from testing to stable. It shouldn’t affect you, though, if you went straight from ISO to stable channel. See the following to make doubly sure you have the latest code actually running:
Here I’m assuming your particular system is subscribed to the Stable channel. Could you also let us know the version you are using, as we currently have a Release Candidate for which there is no download yet, but there is the ability to DIY build your own installer:
Once built, you will then be able to install our soon-to-be-released Rockstor 4 variant, which has a far newer openSUSE base. In both cases you can subscribe to Stable if required, but our CentOS variant will no longer receive any more updates (from us at least) after 3.9.2-57 (April 2020). Whereas our ‘Built on openSUSE’ variant is our current and future variant, due to the upstream btrfs support.
And assuming this is a new install, this may be relevant. Especially given that, if this is a bug, we can no longer build for the CentOS variant anyway, so won’t be able to fix anything in that version from here on in. Rather unfortunate, but it is how things worked out, and at least we are now on an upstream-supported base. Just have to do some hoop jumping and we will soon have a download. But, as stated, you can follow the DIY installer build instructions to get a Rockstor 4 with all pending updates pre-installed.
I’ll try and look up that ‘many users’ thing that would end up with users with later-alphabet names not being found. It was quite a strange one until we worked out what was happening. I’m pretty sure it’s here on the forum somewhere.
I’m on 3.9.2-57
Shame, I’m not a fan of SUSE and much prefer RHEL/CentOS.
Do you know if there is a maximum number of users? When I say “A LOT” of users, it’s 30,000!
And your copied-in URL shows the expected “page_size=9000”.
So it may be we have bumped, again, into this same limit!
OK, so just seen your new post. Didn’t want to ask but yes, that would do it.
The move was forced on us, actually, as we were happy with our CentOS base; but RHEL, the funder of CentOS (at least these days), dropped even their technical-preview status for btrfs, and so we were left high and dry:
My initial response at the time was here:
and here:
But ultimately we moved to rebasing on openSUSE, and it was just as well, given we ended up hitting a build limitation on our CentOS base at around 3.9.2-57 and so were no longer able to build there anyway. We have a non-trivial amount of technical debt (Python 2, older Django, etc.) and were hoping to deal with it a long time ago, but the btrfs news re RHEL dropping even preliminary support for it meant we had to adapt, and that has been time- and resource-consuming. But the project is better for it, as we discovered a number of deep bugs that are now sorted. Plus we now have btrfs backports managed by a company/organisation that employs a goodly number of the btrfs developers. Plus SLES’s default install is btrfs, so they have, like us, a vested interest in it actually working.
Hope that helps, at least with narrowing down what’s going on here.
I’ve created this initial issue with my suspicions to date:
Let us know how you get on in the interim. And if this is just that same limit, what figure might be a working one here, 32,000 maybe? I was a little apprehensive moving from 5,000 to 9,000, but I guess if they aren’t there there’s no problem, and if they are, and we don’t hit a time-out, then we may have a potential fix by just upping that same figure again.
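To illustrate the suspected mechanism (a plain-Python sketch only, not Rockstor’s actual pagination code): if the Web-UI requests a single page capped at a hard page-size limit, any users beyond that cap simply never reach the browser, and since the list is sorted, the dropped entries are the later-alphabet names:

```python
def visible_users(all_users, page_size):
    """Return the single page the Web-UI would see under a hard page-size cap.

    Sorted order means users later in the alphabet are the ones dropped,
    matching the 'later-alphabet names not found' symptom reported above.
    """
    return sorted(all_users)[:page_size]

users = [f"user{i:05d}" for i in range(30000)]
print(len(visible_users(users, 9000)))   # 9000 visible, 21000 silently missing
print(len(visible_users(users, 32000)))  # all 30000 visible
```

With a 9000 cap, `user29999` never appears; raising the cap to 32,000 makes the whole list visible, which is why upping that one figure may be a sufficient fix absent a time-out.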
If you have managed to make the 4.0.0-0 installer and it’s given you a working 4.0.0-0 install then, one, great, and two, just subscribe to the testing channel updates within that install instance and it will upgrade itself to 4.0.2-0. See:
If we don’t get any show-stoppers from this latest 4.0.2-0 release, then I will likely upload that exact same rpm to the Stable channel, which should free the testing channel to test out some larger changes we have waiting in the wings as we work towards the next stable release. This next testing run is likely to be much shorter than usual, though.
Let us know how this works out for you. We intentionally included the 4.0.0-0 rpm in the DIY installer, rather than the latest, so that we could also test, pre-release of the installer downloads, the resulting system’s ability to upgrade itself.
Hope that helps, and do let us know if that small change I made in 4.0.2-0, re the 9000-to-32000 Web-UI element limit, helped. I suspect it isn’t the full fix and we have more to do in that area, but it would be good to know if we have made progress on that front.
Yeah, that’s what I did.
I may have an issue with not seeing updates due to being behind a proxy.
Is there anything more than setting /etc/sysconfig/proxy that I would need to do to get updates working?
I’m not familiar with NIS itself, but I believe you’ve stumbled into another CentOS/openSUSE difference in config-file paths. If I’m correct, while the path was /etc/sysconfig/network in CentOS, it is now /etc/sysconfig/network/config that we’re looking for.
There seem to be other slight differences in variable names, etc., that we will need to update, I believe, so would you mind creating an issue for that in our rockstor-core GitHub repository?
As you found that bug, it will help keep proper attribution ;).
Yes, that may be it. We have had reports of update issues of this sort on some corporate networks. We use port 8999 for the update repos, and the x86_64 testing rpms for Leap 15.2-based installs are available here:
However, the same retrieval of the testing rpm is performed when creating the installer. Did you make the installer on a different network by chance? Or maybe on a machine that had extended port/access privileges? If the latter, then a fix would be to allow the Rockstor instance outbound port 8999 access.
I’m unsure about this. I know in our CentOS variant we had issues getting around this, as the proxy setting has to be honoured by yum, and in our ‘Built on openSUSE’ variant by zypper as well. We use yum to get the changelog pre-install (so the update indication and changelog show up) and zypper to do the actual install/update. If you find a working setting, then:
zypper refresh
should at least work without complaint after subscribing to the Rockstor testing updates channel.
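For reference, a typical /etc/sysconfig/proxy sketch on openSUSE-based systems (the host and port values below are placeholders for your own proxy, not known-good settings):

```shell
# /etc/sysconfig/proxy -- example values only
PROXY_ENABLED="yes"
HTTP_PROXY="http://proxy.example.com:3128"
HTTPS_PROXY="http://proxy.example.com:3128"
NO_PROXY="localhost, 127.0.0.1"
```

These variables are picked up by login shells (so a re-login or reboot may be needed), and zypper honours them; yum, used for the changelog fetch, may additionally need a `proxy=` line in /etc/yum.conf.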
@Flox Thanks for the diagnosis there. Looks like we may have another ‘nice to have’ fix before Stable channel release possibly.
@legion411 Thanks for another valuable report. A few steps forward possibly, and one back unfortunately. Incidentally, since you managed to make your own installer successfully, you could always edit the rpm version that is pre-included and re-build with that version in. It doesn’t fix the outstanding directory change issue re NIS, but good to know as we whittle away at these outstanding issues. The number you would need to change in your copy of the kiwi-ng based rockstor-installer is the same one that was added in this pull request:
i.e. in this line:
Changing that to 4.0.2-0, or whatever the latest version is at the time, would then pre-install that version within the installer on its subsequent runs.
@legion411
Given I’ve already started working on this issue, I’ve gone ahead and created a GitHub issue; as per @Flox’s suggestion,
I’ve attributed it accordingly and linked back to this forum thread for context:
I’ve still a little something to see to, and final tests to perform thereafter, and we should then have this regression resolved in our next release. I’m unfamiliar with NIS, and it seems to be somewhat superseded technology these days, but still, it would be nice to address this regression prior to our Rockstor 4 release.