Rockstor 3.9.2-47 Stable
Had an error pop up during the removal process for a disk. Support URL refuses connection
Detailed step by step instructions to reproduce the problem
Trying to follow the 3 steps provided on the error screen. Step 3 is to create a support ticket by clicking on the link, but the connection is refused.
The pre-disk-removal data move appears to be continuing as I type this, despite the UI error.
It's confined to the Web-UI and its consequent db activities timing out. Take care not to attempt any more disk add / remove procedures and, in time, things should correct themselves: the Web-UI will, once the process is complete, sort itself out. During this time the Web-UI may well be noticeably laggy.
The issue for this interface bug is:
and we have a pending pull request that addresses this bug here:
We are due shortly to release 3.9.2-48, and the above bugfix for your issue is planned for the 3.9.2-49 release thereafter, as it's now next in line. The fix also adds UI feedback on the progress of this disk removal via an additional Allocated% column (and a Btrfs DevID column) in the pool details page disks section, but for the time being you can view your current progress via the following command, run as the root user:
btrfs dev usage /mnt2/Stograge_Pool
Run that from time to time during the internal balance event and you will see the data being moved from the "removal in progress" drive and not the others. Once done, the drive will no longer be listed as part of the pool and Rockstor's Web-UI, after a page refresh, should then function as normal.
You should see a negative unallocated value in the output of that command. This is our current best indicator: a factor in the difficulty of implementing this feature. This type of "internal" balance does not currently show up in regular "btrfs balance status" reports, which we use for all other balance operations, so we had to implement our own system and tie it into our existing user-interface feedback mechanisms. That is essentially what the fix does, bar a necessary rework to treat this balance asynchronously to avoid our prior db timeout issue; while we were at it we also made a number of back-end performance enhancements so that we function more acceptably during these events. These speed-ups were basically overdue code / method tidies following a large change we did a little while back.
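To pick that negative figure out of the report without eyeballing every device, a small filter can help. This is a sketch of mine, not part of Rockstor; the helper name is made up, and the mount point in the usage comment is the pool from the command above. It relies only on the standard `btrfs dev usage` layout, where each device header looks like "/dev/sdd, ID: 4" and is followed by an "Unallocated:" line.

```shell
# flag_draining: reads `btrfs dev usage` output on stdin and prints any
# device whose Unallocated value has gone negative, i.e. the drive
# currently being emptied by the internal balance.
flag_draining() {
    awk '
        /^\/dev\// { dev = $1; sub(/,$/, "", dev) }   # remember current device header
        /Unallocated:/ && $2 ~ /^-/ {                 # negative => still draining
            print dev, "still draining (" $2 ")"
        }'
}
# Usage, as root (substitute your own pool mount point):
#   btrfs dev usage /mnt2/Stograge_Pool | flag_draining
```

Once the removal completes, the drive disappears from the report entirely and the filter prints nothing.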
The referenced fix is a large change and will need some more testing, but it does look to be functioning as intended, so hopefully it shouldn't be too much longer before its release.
Thanks for the report, and apologies we didn't get this out prior to your "event", but it is very much in the works. We just have to get 3.9.2-48 out first; then it's our next goal release-wise.
As for the "Support URL", my understanding is that in its first incarnation it failed to reach sustainability and so has been mothballed. Perhaps once we have fixes in place such as those due for 3.9.2-49, and our pending openSUSE move (with its consequently newer btrfs subsystems), we will be in a better position to bring the priority support site back online. Bit of a chicken-and-egg scenario really: while we have such apparent functional "holes" people are less likely to "invest" in a support program, yet as we approach a more viable product status less support will be required anyway. Oh well.
Hope that helps, and let us know how it goes. Probably best to just leave it to its own devices, as the balance is most likely in progress and via the above command you should be able to confirm this. Once it's finished it should be business as usual on the Rockstor Web-UI front.
And thanks for helping to support Rockstor's continued development via a stable channel subscription. Much appreciated.
Take note that due to your raid5 use and your relatively large disk array, this process may well take quite some time. Once it has finished you may want to consider converting the array to raid1 or raid10, as the btrfs parity raid levels of 5 and 6 are currently considered less robust and are notably less mature. See the btrfs wiki status page for a nice table on this.
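For reference, such a conversion is itself a balance with convert filters. The sketch below is mine, not a Rockstor-sanctioned procedure (Rockstor's own Web-UI is the supported route); the helper name is made up and the mount point is the pool from earlier in the thread. It only prints the command, so you can inspect it before running it as root; expect the real conversion to take many hours on a large array.

```shell
# balance_convert_cmd: build the btrfs balance command that converts
# both data and metadata of a pool to the given raid profile.
balance_convert_cmd() {
    pool="$1"
    profile="$2"
    # -dconvert targets data block groups, -mconvert targets metadata
    printf 'btrfs balance start -dconvert=%s -mconvert=%s %s\n' \
        "$profile" "$profile" "$pool"
}
balance_convert_cmd /mnt2/Stograge_Pool raid1
```

Running the printed command as root starts the conversion; `btrfs balance status <mountpoint>` then reports its progress, since a convert balance (unlike the internal disk-removal balance above) does show up there.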
Thanks so much for the in-depth reply. I was watching the progress from the shell before I posted, using
watch -n 10 btrfs fi show
and, as you mention, all the errors just appear to be superficial in the UI. The drive replacement worked fine (it wasn't strictly needed; I just had a spare 8TB drive lying around, and it would be a shame to waste it).
I'm looking forward to trying out the new openSUSE-based Rockstor when it's out.
Re: the support site - that's fine, but can I suggest that the link in the UI is either changed/removed (might be a bit hard), or, as there appears to be a Windows IIS service running on that server, a redirect or even a static web page stating that you should head over to the forum instead would keep newbies like me from getting confused.
Re: RAID5 - to quote Austin "Danger" Powers, "Danger" is my middle name.
@gaspode Glad your replacement worked out in the end; a bit of a bad bug from the user's point of view, that one, but it should be sorted in stable channel release 3.9.2-49.
Good, and I'm looking forward to getting it out. Thanks for the encouragement. Change is often difficult, but I think in this case it's necessary.
Agreed. I think it would be good to get this facility back up and running, but it's not currently within my responsibilities in the Rockstor project. This may change going forward, but it's not something I currently have full say over.
That's quite an embarrassment. Rockstor is currently, to my knowledge, pretty much self-hosting: the Jenkins CI system runs as a Rock-on within a Rockstor instance, as does this forum, so it's a surprise to see the use of IIS. We did have an issue way back when we transitioned from supporting AD via old-style winbind to new-style sssd, but our old CentOS base let us down and we had to transition back to winbind again. Personally I was opposed to this move backwards, but hopefully once we are fully transitioned to openSUSE, with its generally newer packages, we can endeavour to move back to the more modern sssd approach. Anyway, at around that time an in-house Windows domain controller was set up for testing Rockstor AD integration; that was the only use of a non-Linux OS that I was aware of in the Rockstor development pipeline. I am currently becoming more involved in the infrastructure side
of "serving" Rockstor, so may well be able to deal with this aspect personally. I'm just not yet fully aware of some of the back-end infrastructure, but hopefully in time the priority support ticket arrangement can be brought back into being. It all depends on how things go with our current openSUSE move, which is taking quite a lot of additional attention. Once that's in place I'm hoping things will calm down again and we can settle into a more steady release / support capability.
Given that btrfs can have one raid level for its metadata and another for its data, I'm looking forward to being able to support such an arrangement within Rockstor. Currently it's hardwired to use mostly the same raid level for both. Once that's no longer the case I'd like to introduce a raid5/6-data with raid1-metadata option, as that is considered a more practical way to use the less mature btrfs parity raid levels. But alas, I'm currently a little preoccupied with setting up some needed backend stuff to help where I can with Rockstor's infrastructure, development-wise. Hopefully I'll have news on these changes when there is something concrete to present.
Thanks again for your support and report / update, and hopefully post-3.9.2-49 this very poor (read: disconcerting) UI behaviour will be a thing of the past.