@gaspode Welcome to the Rockstor community.
Yes, this is a known Web-UI bug, but as per your:
It’s confined to the Web-UI and its consequent db activities timing out. Take care not to attempt any more disk add / remove procedures; in time things should correct themselves, and once the process is complete the Web-UI will sort itself out. During this time the Web-UI may well be noticeably laggy.
The issue for this interface bug is:
and we have a pending pull request that addresses this bug here:
We are due shortly to release 3.9.2-48, and the above bugfix for your issue is planned for the 3.9.2-49 release thereafter, as it’s now next in line. The fix also adds Web-UI feedback on the progress of this disk removal via an additional Allocated% column (and a Btrfs DevID column) in the pool details page disks section, but for the time being you can view your current progress via the following command, run as the root user:
btrfs dev usage /mnt2/Stograge_Pool
Run it from time to time during the internal balance event and you will see data being moved off the ‘removal in progress’ drive and not the others. Once done, the drive will no longer be listed as part of the pool, and Rockstor’s Web-UI, after a page refresh, should function as normal.
You should see a negative unallocated value in the output of that command. This is our current best indicator: a factor in the difficulty of implementing this feature. This type of ‘internal’ balance does not currently show up in regular ‘btrfs balance status’ reports, which we use for all other balance operations, so we had to implement our own system and tie it into our existing user interface feedback mechanisms. That is essentially what the fix does, bar a necessary rework to treat this balance asynchronously to avoid our prior db timeout issue. While we were at it we also made a number of back-end performance enhancements so that we function more acceptably during these events: these speed-ups were basically overdue code / method tidies following a large change we made a little while back.
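To save eyeballing the full output, here is a minimal sketch (not Rockstor code, just an illustrative shell helper) that picks out any pool member whose Unallocated value has gone negative, i.e. the drive currently being drained. The function name `drained_devices` is my own; it assumes the usual `btrfs dev usage` layout where each `/dev/...` header line is followed by an indented `Unallocated:` line.

```shell
# Hypothetical helper: flag pool members with a negative Unallocated value.
# Pipe the earlier command's output into it, run as root, for example:
#   btrfs dev usage /mnt2/Stograge_Pool | drained_devices
drained_devices() {
  awk '
    /^\/dev\//                        { dev = $1; sub(/,$/, "", dev) }  # remember current device header
    $1 == "Unallocated:" && $2 ~ /^-/ { print dev, "is being drained:", $2 }  # negative value = removal in progress
  '
}
```

Once that prints nothing, the internal balance has finished moving data off the departing drive.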
The referenced fix is a large change and will need some more testing, but it does look to be functioning as intended, so hopefully it shouldn’t be too much longer before its release.
Thanks for the report, and apologies we didn’t get this out prior to your ‘event’, but it is very much in the works. We just have to get 3.9.2-48 out first; then it’s our next goal, release-wise.
As for the ‘Support URL’, my understanding is that in its first incarnation it failed to reach sustainability and so has been mothballed. Perhaps once we have fixes in place such as those due for 3.9.2-49, and our pending openSUSE move (with consequently newer btrfs subsystems), we will be in a better position to bring the priority support site back online. Bit of a chicken-and-egg scenario really: while we have such apparent functional ‘holes’ people are less likely to ‘invest’ in a support program, yet as we approach a more viable product status, less support will be required anyway. Oh well.
Hope that helps, and let us know how it goes. Probably best to just leave it to its own devices, as the balance is most likely in progress and via the above command you should be able to confirm this. Once it’s finished, it should be business as usual on the Rockstor Web-UI front.
And thanks for helping to support Rockstor’s continued development via a stable channel subscription. Much appreciated.
Take note that due to your raid5 use and your relatively large disk array, this process may well take quite some time. Once it has finished you may want to consider converting the array to raid1 or raid10, as the btrfs parity raid levels of 5 and 6 are currently considered less robust and are notably less mature. See the btrfs wiki status page for a nice table on this.
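For reference, such a conversion is done with `btrfs balance start` and its convert filters. Rather than run anything directly, here is a small illustrative helper (my own, `suggest_convert` is a hypothetical name, not a Rockstor command) that just prints the command you would review and then run as root, once the removal has fully completed:

```shell
# Hypothetical helper: echo (not execute) the balance command that would
# convert a pool's data and metadata profiles to a new btrfs raid level.
suggest_convert() {
  pool_mount="$1"   # e.g. /mnt2/Stograge_Pool
  target="$2"       # e.g. raid1 or raid10
  echo "btrfs balance start -dconvert=${target} -mconvert=${target} ${pool_mount}"
}

suggest_convert /mnt2/Stograge_Pool raid1
# → btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/Stograge_Pool
```

The conversion itself is another full balance, so again expect it to take a long while on an array of your size; ideally do it via the Web-UI’s pool resize / raid-change facility rather than by hand, so Rockstor stays aware of the operation.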