Support site is down - error when removing disk

Brief description of the problem

Rockstor 3.9.2-47 Stable
Had an error pop up during the removal process for a disk. The Support URL refuses the connection.

Detailed step by step instructions to reproduce the problem

Trying to follow the 3 steps provided on the error screen. Step 3 is to create a support ticket by clicking on the link, but the connection is refused.
The pre-disk-removal data move appears to be continuing as I type this, despite the UI error.

Web-UI screenshot

Original Error:

[screenshot]
Extra pics in replies if it lets me

Error Traceback provided on the Web-UI

Yes, the traceback provided appears to be indented 4 spaces.

Clicking on link (https):

[screenshot]

@gaspode Welcome to the Rockstor community.

Yes, this is a known Web-UI bug, but as per your:

It's confined to the Web-UI and its consequent db activities timing out. Take care not to attempt any more disk add / remove procedures and in time things should correct themselves: the Web-UI will, once the process is complete, sort itself out. During this time the Web-UI may well be noticeably laggy.

The issue for this interface bug is:

and we have a pending pull request that addresses this bug here:

We are due shortly to release 3.9.2-48, and the above bugfix for your issue is planned for the 3.9.2-49 release thereafter, as it's now next in line. The fix also adds UI feedback on the progress of this disk removal via an additional Allocated% column (and a Btrfs DevID column) in the pool details page's disks section, but for the time being you can view your current progress via the following command, run as the root user:

btrfs dev usage /mnt2/Stograge_Pool

Executed from time to time during the internal balance event, this will show the data being moved off the 'removal in progress' drive and not the others. Then, once done, the drive will no longer be listed as part of the pool and Rockstor's Web-UI, after a page refresh, should function as normal.

You should see a negative unallocated value in the output of that command. This is our current best indicator, and a factor in the difficulty of implementing this feature: this type of 'internal' balance does not currently show up in regular 'btrfs balance status' reports, which we use for all other balance operations, so we had to implement our own system and tie it into our existing user interface feedback mechanisms. That is essentially what that fix does, bar a necessary rework to treat this balance asynchronously (to avoid our prior db timeout issue) and, while we were at it, a number of back-end performance enhancements so that we function more acceptably during these events: these speed-ups were basically overdue code / method tidies following a large change we did a little while back.
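By way of a purely illustrative sketch (the device name, ID and figures below are made up; your pool will differ), the 'removal in progress' device is reported with a zeroed device size, which is what produces the negative Unallocated figure:

btrfs dev usage /mnt2/Stograge_Pool
/dev/sdb, ID: 2
   Device size:               0.00B
   Data,RAID5:              1.21TiB
   Metadata,RAID5:          1.00GiB
   Unallocated:            -1.21TiB

That Unallocated value should shrink in magnitude towards zero as the balance moves data off the departing device.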

The referenced fix is a large change and will need some more testing, but it does look to be functioning as intended, so hopefully it shouldn't be too much longer before its release.

Thanks for the report and apologies we didn't get this out prior to your 'event', but it is very much in the works. We just have to get 3.9.2-48 out first, then it's our next goal release-wise.

As for the 'Support URL', my understanding is that in its first incarnation it failed to reach sustainability and so has been mothballed. Perhaps once we have fixes in place such as is due for 3.9.2-49, and our pending openSUSE move (with consequently newer btrfs subsystems), we will be in a better position to bring the priority support site back online. Bit of a chicken and egg scenario really, as while we have such apparent functional 'holes' people are less likely to 'invest' in a support program, yet as we approach a more viable product status less support will be required anyway. Oh well.

Hope that helps and let us know how it goes. Probably best to just leave it to its own devices, as the balance is most likely in progress and via the above command you should be able to confirm this. Once it's finished it should be business as usual on the Rockstor Web-UI front.

And thanks for helping to support Rockstor's continued development via a stable channel subscription. Much appreciated.

Take note that due to your raid5 use and your relatively large disk array this process may well take quite some time. Once this process has finished you may want to consider converting the array to use raid1 or raid10 as the btrfs parity raid levels of 5 and 6 are considered less robust currently and are notably less mature. See the btrfs wiki status page for a nice table on this.
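For reference, such a conversion is done via a balance with convert filters. A minimal sketch, assuming your pool remains mounted at /mnt2/Stograge_Pool as above, raid10 is the chosen target, and the current removal has fully completed first (run as root):

btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt2/Stograge_Pool

Within Rockstor itself the equivalent is normally driven from the Web-UI's pool resize / raid change facility rather than by hand.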


Thanks so much for the in-depth reply. I was watching the progress from the shell before I posted using

watch -n 10 btrfs fi show

and as you mention, all the errors just appear to be superficial in the UI. The drive replacement worked fine (it wasn't needed, I just had a spare 8TB drive lying around, and it would be a shame to waste it).

I'm looking forward to trying out the new openSUSE-based Rockstor when it's out.

Re: The support site - that's fine, but can I suggest that the link in the UI is either changed/removed (might be a bit hard), or, as there appears to be a Windows IIS service running on that server, that a redirect, or even a static web page stating that you should head on over to the forum instead, be put in place; that would keep newbies like me from getting confused.

Re: RAID5 To quote Austin "Danger" Powers, "Danger" is my middle name. :wink:

@gaspode Glad your replacement worked out in the end; bit of a bad bug from the user point of view, that one, but it should be sorted in stable channel releases (3.9.2-49).

Good, and I'm looking forward to getting it out. Thanks for the encouragement. Change is often difficult but I think in this case it's necessary.

Agreed. I think it would be good to get this facility back up and running, but it's not currently within my responsibilities in the Rockstor project. This may change going forward, but it's not something I currently have full say over.

That's quite an embarrassment. Rockstor is currently, to my knowledge, pretty much self-hosting. The Jenkins CI system runs as a Rock-on within a Rockstor instance, as does this forum, so it's a surprise to see the use of IIS. We did have an issue way back when we transitioned from supporting AD via old-style winbind to new-style sssd, but our old-hat base of CentOS let us down and we had to transition back to winbind again. Personally I was opposed to this move backwards, but hopefully once we are fully transitioned to openSUSE, with its generally newer packages, we can endeavour to move back to the more modern sssd approach.

Anyway, at around that time an in-house Windows domain controller was set up for testing Rockstor AD integration. That was the only use of a non-Linux OS that I was aware of in the Rockstor development pipeline. I am currently becoming more involved in the infrastructure side of 'serving' Rockstor, so may well be able to deal with this aspect personally. I'm just not quite aware of some of the back-end infrastructure as yet, and hopefully in time the priority support ticket arrangement can be brought back into being. All depends on how things go with our current openSUSE move, which is taking quite a lot of additional attention. Once that's in place I'm hoping things will calm down again and we can settle into a more steady release / support capability.

Given btrfs can have one raid level for its metadata and another for its data, I'm looking forward to being able to support such an arrangement within Rockstor. Currently it's hardwired to use mostly the same raid for both. But once that is the case I'd like to introduce a raid5/6 data, raid1 metadata option, as that is considered to be a more practical way to use the less mature btrfs parity raid levels. But alas I'm currently a little preoccupied with setting up some needed backend stuff to help where I can with Rockstor's infrastructure, development-wise. Hopefully I'll have news on these changes when there is something concrete to present.
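For the adventurous, btrfs itself can already do this split by hand, outside of Rockstor's Web-UI. A sketch, again assuming a pool mounted at /mnt2/Stograge_Pool and a raid5 data / raid1 metadata target (run as root):

btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt2/Stograge_Pool

But note that Rockstor's Web-UI may well not reflect or manage such a mixed-profile pool properly as yet, hence the above-mentioned hardwiring.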

Thanks again for your support and report / update, and hopefully post 3.9.2-49 this very poor, read disconcerting, UI behaviour will be a thing of the past.