@clink Thanks for the feedback.
Yes, ‘errors’ can cover a variety of situations; some will be blockers and others not. But the balance enacted on adding a device is not critical to its addition to the pool, in an add-then-remove scenario. It’s always tricky when there are errors, given they can affect the pool in so many ways.
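For context, the add and the follow-up balance are two separate steps under the hood, which is why a failed balance doesn’t undo the addition. Purely as a rough sketch (the pool mount point and device name below are hypothetical):

```
# Add the new device to the pool (assumes the pool is mounted at /mnt2/mypool).
btrfs device add /dev/sdX /mnt2/mypool

# Spread existing data/metadata across all members; if this step fails or is
# cancelled, the new device is still a pool member, just less evenly used.
# (--full-balance skips the interactive warning in recent btrfs-progs.)
btrfs balance start --full-balance /mnt2/mypool
```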
Thanks for sharing your findings and your eventual workaround, much appreciated; it can serve as a guide for others facing similar dead ends re options within the Web-UI.
Definitely. Fancy putting in a pull request? Unfortunately it’s a non-trivial feature that requires extensive testing across all the various drive / btrfs raid level permutations, so it will take quite some attention to get sorted. However, from your report it looks like the Web-UI ended up sorting itself out in the end, post the replace command-line intervention. So that’s good to know.
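For anyone else landing here, the command-line replace involved looks roughly like the following (device names and mount point are hypothetical, and your exact invocation may differ):

```
# Substitute an existing pool member with a new device
# (pool assumed mounted at /mnt2/mypool).
btrfs replace start /dev/old-disk /dev/new-disk /mnt2/mypool

# The replace runs in the background; check on progress with:
btrfs replace status /mnt2/mypool
```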
Glad you’ve got this sorted and thanks again for the feedback. We are on shifting sands with all our dependencies: btrfs is constantly being updated, and pools in poor health are very difficult to predict re capabilities. The balance enacted post Web-UI drive addition is non-critical; if it fails it will generally mean the data is just not spread out evenly. But the ‘internal’ balance enacted on a drive removal is quite a different animal: it is auto-initiated by the filesystem itself (with a drive add we just run a balance afterwards, as that is what most folks expect). And this integrated drive-removal balance also doesn’t show up in btrfs balance status commands; we infer its activity to try and present it similarly within the Web-UI.
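As a rough illustration of that difference (pool mount point hypothetical): the relocation triggered by a removal is invisible to the usual balance query, so per-device usage is what we watch instead:

```
# Removal triggers its own internal relocation of data off the device:
btrfs device remove /dev/sdX /mnt2/mypool

# While that runs (from another terminal), a regular balance query
# typically reports nothing, e.g. "No balance found on '/mnt2/mypool'":
btrfs balance status /mnt2/mypool

# But per-device allocation shows the outgoing drive steadily emptying:
btrfs device usage /mnt2/mypool
```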
Re the re-size required after a replace: yes, that’s a regular question on the btrfs mailing list. It’s known behaviour currently, and when we do implement a replace function we should either just do the resize automatically, or add a tick box to ask for it, or something along those lines.
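For the record, the usual command-line fix after replacing onto a larger disk is along these lines (pool mount point hypothetical):

```
# Find the devid of the newly substituted (larger) device:
btrfs filesystem show /mnt2/mypool

# Grow that device's slice of the filesystem to use the whole disk:
btrfs filesystem resize <devid>:max /mnt2/mypool
```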
Hope that helps and thanks again for the report.