@Henning Welcome to the Rockstor community and thanks for helping to support Rockstor development via your stable channel subscription from the Update Channels.
This is the result of a very recent ‘value add’ on stable channel installs: it came about as of the Rockstor 3.8-14 stable channel release and was initially trialled in testing channel updates from version 3.8-13.15 onwards:
So hopefully that explains the change, i.e. stable channel subscriptions are now directed to a proper ticketing system (osTicket based, as it goes), while testing channel updates are directed to the forum. This is entirely intended to help, not hinder, and of course all are welcome on the public forum, but it is not always appropriate, especially for commercial concerns, to be pasting log entries and the like in public.
I think in the case of this forum thread it would have been beneficial to include additional information, such as excerpts from the logs, so that others here could offer more informed guidance. I did see your post when it first arrived, but I’m personally outclassed by many of our generous and active forum members on the low-down of btrfs, especially when it comes to disaster scenarios. So I tend to leave such threads to those more experienced in these areas; with such a large project, one is best advised to pick one’s battles. There are some guides within the official documentation that address some of these situations, such as Data loss Prevention and Recovery in Rockstor.
So at a guess I would say that somehow you got caught between the two reporting systems. Either way, as @f_l_a kindly and eloquently points out, there is no promise of support / response time beyond that of the incident plans, which are in themselves more than many open source projects have anyway.
To the nub of your issue, as you succinctly put it:
When referring to drive removal notification.
Exactly, and few would argue that this is not up there with appliance NAS features on the importance scale. However, an important point here is that we are wholly dependent on btrfs’s current (but very rapidly improving) facilities on this one. Although we may well be able to poll logs and the like, such solutions are generally unattractive to those who want to do things right, or wait until they can be done right, and so on occasion these facilities are unavailable for far longer than many would like. I would hazard a guess this includes the majority of the main contributors to Rockstor, and to btrfs for that matter. In this vein, the indication of ‘detached disk’ that you did see in the Disks menu is the result of polling, and in turn this is another technical debt that needs to be addressed (there is a rough sketch of that kind of poll further below); but in this case it did correctly represent the disk status. There are currently ongoing discussions within the linux-btrfs mailing list on how best to address / surface user-level notification of such things as degraded volume status, and as soon as such things are resolved there, and with the help of our contributors and stable channel subscribers such as yourself, Rockstor will be looking to make best use of these facilities. While in this area, please see the following forum thread for some helpful links along these lines:
where others have also discussed this shortcoming.
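For anyone curious what that polling approach amounts to in practice, here is a minimal sketch of the general idea, in Python and not Rockstor’s actual code: record the serial numbers of the drives last seen and flag any serial that later disappears from lsblk’s output as detached. The function names and the use of lsblk here are my own illustration, so treat it as an assumption about the technique rather than how Rockstor itself implements it.

```python
# Hypothetical sketch of a disk-presence poll: compare the serials lsblk
# reports now against those recorded earlier, and treat any missing serial
# as a detached drive. Not Rockstor's implementation; names are illustrative.
import subprocess


def current_serials():
    """Return the set of whole-device serial numbers lsblk can see right now."""
    out = subprocess.run(
        ["lsblk", "-d", "-n", "-o", "SERIAL"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}


def poll_for_detached(previously_seen):
    """Serials we knew about that no longer appear are reported as detached."""
    return previously_seen - current_serials()


if __name__ == "__main__":
    known = current_serials()            # snapshot taken at some earlier time
    for serial in poll_for_detached(known):
        print(f"Drive with serial {serial} appears detached")
```

The weakness, of course, is exactly the one described above: a poll like this only tells you a device has gone, not why, and it says nothing about the pool’s degraded status, which is why proper btrfs-level notification remains the preferred long-term route.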
Another issue opened recently on this, which I have just updated in light of your posts, is:
Of note also is that Rockstor has always recommended against the parity profiles within btrfs (raid5 and raid6), and although this was revised on the btrfs wiki some time ago, Rockstor’s documentation and the advice given by the leading developers here in the forum remained sceptical of these levels, and the Rockstor docs section on Redundancy profiles maintained its original advice against using them. However, with another very recent change to the official status of these raid levels, we have updated our docs’ prior warnings and added a warning to the UI as well against using the raid5 and raid6 levels:
Previously the pool creation raid level selector advised referring to the above docs when considering the raid level.
These changes were a little delayed (by a few days), see:
But they were nevertheless timely on the playing-it-safe side.
I hope that this helps in understanding how all this comes together (or occasionally doesn’t) and fulfils the personal obligation I feel towards open development practice. All of this takes a long time, and all of our failures, as well as our successes, are out in the open; my belief, however, is that such a development model is the only way to go and ultimately leads to a better product.
OK looks like @Suman has also replied here just now.