@ScottPSilver A follow up on:
There have been few repair improvements since then, in either the now much older stable channel or the RC-status testing channel:
But do make sure you at least apply all OS updates since then.
A response to this will definitely help folks here help you.
Also note: if you have lost a drive (as your other forum thread suggests, and as `btrfs fi show` should have indicated), you would just need to add the degraded mount option, such as the Web-UI would normally walk you through:
https://rockstor.com/docs/data_loss.html#web-ui-and-data-integrity-threat-monitoring
And reboot: if a rw mount is then achieved, you can remove the missing drive via the Web-UI or the command line to return your Pool to normal function - if space allows for a disk removal; otherwise it's a replace at the command line.
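As a rough sketch of that command-line path (the device names and the `/mnt2/mypool` mount point below are placeholders for your actual pool - adapt before running anything):

```shell
# Confirm which pool member is missing:
btrfs fi show

# Mount degraded (rw if possible) at the Rockstor-style mount point:
mount -o degraded /dev/sda /mnt2/mypool

# If a rw mount succeeds and space allows, drop the missing member:
btrfs device remove missing /mnt2/mypool

# Otherwise replace it with a fresh drive (here /dev/sdd stands in for
# the new disk, and 2 for the missing member's devid):
btrfs replace start 2 /dev/sdd /mnt2/mypool
btrfs replace status /mnt2/mypool
```
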
Btrfs-raid5 normally tolerates one drive failure, and btrfs-raid6 two drives if you are lucky. This all assumes all data/metadata on the pool is consistently at that raid level - we don’t have a Web-UI warning about that just yet. But inconsistency can happen in pools that have been incompletely converted from one raid level to another: if one just switches raid level at the command line without a full balance, only new data is written with the new btrfs profile. Rockstor’s Web-UI always does full balances: slower, but it guards against this caveat.
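To illustrate what a full convert balance looks like at the command line (pool path is a placeholder, and raid1 here is just an example target profile):

```shell
# A convert balance rewrites ALL existing chunks to the target profile,
# not just newly written data:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/mypool

# Afterwards, verify that no chunks remain at the old profile:
btrfs fi usage /mnt2/mypool
```
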
So let’s have the full response to that command request!
You are saying there were no missing disks indicated then! Your post looks incomplete - sometimes there is info before that first line you posted. We normally catch that in our Web-UI command parsing though.
Your other forum thread reference!
Your pool is likely poorly then, as no longer getting even a ro mount with the degraded option means you need mount options that are likely outside what Rockstor’s Web-UI will currently allow (newer options have been introduced of late). Take a look at this section again:
https://rockstor.com/docs/interface/storage/disks.html#import-unwell-pool
as a guide to how to mount a pool that is unwell; in that case it’s for an import, but it still holds. Plus a mount attempt at the command line can help folks know the nature of the mount failure. Be sure to use the indicated mount point though, as then the Web-UI can help thereafter. This way you can mount a pool with any recovery-orientated mount options, and work around our current restrictions on these.
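A minimal sketch of such an attempt (device and `/mnt2/mypool` are placeholders; `rescue=usebackuproot` is one of the newer recovery options, available on recent kernels):

```shell
# Try read-only first, with recovery-orientated options, at the
# Rockstor-style mount point so the Web-UI can take over afterwards:
mount -o ro,degraded,rescue=usebackuproot /dev/sda /mnt2/mypool

# If the mount fails, the kernel log usually states why - paste this
# output in your reply:
dmesg | tail -n 30
```
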
The following is a good article.
As your system did once respond to degraded,ro for a while and then failed thereafter, it may just be that you have a compound failure. The btrfs-parity raid levels of 5 & 6 were defaulted to ro for quite some time in our openSUSE period - we inherited this from our upstream, who deemed them too flaky. But you are likely now using a backported kernel and filesystems repo: can you detail if that is the case?
You are a little off-track here. You have a backported kernel (and hopefully the filesystems repo as per our HowTo), and you have run a degraded btrfs-parity pool until it encountered yet more problems.
If space is more important than robustness, but it’s still a drag to repair/recover from backups, consider our now Web-UI supported mixed raid approach of say:
- data btrfs-raid6
- metadata raid1c4
It is far less susceptible to the known issues that plague the parity btrfs-raid levels of 5 & 6: which should simply never have been merged, as they seem not to fully conform to the rest of btrfs. See the last entry in our kernel backport HowTo:
https://rockstor.com/docs/howtos/stable_kernel_backport.html#btrfs-mixed-raid-levels
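For reference, converting an existing pool to that mixed profile looks roughly like this at the command line (pool path is a placeholder; raid1c4 needs kernel 5.5 or newer, hence the backport requirement):

```shell
# Convert data to raid6 and metadata to raid1c4 in one full balance:
btrfs balance start -dconvert=raid6 -mconvert=raid1c4 /mnt2/mypool

# Confirm both profiles took effect across the whole pool:
btrfs fi usage /mnt2/mypool
```
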
There is work on-going upstream to update the nature of the btrfs parity raid levels: and this will be most welcome.
Hope that helps, at least with some context.