@chicago First off, a very belated welcome to the Rockstor community, and thanks for helping to support Rockstor development via your subscription to the stable updates channel.
The mount process is often where ‘problems’ show up in the most obvious way; you may well find tell-tale signs further back in your logs that could help with the diagnosis. And often our stable channel subscribers only end up rebooting upon new stable channel releases, such as the one we have just had. The btrfs mailing list thread you link to, started by Daniel
(sorry, I don’t know Daniel’s user name on this forum), concerned, in part, a hardware disk failure on Raid5/6 (not Raid10 as you report), i.e.:
quoting Roman Mamedov’s reply in that thread:
"Also one of your disks or cables is failing (was /dev/sde on that boot, but may
get a different index next boot), check SMART data for it and replace."
[ 21.230919] BTRFS info (device sdf): bdev /dev/sde errs: wr 402545, rd 234683174, flush 194501, corrupt 0, gen 0
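If it helps, the SMART check Roman suggests, and btrfs’s own per-device error counters (the same wr/rd/flush/corrupt/gen figures quoted in that kernel log line), can be inspected along these lines. A sketch only: the device name and pool mount point below are examples, substitute your own.

```shell
# SMART health and attribute summary for the suspect drive
# (example device name; it may change index between boots, as Roman notes):
smartctl -a /dev/sde

# btrfs's cumulative per-device error counters for a mounted pool
# (these are the wr/rd/flush/corrupt/gen counters from the kernel log;
# the mount point here is an example):
btrfs device stats /mnt2/your-pool-name
```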
This is obviously a tense time, but I do find those comments a tad inflammatory / accusatory, shall we say. Also:
“failed to read the system array: -5” is not ‘the same’ as “failed to read block groups: -5”, though you do acknowledge this earlier in your post here, as you do in your recent linux-btrfs mailing list post. Plus btrfs raid1/10 are entirely different from the parity raids, btrfs raid5/6, so hopefully this bodes well for your data (assuming no backups). Another note on language use: it is perplexing that you and Daniel share the “… same Rockstor install …”. Presumably you meant version. Plus you haven’t yet confirmed that your pool or its data are actually ‘lost’.
Linking to your linux-btrfs post for context, in the hope that we can avoid duplicating effort:
As a hopefully helpful comment on your linux-btrfs posting: I think it would be as well for you to report there the exact steps you took when attempting a manual mount, as there you quote your own log entries thus:
[ 716.902506] BTRFS error (device sdb): failed to read the system array: -5
[ 716.918284] BTRFS error (device sdb): open_ctree failed
[ 717.004162] BTRFS warning (device sdb): 'recovery' is deprecated, use 'usebackuproot' instead
[ 717.004165] BTRFS info (device sdb): trying to use backup root at mount time
[ 717.004167] BTRFS info (device sdb): disk space caching is enabled
[ 717.004168] BTRFS info (device sdb): has skinny extents
[ 717.005673] BTRFS error (device sdb): failed to read the system array: -5
[ 717.020248] BTRFS error (device sdb): open_ctree failed
which indicates the use of a ‘recovery’ mount option. This deprecated mount option is NOT one that Rockstor exercises, or even allows, via the UI. I would consequently suggest that, at least when talking to the canonical authors of the filesystem you are experiencing trouble with, you report ‘the whole story’. Similarly, in that same linux-btrfs mailing list thread (and not here) we have:
“I had also added a new 6TB disk a few days ago but I’m not sure if the balance finished as it locked up sometime today when I was at work. Any ideas how I can recover?”
accompanied by your other log entry excerpt:
… [ 969.375296] BUG: unable to handle kernel NULL pointer dereference
[ 969.376583] IP: can_overcommit+0x1d/0x110 [btrfs] …
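For reference, the modern equivalent of the deprecated ‘recovery’ mount attempt quoted earlier would be along the following lines. This is a sketch only, with example device and mount point names taken from or standing in for those in the quoted logs; it is not a recommendation to run anything before the btrfs experts have weighed in.

```shell
# Read-only mount attempts are the safest first step on a damaged pool;
# 'usebackuproot' replaces the deprecated 'recovery' option:
mkdir -p /mnt/repair
mount -o ro,usebackuproot /dev/sdb /mnt/repair

# If no mount succeeds, btrfs restore can list recoverable files
# without writing to the pool (-D = dry run; target path is an example):
btrfs restore -D /dev/sdb /tmp/restore-target
```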
All core Rockstor developers are obviously avid readers of the linux-btrfs mailing list. Please take more care in the future with premature attribution of blame, or casting aspersions. We all make mistakes, and the important thing is to work together to try and make better software / file systems (which are hard); that is what I hope we are all trying to do here. Belittling the work of all Rockstor developers is probably not the best route to gaining assistance where and when you might need it. We have to work together to succeed.
Let’s hope the btrfs experts can shed some light on your experience with mounting your pool.