Kernel panic on mount

Hi,

I have two BTRFS pools, both in the same server. One is working fine.

The second one causes a kernel panic any time it tries to mount.

That happens either when Rockstor tries to mount it on boot, or if I try to mount it manually… it thinks for about 20 seconds, then the kernel panics.

What are the troubleshooting steps from here?

Ultimately this may have to go to the btrfs mailing list. But a good start is to post the following details:

  1. The pool profile: how many disks, what RAID type, current usage, etc. btrfs fi show gives you most of that info.

  2. The stack trace displayed by the panic itself.

  3. The Rockstor version and kernel version.
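If it helps, the details above can be gathered in one pass with something like the following sketch, run as root on the affected box (the rpm query for the Rockstor version is an assumption based on Rockstor being an RPM-based distro; adjust to taste):

```shell
btrfs fi show          # pool label, device count, per-device usage
uname -r               # running kernel version
rpm -q rockstor        # Rockstor package version (assumes an RPM-based install)
dmesg | tail -n 50     # any btrfs messages already logged this boot
```

Running btrfs fi df against a mount point would also show the RAID profile per block group, but that needs the pool mounted, which is the problem here.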

It's RAID 6, 12 devices. I was removing one of the devices, and I rebooted (via a graceful shutdown - which I have done previously; it just stops the removal process and you can restart it after the reboot).

Label: 'Bottom'  uuid: 80fdc073-9d61-40f6-90af-a3174877aa04
        Total devices 12 FS bytes used 42.88TiB
        devid    1 size 5.46TiB used 4.50TiB path /dev/sdb
        devid    2 size 5.46TiB used 4.50TiB path /dev/sdc
        devid    3 size 5.46TiB used 3.10TiB path /dev/sdd
        devid    4 size 5.46TiB used 4.50TiB path /dev/sde
        devid    5 size 5.46TiB used 4.50TiB path /dev/sdf
        devid    6 size 5.46TiB used 4.50TiB path /dev/sdg
        devid    7 size 5.46TiB used 4.50TiB path /dev/sdh
        devid    8 size 5.46TiB used 4.50TiB path /dev/sdi
        devid    9 size 5.46TiB used 4.50TiB path /dev/sdj
        devid   10 size 5.46TiB used 4.50TiB path /dev/sdk
        devid   11 size 5.46TiB used 4.50TiB path /dev/sdl
        devid   12 size 5.46TiB used 4.50TiB path /dev/sdm

It is a fresh install of 3.8-13 (so whatever kernel version ships with that).

Is there a graceful way to capture what the kernel panic says, other than taking a photo of the monitor?
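One option is netconsole, which streams kernel log output (including the panic trace) over UDP to a second machine, so nothing is lost when the box dies. A minimal sketch, assuming placeholder IPs, interface name, and MAC address that you would replace with your own:

```shell
# On a second machine, listen for the incoming log stream:
#   nc -u -l 6666
# On the Rockstor box, load netconsole BEFORE attempting the mount.
# Parameter format: local_port@local_ip/interface,remote_port@remote_ip/remote_mac
modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/aa:bb:cc:dd:ee:ff
```

Alternatively, configuring kdump captures a full crash dump, but netconsole is usually quicker to set up when all you want is the stack trace.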

Just some additional info - I had the idea to mount it RO… and that works OK (which is obviously a HUGE relief, as it means I can get all my stuff off it at least!)
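For reference, the read-only mount can be done from the command line like this (the mount point is just an example; any member device of the pool works):

```shell
mkdir -p /mnt/bottom-ro
mount -o ro /dev/sdb /mnt/bottom-ro   # read-only, so no log replay or resumed removal is attempted
```

On kernels of this vintage there is also a recovery mount option (later renamed usebackuproot) that tries older tree roots; it is sometimes suggested in combination with ro before risking a RW mount, but whether it applies here is an open question.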

I ran a read-only check on it while it was unmounted - does this give you any hints?

[root@rockstor ~]# btrfs check --readonly /dev/sdb
Checking filesystem on /dev/sdb
UUID: 80fdc073-9d61-40f6-90af-a3174877aa04
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
checking quota groups
Ignoring qgroup relation key 258
Ignoring qgroup relation key 5699
Ignoring qgroup relation key 567172078071971841
Ignoring qgroup relation key 567172078071971842
Counts for qgroup id: 258 are different
our:            referenced 46821300232192 referenced compressed 46821300232192
disk:           referenced 26309931741184 referenced compressed 26309931741184
diff:           referenced 20511368491008 referenced compressed 20511368491008
our:            exclusive 46821300232192 exclusive compressed 46821300232192
disk:           exclusive 26309931741184 exclusive compressed 26309931741184
diff:           exclusive 20511368491008 exclusive compressed 20511368491008
Counts for qgroup id: 5699 are different
our:            referenced 261740437504 referenced compressed 261740437504
disk:           referenced 207650816 referenced compressed 207650816
diff:           referenced 261532786688 referenced compressed 261532786688
our:            exclusive 261740437504 exclusive compressed 261740437504
disk:           exclusive 207650816 exclusive compressed 207650816
diff:           exclusive 261532786688 exclusive compressed 261532786688
found 47140756672831 bytes used err is 0
total csum bytes: 45983804656
total tree bytes: 53285126144
total fs tree bytes: 342982656
total extent tree bytes: 539246592
btree space waste bytes: 5886386880
file data blocks allocated: 47098565607424
 referenced 47098565558272
extent buffer leak: start 73027437395968 len 16384
extent buffer leak: start 73027468640256 len 16384
extent buffer leak: start 74604983697408 len 16384

Any ideas on how to mount it RW without a kernel panic?