Importing an unlabeled btrfs RAID5 volume (pool!) fails

Morning Everybody,

Yesterday I found a minor bug when trying to import a btrfs RAID5 5-disk array into my new system. Another, similar array that had a label worked flawlessly, but the unlabeled one got successfully imported, mounted, and even displayed for a few seconds, then disappeared from the GUI while remaining mounted under /mnt2/none (none is how it got temporarily labeled).

Manually unmounting it, setting a label, and remounting did the trick and made it work. As I said, it's a minor annoyance, and we should all label our filesystems anyway, shouldn't we? :laughing:
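For anyone hitting the same thing, the workaround above might look roughly like this. This is a sketch, not from the original report: the device `/dev/sdb`, the label `mypool`, and the target mount point are placeholders you'd substitute for your own pool.

```shell
# Unmount the pool from the auto-generated "none" mount point.
umount /mnt2/none

# Set a label on any member device of the (now unmounted) btrfs pool.
# /dev/sdb and "mypool" are placeholders for your actual device and label.
btrfs filesystem label /dev/sdb mypool

# Remount under a mount point matching the new label.
mkdir -p /mnt2/mypool
mount /dev/sdb /mnt2/mypool
```

After a rescan/refresh, the GUI should then pick the pool up by its label rather than losing track of it.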

Oh, and I’m still on the latest stable; I can’t reboot to upgrade to the latest dev release until I finish migrating my data.

Edit: Sorry, not a volume, this is not ZFS any more, it’s a POOL…


@AngeleToR Thanks for another great find and report.
Just linking to your GitHub issue so that we can tie the two together and avoid accidental duplication.

In the above GitHub issue I have sketched out my initial reasoning, which I think explains the behaviour you describe; so this is now understood (but obviously buggy / inconsistent) behaviour. Thanks again for taking the trouble to report and detail this. There are, however, some pending changes that I think may need to be applied before we can approach this one (detailed in the GitHub issue).

I have in turn linked back to this forum post from your GitHub issue.

Well done for finding and sharing the workaround, by the way.