Yes, that one is on me. I've only had one cup of coffee this morning. lol
Actually, multipath tools are for when you have remote drives (iSCSI or SAN) and multiple paths to those drives. However, you are correct that ZFS installs the multipath tools as a dependency.
If I remember correctly, that is a misconfigured multipathd service. When it is configured with the UUIDs of the drives, you should only see a single device at the OS level. But it has been a few years since I configured multipathing. There is a story in there about getting it to work in the second-stage initial RAM disk of Linux, so the system could not only boot from the multipath device but also detect which data center it was in, and if it was in the DR data center, boot from the DR copy of the SAN. lol
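For anyone hitting the duplicate-drive symptom, the sketch below is roughly what I mean. Note that `/etc/multipath.conf` actually keys devices by WWID rather than filesystem UUID; the WWID, alias, and blacklisted device name here are placeholders, so substitute your own values (the WWID comes from something like `/lib/udev/scsi_id -g -u /dev/sdX`):

```
# /etc/multipath.conf -- minimal sketch, placeholder values only

defaults {
        user_friendly_names yes
        find_multipaths     yes
}

blacklist {
        # Keep local, non-SAN disks out of multipathd so they are not claimed
        devnode "^sda$"
}

multipaths {
        multipath {
                # Placeholder WWID -- replace with the real ID of your LUN
                wwid   36001405abcdef1234567890abcdef123
                alias  san_lun0
        }
}
```

After restarting multipathd, `multipath -ll` should show a single `/dev/mapper/san_lun0` device with each physical path listed under it, and that one device is what you point the filesystem at instead of the individual sdX entries.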
There is; however, you can recover the pool even if it was not exported. I've had to do that a couple of times.
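From memory, the recovery is roughly this (the pool name `tank` is just a placeholder):

```
# Scan for pools that are importable, including ones that were never cleanly exported
zpool import

# Force the import if the pool still thinks it belongs to the old (dead) host;
# -d scans stable device IDs instead of sdX names
zpool import -d /dev/disk/by-id -f tank
```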
This was one of the reasons I was looking at the migration before it failed. One of my pools is at 90% and causing issues. The existing system could not handle any more drives. The new one I ordered will hold more drives, and I've ordered 4 TB SAS drives as the primaries to create the new BTRFS pool on. The old system had 3 TB SATA drives.
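For reference, the quickest check I know for how close a pool is to that wall:

```
# Quick capacity check -- ZFS write performance drops off badly as pools approach full
zpool list -o name,size,allocated,free,capacity,health
```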
Ok, what is the current status of 4? Alpha, Beta, Stable??
I may be giving Rockstor a workout. My current HOME setup is:
FreeNAS (now dead) with two pools of 3 x 3 TB drives: Pool #1 is a RAIDZ1 (backups of home directories and persistent storage for the cloud) and Pool #2 is a ZFS stripe (media/video files).
Linux NAS server with one pool of 6 x 200 GB SSDs (RAIDZ1), connected via 40 Gb InfiniBand to the server below.
Blade server with 4 blades, each with 24 cores and 128 GB of RAM, running CentOS 7 with KVM/QEMU VMs whose boot/drive images are on the Linux NAS. There are a handful of infrastructure VMs, and the other VMs run Docker in a swarm configuration (a 20-node Docker Swarm in total) with persistent storage on the FreeNAS server. (Most of which is down due to FreeNAS dying.)