On the same machine:
1. Rockstor with Btrfs (one SSD with the OS and 3 SSDs for testing pools)
Found that Btrfs RAID1 is not really RAID1 but closer to RAID1E: only two copies of each block, no matter how many disks are in the pool…
2. Therefore tested FreeNAS with the same SSDs and ZFS (because ZFS should offer a true N-way RAID1 for more than 2 disks; see the sketch after this list)
3. After that, also tested NAS4Free (XigmaNAS) with the same SSDs and ZFS.
4. Back to Rockstor on the same server.
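For reference, what I mean by N-way RAID1: a ZFS mirror vdev keeps a full copy on every disk in it, while Btrfs raid1 always keeps exactly two copies regardless of device count. A minimal sketch of the three-way mirror from a FreeNAS/XigmaNAS shell (da0/da1/da2 and the pool name are just placeholders for my three test SSDs):

    # create a pool whose single vdev is a 3-way mirror:
    # every block gets written to all three disks
    zpool create testpool mirror da0 da1 da2
    zpool status testpool   # shows one mirror vdev with three members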
Wiped all disks because they were not usable otherwise.
Created pools and shares.
After that, nearly every action on the shares ends in a “Houston, we’ve had a problem” message. The traceback (most recent call last) is always empty.
Rebooted in between, deleted all shares and pools and tried again… always the same picture.
Is there a known problem with disks that previously held ZFS? Is wiping the whole disk not enough?
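For completeness, a sketch of the more thorough wipe I have in mind, assuming leftover ZFS labels are the problem: ZFS writes two labels at the start and two at the end of each device, so zeroing only the beginning of a disk can leave the trailing labels behind. /dev/sdX is a placeholder for one of the test SSDs, and everything below destroys the disk contents and needs root:

    # remove all filesystem/RAID signatures wipefs knows about (should include zfs_member)
    wipefs -a /dev/sdX

    # belt and braces: ZFS also keeps two labels at the END of the device,
    # so zero the first and last 10 MiB as well
    SECTORS=$(blockdev --getsz /dev/sdX)          # device size in 512-byte sectors
    dd if=/dev/zero of=/dev/sdX bs=1M count=10    # start of the disk
    dd if=/dev/zero of=/dev/sdX bs=512 seek=$(( SECTORS - 20480 )) count=20480   # last 10 MiB

Alternatively, running zpool labelclear -f on each disk from the FreeNAS/XigmaNAS shell (with the BSD device names, e.g. /dev/ada0) should clear the ZFS labels before moving the disks back to Rockstor. Is something like this expected to be necessary, or should Rockstor's own wipe be enough?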