"Houston we've had a problem" after testing FreeNAS and NAS4Free (XigmaNAS) with ZFS in between

On the same machine:
1. Rockstor with btrfs (one SSD for the OS and 3 SSDs for testing pools).
Found that with three disks, btrfs RAID1 is not really RAID1 but closer to RAID1E (only two copies of each block, spread across the disks)…

2. Therefore tested FreeNAS with the same SSDs and the ZFS filesystem (because ZFS should offer an N-way mirror across more than 2 disks).

3. After that, also tested NAS4Free (XigmaNAS) with the same SSDs and ZFS.

4. Back to Rockstor on the same server.
Wiped all disks, because they are not usable otherwise.
Created pools and shares.

After that, nearly every action on the shares runs into a "Houston we've had a problem" message. The traceback (most recent call last) is always empty.

Rebooted between deleting all shares and pools… always the same picture.

Is there a known problem with disks formerly used for ZFS? Is wiping the whole disk not enough?

I'm new here, so take this with a grain of salt, but I just pulled 7x 3TB WD Reds from my FreeNAS server to test Rockstor as well.

During the drive setup I clicked the little “wipe drive” box and everything has been working flawlessly since.

@TB-UB Wiping the whole disk should be fine, as long as you wipe the partition tables as well.
You can try:

dd if=/dev/zero of=/dev/<DEVICE> bs=1M count=1

Where <DEVICE> is your disk device. This will zero the first 1 MiB of the disk, where the partition tables are stored.
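One caveat, which may explain why wiped ex-ZFS disks still cause trouble: ZFS stores two of its four vdev labels in the last 512 KiB of the disk (the GPT backup header also lives at the end), so zeroing only the first 1 MiB can leave stale ZFS signatures behind. A sketch of wiping both ends; it runs against a sparse file here so it is safe to copy-paste, and on a real disk you would set DEV to /dev/<DEVICE> instead:

```shell
# Using a sparse file instead of a real disk so this demo is non-destructive.
DEV=/tmp/fake-disk.img
truncate -s 64M "$DEV"

# Simulate a stale ZFS label near the end of the "disk".
printf 'FAKE_ZFS_LABEL' | dd of="$DEV" bs=1 seek=$((64*1024*1024 - 16)) conv=notrunc 2>/dev/null

SIZE=$(stat -c%s "$DEV")

# Zero the first 1 MiB (partition tables + first two ZFS labels).
dd if=/dev/zero of="$DEV" bs=1M count=1 conv=notrunc 2>/dev/null

# Zero the last 1 MiB (last two ZFS labels + GPT backup header).
dd if=/dev/zero of="$DEV" bs=1M count=1 seek=$((SIZE/1048576 - 1)) conv=notrunc 2>/dev/null
```

Alternatively, `wipefs -a /dev/<DEVICE>` (from util-linux) removes all known filesystem and RAID signatures in one step, including `zfs_member`, without having to compute offsets by hand.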

Rockstor doesn't play nicely with disks that were formatted for another purpose: it's designed to use whole-disk btrfs and create subvolumes from there, not to work at the partition level.