Please note that btrfs raid5/6, the parity raid variants, are not recommended for production use. That aside, quite a few improvements have been made in the current testing channel that improve Rockstor's behaviour in this regard, but a key one has yet to be released
as it is pending review. If that review goes well, it should be included in the next testing channel release.
The Disks table shown in your screen grab is as intended, i.e. the removed drive has been renamed to 'detached-<long-random-uuid>' and is correctly still associated with its prior pool.
The default behaviour of btrfs, which Rockstor doesn't change, is to refuse to mount the pool automatically until manual intervention, via the mount options, is applied. This differs from virtually all other raid implementations and is often discussed on the btrfs mailing list. This has also been addressed in part in the testing channel updates (by adding ro, degraded, etc. as possible mount options), along with an indication of the current mount state of all pools and shares; in your case the Pools page would show that your pool is no longer mounted.
Repair of raid5/6 is potentially problematic given its currently unstable nature, and Rockstor currently offers no UI components to assist with this, so you are going to have to repair this pool via the command line. Searches within this forum, or on the web generally, should yield the best course of action.
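As a rough sketch of what that command-line repair can look like (the device name /dev/sdb, the devid 2, and the mount point /mnt2/mypool are placeholders; check `btrfs filesystem show` output for your own values before running anything):

```shell
# Mount the degraded pool read-only first, to assess the damage.
# /dev/sdb stands in for any surviving member of the pool.
mount -o ro,degraded /dev/sdb /mnt2/mypool

# List the pool members; the missing device shows up with its devid.
btrfs filesystem show /mnt2/mypool

# If the data looks intact, remount writable and replace the missing
# device (devid 2 here is an example) with a new disk, /dev/sdd.
mount -o remount,rw,degraded /dev/sdb /mnt2/mypool
btrfs replace start 2 /dev/sdd /mnt2/mypool
btrfs replace status /mnt2/mypool
```

Note these commands rewrite pool metadata; only attempt them once you are confident the drive names match your system.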
I have been a FreeNAS user in the past, and shifted to OMV, but very poor support led me to Rockstor. Now I am very unsure about how to move ahead with Rockstor.
I use the NAS for some file storage; besides that it's mostly DLNA and Transmission. MiniDLNA didn't work as anticipated, and now this.
I believe RAID 6 is better than RAID 10, as RAID 10 is more about luck in the case of a 2-disk crash. Can Rockstor on RAID 6 be expected to be stable in future, like FreeNAS or OMV? Your comments led me to read up on the BTRFS RAID 6 issue, and it doesn't seem like a good idea to use RAID 6 with BTRFS.
But, http://email@example.com/msg66472.html says that a patch is around the corner. My install was 3.8.15, since my hardware had issues with the latest stable. I updated via the testing channel to 3.9.1-8 and it showed I was running on 4.12.4-1.el7.elrepo.x86_64. Does this mean that this is with the patch applied?
Would I get the pool back after adding a new disk? What would you suggest for my configuration? Yes, the data is very important. Or should I just go back to FreeNAS or OMV?
I would have responded sooner, but I didn't see this thread until just now.
It is possible to mount a pool created by Rockstor in OMV, but it isn't pretty to do, so if you are going back to OMV I'd suggest copying the data rather than trying to mount disks created with Rockstor in OMV. (Yes, I've done it, but I also upgraded the kernel and BTRFS on the OMV VM first before trying it, and there are some issues around the OMV UI not knowing how to handle a BTRFS raid pool, given their implementation appears to be BTRFS on top of MD RAID.)
If you are running a 4.12 or later kernel you should be in a better position with regards to BTRFS raid6 than you otherwise would be, although I'm not sure whether btrfs-progs also needs to be at least 4.12 to ensure the fixed scrub code is used.
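Both versions are quick to check from a terminal, e.g.:

```shell
# Running kernel version (you want 4.12 or later here)
uname -r

# Installed btrfs-progs version
btrfs --version
```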
Either way, provided the degraded pool is still there, it's possible either to force it to mount and copy the data elsewhere, or to add another disk and then force it to rebuild.
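A sketch of the add-and-rebuild route, assuming the pool has already been force-mounted with the degraded option at /mnt2/mypool and /dev/sdd stands in for the replacement disk (both are placeholders, not values from your system):

```shell
# Add the new disk to the degraded but mounted pool
btrfs device add /dev/sdd /mnt2/mypool

# Drop the record of the absent drive; this kicks off the rebuild
# of the missing data onto the remaining devices
btrfs device delete missing /mnt2/mypool

# Verify the pool membership afterwards, then scrub to check the data
btrfs filesystem show /mnt2/mypool
btrfs scrub start /mnt2/mypool
```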
Personally, if there's important data on the pool and you have space elsewhere, it might be worth copying it off first, just in case re-adding a new drive ends badly. (It shouldn't, but the raid5/6 code in BTRFS was considered to have serious data-loss bugs until very recently.)
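For the copy-off step, something like the following works once the pool is mounted read-only (the source and destination paths here are placeholders for your own mount points):

```shell
# -a preserves ownership/permissions/timestamps, -H hard links,
# -AX ACLs and extended attributes
rsync -aHAX --progress /mnt2/mypool/ /mnt/backup/
```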