I’m new to Rockstor. I’m considering it for a new NAS build at home. I’m doing some reading up on it before I install it.
I’m familiar with standard RAID terms and definitions. What I wasn’t aware of (until today) is that some of these terms have been given new definitions in Btrfs, and hence in Rockstor. As I set about trying to determine, definitively, what those definitions are, I found conflicting information in various online posts about Rockstor and Btrfs, and in the Rockstor documentation itself. This could be due to changes over time, or it could be due to misunderstandings by the authors.
Is there one, up-to-date, definitive source for how the Rockstor/Btrfs RAID terms are defined? In particular, I’m looking at information that will explain how the data is stored on the drive arrays and how many disk failures result in data loss. I’m aware of the write hole issue in Btrfs, but I’m not talking about that level of detail (failure of the file system in various power outage or non-disk failure conditions).
It might not be in the same place, but if someone can point me to up-to-date information on how to recover from drive failures with Rockstor, that would be helpful as well. I’m not asking anyone to write a post explaining these things. I just want a pointer to the correct documentation that I should refer to.
As an example of what I’m talking about, there are some sources that state that a Rockstor/Btrfs RAID10 array can only sustain a single drive failure and that a second drive failure will result in data loss (regardless of the number of disks in the array). Other sources state that it works the same way as traditional RAID10, where there is no loss with a single disk failure, but the probability of loss after a second disk failure is 1/N.
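For reference, here is how I understand the traditional RAID10 math that the second group of sources seems to be describing (this is my own sketch of classic RAID10 behavior, not something taken from the Rockstor or Btrfs docs, and it assumes fixed mirror pairs, which Btrfs’s chunk-based “raid10” may not actually use). With N disks arranged in N/2 mirror pairs, data is lost on a second failure only when both failed disks land in the same pair, which works out to exactly 1/(N−1) (roughly 1/N for large arrays):

```python
import itertools

def raid10_two_failure_loss_probability(n_disks):
    """Exact probability that two simultaneous disk failures destroy data
    in a traditional RAID10 array of n_disks arranged as n_disks/2 fixed
    mirror pairs. Data is lost only when both failed disks are in the
    same pair."""
    assert n_disks % 2 == 0 and n_disks >= 4
    # Disks 0-1 form a pair, disks 2-3 form a pair, and so on.
    combos = list(itertools.combinations(range(n_disks), 2))
    same_pair = sum(1 for a, b in combos if a // 2 == b // 2)
    return same_pair / len(combos)

for n in (4, 6, 8):
    # Matches the closed form 1/(n - 1): 1/3, 1/5, 1/7.
    print(n, raid10_two_failure_loss_probability(n))
```

The claim in the first group of sources (any second failure loses data, regardless of array size) would instead correspond to a probability of 1, which is exactly the kind of discrepancy I’m hoping the definitive documentation will resolve.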
Again, I’m not looking for a post that explains this. I’m looking for documentation, so I can read up on it completely, from a definitive source.
I’ll be sure to ask clarifying questions after I read it, to be sure I understand it fully.
Thanks.