That’s a rather curious finding; my current guess is that we have insufficient info to pin that one down. It may just be that something scanned the drives and stopped them from sleeping, or that there is a caveat in the firmware concerning the first time they wake. Too little info currently to be sure, but it’s interesting nonetheless; do keep in mind that this is not a hard and fast setting: each disk can do whatever it fancies, depending on its settings (APM etc.) and firmware. It may, for example, have been a coincidental S.M.A.R.T. access from smartmontools by way of a periodic check. Difficult to tell with the info so far, but keep an eye on this as all info can be useful. I’m not sure what else we could do, though, as our inter-operation with these settings is actually fairly light. Good to know what we can, especially with commonly used disks such as yours (WD Reds).
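For what it’s worth, a few commands can shed light on this sort of thing without themselves waking the drives; the device name here is hypothetical, and these assume hdparm and smartmontools are installed:

```
# Report the drive's current power state (active/idle/standby) without waking it:
hdparm -C /dev/sda

# Query the drive's current APM level; 254/255 = performance,
# low values = aggressive spin-down:
hdparm -B /dev/sda

# SMART identity info, but skip the drive entirely if it is in standby,
# so the check itself doesn't spin it up:
smartctl -i -n standby /dev/sda
```

If a periodic smartmontools check was the culprit, the same `-n standby` directive can be used in smartd.conf so that scheduled checks skip drives that are already asleep.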
In btrfs the parity raid levels of 5 and 6 have known outstanding issues, so raid1 is really the safest bet. As to the pairs concept: btrfs raid1 works at the block level, not the drive level, and this is the case for all the btrfs raid levels:
From https://btrfs.wiki.kernel.org/index.php/Glossary we have:
“Traditional RAID methods operate across multiple devices of equal size, whereas btrfs’s RAID implementation works inside block groups.”
Also worth looking at the RAID-0 to RAID-10 sections in that same reference, as they all differ slightly (given the block basis) from what is normally associated with pure disk-based raid concepts.
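As a quick illustration of that block-group basis, the raid profile is reported per allocation type (data / metadata / system) rather than per drive; the mount point and figures here are made-up example output:

```
# Profiles show up per block-group type, not per device:
btrfs filesystem df /mnt/pool
#   Data, RAID1: total=512.00GiB, used=433.27GiB
#   System, RAID1: total=32.00MiB, used=80.00KiB
#   Metadata, RAID1: total=2.00GiB, used=1.21GiB
```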
So there is no requirement to match drive sizes at all. If space is left on at least 2 drives in a pool (Rockstor speak for a btrfs volume, which can span multiple disks) then btrfs raid1 can continue to use that space. Also note that you can ‘transition’ live from one raid level to another; although it can take ages, it is possible, given enough free space of course. Best to just go with raid1 really until the parity raid levels improve.

That said, given btrfs can have one raid level for data and another for metadata (within the same pool/volume), one current weakness of the parity raid levels can be circumvented by not using that raid level for the metadata: i.e. raid5 or raid6 for data and raid1 for metadata. This is not something Rockstor can deal with currently, but it might be a nice addition while the btrfs parity raid levels mature. See the following forum thread and its linked linux-btrfs mailing list posts for a discussion of the current btrfs parity raid issues:
and in turn:
The btrfs parity raid code is a lot younger than its equivalent raid1 / raid10 code, which has led to some reputation issues all around, especially given raid5/6 is a common user favourite; hence entertaining the idea of extending Rockstor’s capability to deal with different raid levels for data and metadata (a command-line sketch of that arrangement follows below). I personally would like this, but it has to be done in a way that doesn’t complicate things, as I see one of Rockstor’s strengths as its usability; due in no small part to btrfs’s extreme flexibility, which in turn presents few barriers, if some considerable challenges UI wise.
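To make the above concrete, both the live raid-level transition and the split data/metadata profiles come down to a balance with convert filters; this is a sketch of what Rockstor would have to drive under the hood, with a hypothetical mount point:

```
# Convert an existing pool's data and metadata to raid1, live and in place:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# Or the split arrangement discussed above: parity raid for data,
# raid1 for metadata, sidestepping the parity-raid metadata weakness:
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool

# A balance can take ages on a large pool; check on its progress with:
btrfs balance status /mnt/pool
```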
All of that is unnecessary, and also unsupported (unrecognised) by Rockstor, given the block-level nature of btrfs raid. Btrfs can already pool drives of varying sizes and present them as a single Pool (btrfs volume).
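For completeness, here is what such a mixed-size pool looks like at the command line (device names hypothetical; Rockstor drives the equivalent via the Web-UI):

```
# Three unequal drives, one raid1 pool; each block group (1 GiB-ish for
# data) is mirrored across the two devices with the most unallocated space:
mkfs.btrfs -L big_pool -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/big_pool

# Per-device allocation breakdown for the pool:
btrfs filesystem usage /mnt/big_pool
```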
Hope that helps.