More testing continues, and I've noticed something regarding my RAID-1 pool, which consists of two drives for testing.
On my NAS, the individual drives indicate when they're being written to. I'm only seeing one drive being written to right now; is this normal, or should RAID-1 write to both at the same time?
I'll verify this when I attempt a restore/rebuild, but it seems a little odd.
Currently btrfs is not as optimised as something like mdraid; it generally does a kind of round-robin access pattern where the drive to be accessed is chosen based on the PID of the process (or something along those lines). So it should switch between the drives, but may only access one at a time per process. Multiple processes are usually involved, though, so it should even out in the end.
The LUKS layer shouldn't make a difference here, as it just abstracts the block device that btrfs accesses: btrfs talks to the opened LUKS volume, which acts as a proxy for the actual device underneath.
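To picture the PID-based balancing described above, here's a toy sketch (plain Python, not actual btrfs code) of how a mirror might get picked per process; the device names and the simple modulo rule are illustrative assumptions, not btrfs internals:

```python
import os

# Hypothetical two-device RAID-1 pool (names are just for illustration).
MIRRORS = ["/dev/mapper/luks-disk1", "/dev/mapper/luks-disk2"]

def pick_mirror(pid: int, mirrors: list[str]) -> str:
    """Toy model of PID-based balancing: each process sticks to one
    mirror, chosen by its PID modulo the number of devices."""
    return mirrors[pid % len(mirrors)]

if __name__ == "__main__":
    # A single process always lands on the same drive...
    print("this process uses:", pick_mirror(os.getpid(), MIRRORS))
    # ...but across many processes the choice spreads over both drives.
    for pid in range(1000, 1006):
        print(pid, "->", pick_mirror(pid, MIRRORS))
```

Under that toy model, a single long-running transfer would show activity on mostly one drive, while a mixed workload would spread across both.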
Thanks for the replies. I moved the server into a limited production test today alongside the old NAS.
I'm not an expert on btrfs by any means; is there an easy way to confirm that both of the encrypted volumes eventually end up in sync, without forcing a rebuild? I'll do one anyway since it's best practice, but it would be nice to confirm.