I’ve had a drive in my raid1 pool looking like it might be on the way out, so I bought a matching replacement but hadn’t put it in yet.
Last night my Rockstor box crashed, and when I plugged a monitor in it was reporting read/write errors on /dev/sdb. I rebooted this morning and it seemed OK, but I felt it was time to swap that drive. I have been following the http://rockstor.com/docs/data_loss.html guide, including a reboot because my machine doesn't support hot swapping.
It's a fresh drive, so I didn't wipe it. I'm up to this step in the guide:
btrfs replace start <devid_of_the_failed_drive> /dev/sdb /mnt2/mypool
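(I'm assuming I can get the devid of the missing drive from the filesystem listing while the pool is mounted, with something like:
btrfs filesystem show /mnt2/mypool
which should list each devid and flag the missing device, but please correct me if I've got that wrong.)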
In my case, the removed drive was /dev/sdb (the new one got the same letter) and the remaining drive from the pool is /dev/sda, which is the opposite of the docs. So I typed:
btrfs replace start /dev/sdb /dev/sda /mnt2/mypool
The output was:
btrfs replace start /dev/sdb /dev/sda /mnt2/Red2x3
/dev/sda appears to contain an existing filesystem (btrfs).
ERROR: use the -f option to force overwrite of /dev/sda
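Before I touch anything else I'm planning to double-check which disk is which, in case the letters shifted after the reboot, with something like:
lsblk -o NAME,SIZE,SERIAL
btrfs filesystem show /mnt2/Red2x3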
Naturally I'm hesitant to force this, as I don't want it to remove data from the good drive. Can I get a little guidance? The btrfs manual online says "start srcdev targetdev path"; I would have thought srcdev would be the good drive holding the data, but that's the opposite of the Rockstor docs.
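For reference, the synopsis I'm looking at reads something like:
btrfs replace start <srcdev>|<devid> <targetdev> <path>
so it seems srcdev can also be given as a devid when the old drive is already gone, but I could be misreading which device counts as the "source".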
I also didn't do anything like a balance before removing the dying drive. Should I be worried that some data could be lost, or should raid1 have kept a complete copy on the good drive?
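If it's relevant, once the replace completes I was planning to check the error counters and run a scrub, along the lines of:
btrfs device stats /mnt2/Red2x3
btrfs scrub start /mnt2/Red2x3
but I'm not sure whether either is sensible while the pool is still missing a device, so happy to be told otherwise.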
Thanks in advance