Hi, I’m not sure whether this belongs under Troubleshooting or Support.
I’m running a RAID10 pool with 4 disks of the same size.
Now one of the disks has failed, and the pool has failed with it. I’ve ordered the same type of disk again to replace the failed one.
ls -la /dev/sd*
brw-rw---- 1 root disk 8, 0 Dec 4 10:37 /dev/sda
brw-rw---- 1 root disk 8, 1 Dec 4 10:37 /dev/sda1
brw-rw---- 1 root disk 8, 2 Dec 4 10:37 /dev/sda2
brw-rw---- 1 root disk 8, 3 Dec 4 10:37 /dev/sda3
brw-rw---- 1 root disk 8, 16 Dec 4 10:37 /dev/sdb
brw-rw---- 1 root disk 8, 32 Dec 4 10:37 /dev/sdc
brw-rw---- 1 root disk 8, 48 Dec 4 10:37 /dev/sdd
brw-rw---- 1 root disk 8, 64 Dec 4 10:37 /dev/sde
fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000cb919

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 3123199 1048576 82 Linux swap / Solaris
/dev/sda3 3123200 20971519 8924160 83 Linux

Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sde: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
btrfs filesystem show:
Label: 'rockstor_rockstor' uuid: 36a00428-512d-4353-931b-d1af4fae2ba7
Total devices 1 FS bytes used 6.37GiB
devid 1 size 8.51GiB used 8.51GiB path /dev/sda3

warning, device 2 is missing
Label: 'StoragePool' uuid: 7253a3b0-2ddc-41da-a2b8-0b3113f8007b
Total devices 4 FS bytes used 4.40TiB
devid 1 size 2.73TiB used 2.21TiB path /dev/sdb
devid 3 size 2.73TiB used 2.21TiB path /dev/sdd
devid 4 size 2.73TiB used 2.21TiB path /dev/sde
*** Some devices missing
/dev/sdc is missing.
Reading the documentation at http://rockstor.com/docs/data_loss.html#data-loss-prevention-and-recovery-in-raid10-pools
the example has /dev/sda missing and then mounts /dev/sdb in degraded mode.
Does that mean I should mount one of the still-working disks?
The btrfs wiki, on the other hand, reads more as if the failing disk itself is the one to mount in degraded mode.
So I’m unsure what to do. My question: what should the commands be for a RAID10?
Do I need to mount all 3 remaining devices in degraded mode, mount the failing disk, or mount the still-working disk it was paired with (and how would I find out which one that is)?
mount -o degraded /dev/sdX /mnt2/mypool
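For example, if any of the pool members that btrfs still sees is fine to use there, I imagine it would look something like this (I picked /dev/sdb only because it is the first surviving member in the show output, and /mnt2/mypool is just my placeholder for the pool’s mount point, so both may well be wrong):

# my guess: mount the pool degraded via one of the surviving members
mount -o degraded /dev/sdb /mnt2/mypool
# check what the mounted pool looks like before touching anything
btrfs filesystem show /mnt2/mypool
btrfs filesystem df /mnt2/mypool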
Since btrfs fi show does not list a devid for the failing disk, I assume it is "2" based on the printout above:
btrfs replace start 2 /dev/sdc /mnt2/mypool
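If that replace command is right, I was planning to monitor it and finish up roughly like this (again just my guess, so I’d be glad to be corrected):

# watch the rebuild progress of the replace operation
btrfs replace status /mnt2/mypool
# once it reports finished, confirm all four devices are back in the pool
btrfs filesystem show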
Can anyone help me out with the commands?
Thanks in advance.