I’m writing about my experience installing Rockstor on a Linux software raid. I have recently installed Rockstor for the second time, following the wonderful Mirroring Rockstor OS using Linux Raid guide, after my previous Rockstor install on a BTRFS RAID1 became unstable (link here). BTW, thank you Philip Guyton for the original Linux Raid install guide write-up.
First, I would just emphasize the key steps one needs to follow to be successful. During the first boot of the install, one needs to select “standard partitioning” for the new mounts, and then select “BTRFS” for the new mounts during the last boot of the install. Also, rebooting three times may not be needed: one can switch to a virtual terminal with Ctrl+Alt+F3 during the install to create and format the raid, then refresh the installer’s disk view and format the partition with BTRFS.
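For reference, the virtual-terminal step looks roughly like this. This is only a sketch: the md device name, member partitions, and metadata version are assumptions from my setup and will differ per system.

[root ~]# mdadm --create /dev/md0 --level=1 --metadata=1.2 --raid-devices=2 /dev/sda3 /dev/sdb3
[root ~]# cat /proc/mdstat    # confirm the mirror is up before returning to the installer

After that, refreshing the installer’s disk view should let you pick the new raid device and give it a BTRFS filesystem.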
Anyway, I selected two drives during the install, and the installer required me to create a biosboot partition. Drives larger than 2 TB need a GPT partition table, and a drive with a GPT partition table requires a biosboot partition to be bootable. Originally I tried to change the partition scheme from standard to RAID in the mount point details, but when I did, the installer gave me an error about the biosboot partition being in a raid. So I left biosboot as a standard partition. The installer still said it was using both drives, so I was hoping that meant there would be a copy of the partition on each drive; I planned that the biosboot partition would just need to be manually maintained after the install (see the sketch below). I also left the swap partition as a standard partition. So, I expected there to be a biosboot and a swap partition on both drives, but only one biosboot and one swap partition were created during the install.
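If I do end up maintaining biosboot by hand, creating one with parted would look something like this. A minimal sketch, assuming legacy BIOS boot with GRUB; the device name and partition number are placeholders:

[root ~]# parted -s /dev/sdX mkpart biosboot 1MiB 2MiB    # tiny partition; GRUB only needs about 1 MiB
[root ~]# parted -s /dev/sdX set 1 bios_grub on           # flag it as a BIOS boot partition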
This is the raid after the install:
[root ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb3[0] sdc2[1]
      5847106560 blocks super 1.2 [2/2] [UU]
      bitmap: 1/44 pages [4KB], 65536KB chunk

md127 : active raid1 sdb2[0] sdc1[1]
      1562560 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
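To map each md device back to its member partitions and drives (which I’ll want later when matching serial numbers), mdadm and lsblk can be queried; device names here are from my system:

[root ~]# mdadm --detail /dev/md126    # lists member devices, array state, and bitmap info
[root ~]# lsblk -o NAME,SIZE,SERIAL    # shows each drive’s serial number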
This is the partition layout after the install:
[root ~]# sfdisk -l /dev/sdb
Disk /dev/sdb: 729601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sdb1 * 0+ 267349- 267350- 2147483647+ ee GPT
sfdisk: start: (c,h,s) expected (0,0,2) found (0,0,1)
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
[root ~]# sfdisk -l /dev/sdc
Disk /dev/sdc: 729601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sdc1 * 0+ 267349- 267350- 2147483647+ ee GPT
sfdisk: start: (c,h,s) expected (0,0,2) found (0,0,1)
/dev/sdc2 0 - 0 0 0 Empty
/dev/sdc3 0 - 0 0 0 Empty
/dev/sdc4 0 - 0 0 0 Empty
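One note on the output above: this older sfdisk predates GPT support, so it only reports the protective MBR entry (type ee) and shows the real partitions as Empty. A GPT-aware tool shows the actual layout; for example (again with my device names):

[root ~]# parted /dev/sdb unit MiB print    # real GPT layout, sizes in MiB
[root ~]# parted /dev/sdc unit MiB print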
I’m planning on rebuilding the raid to make any future disk replacement easier. I should be able to get the serial numbers from the drives and match them to the two partition layouts. I can then use sfdisk to copy the partition layout from the drive that ended up with four partitions after the install onto the second drive that only got two. However, I’ll first need to investigate what is in the biosboot partition and whether I can simply copy its contents to the second drive or easily recreate them; a rough sketch of the plan follows.
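Since the sfdisk installed here is too old to understand GPT (see its output above), sgdisk from the gdisk package is a likely substitute for the layout copy. The drive names below are assumptions, this presumes the target drive has already been removed from the arrays (after matching serials with lsblk as above), and grub2-install is my guess for recreating the biosboot contents, since that partition normally holds GRUB’s core image:

[root ~]# sgdisk -R=/dev/sdc /dev/sdb   # replicate sdb’s GPT layout onto sdc (destroys sdc’s table!)
[root ~]# sgdisk -G /dev/sdc            # randomize GUIDs so the two disks stay distinct
[root ~]# grub2-install /dev/sdc        # rebuild the biosboot contents on the second drive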
I’m ready to rock, and I’ll update this as I have more information!