Notes for Installing Rockstor on a Bootable Linux RAID1 with GPT Partition Table

I’m writing about my experience installing Rockstor on a Linux software RAID. I recently installed Rockstor for the second time, following the wonderful Mirroring Rockstor OS using Linux Raid guide, after my previous Rockstor install with a BTRFS RAID1 became unstable (link here). By the way, thank you Philip Guyton for the original Linux Raid install guide write-up.

First, I would just emphasize the key steps one needs to follow to be successful. During the first boot of the installation, one needs to select “standard partitioning” for new mounts, and then select “BTRFS” for new mounts during the last boot of the install. Also, rebooting three times may not be needed: one can switch to another virtual terminal with CTRL-ALT-F3 (or similar) during the install to set up the RAID, and then refresh the disk view in the installer to format the partition with BTRFS.
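For reference, here is a minimal sketch of the sort of commands one might run from that virtual terminal; the device and array names (/dev/sda3, /dev/sdb3, /dev/md0) are placeholders rather than anything from the guide, so they would need to be adjusted to match the actual layout.

# Assumed partition names; adjust to the partitions actually created for the root array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
# Watch the initial sync progress:
cat /proc/mdstat

Once the array exists, refreshing the disk view back in the installer should let it be picked up and formatted with BTRFS, as described above.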

Anyway, I selected two drives during the install, and the installer required me to create a biosboot partition. Drives larger than 2 TB need a GPT partition table, and a drive with a GPT partition table requires a biosboot partition to be bootable under legacy BIOS. Originally I tried to change the partition layout from standard to RAID in the mount details, but when I did, the installer gave me an error about the biosboot partition being in a RAID. So I left the biosboot partition as a standard partition. The installer still said it was using the two drives, so I was hoping that meant there would be a copy on each drive, and I planned for the biosboot partition to simply be maintained manually after the install. I also left the swap partitions as standard partitions. I therefore expected a biosboot and a swap partition on both drives, but only one biosboot and one swap partition were created during the install.
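As an aside, should a biosboot partition ever need to be recreated by hand on a GPT disk, something along these lines with parted should do it. This is only a sketch: /dev/sdX and the partition number are placeholders, and the 1 MiB size is a common convention rather than anything the installer reported.

# Illustrative only: add a small BIOS boot partition to an existing GPT disk.
parted /dev/sdX mkpart biosboot 1MiB 2MiB
# Mark it as a BIOS boot partition (adjust the partition number to match):
parted /dev/sdX set 1 bios_grub on
# Verify the flag:
parted /dev/sdX print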

This is the RAID after the install:

[root ~]# cat /proc/mdstat
Personalities : [raid1] 
md126 : active raid1 sdb3[0] sdc2[1]
      5847106560 blocks super 1.2 [2/2] [UU]
      bitmap: 1/44 pages [4KB], 65536KB chunk

md127 : active raid1 sdb2[0] sdc1[1]
      1562560 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
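For a more detailed view of each array (member devices, RAID level, and sync state), mdadm can also be queried directly; a quick example using the array names from the output above:

# Show member disks and state for each array:
mdadm --detail /dev/md126
mdadm --detail /dev/md127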

This is the partition layout after the install:

[root ~]# sfdisk -l /dev/sdb

Disk /dev/sdb: 729601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1   *      0+ 267349- 267350- 2147483647+  ee  GPT
sfdisk:                 start: (c,h,s) expected (0,0,2) found (0,0,1)

/dev/sdb2          0       -       0          0    0  Empty
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
[root ~]# sfdisk -l /dev/sdc

Disk /dev/sdc: 729601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdc1   *      0+ 267349- 267350- 2147483647+  ee  GPT
sfdisk:                 start: (c,h,s) expected (0,0,2) found (0,0,1)

/dev/sdc2          0       -       0          0    0  Empty
/dev/sdc3          0       -       0          0    0  Empty
/dev/sdc4          0       -       0          0    0  Empty
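Note that the sfdisk version shown above only understands the protective MBR on a GPT disk, which is why just the “ee GPT” entry appears and the remaining slots read as Empty; a GPT-aware tool reports the real layout. For example (sgdisk comes from the gdisk package, which may need installing):

# Print the actual GPT partition table (partition names, types and sizes):
sgdisk -p /dev/sdb
# parted can do the same:
parted /dev/sdb print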

I’m planning on rebuilding the RAID to make any future disk replacement easier. I should be able to get the serial numbers from the drives and match them to the two partition layouts. I can then use sfdisk to copy the partition layout from the drive that ended up with four partitions to the drive that only got two. However, I’ll first need to investigate what is in the biosboot partition, and whether I can simply copy its contents to the second drive or easily recreate them.
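A rough sketch of what I have in mind is below. The device names are placeholders, and since the sfdisk version shown above does not understand GPT, sgdisk’s replicate option is the GPT-aware way to copy the table; the final grub2-install step is one way the biosboot contents could be recreated rather than copied. Obviously this would only be run as part of the planned rebuild, as it overwrites the target disk’s existing partition table.

# Match serial numbers to device names first:
lsblk -o NAME,SERIAL,SIZE
# Copy the partition table from the complete drive (sdb) onto the other (sdc),
# then randomise the copied GUIDs so the two disks do not clash:
sgdisk --replicate=/dev/sdc /dev/sdb
sgdisk -G /dev/sdc
# Reinstall the boot loader so the new biosboot partition gets its core image:
grub2-install /dev/sdc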

I’m ready to rock, and I’ll update this as I have more information!

@Thailgrott Thanks for posting your findings; nice write-up.

You’re welcome, although this howto was originally from @suman and me, and then in turn @HBDK made alterations/updates/corrections as things changed under us. Glad it worked for you.

Bit of a pain that we have to go through such a rigmarole but it is what it is, at least for the time being. All just to get around our inherited and otherwise good installer’s limitations really.

Let us know how it goes: and a picture of your disks and pools page would also be nice, at least those entries for the system disk anyway.

Thanks again for sharing your findings / adventure.

Thank you for the reply @phillxnet, and for your work on the write-up :slight_smile: So, I owe a thank you to @suman for the original write-up, and a thank you to @HBDK for the updates to the article. Hopefully @HBDK has been able to resolve the “Unknown internal error” issue when loading Web UI pages, which was rendering the appliance unusable from the UI. I noticed that @HBDK had written in the forums about the Linux RAID guide. I gather that @HBDK may have been using Linux software RAID (MDRAID) to manage a RAID10, while I had BTRFS managing a RAID1 at the time the UI was failing on my appliance. I’m still interested in comparing the recovery of an MDRAID to that of a BTRFS RAID.

And I’ll be glad to send some pictures of the disks and pools pages as I have them. They haven’t changed since the setup, except that I’ve put one drive into power standby.


I have some reading to do before I make further changes. So I may just link back to this when I get to rebuilding this MDRAID RAID 1 setup.

Cheers!