Setting up Rockstor virtualized on Xen

Hi @maverick. I just stumbled across this inquiry. Although my response may be significantly aged, I thought I'd respond, as we have been operating Rockstor as a VM on XCP-NG for several years. We too ran into the serial number issue and never found a reliable way to inject serial numbers into devices provided by the host. We had found mentions of a Xen-related toolkit containing an application for this task, but never located a downloadable copy.

Our solution (details are in the XenServer 7 backstory) was PCIe passthrough of the storage controllers. This presents the actual hardware to Rockstor, so all is well, with the exception that a BIOS boot partition is required to properly start the Rockstor CentOS host. We eventually moved our swap partition to the Xen-provided /dev/xvda disk as well, to keep the Rockstor host OS install clean, since we always ran the OS on an mdraid1 mirror of SSDs.
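For anyone wanting to reproduce the passthrough setup, this is roughly what it looks like on XCP-NG. The PCI address and VM UUID below are placeholders, not our actual values; check yours with lspci and xe vm-list, and note a host reboot is needed after hiding the device from dom0:

```
# On the XCP-NG host: find the storage controller's PCI address
lspci | grep -i -e sas -e raid

# Hide the controller from dom0 so it can be passed through (reboot the host afterwards)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"

# With the Rockstor VM halted, attach the controller to it
xe vm-param-set other-config:pci=0/0000:03:00.0 uuid=<rockstor-vm-uuid>
```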

Today, we are moving to the openSUSE-based deployment, driven by the newer hardware we are utilizing for our XCP-NG deployments, i.e. AMD Ryzen 5900-series CPUs and Broadcom 9500-series tri-mode storage controllers with LSI SAS 3616 chips that even XCP-NG 8.2's kernel could not recognize properly. Based on successful testing of an openSUSE Leap 15.2 VM using the aforementioned controller with 14 TB 12 Gbps SAS drives, we investigated and have successfully deployed Rockstor 4.0.4-0 as a VM with the Rockstor OS hosted on a Xen /dev/xvda virtual drive. Rockstor seems quite content to be installed this way. We're excited about this approach, as it allows the use of Xen snapshots, which we sorely missed in the strictly controller-passthrough environment. Naturally, with passed-through controllers, Rockstor 4 works as effectively as Rockstor 3 did, achieving bare-metal performance.
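With the OS on /dev/xvda, snapshotting the VM before an upgrade is quick work from the XCP-NG host. A minimal sketch using the standard xe CLI (the VM name and snapshot label are placeholders):

```
# Snapshot the Rockstor VM before an upgrade
xe vm-snapshot vm=rockstor new-name-label=pre-upgrade

# List snapshots, and revert if the upgrade goes sideways
xe snapshot-list
xe snapshot-revert snapshot-uuid=<snapshot-uuid>
```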

One last discovery allows the use of Xen-hosted storage repositories for passed-through "disks": if an mdraid is created from those disks, in any mdraid configuration (similar to the mirror procedure mentioned previously), Rockstor will recognize it as a disk pool that shares can be deployed on. The key to this approach is being able to run mkfs.btrfs -f -L data /dev/md### on the array while it is offline. We accomplished this by booting into Rockstor's "Rescue Mode", just as when building the OS on an mdraid; a sketch of the commands is in the P.S. below. We did find it more reliable to do this after the OS has already been successfully deployed on its own mdraid first. If one attempts both at the same time, the Anaconda installer most often gets things mixed up and the system eventually fails to boot properly.

Hope this helps if you're still interested in the unique but highly effective combination of XCP-NG and Rockstor! Feel free to reach out if you have any questions. Take care.
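P.S. A minimal sketch of the rescue-mode step, assuming two passed-through virtual disks; the device names and md number are examples, not our literal ones:

```
# From Rockstor's "Rescue Mode": build a RAID1 array from two passed-through disks
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc

# Format the offline array as btrfs so Rockstor recognizes it as a disk pool
mkfs.btrfs -f -L data /dev/md127

# Confirm the array and filesystem before rebooting into Rockstor
cat /proc/mdstat
btrfs filesystem show /dev/md127
```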
