Rockstor and virtual machines

I am trying to decide between unRAID and Rockstor. The only major difference seems to be VM support. Is Rockstor able to run full VMs, or only Dockers?

Sadly Rockstor does not support VMs. Not to my knowledge, anyway.

It would be a nice addition to the functionality though, as it could increase the usefulness of a machine that most people have running 24/7.

I loaded Rockstor into a VM last night. If it allowed VMs, and allowed any Docker image to be loaded through the web UI, it would compete with unRAID in terms of functionality.

I have tried to install a headless VM (VirtualBox), not via a Docker approach but by following various tutorials published for CentOS. I finally gave up, as I did not have enough technical knowledge to get through some of the release/dependency discrepancies. But I agree, it would be an excellent addition to be able to run a VM on the Rockstor server to manage some VirtualPC-type images.

Adding the capability to run and manage virtual machines would be a great addition, and I would use it for sure.

It would actually help me out with some of my speculations about running Rockstor and Ubuntu side by side inside Proxmox.

With VM support built into Rockstor, I would only have to install Rockstor, create a virtual machine for Ubuntu, and that would be that. Much easier than the other setup.
I would then probably run my Plex server and other things inside the Ubuntu VM.

But I don't know how the people behind Rockstor would stand legally if they added it, as Rockstor is a product they also make money on, through the subscriptions.
Many free virtualization environments are, AFAIK, only free for private/personal usage?

unRAID is able to build in KVM support without legal issues. I would think Rockstor could as well.

My plan was to run all my media stuff in Dockers, and my Windows desktop in a VM.

@ataylor1988 and @KarstenV

KVM (Kernel-based Virtual Machine) is already in the kernel of Rockstor, as it is in virtually all modern Linux distributions. In fact, because Rockstor ships a fairly recent kernel, the KVM version included is consequently also fairly recent. My understanding is that unRAID and Proxmox both use KVM as their virtual machine technology as well, albeit with older versions than the one found in Rockstor's kernel. The latest Proxmox uses an Ubuntu-based kernel version 4.4, and unRAID appears to use around the same era of kernel, i.e. 4.4, in their current 6.2.2 release. Rockstor currently uses the elrepo kernel-ml 4.8.7 kernel, which is likely to receive an update after the next stable channel release, as elrepo currently has version 4.9.5 available.
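A quick, non-authoritative way to see what a given Rockstor box actually has (output will of course vary with the installed kernel):

    uname -r                            # running kernel version
    modinfo kvm | grep -i vermagic      # kernel the kvm module was built against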

And there is of course Digital Ocean as another example of a KVM user. Pretty sure their entire setup is based on it, actually. I expect others who know more about such things will add to this thread, as it's not really something I have a great deal of experience with. And a quick look shows that Google Compute Engine also uses KVM.

If Rockstor were to include a KVM UI component, we would of course have to ensure we have sufficiently up-to-date KVM 'userland' tools for such things, i.e. qemu-kvm, because as it stands we have the default and rather old versions from CentOS 7. However, that might be doable via:

http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/ 

Sourced from: https://lists.gt.net/openstack/operators/50952
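For reference, checking what qemu-kvm userland the stock CentOS 7 repos would put on a Rockstor box is straightforward (version numbers will obviously differ over time):

    yum info qemu-kvm | grep -i version    # version available/installed from the enabled repos
    rpm -q qemu-kvm                        # installed version, if already present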

Non-trivial but possible. However, all projects benefit from a focus, and Proxmox for one has a virtual machine focus, whereas Rockstor currently has a NAS/btrfs focus. I would say it's not an impossibility that at least a subset of KVM's capabilities could be included in Rockstor's UI in the future, but as always it comes down to contributions and what they can amount to; this is no small task, akin to building a NAS interface.

Thanks for the, as usual, very detailed and precise explanation.

I recognize the fact that it would be a large undertaking to build a VM administrative interface from scratch.

My thought was something along the lines of running e.g. VirtualBox as a Rock-On, from which you could then run your other VMs.
But I do not know if it is possible to do this (I doubt it), and then there could perhaps be legal issues.

I'm not well informed about these kinds of things :slight_smile:

But some kind of Virtual Machine handling inside Rockstor would still be nice :slight_smile:

Is this really a big feature to add? I could figure it out, I'm sure, but almost anything you can run in a VM, you could do in a container. It will also affect the storage appliance quite a bit, but I'm assuming that's acceptable?

Would a manual guide on installing help?

I don't know if it's a big feature for other users.

It would be for me.

But a good, precise guide to installing it manually (and how to manage the VMs afterwards) would be fine for me.

I'm putting this down as a to-do. I made a guide for SELinux on Rockstor as a wiki-type entry; I'll try to post updates on progress and link to that post when I start (I can't promise an ETA though).

One issue I see is that, AFAIK, BTRFS is specifically not recommended for large files that change frequently, e.g. VM images. Has that recommendation been rescinded?

I don't think it would matter unless you're putting a huge stress on the system. This is ALL experimental, and I would think VMs here are meant as a lightweight last resort.

First, I would highly suggest making the VM a container so it's integrated. This works for all Linux flavors already (they all work vanilla as a container). This doesn't give you a lot of the container benefits (since you're effectively wrapping a VM in a container), but it DOES allow you to run things.
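As a rough sketch of that first idea (the image name, container name, and mount path here are just examples, not anything Rockstor ships):

    docker run -d --name ubuntu-box -v /mnt2/media:/media ubuntu:16.04 sleep infinity
    docker exec -it ubuntu-box /bin/bash    # drop into a shell inside the 'VM-like' container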

Second, we can use KVM, but you're not going to have a GUI (or you shouldn't); this means you need to get crafty with the CLI and the virt-install and virsh commands.

OK, let's do this. Don't use your prod system unless you're OK with breaking it. For VM testing you MUST have CPU passthrough, and you MUST have the vmx/svm flags enabled (this is known as "nested virtualization"). Some CPUs don't have this, so make sure you have it before all else.
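Strictly speaking, the vmx/svm flags just mean hardware virtualization; "nested" only comes into play if Rockstor itself is already a guest. On a Linux/KVM host you can check whether nesting is enabled with something like this (use kvm_amd instead of kvm_intel on AMD):

    cat /sys/module/kvm_intel/parameters/nested    # Y or 1 means nested virtualization is on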

ASSUMPTIONS:
You know your way around the bash shell (SSH to the box, or better yet, use a local terminal; DRAC/iLO is OK, especially when we create the network bridge).

  1. Check for the VT-enabled flag (or similar) by running the command below. If you don't get anything back, don't proceed; google instructions on how to enable virtualization and fix it, or give up.
    grep -E '(vmx|svm)' /proc/cpuinfo

  2. Install dependencies
    yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client \
      virt-install virt-viewer bridge-utils

  3. Start and enable the libvirt daemon
    systemctl start libvirtd
    systemctl enable libvirtd

  4. Confirm we are in business (we should see something after running this)
    lsmod | grep kvm

  5. Now we need to modify the ifcfg files for the network. We are going to make a bridge (I called it br0, but you can choose another name) with the main interface (eth0 for me, but yours could be enp5s0 or something longer). Replace the values below as needed.
    cd /etc/sysconfig/network-scripts
    cp ifcfg-eth0 ifcfg-br0
    cp ifcfg-eth0 ~ #for a backup
    vi ifcfg-eth0

Make it look similar to what's below (you want to remove the HWADDR, UUID, and other such lines):
TYPE=Ethernet
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

Now edit the ifcfg-br0 file
vi ifcfg-br0

Make it look similar to what's below. I removed the UUID and HWADDR stuff (NETMASK can replace PREFIX if that is what your file uses). I also removed all the IPv6 stuff.

TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
ONBOOT=yes
IPADDR=10.1.10.93
PREFIX=24
GATEWAY=10.1.10.1
DNS1=10.1.10.1

  6. Now I want you to pause and think about a few things. We are going to restart the network. If you are connected through the interface and something goes wrong, you won't be connected anymore. You may want to connect via the local terminal or remote access for this next command. There isn't a good recovery if you kill your own access here, other than reverting from a backup. If you must revert, copy the ~/ifcfg-eth0 over the one in /etc/sysconfig/network-scripts and remove the ifcfg-br0 file.

  7. Restart the network
    systemctl restart network

check
ip addr show br0

  8. If using SELinux: if you use any other directories (like ones in /mnt2) for VMs, set the context, e.g. for a /vm directory:
    semanage fcontext -a -t virt_image_t "/vm(/.*)?"; restorecon -R /vm
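To confirm the context actually took (same hypothetical /vm path as above):

    ls -dZ /vm    # should now show virt_image_t in the listed context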

EXTRA CREDIT
You can install CirrOS by using the following. One note: I didn't see mine complete, but I did see the VM running, so it worked; how you manipulate VMs from there is up to you (some virsh basics are sketched after the output below).
yum install -y wget
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

virt-install --name=cirros --ram=256 --vcpus=1 --disk path=/mnt2/ISO/cirros-0.3.4-x86_64-disk.img,format=qcow2 --import --vnc

Mine hung here (maybe I'm not patient enough), but...
[root@rockstor3 ~]# virsh list
Id Name State

1 cirros running
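For basic day-to-day management from the CLI, the standard virsh sub-commands should cover it (using the cirros domain from above as the example):

    virsh list --all          # all defined domains, running or not
    virsh shutdown cirros     # ask the guest to shut down cleanly
    virsh start cirros        # boot it again
    virsh console cirros      # attach to the serial console (Ctrl+] to detach)
    virsh autostart cirros    # start the domain automatically with libvirtd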

My 2 cents:

If you decide to try your luck, some considerations:

  • please don't play with both qemu snapshots and Rockstor (btrfs) snapshots
  • avoid qcow2 inside btrfs, use raw instead (e.g. my Rockstor dev env runs on top of Proxmox, and while playing with shares/pools tests it had its boot disk on raw -ok- and, wrongly, two 2GB qcow2 disks; 30 Proxmox snapshots during tests -> 900GB of data... qcow2 is like a binge eater, remember this). A raw-disk example follows after this list.
  • please, please do this on a testing env and not on a Rockstor instance with important data
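As an example of the raw-disk route (names, paths, and sizes are placeholders, adjust to taste):

    qemu-img create -f raw /mnt2/vm-images/test-vm.img 20G
    virt-install --name=test-vm --ram=1024 --vcpus=1 \
      --disk path=/mnt2/vm-images/test-vm.img,format=raw \
      --cdrom /mnt2/ISO/your-installer.iso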

Mirko

Call me crazy, but I'm seriously considering putting unRAID in a KVM to handle SMB with an SSD cache drive. It would probably be easier to do a GUI installer like FreeNAS and just use virt-manager, compared to building out even the most basic VM web GUI manager.

#IHaveANeedForSpeed

This got me thinking about one of my use cases. I have a separate machine running the free VMware ESXi. The datastore I am using for my VMs is a share exported from Rockstor via NFS.

Is there anything I should do, or be aware of, if using Rockstor as a datastore for my VMs?

@kupan787,

No serious detriment to data as long as the FS is kept in good order (scrub, scrub, scrub!), but you'll likely experience some performance issues due to fragmentation - depending on the image size.

According to the BTRFS wiki, this happens at around 10k+ extents, which default to 256KB; thus images smaller than around 2.5GB should be unaffected (though this is quite small for a whole VM).
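(Roughly: 10,000 extents x 256 KB per extent ≈ 2.56 GB, which is where the ~2.5 GB figure comes from.)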

This can be alleviated with the mount option 'nodatacow' at the subvolume level; however, this requires a few implicit things (a hypothetical fstab sketch follows the list below):

  • The file must be new to the nodatacow subvol.
    Setting nodatacow on an existing subvol will not affect existing files.
  • The file cannot be reflinked from somewhere else
  • compression cannot be enabled
  • snapshots will disable nodatacow until the first subsequent block written after the snapshot is completed
  • A few more things.
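If you do go the mount-option route, an /etc/fstab entry would look something like the line below (the UUID and mount point are placeholders; per the caveat that follows, the option may effectively end up applying across the whole filesystem):

    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt2/vm-images  btrfs  defaults,nodatacow  0 0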

I'm not sure if this is still the case; however, you previously couldn't mount different subvolumes from the same BTRFS filesystem with different CoW options. If any were mounted without nodatacow, the option was ignored on any that did specify it.

If I mount with autodefrag, would that help alleviate this? Or would it just make things worse overall?

Also, in terms of nodatacow, if we can't have subvolumes with different CoW options, can I just disable CoW on a directory? I saw the following:

Disable COW on a file

You can disable COW on a directory or file. This will avoid fragmentation, but at a cost:

  • Data checksumming is disabled.

  • Snapshots won't work (because they rely on COW)

You very probably do not want to do this.

$ chattr +C /dir/file

So just disable CoW on my VM images folder?

@kupan787

Autodefrag should alleviate this; however, I'm not sure how much testing has been done on it. If you give it a shot, I'm sure the BTRFS subreddit would be interested in your experiences.

I'd be hesitant to use it because of the potential impact on disk lifetime (excessive writes).

I haven't played with directory-level CoW settings, and I'm not sure how this is handled, but it does seem like it might be a good resolution for the problem in terms of speed - I don't know if it will help with the fragmentation in general though.

So the directory level CoW settings made a world of difference.

I applied chattr +C to my VM directory. I also set up MPIO with NFSv4 on my ESXi host.

Previously, before nodatacow, I was getting a max of 50 MB/s.

Now when I do a dd to /tmp on one of my VMs, it is saturating both of my links! So I've got MPIO running successfully with NFSv4.

appliance@zabbix:~$ dd if=/dev/zero of=testfile bs=1G count=16 conv=fdatasync status=progress
17179869184 bytes (17 GB, 16 GiB) copied, 87.0374 s, 197 MB/s 
16+0 records in
16+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 87.3625 s, 197 MB/s

Here is an image showing both my NICs on the ESXi host being lit up:
https://imgur.com/a/CqYSq

So for anyone interested in this, I suggest using the chattr +C option. Keep in mind, you'll have to cp and then mv your files, so that they get rewritten with the nodatacow flag set.
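A sketch of that cp-then-mv dance, with hypothetical paths (the +C attribute only applies to files created after it is set on the directory):

    chattr +C /mnt2/vm-images                                      # mark the directory; only newly created files inherit it
    lsattr -d /mnt2/vm-images                                      # should show the 'C' (No_COW) flag
    cp /mnt2/vm-images/guest.img /mnt2/vm-images/guest.img.new     # cp writes fresh extents, so the copy inherits +C
    mv /mnt2/vm-images/guest.img.new /mnt2/vm-images/guest.img     # replace the original with the rewritten copy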