Problem Building Rockstor 4 with Vagrant

@2fatdogs Thanks for opening this thread.

Calling on @mikemonkers who added our ‘alternative’ vagrant installer builder as I’m fairly sure they will have an idea on this one.

I’m not that familiar with this method myself, bar my review of its Linux utility in the original PR review:

and its follow-up update here:

We have had reports of it working on the macOS and Windows side, but that was shortly after it was merged.

If those more familiar with this method could chip in that would be great, as it would be nice to ensure this method is as easy as possible given its utility on the Windows and macOS side.

Hope that helps.


To be sure it wasn’t some local environment issue, I tried this on an entirely different machine. Same error message. Identical.

One other noted issue, I saw this on the other machine too:

The first time you run the Vagrant box:

To fix this you need to open the creator_uid file as noted below:
(screenshot)

and change 0 to 1001, or whatever your UID is per the message:
(screenshot)
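For anyone scripting this, the edit above can be done from a shell in one go. A minimal sketch, assuming the machine name rockstor-installer and the usual .vagrant metadata layout next to the Vagrantfile (both assumptions; check your own .vagrant directory for the actual path):

```shell
# Assumed path: vagrant stores VM metadata under .vagrant/ next to the Vagrantfile.
UID_FILE=".vagrant/machines/rockstor-installer/virtualbox/creator_uid"
mkdir -p "$(dirname "$UID_FILE")"  # normally created by vagrant itself; here only so the sketch runs
echo 1001 > "$UID_FILE"            # replace 1001 with your own UID, per the error message
cat "$UID_FILE"                    # → 1001
```

Run from the directory containing the Vagrantfile before the next `vagrant up`.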

Anyway, no one has posted any help on this so I’m still at a point of failure. Thx. --Dan


@2fatdogs, I got one step further than you on Windows. After I got the first error about the shared folder mount (like you):

I then went ahead and changed the creator_uid to 1000 and ran vagrant up again.
Then I got a similar message to the one you are showing above:

PS D:\rockstor-installer\vagrant_env> vagrant up
The plugin ‘vagrant-vbguest’ is not currently installed.
The VirtualBox VM was created with a user that doesn’t match the
current user running Vagrant. VirtualBox requires that the same user
be used to manage the VM that was created. Please re-run Vagrant with
that user. This is not a Vagrant issue.
The UID used to create the VM was: 1000
Your UID is: 1001
The only difference was that it now showed the VM creation UID as 1000.

I then changed the creator_uid to 1001 and ran vagrant up again. Now it proceeded to build without error (as far as I can tell).

However, executing the actual kiwi installer to build the iso didn’t work. I think it has to do with the path inside the vagrant machine and what the run_kiwi.sh file shows as repository directory … after I finally ssh’d in, that path didn’t match …

Too late here now, I think I will continue tomorrow, but maybe this was some info that helps you to get it done all the way …
I’ll post more as I find out from my side.


Interim update - I’ve got further along, but I am also stumped by the setup with shared folders that then are not shared (since the vbguest plugin is not installed or forcibly uninstalled). After essentially manually executing the scripts inside the vagrant machine (with additional path adjustments, as well as installing things like qemu-tools that weren’t there), I finally got it to build a .raw image. The last step I am stuck on is that I am now getting an error message that sgdisk is not found by kiwi (which I assume is for turning that .raw image into an .iso file?), even though I installed the gptfdisk package, which supposedly contains that program …
The bigger question: if the shared folders are not mounted/visible, how do I get the iso out of the vagrant machine? :slight_smile:
Maybe it’s just the fact that running vagrant on Windows with VirtualBox as the provider is not the “bee’s knees”.
At least I am learning a ton about vagrant. @mikemonkers, not sure whether you can shed any light on this; I believe you created this vagrant recipe on a macOS platform, so this might not be your area, with Windows and all …
At this point it would still be easier for me to put Leap on a VM (using Virtualbox) and then build using the published build recipe. But, I am curious on how I can make this work using vagrant.


Hi @2fatdogs, @Hooverdan, @phillxnet,

Apologies I only just noticed the alerts on this thread… let me see if I can reproduce this issue.

I am currently using the following versions on my Mac:

VirtualBox: 6.1.12
Vagrant: 2.2.9
Vagrant Plugins:
vagrant-host-shell (0.0.4, global)
vagrant-sshfs (1.3.5, global)

When I run ‘vagrant up’ it does mount the shared folders, for me at least!

A note on the vbguest plugin logic: I forcibly removed the plugin from the host because it installs the vbguest additions on the VM, and this was (at least at the time) breaking the guest additions that are already installed on the chosen vbox image. So we didn’t need the plugin.

A failure to mount shared folders is often an issue with the VirtualBox Extension Pack on the host side, which should be installed by default now and usually requires a host restart to load the underlying vboxsf filesystem drivers. I’ve certainly seen and been frustrated by issues here in the past.
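As a quick host-side check, VBoxManage (which ships with VirtualBox on all platforms) can list the installed extension packs. A hedged sketch that degrades gracefully when VirtualBox is absent:

```shell
# List installed VirtualBox extension packs on the host, if VBoxManage is available.
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage list extpacks
else
  echo "VBoxManage not found - is VirtualBox installed and on PATH?"
fi
```

If the Extension Pack is missing from the listing, install it from the VirtualBox downloads page before retrying `vagrant up`.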

It has also happened in the past that Vagrant and VirtualBox have gotten out of step release-wise and this has caused additional similar issues.

Let me also try updating to latest and greatest everything and see if there’s an issue there. I tend to use vagrant and virtualbox on a daily basis for work so tend to shy away from updating too often for the above reason.

In the meantime, could I ask what versions you are using and on what platforms please?

Thanks
Mike


Strike that… I was using an old master branch on my fork which was using the older bento box.

Having pulled rockstor-installer:master into my fork I now see the issue on the openSUSE official image.

Investigating further…


Ok so I’ve updated my everything!

VirtualBox 6.1.18
Vagrant 2.2.14

One note that isn’t clear in the vagrant_env/README.md is that you MUST install the ‘Oracle VM VirtualBox Extension Pack’ (https://www.virtualbox.org/wiki/Downloads) - I have an update for this.

I have it working but I had to switch back to the VM Box:

#             v.vm.box = 'opensuse/Leap-15.2.x86_64'
            v.vm.box = 'bento/opensuse-leap-15'

… because it behaves well with the vbguest plugin, which I re-enabled to force the guest additions to update to 6.1.18.

In doing so I stumbled across an issue with python3 lxml. Which I solved in the Vagrantfile with this:

# Fix for broken python lxml (see: https://www.suse.com/support/kb/doc/?id=000019818)
pip install --force-reinstall lxml

… and then when I run the build.sh I get a successful build.

Let’s see if I can get back to the official opensuse box image… watch this space.


Yes, that’s what I have been using, though with Windows 10 as underpinning.
The other 2 plugins you asked about earlier are on these versions:
vagrant-host-shell (0.0.4)
vagrant-sshfs (1.3.5)
and vagrant-vbguest (0.29.0)

Interestingly, when testing just an Ubuntu vagrant box (no Rockstor, just the plain box mounting a couple of shared folders), the mounting of shared folders was not a problem at all. But openSUSE has always been a bit of a challenge for me on VirtualBox, vagrant notwithstanding.

On the vbguest plug-in I also tried the additional setting of not “auto-updating”.

>   if Vagrant.has_plugin?("vagrant-vbguest")
>     config.vbguest.auto_update = false
>   end

which seems to then leave the already existing guest extensions alone, but also doesn’t require the plugin to be removed (in case somebody is using the plugin for other scenarios).
However, that didn’t solve the “mounting” problem, of course …

Ok, so if I pin the version of the official opensuse VM box image to version 15.2.31.328 the mounting works.

This can be done as follows (ignore the above exploratory findings):

            v.vm.box = 'opensuse/Leap-15.2.x86_64'
            v.vm.box_version = "15.2.31.328"

It would seem that the versions after this include broken VirtualBox guest additions in the openSUSE repo (a zypper upgrade breaks the image version above).

However, this image seems to break a bunch of kiwi dependencies, including:

qemu-img
sgdisk

I’ll raise a PR with the change back to the bento box in the meantime, but do these broken dependencies ring a bell to anyone?

PR: https://github.com/rockstor/rockstor-installer/pull/39


Great findings!

And yes, when I essentially tried to execute the scripts step by step within the vagrant image, I kept running into missing dependencies:
qemu-img (required me to get the qemu-tools package)
sgdisk (I got the gptfdisk package, but it still wasn’t able to find sgdisk; not sure whether something went wrong during my installation or not).
That’s how I eventually got at least to the .raw file, but no iso.


@mikemonkers Re:

Nice. And now merged.

I’ve not tested it myself, but given the activity in this thread I thought once merged it would make it easier for others to chip in and test the resulting repo. Bit of a pain, all these moving targets re vagrant boxes/VM hosts etc. But when it works it’s nice. Just a pain when they drift apart.

Thanks for stepping up to this. Am I right in thinking this is now a working ‘method’ again? As it was for a while after being submitted at least.

Our official advice has to recommend the use of a vanilla openSUSE Leap within a VM, as per the repo top readme, but for those less familiar with such things your vagrant approach has been super helpful. Thanks for the continued support of this method. Much appreciated.

If anyone could chip in to confirm this method in its current form that would be great. And be sure to mention the OS and version used in the testing. Obviously a successful install from the resulting DIY installer would be a bonus validation.

Thanks folks. It would be nice if we could continue to offer this method as it looks to have helped a number of folks on the OSX / Windows side already who are less familiar / happy setting up a VM by hand.


Hi @phillxnet, certainly agree we need to get back to the official images. I’d like to do some sort of comparison with the VM images. You’d imagine the official VM images were updated in line with the ISO images!? :wink:

Am I being blind, or is there really no reference to a particular vanilla openSUSE Leap, other than referring to it as Leap 15.2? What was the last ISO version you successfully tested against? Do you have a link?

Thanks
Mike


@mikemonkers, @phillxnet thanks for the updated github files. Below is my experience over the last hour.
TL;DR, it worked for me with a couple of tweaks!
Here’s my experience with the updated vagrant config on github …

Directory Structure Setup: a Vagrant_Leap directory right under the drive root, in this case D:
(screenshot)

Execute the Vagrantfile below the vagrant_env folder, i.e. run vagrant up in that folder.

Machine provisioned, updated, etc. Then for good measure I executed vagrant reload.

Interestingly, I still got the Guest Extension version mismatch:
(screenshot)

In the run_kiwi.sh file (located in the vagrant_env folder on my host machine) I had to make one change to make it work:

In Line 10, I had to change the path from

REPO_DIR="rockstor-installer/"

To

REPO_DIR="/home/vagrant/rockstor-installer/"

Otherwise, the build would fail right away with “no such directory”
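That edit can also be applied with sed instead of a manual editor. The sketch below fabricates a minimal stand-in run_kiwi.sh so it is self-contained; against the real file only the sed line is needed, and the absolute path is the one that worked for me above:

```shell
# Stand-in for the real run_kiwi.sh, containing only the line we care about:
printf 'REPO_DIR="rockstor-installer/"\n' > run_kiwi.sh
# Rewrite the relative REPO_DIR to the absolute path inside the vagrant machine:
sed -i 's|^REPO_DIR=.*|REPO_DIR="/home/vagrant/rockstor-installer/"|' run_kiwi.sh
cat run_kiwi.sh  # → REPO_DIR="/home/vagrant/rockstor-installer/"
```

Note the `|` delimiters in the sed expression, which avoid escaping the slashes in the path.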

Also, in the Readme file, for Windows users without bash, it states after vagrant up to run:

vagrant ssh -c "cd /home/vagrant; /vagrant/run_kiwi.sh"

This failed for me. After a little investigation I found I only have the bin and rockstor-installer directory and nothing else in that path. So, I assumed I had to run it with this:

vagrant ssh -c "cd /home/vagrant/rockstor-installer/vagrant_env; ./run_kiwi.sh"

which then started working (takes a little bit, and appears to pause for a while, but it’s processing …):
(screenshot)

I ended up with an iso in my shared folder on my local hard drive:
(screenshot)
Loading the installation iso into a virtual machine, I was up and running within 10 minutes and able to connect to the WebUI:

(screenshot)

I didn’t go any further, but am assuming at this time that all the major components work.


@mikemonkers Thanks for following all this up:

Good point, it’s likely only implied currently in the second line of the Howto section which reads:

Given our image target OS is exclusively openSUSE, a Leap 15.1 or 15.2 install is recommended as the host operating system.

So, by implication, we have openSUSE Leap 15.2. Modified versions of openSUSE are required to use a certain vernacular, such as we do with Rockstor’s “Built on openSUSE” or “Uses openSUSE”, as per their marks guidelines, which came up more recently in the following forum thread:

Couple of issues: dnsmasq and sendmail - #5 by phillxnet

copied in here for convenience:

openSUSE:Trademark guidelines

https://en.opensuse.org/openSUSE:Trademark_guidelines

subsection:

Distributing openSUSE With Modifications:

https://en.opensuse.org/openSUSE:Trademark_guidelines#Distributing_openSUSE_With_Modifications

Where these two variants are among those suggested.

So in line with this I’ve created the following issue to bring this Readme more in line on that front, as per our Web-UI (“Uses openSUSE”) and here on the forum where we promote “Built on openSUSE”:

https://github.com/rockstor/rockstor-installer/issues/40

I’m actually keen to follow up on the build-in-virtualisation capability within kiwi-ng itself. This may well help us with the problems we are seeing here: that of the build OS having to serve both the needs of its virtualisation environment (VirtualBox, in the vagrant build mainly) and the availability of pre-built boxes for the automation system you’ve set up for us, Vagrant.

This is the note within the top level Readme where I’ve introduced this ‘plan’:

We hope later to transition to a newer mechanism within kiwi-ng whereby KVM is employed to create the required build environment. Any assistance with this endeavour would be much appreciated. See: kiwi-ng’s new boxbuild capability.

I’ve not gotten around to trying this yet, and to introduce it ‘proper’ to the repo I’d also have to prove its function within our backend buildbot system as it, in turn, pulls directly from the rockstor-installer repo to do its ‘closed beta’ ISO publishing for our partners such as Ten64. But it shouldn’t present a problem, hopefully. That way our build OS would be one step removed, which would be great. It does however introduce another layer, which is always a challenge, as we see with the Vagrant method, and adds the qemu dependencies needed by newer versions of kiwi-ng where the boxbuild capability was added. As referenced by @Flox in your recent issue/pr in this repository:

Vagrant boxes not mounting shared folder · Issue #38 · rockstor/rockstor-installer · GitHub

Hope that helps and thanks again for your diligence on this build method submission.


@Hooverdan Thanks for proving out @mikemonkers’ contributed Vagrant approach here.

Given we seem to have experienced some drift over time, re your noted tweak requirements, do you fancy submitting a pull request to address those in the repo itself?

And thanks for testing the resulting ISO. Most reassuring.


@phillxnet, @mikemonkers
created a new issue and linked a pull request:

Please review and let me know if that works.


@Hooverdan @mikemonkers @2fatdogs
I’ve now just merged @Hooverdan’s fix as well (Thanks again @Hooverdan).

Unfortunately I wasn’t able to test it myself without additional delay, but on the strength of @Hooverdan’s record and exposition here in this thread I think it’s fairly safe to do so given the included changes.

However it would be grand if we could have this alternative vagrant method tested in its current master branch form. And as before it would be critical to know the OS it was tested under, and the associated program versions, so we can build up a list of known working host OSs for this alternative method as it stands currently.

Thanks folks, and apologies for me holding things up on this one.


@phillxnet, @mikemonkers, @2fatdogs
I tried to get the standard openSUSE vagrant box to work … the comparison of installed packages between the bento version and the openSUSE version showed a 600+ package difference, so that didn’t really seem to help.

I then took the piecemeal approach of trying to get through this one step at a time …
Here is what I found so far:
The culprit really seems to be a botched guest additions installation on the openSUSE image. When running vagrant up (following the readme for the vagrant load), it will typically fail with the same/similar message we’ve seen earlier in the thread:

I tried to get this rectified by adding updates/upgrades to the vagrant file. However, since the script already falls on its nose before it even gets to anything else, it seems to me (and, of course, I might be wrong) that I have to force the installation/reinstallation of the guest additions before the rest of the script will even run through.

So, since - despite the error mounting the shared folders - the vagrant machine is running, I did the following:
vagrant ssh
sudo zypper -n --non-interactive-include-reboot-patches update
which could of course be written like this in one line:
vagrant ssh -c "sudo zypper -n --non-interactive-include-reboot-patches update"
this in turn highlighted mostly these three items to be upgraded, all related to the Virtualbox guest additions:
virtualbox-guest-tools, virtualbox-guest-x11 and virtualbox-kmp-default

As the vagrant box ages (and I purposefully removed a pinned version from the vagrant file to get the most recent one) it will likely install more patches/updates, but that should be ok.

The vagrant plugin vbguest didn’t attempt an auto-update of the guest additions or anything else helpful at this time (it is therefore forced to not auto-update in the vagrant file); this was the only way that I could continue.

So, once that installation/update completed, I reloaded the vagrant box with vagrant reload and it started going through the rest of the vagrant script, namely the shell commands in the bottom section.

Now, after a few more attempts, I had to amend @mikemonkers’ installation additions.
I found that in order to get the actual build of the Rockstor ISO to work I had to add a few more package installations. Furthermore, pip was not installed, so the pip reinstall command at the bottom of the file failed :slight_smile:
So, here are the dependencies for our kiwi ng installer/image creator:
git - already pointed out by @mikemonkers
btrfsprogs - already pointed out by @mikemonkers
gfxboot - already pointed out by @mikemonkers
qemu-tools
gptfdisk
e2fsprogs
squashfs
xorriso

As you can tell, the last 5 packages are all related to creating an image/live CD type output in the end.
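A hypothetical sanity check inside the box after installing those packages: confirm the binaries they provide are actually on PATH. The package-to-binary mapping here is my assumption (qemu-img from qemu-tools, sgdisk from gptfdisk, mkfs.ext4 from e2fsprogs, mksquashfs from squashfs, xorriso from xorriso):

```shell
# Report, one line per tool, whether each assumed kiwi-ng build dependency is on PATH.
for tool in qemu-img sgdisk mkfs.ext4 mksquashfs xorriso; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool ($(command -v "$tool"))"
  else
    echo "MISSING: $tool"
  fi
done
```

Any MISSING line points at a package that didn’t deliver what the build needs, which would have caught the sgdisk surprise earlier in this thread.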

And in order to address the issue with the lxml package I had to also install
python3-pip
upgrade pip for good measure
force reinstall lxml as pointed out by @mikemonkers

Once the vagrant box finished the script, I performed another vagrant reload, as I wasn’t sure what happens if I were to put a sudo reboot into the Vagrantfile (and I ran out of energy to experiment with that), and then executed:
vagrant ssh -c "cd /home/vagrant/rockstor-installer/vagrant_env; ./run_kiwi.sh"
and after some time (seemed longer than on the bento box for some reason) … and a few warnings (@phillxnet, not sure whether these are expected during the kiwi ng install procedure or whether some action needs to be taken on the overall installer) the iso file was generated and available in the vagrant_env folder. I ran a test installation up until I could get to the webui for the first configuration, so I presume it’s all good now.

Now, here’s the updated excerpt of the vagrant file. I have no idea whether there is a better/different way to avoid the guest additions re-installation; that’s where smarter folks than me need to come in. But I think I have at least narrowed down the “missing” items from the openSUSE image to get a successful image built:

Vagrant File:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Required plugins
required_plugins = %w(
    vagrant-host-shell
    vagrant-sshfs
    vagrant-vbguest
    )
required_plugins.each do |plugin|
  system "vagrant plugin install #{plugin}" unless Vagrant.has_plugin? plugin
end

MEM = 2048
CPU = 2
PROFILE = ENV['PROFILE'] || 'x86_64'

VAGRANTFILE_API_VERSION = '2'
#
#  Fully documented Vagrantfile available
#  in the wiki:  https://github.com/josenk/vagrant-vmware-esxi/wiki
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|


	#commented out, since this folder is mounted by default via vagrant
	#config.vm.synced_folder "./", "/vagrant"
    config.vm.synced_folder "../", "/home/vagrant/rockstor-installer"
    
	# Disable update of guest additions when box is opensuse
    if Vagrant.has_plugin?("vagrant-vbguest")
      config.vbguest.auto_update = false
    end

    config.vm.define "rockstor-installer" do |v|
        v.vm.hostname = "rockstor-installer"
        if PROFILE == "x86_64" then
            # Switch back to bento until we figure out what broke in opensuse
            v.vm.box = 'opensuse/Leap-15.2.x86_64'

			# disabled, want to pull latest vagrant box version
            #v.vm.box_version = "15.2.31.325"
            #v.vm.box = 'bento/opensuse-leap-15'
        else
            v.vm.box = 'opensuse/Leap-15.2.aarch64'
        end

        # Provider specific variable
        v.vm.provider :virtualbox do |vb|
            vb.memory = MEM
            vb.cpus = CPU
        end

        if PROFILE == "x86_64" then
            config.vm.provision "shell", inline: <<-SHELL
                sudo zypper --non-interactive addrepo --refresh http://download.opensuse.org/repositories/Virtualization:/Appliances:/Builder/openSUSE_Leap_15.2/ appliance-builder
                sudo zypper --non-interactive --gpg-auto-import-keys refresh
		# probably not necessary here, if it's done outside prior to the Vagrant file successfully completing
				sudo zypper -n --non-interactive-include-reboot-patches update
		# probably dangerous to try this within the vagrant file, hence left it commented
		#		sudo reboot
            SHELL
        else
            config.vm.provision "shell", inline: <<-SHELL
                sudo zypper --non-interactive addrepo --refresh http://download.opensuse.org/repositories/Virtualization:/Appliances:/Builder/openSUSE_Leap_15.2_ARM/ appliance-builder
                sudo zypper --non-interactive --gpg-auto-import-keys refresh
				sudo zypper --non-interactive-include-reboot-patches update
		# probably dangerous to try this within the vagrant file, hence left it commented		
		#		sudo reboot
            SHELL
        end

        config.vm.provision "shell", inline: <<-SHELL
            REPO_URL="https://github.com/rockstor/rockstor-installer.git"
            REPO_DIR="rockstor-installer/"
		# additional packages required to be installed
            sudo zypper --non-interactive install git btrfsprogs gfxboot qemu-tools gptfdisk e2fsprogs squashfs xorriso
			sudo zypper --non-interactive install python3-kiwi
            if [ ! -e ${REPO_DIR} ]; then
                git clone ${REPO_URL} ${REPO_DIR}
            fi
            # Fix for broken python lxml (see: https://www.suse.com/support/kb/doc/?id=000019818)
            # unfortunately requires pip installation and upgrade before reinstall can be done
            sudo zypper --non-interactive install python3-pip
            sudo pip install --upgrade pip
            sudo pip install --force-reinstall lxml
        SHELL
    end

end

Hi @Hooverdan, just reading through your notes and I have a quick offering regarding rebooting in the vagrant file. You’re right not to use a simple ‘sudo reboot’ as it would break the flow and comms of vagrant. However, you can do something like this:


        config.vm.provision "shell", inline: <<-SHELL
            # probably not necessary here, if it's done outside prior to the Vagrant file successfully completing
            sudo zypper -n --non-interactive-include-reboot-patches update
        SHELL

        config.vm.provision "shell",
            run: "always",
            reboot: true

This would install your updates with the ‘no-reboot’ option you already identified and then use a second step to do a controlled reboot that would allow vagrant to continue.


Regarding the missing dependencies, that list was a much shorter list than I imagined. Nice digging. :wink:

FYI, I’m also looking into how to un-bungle this in between the day job.


@mikemonkers, thanks for the information on the reboot. I changed it a little bit, because I noticed in my case that with the run option always, it would run every time I ran vagrant up. So I changed it to once, to only have it reboot after the provisioning is complete but not in subsequent up/halt scenarios. Again, if the sole purpose is to run vagrant to create the iso and then throw it away, it doesn’t really matter.

    config.vm.provision "shell",
        run: "once",
        reboot: true

But I have not been able to figure out how to update/repair the Guest Additions within the Leap vagrant box without the workaround I outlined above: installing the three vbox related packages after the first error out, and then reloading the vagrant box, which will in turn finish the configuration. After that everything is fine, including the shared folders. Key is to not reload or halt/up the vagrant box after the first error, otherwise that UID mismatch issue comes into play (mentioned by @2fatdogs earlier).

While the box is still up after the error, the Guest Additions update command needs to be issued, and only then should vagrant reload be executed. Then the creator UID will be fixed and the shared folders will be mounted correctly.
