Rockstor 4.0.9 (openSUSE) post-install: rpm database corrupted?

Yesterday, I finally pulled the trigger and moved from CentOS to the openSUSE-based version on my main NAS. All went well in terms of installation and putting my configuration in place (I did not restore from backup, as I wanted to redo a few things).

On the command line (and after the initial bootup post-installation) I noticed that any zypper-related command would elicit this message multiple times (I assume once for each repo that’s configured):

  warning: Found NDB Packages.db database while attempting bdb backend: using ndb backend.
Here’s a screenshot of the same (this happens with any variation of zypper command, by the way; I just captured the one after running zypper up):
[screenshot]

Has anybody else experienced this recently with their installation? All my previous attempts were on a VM in VirtualBox, and I didn’t notice it there (maybe it was present, maybe not).
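In case it helps anyone diagnose the same thing: you can tell which backend an rpm database directory uses just from the files on disk, since ndb keeps its data in `Packages.db` while bdb uses `Packages`. Here is a small sketch of a helper (the `rpmdb_backend` function name is mine, not from any tool):

```shell
# Sketch: report which rpm database backend a root tree appears to use,
# judging only by which database file is present on disk.
#   Packages.db -> ndb backend (Tumbleweed, Leap 15.3 onwards)
#   Packages    -> bdb backend (Leap 15.2 and earlier)
rpmdb_backend() {
    dir="${1:-/var/lib/rpm}"
    if [ -e "$dir/Packages.db" ]; then
        echo ndb
    elif [ -e "$dir/Packages" ]; then
        echo bdb
    else
        echo unknown
    fi
}

# Inspect the live system's database (or point it at a mounted image)
rpmdb_backend /var/lib/rpm
```

On a reasonably recent rpm (4.15+, if I recall correctly) you can also ask rpm itself which backend it is configured for with `rpm --eval '%{_db_backend}'`.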

Anyway, I was able to resolve it, using this little nugget that I found here:
https://www.suse.com/support/kb/doc/?id=000017180

TL;DR:
Run these steps in succession, trying an ‘rpm -qa’ after each step:

  1. rpm --rebuilddb
  2. rpm --initdb
  3. rpm --rebuilddb
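The steps above can be wrapped so that each one is followed by the `rpm -qa` sanity check automatically (the `recover_rpmdb` helper name is my own, not from the KB article; pass `echo` for a dry run, or nothing to execute for real, as root):

```shell
# Sketch of the KB article's recovery sequence, with an 'rpm -qa'
# sanity check after each step. Pass 'echo' as the runner to only
# print the commands; pass nothing to actually execute them as root.
recover_rpmdb() {
    run="$1"
    $run rpm --rebuilddb && $run rpm -qa
    $run rpm --initdb    && $run rpm -qa
    $run rpm --rebuilddb && $run rpm -qa
}

# Dry run: show the plan without touching the database
recover_rpmdb echo
```

The database is healthy again once `rpm -qa` lists packages without emitting the ndb/bdb warning.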

This resolved the issue for me. Still curious why this would happen. I built the Leap 15.3 variant on a Tumbleweed VM (that’s what I had available at the time).


Hi there, @Hooverdan!

Thanks a lot for sharing your workaround.

Although I’ve never seen it in my v4 installations, I believe you nailed the issue:

My best guess is indeed exactly that, though it depends on the method you used to build the installer. Here’s a resource from the kiwi folks themselves that seems to explain things quite well:
https://groups.google.com/g/kiwi-images/c/e-RAem1_o_0?pli=1

In particular, see the following bit from the kiwi maintainer:

I am building a 15.2 target on a Tumbleweed host using the box build.
The build host was last updated mid December. So it is rather recent.

I know TW has changed the rpm database backend. Leap 15.2 still uses bdb
and TW uses “NDB Packages.db”. The suse box (I assume you used this one)
was updated to build as a TW box. I think this is causing the inconsistency
you saw. Could you do me a favor and try to build your image with:

kiwi system boxbuild … --box universal

and check if the same database issue is present. If yes we need to create
a Leap 15.2 box to allow building with a bdb host to avoid the inconsistency.

Note that they have a Leap box available for that boxbuild plugin, which is what we have listed in the README:

python3 -m venv kiwi-env
./kiwi-env/bin/pip install kiwi kiwi-boxed-plugin
./kiwi-env/bin/kiwi-ng --profile=Leap15.2.x86_64 --type oem \
  system boxbuild --box leap \
  -- --description ./ --target-dir ./images

Did you use the boxbuild way to build the installer, or the “canonical” way (without the boxbuild plugin)? If the former, then we need to check and update some things…

Thanks again for yet another excellent report and sharing your solution!


@Flox ah, yes. I must have had some brain fog there: I did not use boxbuild, even though I read through it. Then I obviously got distracted by something (squirrel?) and proceeded without it. So, on the TW VM I just did:

git clone https://github.com/rockstor/rockstor-installer.git
cd rockstor-installer/

then added the repo and installed the additional packages:

sudo zypper addrepo http://download.opensuse.org/repositories/Virtualization:/Appliances:/Builder/openSUSE_Leap_15.3/ appliance-builder
sudo zypper install python3-kiwi btrfsprogs gfxboot qemu-tools gptfdisk e2fsprogs squashfs xorriso

even though the instructions explicitly said this was for a 15.3 host :weary:

edited the kiwi file to change the Rockstor version to 4.0.9 in two places, then skipped down to the build command in the README and, with the Leap version changed to 15.3, finally executed:

kiwi-ng --profile=Leap15.3.x86_64 --type oem system build --description ./ --target-dir /home/kiwi-images/

This obviously was not the correct way to do it, but I guess I got there in the end anyway. I hope nothing else is lurking underneath; it’s been running fine for the last 11 hours, so there is hope :relieved:


Glad it was just that, then!