Initscripts conflicts since today

Hi,
I’m running Rockstor 3.9.2-57, stable release.

Today I saw the flashing indicator that updates were available, but I also received a cron.daily message.

/etc/cron.daily/0yum-daily.cron:

Failed to check for updates with the following error message:
Failed to build transaction: initscripts conflicts with rockstor-release-3-8.16.el7.x86_64

Why is it mentioning rockstor-release 3-8.16??? I’m running 3.9.2-57.

Running yum check from the console shows a lot of duplicates. I don’t know if that’s related.

Cheers,

J.

@dont Hello again.

This sounds very much like you have a failed package update. That can happen if the system is rebooted mid-update. To get confirmation of this, you could post the output from the following command, run as root:

yum update

but duplicate packages are often an indication of a large update that was interrupted.

yum info rockstor

may also help establish the state of the core rockstor package. With these, hopefully folks here on the forum can work out what’s going wrong.
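
To see the duplicates themselves, and assuming the yum-utils package is installed (yum install yum-utils if not), the following diagnostic may also help:

# List duplicate package versions left behind by an interrupted update:
package-cleanup --dupes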

Hope that helps.

Hi Philip,
I made two screenshots.

Too much to type :slight_smile:

Cheers,

J.

I have tried many things, but the update is stalled.
It keeps stumbling over initscripts and its dependencies.

I have searched around on the internet but haven’t found a solution yet.

Cheers,

J.

@dont Was this system initially installed from an earlier ISO than 3.9.1, by chance?

This looks like an older issue that was resolved a while back, shortly after it occurred; that’s why I was wondering if it was a failed update.

There are some older forum entries on this issue, but a newer version of the problematic package should be available in the Rockstor repos.
What is the output of:

rpm -qa | grep rockstor

On my CentOS rpm test system here I have:

btrfs-progs-4.12-0.rockstor.x86_64
rockstor-3.9.2-57.x86_64
rockstor-release-3-9.23.el7.x86_64

So you can see that your problematic rockstor-release package is an older version than what is now available.

You said you had reports of duplicate packages:

Is that still the case?

Given your main rockstor package shouldn’t downgrade (as testing is now older than stable), you could try temporarily switching to the testing channel, doing a:

yum update

then switching back to the Stable channel. But we haven’t had any other reports of this for ages now, which is why I’m wondering if it was an older ISO or a failed update of some sort. Sorry if you’ve already told me about this; I’m kind of in the middle of a few things and so can’t really go and check.
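
The channel switch itself is done in the Web-UI, but to confirm from the console which Rockstor repo is active after each switch, something like this quick sketch should do:

# Show enabled repos and pick out the Rockstor entries:
yum repolist enabled | grep -i rockstor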

Let us know how you get on.

Hope that helps. Though this procedure shouldn’t be necessary, and is not recommended except for when folks have installed from the CentOS-based 3.9.1 ISO (currently the only one).

I’ll do an end-to-end check again soon to see if I can reproduce your issue, but it’s not likely to be until after Friday now, I’m afraid.

Hi @phillxnet,
I installed Rockstor with the latest ISO available, 3.9.1 (2017-07-02).
The moment I activated the ‘stable’ channel, the problems started.

First there was the kernel panic with kernel 4.12, so I always boot with the older 4.10 kernel.

For the rpm -qa | grep rockstor output, see the screenshot. I do indeed have two versions of rockstor :frowning:

Cheers,

J.

@dont As suspected, this all points towards a partial/interrupted update.

Given this is a relatively new install (I think I remember that right), the easiest path may simply be to re-install and, this time, after having subscribed to the channel of your choice (this step adds the appropriate package repo), initiate the system-wide update as root via a:

yum update

When a large rpm-based update goes wrong, often via an inadvertent reboot mid-process, it can be tricky to sort out. Not impossible, but challenging. And given there is a strong likelihood you have no data on the system drive (folks mostly keep data on their redundant additional pool and not the system drive), you should be able to just import your existing pool using the fresh install.

That’s my ‘easy’ suggestion on getting up and running, as otherwise you are looking at an rpm db repair and sorting out all the package duplicates. And I’m assuming rockstor-release is still only one of many duplicates here. We have seen this a number of times with folks rebooting before an update has finished. And given our ISO is now so old, there are several hundred MB of upstream updates to download and install, which can take a significant time, especially if you have a slow system disk. In that case the command line approach is especially helpful as it will wait for as long as is necessary. But our Web-UI indicator of an ongoing update (which should really be more ‘severe’) can time out on slow systems.
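
If you do end up running the update over SSH, it may be worth doing so inside a detachable session so a dropped connection can’t interrupt the transaction; a minimal sketch, assuming screen is available (yum install screen if not):

# Start a named screen session so the update survives an SSH disconnect:
screen -S rockstor-update
# Inside the session, run the full system update as root:
yum update
# Detach with Ctrl-a d; reattach later with:
screen -r rockstor-update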

The following is our doc section on re-installing to the same system disk, if that is the way you are going with this: Reinstalling Rockstor.

Let us know how it goes. Otherwise, if you fancy going the repair route, then a general search for duplicate packages on rpm systems is the way to go. There are some reports on this forum of the same, but it’s not really Rockstor-specific.

It may even be that your 4.12 kernel was also incompletely installed and, if given enough time to complete before a reboot is performed, may end up working for you as well. Shame our new ISO isn’t quite ready, but all in good time. So do keep us informed. If you take care to disconnect your data drives (and copy any system-resident data off) before you do the re-install, ensure all updates are in, and then reboot before re-attaching the data drives and doing a pool import, you end up using the newest code to do that import.

Hope that helps, and keep us updated.

Hi,
I read your comments, Philip; thank you for that.

Currently Rockstor is functioning. I’m in the process of building a completely new system, so I’ll leave the current one intact.

Cheers,

J.

@phillxnet

This problem was bugging me. Since I had some time off today, I deep-dived into dependency HELL.

And I solved the problem.

Steps taken:

yum install epel-release    (already installed, btw)
yum localinstall --nogpgcheck http://ftp.jaist.ac.jp/pub/Linux/Fedora/epel//epel-release-latest-7.noarch.rpm
yum clean all
yum update
yum install dnf
dnf clean all
dnf update
dnf upgrade

Ta-da, problems solved.

Cheers,

J.

@dont Well done.

So it does look like it was an rpm db corruption/problem then. Interesting resort to dnf there :slight_smile:.

Thanks for sharing your solution.
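
For anyone hitting the same duplicates later: before resorting to dnf, the package-cleanup tool from yum-utils (mentioned earlier) can also remove the older half of each duplicate pair, assuming yum-utils is installed:

# Remove the older version of each duplicated package; review the
# proposed transaction carefully before confirming:
package-cleanup --cleandupes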

I also managed to fix the broken kernel 4.12.

I switched to the testing channel, ran:

yum info rockstor
yum update rockstor

and switched back to Stable.

Cheers,

J.

@dont Well done again. But this one is a little more curious, as that kernel is available in the common repo to both stable- and testing-subscribed systems, so the switch should not have been necessary.

Also, switching to testing from stable, although fine in the CentOS variant, will soon be ill-advised on our openSUSE variant, especially when followed by the ‘yum update rockstor’ command, as it will move folks to what is about to become pretty experimental on the openSUSE side once we have our “Built on openSUSE” stable rpms out and start to work on our technical debt, i.e. the Python 2 to 3 move.

So this again points towards a corrupted rpm db scenario, as a normal stable or testing channel subscription also has access to a legacy repo that contains this kernel. The fact that your system did not pick this up suggests it failed to get access to that repo. Hence the ‘repair’ potentially having corner cases such as these.

This is all down to our super old installer and us relying on progressively larger updates to get folks updated. But hopefully this should all change very shortly. We just have to get to feature parity, and then we can release a new ‘Built on openSUSE’ installer and get folks fully updated from the get-go.

Thanks again for sharing your experience. I just wanted to establish that this is not a normal experience and that you definitely had rpm db issues there. It is good to work through them, though it’s often easier and quicker to get a clean install/update from the start. However, the challenge of the fix is attractive.
For reference, a normal Stable subscription install has the following repos:

yum repolist

repo id                                                 repo name                                                                              status
Rockstor-Stable                                        Subscription channel for stable updates                                                    72
base/x86_64                                            CentOS-7 - Base                                                                        10,070
epel/x86_64                                            Extra Packages for Enterprise Linux 7 - x86_64                                         13,250
extras/x86_64                                          CentOS-7 - Extras                                                                         392
rockstor                                               Rockstor 3 - x86_64                                                                        55
updates/x86_64                                         CentOS-7 - Updates                                                                        240

And we can see the 4.12 kernel available in the common ‘rockstor’ repo via:

yum repo-pkgs rockstor list
...

rockstor-release.x86_64                                                   3-9.23.el7                                                        @rockstor
Available Packages
docker-engine.x86_64                                                      1.9.1-1.el7.centos                                                rockstor 
docker-engine-selinux.noarch                                              1.9.1-1.el7.centos                                                rockstor 
kernel-ml-devel.x86_64                                                    4.12.4-1.el7.elrepo                                               rockstor 
python-devel.x86_64                                                       2.7.5-18.el7_1.1                                                  rockstor 
rockstor-logos.noarch                                                     1.0.0-3.fc19                                                      rockstor

Hence no requirement to switch to testing to pick up the 4.12 kernel.
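
So, staying on the Stable channel, something like the following sketch should have sufficed; note the exact kernel-ml package name here is an assumption based on the kernel-ml-devel entry listed above:

# Install the 4.12 kernel directly from the common 'rockstor' repo;
# no channel switch needed:
yum install kernel-ml-4.12.4-1.el7.elrepo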

I just wanted to clear that up, as otherwise folks experiencing similar issues in the future may skim-read this thread and end up jumping to Testing channel rpms which, soon, will be far more experimental than stable. But for our CentOS variant this is entirely safe, as we no longer publish testing on that side (the last such release is now 2.5 years old).

A procedure that has worked for me in the past to rebuild a very poorly CentOS-based rpm db is as follows:

# If our db becomes corrupt we can initiate a rebuild via:
# First back up the current Berkeley DB environment files:
mkdir /var/lib/rpm/backup
cp -a /var/lib/rpm/__db* /var/lib/rpm/backup/
# Remove the (possibly stale) db environment files:
rm -f /var/lib/rpm/__db.[0-9][0-9]*
# A quiet full query re-creates the environment files:
rpm --quiet -qa
# Rebuild the package database indices:
rpm --rebuilddb
# And clear all yum caches:
yum clean all
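
And as a quick sanity check afterwards (my own habit rather than part of the canonical procedure):

# Report any remaining dependency problems or duplicates:
yum check
# Confirm the rebuilt db reads cleanly end to end:
rpm -qa > /dev/null && echo 'rpm db reads OK'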

And a good refresh procedure is as follows:

# Mark all repo metadata as expired so it is re-fetched on next use:
yum clean expire-cache
# Then list any pending updates:
yum check-update

However, for most folks a simple re-install is best, as it can end up being simpler and often faster. Our new installer, in private alpha testing, currently goes from blank-system power-on to first Rockstor Web-UI in < 5 mins on 5-year-old hardware, so I think sticking to as much system/data separation as possible is good in this regard. But the facility of system-drive shares is also rather nice. Always a balance, I guess.

Again thanks for sharing your findings.

Hey Philip,
Thank you for the very detailed explanation. A big thumbs up!!!

I forgot to mention that I had manually deleted the 4.12 kernel beforehand.
I understand now that I didn’t have to switch to Testing to see the 4.12 kernel. Ah well, I learned a lot today.

As for the problem: as you may recall, I like to deep-dive into CentOS because we’re using RHEL at the office. Since my expertise is mainly with Windows, I want to learn as much as possible about Linux, specifically CentOS/RHEL. I’m also very curious about openSUSE, but I’ll wait till the next big update of Rockstor. I think by then I will have my new hardware ready to test :slight_smile:

Cheers,

J.
