3.9.2 Update gone awry. Web UI and shares missing all data

I have been using Rockstor since January 2017, on 3.8.16. I had four 3TB drives configured in RAID 10 with several shares on them, and the OS was installed on a USB stick. Everything was running fine, but I decided to upgrade to the latest version, so I bought an activation code for stable updates and upgraded. After the upgrade the Web UI wouldn't come up, and the machine wasn't connecting to my router either, so I connected a monitor to it and saw this error:
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---

So I rebooted and got the following options:
Rockstor (4.12.4-1.el7.elrepo.x86_64) 3 (Core)
Rockstor (4.12.4-1.el7.elrepo.x86_64) 3 (Core with debugging)
Rockstor (4.12.0-1.el7.elrepo.x86_64) 3 (Core)
Rockstor (0-rescue-xxxxxx)

Options 1 and 2 don't work; option 3 boots up, but I can't access the Web UI.
yum info shows this:
[root@rockstor ~]# yum info rockstor
Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.usf.edu
 * epel: ewr.edge.kernel.org
 * extras: mirror.cc.columbia.edu
 * updates: mirror.trouble-free.net
Installed Packages
Name        : rockstor
Arch        : x86_64
Version     : 3.9.2
Release     : 33
Size        : 79 M
Repo        : installed
Summary     : RockStor - Store Smartly
License     : GPL
Description : RockStor - Store Smartly

Doing lsblk returns the following:
[root@rockstor ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  2.7T  0 disk
sdb      8:16   0  2.7T  0 disk
sdc      8:32   0  2.7T  0 disk
sdd      8:48   0  2.7T  0 disk
sde      8:64   1 29.2G  0 disk
├─sde1   8:65   1  500M  0 part /boot
├─sde2   8:66   1  2.9G  0 part [SWAP]
└─sde3   8:67   1 25.8G  0 part /home

But if I go to my share at /mnt2/media, I don't see any files. Did I lose all the data that was stored there? I had over 1TB of photos and videos on it.
I tried systemctl restart rockstor:
[root@rockstor ~]# systemctl restart rockstor
A dependency job for rockstor.service failed. See 'journalctl -xe' for details.
[root@rockstor ~]# journalctl -xe
Aug 28 16:18:05 rockstor initrock[2988]: 2018-08-28 16:18:05,355: Checking for flash and Running flash
Aug 28 16:18:05 rockstor systemd[1]: Reloading.
Aug 28 16:18:05 rockstor initrock[2988]: 2018-08-28 16:18:05,792: Updating the timezone from the system
Aug 28 16:18:05 rockstor initrock[2988]: 2018-08-28 16:18:05,792: system timezone = America/New_York
Aug 28 16:18:05 rockstor initrock[2988]: 2018-08-28 16:18:05,793: Updating sshd_config
Aug 28 16:18:05 rockstor initrock[2988]: 2018-08-28 16:18:05,794: sshd_config already has the updates.
Aug 28 16:18:05 rockstor initrock[2988]: 2018-08-28 16:18:05,794: Running app database migrations...
Aug 28 16:18:07 rockstor initrock[2988]: Traceback (most recent call last):
Aug 28 16:18:07 rockstor initrock[2988]: File "/opt/rockstor/bin/initrock", line 44, in
Aug 28 16:18:07 rockstor initrock[2988]: sys.exit(scripts.initrock.main())
Aug 28 16:18:07 rockstor initrock[2988]: File "/opt/rockstor/src/rockstor/scripts/initrock.py", line 40
Aug 28 16:18:07 rockstor initrock[2988]: run_command(migration_cmd + ['storageadmin'])
Aug 28 16:18:07 rockstor initrock[2988]: File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in
Aug 28 16:18:07 rockstor initrock[2988]: raise CommandException(cmd, out, err, rc)
Aug 28 16:18:07 rockstor initrock[2988]: system.exceptions.CommandException: Error running a command. c
Aug 28 16:18:07 rockstor systemd[1]: rockstor-pre.service: main process exited, code=exited, status=1/F
Aug 28 16:18:07 rockstor systemd[1]: Failed to start Tasks required prior to starting Rockstor.
-- Subject: Unit rockstor-pre.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit rockstor-pre.service has failed.
--
-- The result is failed.
Aug 28 16:18:07 rockstor systemd[1]: Dependency failed for RockStor startup script.
-- Subject: Unit rockstor.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit rockstor.service has failed.
--
-- The result is dependency.
Aug 28 16:18:07 rockstor systemd[1]: Job rockstor.service/start failed with result 'dependency'.
Aug 28 16:18:07 rockstor systemd[1]: Unit rockstor-pre.service entered failed state.
Aug 28 16:18:07 rockstor systemd[1]: rockstor-pre.service failed.
Aug 28 16:18:07 rockstor polkitd[2526]: Unregistered Authentication Agent for unix-process:2982:32736 (

Is there any way to recover my data? Please help.
Thanks,
Satish.

@SatishI Hello again.

Your data is most likely just fine. Rockstor manages the mounts of all managed pools (btrfs volumes) and their consequent shares (btrfs subvolumes), so if the rockstor.service fails there are no mounts. That means the mount points will be just that: empty directories awaiting the pool and subsequent shares to be mounted.
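If you want to reassure yourself of this, standard util-linux tools will show whether a directory is a live mount point or just an empty placeholder; using your /mnt2/media share path as the example:

mountpoint /mnt2/media   # reports "is not a mountpoint" when nothing is mounted there
findmnt /mnt2/media      # prints nothing at all if the share is not mounted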

3.8.16 to 3.9.2-33 (the current release) is a fairly massive jump, but I don't think there were any db changes in between. There have, however, been a huge number of upstream changes. When you initiated the stable channel subscription, did you reboot soon thereafter? It can take quite a while for all of the upstream packages to go in, if that is what happened.
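If you suspect that update run never finished, yum keeps a transaction history that is harmless to inspect; the exact markers vary by yum version (e.g. '*' for aborted, 'E' for errors, shown next to the Altered count):

yum history                 # recent transactions; look for markers next to the Altered count
yum history list rockstor   # just the transactions that touched the rockstor package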

To try and further diagnose this: given there were no real db changes (there was a large migration mechanism change at around 3.8.15, see "Upgrading to 3.8.16 from 3.8-14 or before? Read this", but that shouldn't have affected your db update), the dependent service of rockstor.service, i.e. rockstor-pre.service, looks to have failed at the database migration point. This is odd, as we haven't seen a failure there for quite some time.

As this issue occurred during an update, could you give the output of the following command? Your system may be in a partially updated state, although the new rockstor package seems to have gone in OK, given your 'yum info rockstor' output.

yum update

That should tell us if all is well on the system packages side, as the problem may just be related to that.
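And a side note: if that yum run complains of an unfinished transaction, the yum-utils package (assuming it is installed; it is standard on a CentOS base) has a tool to deal with it:

yum-complete-transaction                 # resume/finish any incomplete transactions
yum-complete-transaction --cleanup-only  # or just clear stale transaction journals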

As to whether you lost your data: it should be safely tucked away on your data pool / drives, which I suspect, given the failed rockstor service, are still unmounted.

You can see what is currently mounted via:

cat /proc/mounts

and an overview of all btrfs devices and their associated pool affiliations via:

btrfs fi show
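For reference, on a healthy system with a 4-disk pool you would expect to see something along these lines; the label, uuid, and sizes below are illustrative placeholders, not your actual values:

# a mounted pool shows up in /proc/mounts as e.g.:
/dev/sda /mnt2/mypool btrfs rw,relatime,space_cache,subvolid=5,subvol=/ 0 0

# and btrfs fi show groups all member devices under their pool:
Label: 'mypool'  uuid: 12345678-abcd-....
        Total devices 4 FS bytes used 1.10TiB
        devid    1 size 2.73TiB used 560.00GiB path /dev/sda
        devid    2 size 2.73TiB used 560.00GiB path /dev/sdb
        devid    3 size 2.73TiB used 560.00GiB path /dev/sdc
        devid    4 size 2.73TiB used 560.00GiB path /dev/sdd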

And assuming you didn't choose to use your system disk for any data, then a fresh install using a different system disk on this machine (make sure not to leave the original attached) is an option here. Note that it is best to first detach all data disks (and the prior system disk), install, subscribe, do all updates patiently (this can take tens of minutes on a slow machine), then reboot to make sure all is OK, then shut down, re-attach all data pool disks, boot up, and import your freshly re-attached prior pool. If done on the same machine (motherboard) then you should get the same appliance id, and so the same activation code should work. The Reinstalling Rockstor howto may be of use here.
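And if you'd like peace of mind before wiping anything, you can verify the data is intact by hand with a read-only mount; a minimal sketch, assuming /dev/sda is one of your pool members (mounting any one member of a multi-device btrfs volume mounts the whole pool) and 'media' is your share name:

mkdir -p /mnt/check
mount -o ro /dev/sda /mnt/check   # read-only, so it cannot alter the pool
ls /mnt/check/media               # shares appear as top-level subvolume directories
umount /mnt/check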

But the above 'brute force' approach of re-installing will lose all settings. We do now have a currently undocumented (my fault) command line config backup capability, which should produce a file of your current settings, though I'm unsure if it will work in your situation, i.e. with a broken rockstor-pre.service.

Re-installing is still an option for re-gaining access to your pool/data and for setting up new exports etc. for its network access. However, it may not be all that much hassle to repair this install, as we know roughly where it is currently failing.

Let's see the output of that 'yum update' command first and take it from there.

Thanks @phillxnet

Here is the output of yum update:

[root@rockstor ~]# ls
anaconda-ks.cfg
[root@rockstor ~]# yum update
Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile

By detaching the data disks, do you mean physically disconnecting the disks from the motherboard and then reattaching them after reinstallation?

@SatishI

Yes, "… after reinstallation?", plus channel subscription, the consequent update, and (after a good 20 mins to allow the update to finish) a reboot. That way you can be sure to get all the newest code prior to your import and config restore (if that is part of the plan). It's a bit over the top, but then you know that any installer bug, for example the one whose later elements were tracked down to our upstream anaconda installer and its interplay with our kickstart file:

root pool dev edit options non symmetrical · Issue #1848 · rockstor/rockstor-core · GitHub

or a misstep, accidental click, or failure to check that only one disk is ticked, cannot wipe one or more data disks. Hence avoiding this completely by detaching them: just trying to play safe with data during large, complex moves like a fresh install / re-install.

And just to re-emphasise, the Reinstalling Rockstor howto in the docs is also worth reading, and has an overview "Data Import" section at the end.

Good, then that narrows it down to the db migration mechanism.
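For anyone wanting to poke at this directly, the failing step can in principle be re-run by hand for a fuller error message; a sketch, assuming the stock /opt/rockstor install location and that bin/django is the buildout-generated Django wrapper referenced in the traceback (both assumptions on my part):

cd /opt/rockstor
bin/django migrate storageadmin --database=default   # re-run the storageadmin migrations; the full traceback should show which migration fails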

I'll try to pick this up again later, unless someone else beats me to it. @Flyer is up on the db migration issues we have had in the past and, depending on timing, may be able to step in with some further diagnosis, if that is the way you end up going.

Also remember to detach all and re-attach all in the same sitting, as booting up with even one drive missing will also result in a failure to mount.
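(For context only, not a recommendation: btrfs will by default refuse to mount a multi-device volume with a member missing; recovery in that situation involves the degraded mount option, along the lines of the sketch below, where the device and mount point are placeholders. Re-attaching all disks together means you should never need it.)

mount -o ro,degraded /dev/sda /mnt/check   # emergency-only: mounts even with a missing member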

The output of:

btrfs fi show

would also be helpful for those looking to help with this issue.

Hope that helps.

Thanks @phillxnet. I followed your instructions: did a fresh reinstall, applied all the updates, then reattached the data disks and imported them. I got all my shares and pools back.
Lesson learned: I won't fall behind on system updates. :slight_smile:

@SatishI Thanks for the update, and glad you're up and running again.