Migration to Rockstor

Hi Everyone.

So my journey to Rockstor continues. I came from OMV 5, tested Rockstor 4.0.4 in a VM, and ended up going with the stable subscription. I was once again able to re-register a new appliance ID with appman when I moved from the VM to my real-world NAS and, once everything was backed up, nuked & paved with Rockstor.

Since I was already running almost everything in Docker containers, migration was pretty straightforward, and most of the differences I encountered were due to the distribution and its tools (I had to learn zypper + friends, coming from apt). I'm pleased to say I was able to get my old "infrastructure" up and running in short order under Rockstor.

A couple of things I changed/deviated from stock:

  • Installed Dockstarter. This was painless and works just as it did for me under OMV. It even updated docker-compose in the process. Quickly re-deployed all my containers.
  • Installed the Portainer “rock on” to manage all of this. UI is directly linkable via RockUI.

Some questions:

  • There’s some sort of metrics tool running (I think it’s data-collector) and I assume it’s used for the dashboard. Can this facility output to InfluxDB? Under OMV it was the collectd daemon, which supports this, but if I can leverage existing tools, awesome. I feed this data into Grafana to get nice pretty dashboards about my NAS. edit I wrote some shell scripts to directly inject data into influxdb, so I assume it’d be somewhat trivial, or I can read this data from shell and do it myself?
  • On SourceForge, where you can download the ISO, there’s a 4.0.5 directory containing a tarball with what appears to be the update/patch. How does this install? I built 4.0.4 using the installer method.

Cons?:

  • The certificate interface needs way more polish. It accepted my SSL cert, but even after installing it, I don’t know what (if anything) is actually installed.
  • Only /opt/rockstor/var/log/rockstor.log seems to contain any logging (and thin logging at that, unfortunately), and I have to determine runtime status using systemctl status. Can /var/log/syslog be enabled/turned on? It makes troubleshooting easier. edit resolved by installing rsyslog and enabling all messages in /etc/rsyslog.conf

Otherwise, it’s been a pretty painless transition, with most of the growing pains being around reading the openSUSE documentation on how to use their tools. My NAS isn’t super-awesome or anything, but it does host:

  • Emby
  • transmission
  • Influxdb
  • mariadb
  • pybtble
  • portainer
  • homeassistant
  • openzwave
  • mosquitto
  • heimdall
  • nodered
  • grafana

The NAS operates my home automation (and my family was relieved that this transition Dad was doing went so smoothly) and various other services for my family.

Overall, I’m quite pleased and the journey continues.


Hi @flatine69,

Thanks a lot for sharing your feedback, that is extremely useful and greatly appreciated; let’s hope we’ll be able to improve Rockstor thanks to it!

I’ll try to answer some of your questions, or at least those for which I do have an answer.

You are correct: data-collector is used to feed data to Rockstor’s dashboard, but also to run periodic checks on Rockstor’s version, package updates, running services, pool health, etc… You might find the code below relevant:

Unfortunately, I am not familiar with sending data to InfluxDB, but ensuring compatibility with it could be a great addition, in my opinion. You do seem to have quite the experience in doing so, however, so any input on this would be very helpful.

4.0.5 is the latest rpm available in the Testing channel, which means it should be presented to you as a Rockstor update from within the UI if you are subscribed to the Testing channel.

Thanks a lot for your feedback on this. This is an area that has not seen much interest/feedback from users in a while, I believe, so I’m really glad you brought this to light. I agree it seems helpful to have a list of what is currently installed so we should try to implement a way to do that. Would you have any additional details on what you would like to see and/or how you would use it? Once we have a better idea, feel free to create an issue on our Github repo so that we can keep track of it and plan accordingly.

Yes, Rockstor does store all of its own logs in /opt/rockstor/var/log. All other system logs should be available in their respective locations, however. For instance, Samba logs will be at /var/log/samba, sssd logs will be at /var/log/sssd, etc… openSUSE Leap uses journald for all logs, so you should be able to see everything using journalctl (this is what I personally use). I’m glad you found a way to get what you wanted :slight_smile: . In case you’re still curious or interested, you can have a look at the openSUSE documentation on the matter:
https://doc.opensuse.org/documentation/leap/tuning/html/book-sle-tuning/cha-tuning-logfiles.html

That is a very nice app stack you are running there, nice! I myself have always been interested in getting into InfluxDB/Grafana for monitoring, for instance, but have never found the time to look seriously into it and learn.

Thanks immensely for all your feedback, by the way. I can’t emphasize enough how valuable it is for a user-driven open-source project such as Rockstor, so be assured it is greatly appreciated! Hopefully the result of your feedback will be integrated sooner rather than later, once somebody finds time to tackle it. If you’re interested in having a look at it yourself, feel free to read our documentation on how to contribute :wink: .

Thanks again!


Sorry, not sure how to quote you but I’ll try to provide some feedback here.

Regarding data-collector: I took a look at the code, and while my Python isn’t anywhere near strong enough, I can follow along. I have done some Python/shell scripting against InfluxDB, and since you can post data with curl, a simple HTTP request should do it.

Here’s an example where I query all four of my HDDs via the hddtemp tool, parse the output, and put it into InfluxDB. As with anything, the data can be as simple or complex as you like, but it’s basically a key->value system with timestamps (added by InfluxDB), and I run this via a cron job.

I have another script, in Python, that translates fail2ban IPs into geocodes and puts the results into another InfluxDB database, which I then use to place pins on a world map of IPs captured by fail2ban. It uses curl via shell to post the data, as I don’t know how to “talk” HTTP with Python :slight_smile:
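For what it’s worth, Python’s standard library can “talk” HTTP on its own via urllib. Here’s a minimal sketch (the function names are mine, and the URL/database are placeholders, not your actual setup) that builds an InfluxDB 1.x line-protocol string and POSTs it to the /write endpoint, just like curl does:

```python
import urllib.request


def to_line_protocol(measurement, tags, fields):
    """Build a single InfluxDB line-protocol point, e.g. 'hddtemp,dev=sda temp=32'.
    Tags and fields are sorted for a deterministic output."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str}"


def post_point(write_url, line):
    """POST one point to InfluxDB's /write endpoint.
    InfluxDB 1.x replies with HTTP 204 on success."""
    req = urllib.request.Request(write_url, data=line.encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Build a point like the hddtemp script does (host/db are placeholders):
line = to_line_protocol("hddtemp", {"dev": "sda"}, {"temp": 32})
print(line)  # hddtemp,dev=sda temp=32
# post_point("http://192.168.1.10:8086/write?db=hddtemp", line)
```

The actual POST is commented out here since it needs a reachable InfluxDB instance, but the shape is the same as the curl call in the shell script below.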

rockstor:~/scripts/grafana # hddtemp /dev/sd{a,b,c,d}
/dev/sda: WDC WD10EADS-22M2B0: 32°C
/dev/sdb: ST2000DM006-2DM164: 28°C
/dev/sdc: ST2000DM006-2DM164: 28°C
/dev/sdd: ST2000DM006-2DM164: 29°C


rockstor:~/scripts/grafana # sh hddtemp
influxdb_url = http://192.168.1.10:8086
influxdb = hddtemp
influxdb_write = http://192.168.1.10:8086/write?db=hddtemp
processing drive sda: drive=sda temp=32
processing drive sdb: drive=sdb temp=28
processing drive sdc: drive=sdc temp=28
processing drive sdd: drive=sdd temp=29


rockstor:~/scripts/grafana # cat hddtemp
#!/bin/sh

influxdb_url="http://192.168.1.10:8086"
influxdb="hddtemp"
influxdb_write="${influxdb_url}/write?db=${influxdb}"

drives="sda sdb sdc sdd"

echo "influxdb_url = ${influxdb_url}"
echo "influxdb = ${influxdb}"
echo "influxdb_write = ${influxdb_write}"

for i in $drives
do
  # Strip the leading "/dev/" and the ":", "°", "C" characters from hddtemp's output
  result=$(/usr/sbin/hddtemp "/dev/$i" | /usr/bin/sed -E 's/^\///g;s/^dev\///g;s/[:°C]//g')
  drive=$(echo "${result}" | /usr/bin/awk '{ print $1 }')
  temp=$(echo "${result}" | /usr/bin/awk '{ print $NF }')
  echo "processing drive ${i}: drive=${drive} temp=${temp}"
  # POST one line-protocol point; curl output is discarded
  /usr/bin/curl -o /dev/null --silent -XPOST "${influxdb_write}" --data-binary "hddtemp,dev=${drive} value=1,temp=${temp}"
done

How complex the stored data gets is up to you; it doesn’t need to be complex at all. This should give you a rough idea of how to send data to InfluxDB. There may even be Python InfluxDB libraries, but I’d wager it’s lighter with plain HTTP calls similar to curl. In my example above, in Grafana I can select each individual HDD and link its temperature value to a dial-gauge that shows me the value in Celsius, very similar to this:

[image: Grafana dial-gauge panel showing a drive temperature in Celsius]

Since you already have the data, I’d assume it’s trivial to ingest it into InfluxDB as-is; just represent it the way InfluxDB expects.

edit Found a quick read on the line protocol used; it’s pretty straightforward: https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_tutorial/
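For reference, a single line-protocol point is just the measurement name, comma-separated tags, a space, the fields, and an optional timestamp (InfluxDB adds one itself if omitted; the nanosecond value below is only an illustrative example). The point the shell script above sends looks like:

```
hddtemp,dev=sda value=1,temp=32 1622505600000000000
```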


Forgot about the certificate part. Nothing fancy, just something that shows whether a certificate is installed and which one it is (self-signed or not).


Unfortunately, my journey has come to an end. I had run a balance against my data pool (5TB), and then a scheduled scrub ran. When I woke up the next morning, I discovered all my Docker containers were non-functional, complaining about missing btrfs subvolumes. Nothing I could do (or Google) would get it to a workable state. The data was all still there, but it simply kept complaining about missing btrfs subvolumes (the same message as below but with a different path; I didn’t copy down the message explicitly, but there are plenty of hits on Google around this):

stat /var/lib/docker/btrfs/subvolumes/ffd9d6af83bc8c771c4eeb63fa57218386a5d8d0581c08a4f734baf0a8466f41: no such file or directory

I even blew away all the containers and tried to rebuild them, only to end up with Docker complaining about missing btrfs subvolumes. Nothing I tried worked.

I was able to back up all my data without issue, but unfortunately this is not stable enough (it ran about a week or two without issues) for me to run daily. Coupled with other issues I’ve come across (and posted about), I think Rockstor is an excellent project, but there are bumps in the rug I can’t get past.

I’m not looking for a refund of anything (stable subscription), as I still feel it is money well spent to help the project. I hope that in the future these problems can be more easily resolved.


@flatine69 Thanks for the feedback, much appreciated.
Re:

One possible explanation for this is an inadvertent ‘clean-up’ in which rock-ons-root subvol snapshots are deleted by the user without appreciating that these are actually the rock-ons themselves. We have definitely seen this numerous times and have the following issue open to address it in time:

You make no mention of doing any tidy-up/deletion of snapshots, but doing so would definitely kill all rock-ons, as it removes their docker images. So this may not be the cause here.

Anyway, thanks again for your feedback and do keep us in mind as we move ever forward towards our Rockstor 4 stable release. And if you do end up circling back around then PM me re your Stable subscription and I’ll be happy to sort you out.

Recommended Linux-based DIY NAS projects: Unraid and openmediavault, the latter of which you have already tried, per your original message.

Hope that helps and good luck in your NAS adventures.
