@suman do you know if the testing plan is ready to roll out yet?
I like to keep my systems up to date (bug fixes, security, etc), so if the testing channel is being killed in favor of a paid only subscription, please let us know. That way those of us that are in limbo waiting for updates can make a decision on how to move forward.
All updates bar those of the rockstor package, the kernel, and btrfs-progs can be installed on either stable or testing channel at any time via the flashing icon to the left of the kernel version number (top right):
And given our kernels are unmodified elrepo-ml, you have the option of adding this repo (via their instructions) to gain their updates as they come out or, as the howto @jim.allum recently posted indicates:
to pin to a specific version until our next matching release of kernel and btrfs-progs is released. This is the safer option as you are a little less ‘out on your own’.
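For reference, the elrepo route can be sketched roughly as below. The release rpm URL and the versionlock pattern are illustrative assumptions, so defer to elrepo’s own published instructions for the authoritative commands:

```shell
# Rough sketch only: adding the elrepo repository on a CentOS 7 based install
# and either tracking or pinning kernel-ml. URLs and the lock pattern are
# illustrative assumptions; follow elrepo's published instructions.
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# Track the latest mainline kernel as elrepo releases it:
yum --enablerepo=elrepo-kernel install kernel-ml

# ...or pin to the currently installed version until the next matching
# Rockstor kernel/btrfs-progs release:
yum install yum-plugin-versionlock
yum versionlock add 'kernel-ml*'
```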
Also all code is open-source and can be built from the GitHub repo via the instructions within the Developers section. Although this is non-trivial and there exists a caveat via an outstanding @Flyer pr:
which is trivial to apply. But note you may lose all settings / db entries, so be careful on that front.
The plan is to transition the ‘testing channel’ away from an rpm based update to one where you can use the code published on GitHub the minute it is available. I.e. as it’s merged it will be possible to initiate a rebuild directly from that code. This will serve development better as there will be less delay to each testing release and less overhead in managing two separate rpm update trains. So hopefully a win-win: earlier and quicker updates, as soon as is possible, for those who are happy with this more edgy approach, and more developer time spent on developing rather than rpm release management.
But unfortunately this is all taking longer than anticipated and is frustrating all round so please be patient as these things are always a little more involved than at first they appear. But the intention is to maintain a “value add” (probably stability/convenience based) to those on the paid subscription as that is an important element of Rockstor’s sustainability, along with Incident-Based support of course. Without these elements we could end up being another OpenFiler.
Note also that all updates to the Rockstor package in the stable channel subscription since latest testing release have been convenience and bug fix related. No security related fixes have been committed.
Hope that helps, and let us know how you get on with that update everything else button provided by @Flyer; it is, in keeping with all his other contributions, quite fancy: see also the Pincard password recovery system.
I have been running the testing channel, and after an update a little while ago to the last testing release I am having the problem where shares are not loaded on power up, which is obviously an issue for a NAS.
From what I read this is fixed, I think, in later stable builds, but is not fixed in the testing build (though I’m not 100% sure it is fixed in the stable build). At the moment I’m working around it by executing a sequence of commands after a power cycle (enable quotas, restart the various Rockstor bits).
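The sequence I run is roughly along these lines; the pool mount point is a placeholder (yours will differ), and the service names assume a stock install:

```shell
# Rough sketch of the post-boot recovery sequence; /mnt2/my_pool is a
# placeholder for the real pool mount point, and the service names are
# assumed from a standard Rockstor install.
btrfs quota enable /mnt2/my_pool                            # re-enable quotas on the pool
systemctl restart rockstor-pre rockstor rockstor-bootstrap  # restart the Rockstor services
```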
At this stage I’m not sure if I need to sign up for stable builds just to fix my system (if it will even), or if there is another way to solve this issue? I can’t seem to see an actual ‘solved’ in the forums for this issue.
Not sure on this one, as the main recent ‘not mounting’ issue was kicked off by a move, in the stable updates channel only, to docker-ce, which inadvertently disabled quotas; this in turn caused us to have to deal better with no/disabled quotas, which stable now does via a series of ‘hot fixes’. But given you’re on testing, the docker-ce move shouldn’t have happened yet. You have, however, apparently identified quotas as an element! Could you please open a fresh forum thread with your findings and any log entries (System - Logs Manager, Rockstor Logs and Dmesg) that look suspect during a boot, as it should then be possible to track down what’s going wrong on your install: make sure to include your Rockstor version. I’m not myself aware of any issues in testing that should cause this that are not quotas-disabled related, and unless they are turned off manually all pools should have quotas enabled, and this persists over a reboot (unless docker-ce is there to disable it for us; nice). But there have been quite a few significant improvements re pool, share, and snapshot management/import/refresh, so let’s see if the logs can tell us anything in a focused thread.
Hopefully testing updates (in their new guise) will soon return and all stable ‘hot fixes’/improvements will be available via the testing update method (when it arrives) which is likely to pull straight from GitHub where all code is always available (tagged by stable release version) as soon as it is committed. The idea going forward is to cherry pick the best from development and release it ‘easy style’ to the stable channel.
We are now at 3.9.2-17 but the changelog stops at -15. I would appreciate being able to follow what is new or has been improved, and also to test and give feedback. Can you look into this and update the release page on GitHub?
3.9.2-18 is now available. This stable release enhancement adds the much anticipated “Disable/Enable Quotas” capability. Thanks to @maxhq and @Dragon2611 for helping to inform this feature and apologies to all who have been waiting patiently. Quotas Enabled is still the default and recommended setting; but all functionality, bar share usage reporting (0 bytes), is expected to work with Quotas Disabled.
Please note that, for the time being, disabled quotas are still an Error state within our logs; but log spamming re “quotas not enabled” should only be expected during Web-UI activity. We can revisit these behaviours going forward.
A quota rescan is automatically initiated for a given pool whenever its quotas are re-enabled. Please expect around 1 minute / TB of data for the share usage figures to return to normal. As always, there are improvements to be had. Feel free to start a new forum thread with the details of any issue you experience.
Quick Howto prior to docs update:
Click - Select (from dropdown) - Tick to confirm.
Page refresh for current setting. See ‘mouse over’ tooltip for use context. Setting will persist over a reboot / power cycle.
This inline edit widget is available on both the Pool overview table (as indicated) and on each Pool’s details page.
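For those curious, the same toggle can be sketched at the command line; the mount point below is a placeholder, and the Web-UI widget remains the supported route:

```shell
# Hedged CLI equivalent of the Web-UI quota toggle; /mnt2/my_pool is a
# placeholder for the real pool mount point.
btrfs quota disable /mnt2/my_pool

# Re-enabling initiates the automatic rescan (expect roughly 1 minute / TB):
btrfs quota enable /mnt2/my_pool
btrfs quota rescan -s /mnt2/my_pool   # -s reports rescan status/progress
```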
Just to give it a go, I spun up a VM installed from the latest ISO (which is a few versions old btw). I then let that update to the latest testing channel (3.9.1-16). I then gave it a go doing an update from a git clone of the master branch.
I noticed that once I ran through everything, my install was no longer running out of /opt/rockstor but was running out of my build directory (in my case /src/rockstor-core/src/rockstor). This did indeed cause me to lose all settings.
The good news is that my disks were detected, so I reimported those (which brought back all my pools, shares, etc). I then restored a config backup I had taken. Everything seemingly came back up, except for my Rock-Ons… I need to spend a bit more time to find out why that service won’t start.
The bottom line is it seems to work just fine. The only thing I noticed is that the version info at the top of the page still shows as 3.9.1-16 instead of the latest 3.9.2-18. However, I do see the latest changes (like the Quota changes just made). So it is probably just a setting file some place that the version string gets read from?
I’ve yet to try this on a live system. Need to work up the courage for that
@kupan787 Well done on the successful build; so are we to expect imminent code contributions?
Yes, that works purely on rpm, so it would seem you still have an installed rpm version of Rockstor. We used to remove the rpm via the build process but these days it’s left. Probably need to add a warning to the dev docs on this one (do you fancy opening a rockstor-doc issue on this one?). I usually uninstall that rpm prior to setting up a fresh dev system. Anyway, as such there is no version information for the source build; it just errors out in the logs as an unknown version and nothing is displayed in the top right of the Web-UI. Might be nice if it could fail over to looking for a git tag or something and indicating the tag reference and that no rpm was found.
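To check for exactly that situation, something along these lines should do (a hedged sketch; the package name is as per our rpm):

```shell
# Hedged sketch: detect a lingering packaged Rockstor alongside a source build.
rpm -q rockstor     # reports the installed rpm version, if any

# If one is present, remove it before relying on the source build:
yum remove rockstor
```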
I’d re-do on your test setup first re the rpm install as you effectively now have 2 versions of Rockstor installed which is sub optimal and prone to confusion.
Keep us posted and thanks for further verifying the master branch build process.
I’m a developer by trade (specifically around SQL, BI, and data warehousing), but have limited exposure to web dev (just a bit in C# and JS). Now that I got a dev VM up and running, maybe I’ll take a look at the open issues and see if there is any low hanging fruit I can help out on.
Hey everyone,
I’m no coding / linux professional so I do not understand many thinks you’re doing…
Unfortuntely my machine is still running version 3.9.1-16 which is about six (!) month old. The only postings in the forum I can find is that you’re trying to change the way the updates were rolled out/packed. So far so good…
How is the process running? There is no current documentation. Rockstor-Docs were mostly two years old.
Am I misunderstanding something or do you try to force all testing users to pay for it? Which I would truly understand…
I already saw that I can download the updated code from github which is like 3MB. Okay, but how can I update my system with this?
If you want a more commercial product that would be fine for me, provided the prices do not rise much more. But I think you should make it clearly known / understandable.
Thanks in advance & happy easter
Donald
I’m not one of the maintainers, so take whatever I say with a healthy dose of salt. But as far as I know there’s some switch going on regarding the update model, it has been going on for a while. I agree that a bit more communication about it would be wonderful, if only to prevent people from considering the project stale, which it very much is not. But the active devs only have so much time to do things and answer questions in, of course.
As for updating the software using the source from GitHub: personally I’d advise against that unless you’re comfortable with the process and know how to run these kinds of upgrades. If not, the chance of something going wrong beyond your capacity to fix it is quite real.
Let’s have a tad more patience and hope @suman has some happy news for us soon!
@suman is there any word on how the testing channel users will get to upgrade?
One of the big changes I would like to take advantage of is the disabling of quotas. This was added in 3.9.2-18 by @phillxnet. I’ve recently been experiencing some odd behavior with the btrfs-cleaner process using a lot of IO. From what I have read in a couple of places (here for example), disabling quotas is the recommended fix.
I was considering just running a btrfs quota disable but am concerned about what issues I might have on my 3.9.1-16 install, as the Rockstor code hasn’t been configured to handle that state yet.
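Before changing anything, it might be worth a read-only check of the current quota state; a hedged sketch (the mount point is a placeholder):

```shell
# Hedged sketch: a read-only check of the quota state before changing anything.
# Lists qgroups if quotas are enabled; errors with "quotas not enabled" if not.
# /mnt2/my_pool is a placeholder for the real pool mount point.
btrfs qgroup show /mnt2/my_pool
```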
Would be good to have some form of communication; it has been a long time to be stuck with no updates. If the answer is that we need to pay then so be it; at least we can make an informed decision.
@jfearon A very belated welcome to the Rockstor community.
I’ll try and chip in again on this one, communication wise. But this is my personal view as a major contributor.
The story so far on the testing channel is unchanged, i.e. there have been no more rpm releases since 3.9.1-16, leaving the convenience of rpm updates as a ‘value add’ to stable channel subscribers only for quite some time. But also note that this pertains to the rockstor code only:
Not entirely what we had planned (by now), but it currently fits with the available resources. More and more open source projects are having to consider their sustainable stance and this is an element of ours; currently at least. I personally see us moving more towards a model such as is instantiated by Bareos where they occasionally release a free as in beer rpm but serve many more frequent rpm updates to their subscribers: but are still free as in libre software, ie they also have their code on GitHub. This is essentially what we are in practice currently (doc updates re channels definition needed). With patience all get rockstor code updates in time as the ‘free beer’ releases do come in time, but there is a definite ‘value add’ for subscribing: ie more frequent and more convenient updates.
But all our code and development is in the open and so building from the source code, and updating your kernel, as indicated in a prior post of mine here:
And the build process is, and has been for a few years, well documented. It was one of the main elements that attracted me as a developer / contributor. This is not always the case with open source projects. But of course, like in all other areas, there are always improvements to be made.
Essentially Rockstor and its sustainability is a work in progress and we have to use our time and effort to best effect. I have been putting my time into trying to make disk / pool / share management more robust / capable, and have also dabbled in the docs, and the forum of course. And given there is already documentation sufficient to enable anyone to build and run the very latest version that day (accepting a wipe of their database), along with whatever elrepo kernel-ml they choose, I think it’s best if I at least continue to focus on those issues that I indicate on GitHub as my current focus, often along with justification for the chosen order. I am also in the process (as evident in my more recent PRs) of improving portability so that we might ‘stand on the shoulders of a different giant’, so to speak: which in turn frees us from, for example, kernel and btrfs-progs details. But all this takes time, and if I, for example, spend that time instead on what was the prior testing channel we may well end up, as we seemed to be heading towards before, with an imbalance of support and contribution.
Please see these statements in the context of our previous failure to live up to our hope of working both channels successfully, where our divided efforts led to stable channel subscribers’ discontent with update frequency. Out of necessity (and a sparsity of ‘value add’ options in a fully open source project) we have now reversed that situation. This has in effect been a win-win overall (Rockstor still exists) but does not, and has never, precluded those who are willing to go through a little inconvenience (git build, db loss, etc.) in order to avail themselves of what the latest code has to offer. Also note that the build process is a minimum capability to contribute code-wise, and so serves as an empowerment to those who wish to be able to control / observe their own data management systems in that way. And those same individuals can in turn then contribute on that level. Regular users can in turn help to sustain the project via the ‘value add’ of convenience offered by the stable channel (rpm) updates. Without the stable channel subscriptions Rockstor may well already have disappeared or simply become unmaintained. That is by definition a lose-lose, as all those that work on Rockstor, as far as I’m aware, currently at least, enjoy it. And the stable channel subscribers are now more frequently served their ‘value add’.
It is as well to remember that most open source projects are sustained on passion, but most also require funding to achieve a satisfactory level of polish and/or independence. See gnome / kde / kernel etc. I for one would like to see Rockstor prosper, and simply having more users is not the answer. We need to continue to build a contributing community, which is why I am taking the time to express my view on the matter here, and why I accepted the forum admin role as well. The world is now dominated by free as in beer software, on the server side, but there is almost always a backer or a ‘value add’ that helps that situation be sustainable.
All those frustrated with the lack of ‘free beer’ testing channel updates, please consider attempting a build; it’s really not that difficult. Just remember that your db will be wiped, and rock-ons, as per config backup and restore, are problematic, so best uninstall all rock-ons (and wipe the rock-ons root) and start afresh with the new version. Also remember to uninstall your rpm rockstor package instance. If this is not to your liking then please consider alternatives such as FreeNAS (open core - given TrueNAS), Lime Technology’s unRAID (30 day free trial, with drive count limitations per pay level thereafter - also only open core), or openmediavault (same originator as FreeNAS I think - open source ‘over the wall’ type with CLA). Or start a new thread on how you think we could best serve the casual user who does not intend to contribute, on a code / docs / forum level, in a way that will require no additional support. Or consider subscribing to the stable channel and encouraging others similarly if that fits their use case also. All of the indicated alternative projects are, by many accounts, doing good work, so please consider supporting them if they serve your requirements better. My personal choice is to try and support Rockstor as I like its simple yet capable mix, which is in large part down to the focus on btrfs: which I consider to be the future of file systems in this area.
Also note that pull requests are welcome, especially if they scratch an itch, as others are likely to have that same itch. And if the current ‘nuke and pave’ build process leads to fresh itches then ultimately everybody will win with that particular itch satiated. We are after all not all that dissimilar. But with all the combined efforts to date we have a surprising lack of options on the fully open source NAS front. But note that no one is ‘stuck’ with Rockstor: it is a conscious choice. The part one plays in this choice is very flexible.
Hope that helps and doesn’t come across as too preachy. I will continue, for the time being, to do what I can to assist all those on the forum with their issues as my time permits. As always we are all after a win-win situation here, so please keep in mind that a mutually beneficial outcome is best. This is a team effort that involves the users as an active component (mostly via this forum).
I’ll end with a graphical “Rockstor Git Gource visualisation” up to around 9 month ago:
A graphic that I created at the time to help visualise how much of a team effort the rockstor system, rockons and docs included, is. You may also be interested in how this graphic was created, and for that I created a blog post entitled: “Rockstor Git Gource – project visualization”, subtitled “Gource is Git shiny.”
I’m afraid I haven’t really followed this up properly, but my system is now magically working. I’m just not sure exactly why.
I have removed a WD Red 4 TB drive that was paired with a 10 TB drive to add capacity. This 4 TB drive seemed to slow down the pool quite a bit, and I am not sure why. After pulling that drive out though the system is now booting fine by itself - magically fixed.
It seems that either a fix came through (I don’t think I applied any updates, but it’s possible it happened, and it was running for quite a time before this reboot), or the speed of this drive caused something to time out.
This thread kicked off a big discussion, but the original issue I thought it may have been looks like it has been addressed (per the thread), so I’m unsure if it’s something there. My feeling is the drive was causing issues somehow: previously, when I did the manual little service-restart process, the steps would take much longer (minutes), whereas now it seems much, much quicker.
In any case my original problem seems to be fixed, apologies for not debugging it properly to confirm the root cause.
@Ivan Thanks for the update and glad you’re now up and running.
Always a little frustrating when the exact cause hasn’t been identified, but at least you’re sorted for the time being. If you run into any other issues then do consider starting a fresh thread for each, as it’s then easier to keep focused and in turn easier for forum members to chip in with help; i.e. this thread was originally for reporting on our development of 3.9.2, which has now been out for ages (3.9.1-16 testing = 3.9.2-0 stable).
I’ve also had systems grind to a near halt by drives that exhibit errors by the way. Most recently it was a scsi command time out WRITE DMA thing. Always worth checking the logs for such things.
No worries there and thanks yourself, maybe we can ‘catch the next one’ .