Dev log for 3.9.2

No reactions?
Great :unamused:

I'm not one of the maintainers, so take whatever I say with a healthy dose of salt. But as far as I know there's a shift underway in the update model, and it has been in progress for a while. I agree that a bit more communication about it would be wonderful, if only to prevent people from considering the project stale, which it very much is not. But the active devs only have so much time for development and answering questions, of course.

As for updating the software using the source from GitHub: personally I'd advise against it unless you're comfortable with building from source and know how to run these kinds of upgrades. If not, the chance of something going wrong beyond your capacity to fix it is quite real. :wink:

Let's have a tad more patience and hope @suman has some happy news for us soon!


@suman is there any word on how the testing channel users will get to upgrade?

One of the big changes I would like to take advantage of is the disabling of quotas, which was added in 3.9.2-18 by @phillxnet. I've recently been experiencing some odd behavior with the btrfs-cleaner process using a lot of IO. From what I have read in a couple of places (here, for example), disabling quotas is the recommended fix.

I was considering just running a btrfs quota disable, but am concerned about what issues I might have on my 3.9.1-16 install, as the Rockstor code hasn't yet been updated to handle that state.
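For context, the manual workaround being weighed here is a one-liner against the mounted pool. A sketch, with the pool path as a placeholder (Rockstor mounts pools under /mnt2/&lt;pool-name&gt;, so substitute your own pool's mount point; run at your own risk, as the post notes the web UI was not yet built to expect this state):

```shell
# Show current qgroup accounting; this errors out if quotas
# are already disabled on the pool.
btrfs qgroup show /mnt2/mypool

# Disable quota accounting on the pool. This also stops the
# qgroup bookkeeping that can keep btrfs-cleaner busy with IO
# after snapshot deletions.
btrfs quota disable /mnt2/mypool

# To revert later (note: this triggers a full quota rescan):
# btrfs quota enable /mnt2/mypool
```

The quota state is stored in the filesystem itself, so it persists across reboots until explicitly re-enabled.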

Could anybody be so kind as to explain how to update the system?
I'm still running Rockstor 3.9.1-16 as a non-production system.

It would be good to have some form of communication; it has been a long time to be stuck with no updates. If the answer is that we need to pay, then so be it; at least we can make an informed decision.

@jfearon A very belated welcome to the Rockstor community.

I'll try and chip in again on this one, communication-wise. But this is my personal view as a major contributor.

The story so far on the testing channel is unchanged, ie there have been no more rpm releases since 3.9.1-16. That leaves the convenience of rpm updates as a 'value add' for stable channel subscribers only, and has for quite some time. But also note that this pertains to the Rockstor code only:

Not entirely what we had planned (by now), but it currently fits with the available resources. More and more open source projects are having to consider how to sustain themselves, and this is an element of our approach, currently at least. I personally see us moving towards a model such as that of Bareos, where they occasionally release a free-as-in-beer rpm but serve many more frequent rpm updates to their subscribers, while remaining free as in libre software, ie their code is also on GitHub. This is essentially what we are in practice currently (doc updates re the channel definitions are needed). With patience, everyone gets Rockstor code updates in time, as the 'free beer' releases do come eventually; but there is a definite 'value add' for subscribing, ie more frequent and more convenient updates.

But all our code and development is in the open, so building from the source code, and updating your kernel, as indicated in a prior post of mine here:

and confirmed by @kupan787 there after:

is available.

And the build process is, and has been for a few years, well documented. It was one of the main elements that attracted me as a developer / contributor. This is not always the case with open source projects. But of course, like in all other areas, there are always improvements to be made.

Essentially Rockstor and its sustainability is a work in progress, and we have to use our time and effort to best effect. I have been putting my time into trying to make disk / pool / share management more robust and capable, and have also dabbled in the docs, and the forum of course. Given there is already documentation sufficient to enable anyone to build and run the very latest version that day (accepting a wipe of their database), along with whatever elrepo kernel-ml they choose, I think it's best if I at least continue to focus on those issues that I indicate on GitHub as my current focus, often along with justification for the chosen order. I am also in the process (as evident in my more recent PRs) of improving portability so that we might 'stand on the shoulders of a different giant', so to speak, which in turn frees us from, for example, kernel and btrfs-progs details. But all this takes time, and if I instead spend that time on what was the prior testing channel we may well end up, as we seemed to be heading towards before, with an imbalance of support and contribution.

Please see these statements in the context of our previous failure to live up to our hope of working both channels successfully, where our divided efforts led to stable channel subscribers' discontent with the update frequency. Out of necessity (and the sparsity of 'value add' options in a fully open source project) we have now reversed that situation. This has in effect been a win-win overall (Rockstor still exists), but it does not and has never precluded those who are willing to go through a little inconvenience (git build, db loss, etc.) from availing themselves of what the latest code has to offer. Also note that the build process is a minimum capability for contributing code-wise, and so serves as an empowerment to those who wish to be able to control / observe their own data management systems in that way. Those same individuals can in turn contribute on that level. Regular users can in turn help to sustain the project via the 'value add' of convenience offered by the stable channel (rpm) updates. Without the stable channel subscriptions Rockstor may well already have disappeared or simply become unmaintained. That is by definition a lose-lose, as all those who work on Rockstor, as far as I'm aware, currently at least, enjoy it. And the stable channel subscribers are now more frequently served their 'value add'.

It is as well to remember that most open source projects are sustained on passion, but most also require funding to achieve a satisfactory level of polish and/or independence. See gnome / kde / kernel etc. I for one would like to see Rockstor prosper, and simply having more users is not the answer. We need to continue to build a contributing community, which is why I am taking the time to express my view on the matter here, and why I accepted the forum admin role as well. The world is now dominated by free-as-in-beer software on the server side, but there is almost always a backer or a 'value add' that helps that situation be sustainable.

All those frustrated with the lack of 'free beer' testing channel updates, please consider attempting a build; it's really not that difficult. Just remember that your db will be wiped and that rock-ons, as with config backup and restore, are problematic, so it's best to uninstall all rock-ons (and wipe the rock-ons root) and start afresh with the new version. Also remember to uninstall your rpm rockstor package instance. If this is not to your liking then please consider alternatives such as FreeNAS (open core, given TrueNAS), Lime Technology's unRAID (30-day free trial, with a drive count limitation per pay level thereafter; also only open core), or openmediavault (same originator as FreeNAS, I think; open source, over-the-wall type, with CLA). Or start a new thread on how you think we could best serve the casual user who does not intend to contribute, on a code / docs / forum level, in a way that will require no additional support. Or consider subscribing to the stable channel, and encouraging others to do likewise if that fits their use case too. All of the indicated alternative projects are, by many accounts, doing good work, so please consider supporting them if they serve your requirements better. My personal choice is to try and support Rockstor as I like its simple yet capable mix, which is in large part down to the focus on using btrfs: which I consider to be the future of file systems in this area.

Also note that pull requests are welcome, especially if they scratch an itch, as others are likely to have that same itch. And if the current 'nuke and pave' build process leads to fresh itches, then ultimately everybody wins with that particular itch satiated. We are, after all, not all that dissimilar. Yet with all the combined efforts to date we have a surprising lack of options on the fully open source NAS front. But note that no one is 'stuck' with Rockstor: it is a conscious choice, and the part one plays in this choice is very flexible.

Hope that helps and doesn't come across as too preachy. I will continue, for the time being, to do what I can to assist all those on the forum with their issues as my time permits. As always, we are all after a win-win situation here, so please keep in mind that a mutually beneficial outcome is best. This is a team effort that involves the users as an active component (mostly via this forum).

I'll end with a graphical "Rockstor Git Gource visualisation" up to around 9 months ago:

A graphic I created at the time to help visualise how much of a team effort the Rockstor system, rock-ons and docs included, is. You may also be interested in how this graphic was created; for that I wrote a blog post entitled "Rockstor Git Gource – project visualization", subtitled "Gource is Git shiny."

http://rockstor.com/blog/btrfs-nas/rockstor-git-gource-visualization/

I'm afraid I haven't really followed this up properly, but my system is now magically working. I'm just not sure exactly why.

I removed a WD Red 4 TB drive that was paired with a 10 TB drive to add capacity. This 4 TB drive seemed to slow down the pool quite a bit, and I am not sure why. After pulling that drive out, though, the system is now booting fine by itself: magically fixed.

It seems that either a fix came through (I don't think I applied any updates, but it's possible one happened, as the system was running for quite a while before this reboot), or the slowness of this drive was causing something to time out.

This thread kicked off a big discussion, but the original issue I thought it might have been looks like it has been addressed (per the thread), so I'm unsure if it's something there. My feeling is that the drive was somehow causing issues: previously, when I would do the manual little service restart process, each step would take much longer (minutes), whereas now it seems much, much quicker.

In any case my original problem seems to be fixed, apologies for not debugging it properly to confirm the root cause.

Thanks!

@Ivan Thanks for the update, and glad you're now up and running.

Always a little frustrating when the exact cause hasn't been identified, but at least you're sorted for the time being. If you run into any other issues then do consider starting a fresh thread for each, as it's then easier to keep things focused and in turn easier for forum members to chip in with help; ie this thread was originally for reporting on our development of 3.9.2, which has now been out for ages (3.9.1-16 testing = 3.9.2-0 stable).

I've also had systems grind to a near halt due to drives exhibiting errors, by the way. Most recently it was a SCSI command timeout on a WRITE DMA. Always worth checking the logs for such things.

No worries there, and thanks yourself; maybe we can 'catch the next one' :slight_smile:.