Thanks for the feedback, much appreciated.
That’s a tricky one. I’d first like to establish an opt-in utility for Let’s Encrypt because, as always, there are ‘details’. Take a look at the following issue we have open for this
and that’s just the Let’s Encrypt part:
And of course not all Let’s Encrypt mechanisms are available to all folks, hence my gravitating towards a dedicated port 80 solution in that thread.
Pull requests are always welcome if there are any takers on this. It would of course be great to have, but for many folks it’s just not going to work unless they also have port forwarding enabled (if on a private network), or they can only use DNS auth, or whatever. So maybe this is something folks could discuss in a dedicated thread, using that issue as a base for where we are up to. And yes, once we have an established, robust system we can maybe integrate it properly, but I don’t quite see it being that easy just yet: too many variables and the like, and folks need to be able to get into their fresh install as soon and as easily as possible. If you fancy starting a thread with some ideas and a link to that GitHub issue, we can see if there is interest/takers on getting this sorted.
In parts very much so, in others very much not so. And remember you are also using the youngest variant via the parity raids. It’s actually quite a mix, from what little I’ve heard of the architecture at this level. Again, these architectural questions are better asked on the linux-btrfs mailing list, or on other btrfs-specialised mailing lists/forums. The Rockstor dev team is very much a user of this software; we don’t as yet have the capability to contribute back to it. Although I’ve made a trivial doc contribution (not yet merged), I believe a forum member has had a successful doc PR merged, and Suman made some contributions to the wiki. But nothing on the programming front that I know of. It’s highly specialised, and usually entirely non-trivial.
Out of curiosity, why are you doing a balance prior to returning the pool to a healthy state, i.e. a non-degraded mount? Or is the ‘balance’ you are referencing here an ‘internal’ one initiated by a missing-device delete command? When the pool is mounted degraded it is not representative of its usual state: that of not being mounted degraded. So best to get it mounting regularly before you do anything else. How many missing devices does this pool now have?
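For reference, the usual sequence for getting back to a regular mount looks roughly like the following. The pool path and device name here are placeholders for illustration, not taken from your setup, and these commands need root on a machine with the actual pool attached:

```shell
# Check how many devices the pool thinks it has, and which are missing:
btrfs filesystem show /mnt2/pool-name

# Mount degraded (read-write) only for the repair itself, e.g.:
mount -o degraded /dev/sda /mnt2/pool-name

# Remove the missing device's record so the pool can again mount normally;
# this kicks off the 'internal' balance/relocation I mentioned above:
btrfs device delete missing /mnt2/pool-name
```

Note that `btrfs device delete missing` will refuse to run if removing the device would drop the pool below the minimum device count for its raid level, in which case a replacement device needs adding first.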
Shame about all those unused CPU cores; alas, the future is as yet not here, apparently. Do keep us posted as this is turning into quite a journey. And does your use case, post restoring this pool, allow for, say, a raid10 migration? That, along with some of the usual tweaks (i.e. quotas off, noatime, & space_cache=v2), looks to be the best all-round performer, but unfortunately it only has single-drive redundancy. Otherwise there’s something like the emerging mix of parity raid for data and c2/c3 for metadata; not within Rockstor’s capability yet, and still too young for my liking, but still. Another note on the performance front is some more recent tweaks where you can have metadata favour non-rotational drives. But again, not yet released, though looking promising. Can’t find the linux-btrfs mailing list entry for it currently. However, as we are now moving to an upstream supported kernel, these goodies should trickle down to us as and when.
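To make the tweaks above concrete, here’s a rough sketch of what they look like at the command line. The pool path and UUID are placeholders, and note Rockstor normally manages pool mounts itself, so the fstab line is purely illustrative:

```shell
# Quotas off (on the mounted pool):
btrfs quota disable /mnt2/pool-name

# raid10 migration: convert data and metadata via an online balance.
# This can take a long time on a large pool:
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt2/pool-name

# The mount-option tweaks, shown as an example /etc/fstab entry:
# UUID=<pool-uuid>  /mnt2/pool-name  btrfs  noatime,space_cache=v2  0 0
```

The space_cache=v2 option only needs specifying once; the free-space tree persists thereafter.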
Also, have you taken a look at the default-settings performance comparison of the btrfs raid levels done earlier this year as part of an article by Phoronix: https://www.phoronix.com/scan.php?page=article&item=linux55-ssd-raid&num=1 Might be interesting given your trials. It’s apples and oranges with the other file systems listed, as they are using mdraid, but interesting to see the difference across the btrfs raid levels for each type of load. Plus they are all SSD drives, so not representative of spinning rust. So there’s that.
“Btrfs was tested with its native RAID capabilities while the other file-systems were using MD RAID. Each file-system was run with its default mount options.”
Popping in here given this thread now has quite a few performance-related elements.
Hope that helps.