What is the alternative to raid56? Will ZFS be supported in the future?

I am a new user, thank you very much for your effort, it seems to be a very clever system.

I have over 30 TB of data shared through Windows Server; as you might know, data at that scale is not easy or fast to back up, let alone restore in a short time.
I was using DFS to sync the data to another site, but it eats a lot of disk performance.
I am looking for a NAS system that has the shadow copy / replication features that TrueNAS has, but TrueNAS is a little overwhelming…

After two days of trying out Rockstor with raid56, I was a little frustrated to find that it is no longer supported.
So I want to ask: what is the alternative? raid56 was a relatively good balance between performance, capacity, and safety.
Will Rockstor move to ZFS in the future? Maybe that would suit me better; caching is something I also care about.

Thank you all!

I have tested Synology, but when I saw the inside of the box, with tape used to hold the memory module in place, I wanted to abandon it.

I would prefer to buy a professional server like DELL and use a stable, maintainable system.

These are my thoughts; corrections are welcome.

@dzmaster welcome to the Rockstor community.

RAID 5/6 is still supported, just not recommended, and now you have to jump through a couple of hoops to enable it.

For reference, you can read through this thread, where @philip outlines the options a bit more:

btrfs itself recommends RAID1c3 as a possible replacement for RAID6 with the space caveat:

https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html#profiles

Note 4: Since kernel 5.5 it’s possible to use RAID1C3 as replacement for RAID6, higher space cost but reliable.
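
If you wanted to try that route, the command-line equivalent would look roughly like the sketch below. This is only an illustration: the device names and the /mnt2/mypool path are placeholders, and on Rockstor you would normally create or convert the pool through the Web-UI rather than by hand.

```bash
# Create a new pool with raid1c3 for both data and metadata (needs kernel 5.5+
# and a matching btrfs-progs); device names are placeholders.
mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdX /dev/sdY /dev/sdZ

# Or convert an existing, mounted pool away from raid6:
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt2/mypool
```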

The entire concept of the Rockstor appliance is built on btrfs, rather than being a NAS appliance under which you’d manage “any type” of filesystem/RAID approach. So, the likelihood of supporting ZFS as the NAS filesystem is probably nil.

There continues to be some movement in the RAID5/6 space in development, but since that has been going on for a number of years, it might take another couple of years before it is deemed stable/OK.

Of course, as seen on some other discussions on this forum, the risk of RAID5/6 diminishes both based on use case (e.g. mostly read operations like on a media server with more sparse write activities) and setup (e.g. having a UPS to ensure clean shutdown). And, as always, the NAS itself will not be a replacement for good backups, but that is now more of a philosophical debate than a technical one.


I think we are close to a fix for the raid 5/6 implementation in Btrfs: the raid-stripe-tree patches were merged in the 6.7 kernel, and the next update should add raid 5/6 support, which (hopefully) fixes the write-hole issue.

raid-stripe-tree
New tree for logical file extent mapping where the physical mapping
may not match on multiple devices. This is now used in zoned mode to
implement RAID0/RAID1* profiles, but can be used in non-zoned mode as
well. The support for RAID56 is in development and will eventually
fix the problems with the current implementation. This is a backward
incompatible feature and has to be enabled at mkfs time.
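
Since it is a mkfs-time, backward-incompatible feature, an existing pool cannot simply be converted in place; creating a pool with it would presumably look something like the sketch below. The exact feature name here is an assumption on my part, so check what your btrfs-progs build actually offers first.

```bash
# List the mkfs-time features this btrfs-progs build knows about:
mkfs.btrfs -O list-all

# Assuming the feature is exposed as "raid-stripe-tree" (6.7+ kernel and a
# matching btrfs-progs), a new pool would be created along these lines:
mkfs.btrfs -O raid-stripe-tree -d raid1 -m raid1 /dev/sdX /dev/sdY
```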


For me, anything below 50% space utilization is hard to accept: RAID1C3 gives 33%, RAID10 gives 50%.
As you said, raid56 is not recommended, and I don’t want to fight the system.
We may need to think about whether we should continue to use BTRFS; compared to ZFS, how competitive is it?
Commercial NAS systems using ZFS are already available, not only TrueNAS but also QNAP.

@dzmaster, yes, getting to a 33% utilization is certainly not ideal. Without getting into a deep philosophical debate, there continue to be benefits on btrfs.
Rockstor was built and designed with btrfs-specific logic and functionality at its core, so any change of direction would mean a complete rewrite (i.e. essentially a new “product”); over the years the developers have not “just” put a generic UI on top of a system where the underlying plumbing can be swapped out.

And then there’s the at least decade-long debate about ZFS’s licensing type that continues to carry some level of risk.

From a commercial standpoint you have Synology taking advantage of btrfs, and SUSE, which is not a small player in the enterprise server OS market, continues to support it (even in the Enterprise space).

While @emanuc is pointing out the further improvements and fixes in the RAID5/6 area, I am not yet holding my breath. Though I am keeping my fingers crossed that it will happen sooner rather than later with these more promising recent merges in that area.


Guess I’m done with Rockstor; I need a solution now that includes raid6. I’ve been hearing “a fix is coming! very soon!” for a year now, and I’ve found forum posts going back several years about the exact same issue, so I’m done waiting. btrfs is flawed and should not be used. The excitement about the raid1c3 levels has got to be faked: who can afford only 33% of their purchased storage space? ZFS raid volumes can’t be dynamically expanded. What kind of bullshit is that? I am not interested in transferring 20 TB when I want to expand my array. I am astounded there is STILL no fully functioning soft raid on Linux.

@jihiggs Hello there, just wanted to chip in with something that hasn’t been brought up yet here.

Then use raid6 for data and raid1c4 for metadata: btrfs can have one raid level for data and another for metadata. Data takes up the vast majority of actual disk use, while metadata is, relatively speaking, tiny but costly; ergo mixed raid, as per our slightly outdated doc section here:

https://rockstor.com/docs/howtos/stable_kernel_backport.html#btrfs-raid1c3-raid1c4

As from 4.5.9-1 (RC6 in last testing phase) onwards:

We do support mixed raid levels. There is a picture in the GitHub Pull Request here:

And here is a quick screenshot from the ReRaid:

As with many things btrfs, we pare down the options to those that make the most sense, and support all sane combinations of parity data with raid1c* metadata. For raid6 data one would go with raid1c4, given the only marginal extra space it costs: raid6 data handles at most a 2-drive failure, whereas raid1c4 metadata can survive a 3-drive failure, i.e. 4 copies on 4 different disks. But this does increase your minimum drive count (for raid6 data) from 3 up to 4. So if that were an issue you would go for raid6-1c3 in Rockstor notation, i.e. (data raid6)-(metadata raid1c3).
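
For anyone managing the pool outside of Rockstor, the rough command-line equivalent of such a ReRaid is a convert balance along the lines of the sketch below; the /mnt2/mypool path is a placeholder, and in Rockstor the Web-UI does this, plus a full balance, for you.

```bash
# raid6 data with raid1c4 metadata (minimum 4 drives), i.e. raid6-1c4:
btrfs balance start -dconvert=raid6 -mconvert=raid1c4 /mnt2/mypool

# Or raid6 data with raid1c3 metadata (minimum 3 drives), i.e. raid6-1c3:
btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt2/mypool
```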

I entirely share your frustration, by the way. However, btrfs’s flexibility with such things does negate some of the criticisms levelled at it, at least from the perspective of software raid options that cannot handle differing raid levels for data and metadata, and that lack the block-pointer rewrite enabling such things as what we refer to as ReRaid. The latter lets you switch in-use pools live from one raid level to another, again for either or both of data and metadata.

Hope that helps you and/or others following along with this common discussion in this area.


does raid6-1c4 fix the raid 6 scrub speed problem?

@jihiggs Hello again.
Re:

It can only help. Metadata is, as stated, a fairly costly (in CPU and IO terms) part of the filesystem overall, so moving all of it to raid1c4, or raid1c3 (likely a little quicker than raid1c4, as ‘only’ 3 copies on 3 different drives), can only help. Plus these c3 and c4 versions of btrfs-raid are fairly natural extensions of the now well-established btrfs-raid1, whereas the parity btrfs-raid levels 5 & 6 are much younger in comparison: ‘copy4’ and ‘copy3’ are just extensions of the implied ‘copy2’ of btrfs-raid1, i.e. 2 copies on 2 devices irrespective of drive count.
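
If you want to gauge how much of your particular pool is metadata versus data (and so how much the metadata profile can realistically influence a scrub), something along these lines would show it; the mount point below is a placeholder.

```bash
# Per-profile allocation of data and metadata on the pool:
btrfs filesystem usage /mnt2/mypool

# Run a scrub in the foreground, then check the rate it reports:
btrfs scrub start -B /mnt2/mypool
btrfs scrub status /mnt2/mypool
```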

Sorry to not have anything more concrete, currently pretty distracted with our own updates.

Hope that helps.


raid6-1c4 made ZERO difference in scrub speed. Dell R730, H730 raid controller in HBA mode, 4× 14 TB Exos, 128 GB RAM.

@jihiggs Hello again.
Re:

How did you establish this? It seems unlikely that there is “ZERO” difference in performance. Did you ensure that the pool had finished its balance post-ReRaid via the Web-UI? And did you try raid6-1c3, as suggested, for performance reasons? Note that if a Pool’s raid profile is changed via the command line, only new data written thereafter goes to the new raid level; the Web-UI, however, does a balance of all data/metadata post-ReRaid to re-assert the new raid profile (see the sketch after the list below for a way to confirm this). Also, the nature of the Pool’s content can play a large part here, i.e.:

  • Few very large files (data prominent not much metadata).
  • Many smaller files (metadata prominent).
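
As a quick check (the mount point is a placeholder), something like the following would confirm that no balance is still running and that only the intended profiles remain on the pool:

```bash
# Any still-running or paused balance shows up here:
btrfs balance status /mnt2/mypool

# Should list only the intended raid6 data and raid1c* metadata profiles:
btrfs filesystem df /mnt2/mypool
```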

Hope that helps.


I stand corrected: raid6-1c3 is 3 MB/s faster than raid6, 17 MB/s instead of 14, so basically the same. The pool was created yesterday afternoon, I copied a TB of data (movies and movie metadata), and started the scrub this morning. 1c4 is the same.
