SSD Read / Write Caching

I just thought I’d chime in that I’m looking at FreeNAS and Rockstor, and SSD Cache support is, I think, the sole reason I’ll probably end up on FreeNAS. It’s too important for achieving decent speeds in our environment. Otherwise, I think Rockstor looks as good or better for our needs. I guess lack of an Amazon S3 backup plugin is the other thing, but that’s easier to work around.

Not much point until NIC teaming/bonding arrives; otherwise most SSD cache users will suffer a 1 Gb bottleneck.

I’m definitely interested in SSD caching as well, possibly using bcache or dm-cache.

I do have experience with dm-cache on top of LVM/ext4, and it’s pretty amazing at lowering response times for local loads such as SQL. More interesting for Rockstor: transcoding speeds up as well!

Wow, I didn’t know Rockstor doesn’t support bonding yet. But I look at an SSD cache as a latency solution for scattered reads more than a throughput solution. For sustained reads or writes, 12 Gb/s SAS HDDs in RAID 10 should be able to saturate any network connection I’m going to come up with anyway, with or without an SSD cache.

Still, with a bunch of random reads on a single 10 Gb Ethernet port, I suspect an SSD cache should speed things up?

But thanks for your reply; Rockstor not being able to bond the two 10 Gb ports would be another point in FreeNAS’s favour.

LVM has a native caching module, which even supports mirroring of the cache device:
http://man7.org/linux/man-pages/man7/lvmcache.7.html
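For anyone wanting to experiment at the command line, the basic lvmcache flow from that man page looks roughly like this (a sketch only; the volume group, LV and device names below are placeholders):

```
# Sketch: vg0, lv_data and /dev/fast_ssd are example names; the SSD must
# already be a PV in vg0.
# Create a cache pool on the SSD within the same volume group...
lvcreate --type cache-pool -L 100G -n fastpool vg0 /dev/fast_ssd
# ...then attach it to the existing (slow) logical volume:
lvconvert --type cache --cachepool vg0/fastpool vg0/lv_data
# To detach later (flushing dirty blocks back to the origin):
lvconvert --uncache vg0/lv_data
```

The man page also covers building the cache pool’s data/metadata LVs as raid1 first, which is where the cache-device mirroring mentioned above comes in.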

I second bcache if it can be implemented easily enough. Allowing a RAID 1 mirror of the cache pool would earn extra brownie points.

Many of us have moved to 10 GbE networks and would certainly benefit from the capability. I LOVE seeing R/W cache SSD devices SOAK up I/O in front of a slow pool of spinners, effectively making 6 magnetics ‘feel’ like 60. :smiley:

1 Like

As for the technical implementation of bcache with BTRFS I can’t comment. However, I would like to say that QNAP uses bcache and has a very nice web GUI for the stats. If it can be implemented that would be very good. Those of us in the enthusiast area really need iSCSI, and when the product is more stable the SMB market will need it too.

NFS is not really suitable for VMware in my opinion, and iSCSI is far more suitable, as some of the VMware features require block storage and VAAI is already built into LIO etc.

1 Like

How about Flashcache from Facebook? https://github.com/facebookarchive/flashcache

Shouldn’t be that hard to implement.
Synology, for example, is using it with btrfs with the following options (a rough sketch of device creation follows the list below):

  • read/write cache with mirrored 2xSSDs (RAID1)
  • read cache with one SSD
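Creating a flashcache device is a one-liner with the tools from that repo. A minimal sketch, with placeholder device names (`-p back` selects write-back mode):

```
# Sketch only: device names are placeholders.
# Bind an SSD (/dev/sdc) as a write-back cache in front of a data disk
# (/dev/sdb), exposing the result as /dev/mapper/cachedev:
flashcache_create -p back cachedev /dev/sdc /dev/sdb
```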
1 Like

Teaming / Bonding works. Just enabled in 3.8-13.

1 Like

@Jules81 Welcome to the Rockstor community. Just had a quick look and that project’s most recent commit, the only one since 4th September 2015, is the following change to the readme:

> This project is not actively maintained. Proceed at your own risk!

Not sure that’s the way to go myself :slight_smile:

Personally I think it would be best to wait until we get native SSD caching support in btrfs, akin to ZFS’s L2ARC facility.

1 Like

I have been using Bcache successfully for 6 months and the improvement is HUGE. The biggest improvement is having bcache absorb the random reads and writes, using writeback and readahead. All the reads and writes are then consolidated and sent to the disks as sequentially as possible. This really helps keep the disks from getting bogged down, allowing maximum sequential performance even when other random workloads are happening at the same time.

The bcache cache device is a single point of failure when using the write-back cache mode. To mitigate this risk I use 4 SSDs in a far-layout RAID10 using mdadm.
My setup:
Hardware:
10x 3 TB Seagate NAS drives in RAID 6
4x 120 GB OCZ Vector 180 SSDs in RAID 10

Applications using this server:
- CrashPlan - backing up 4 other computers
- BTSync - syncing all 18 TB of data with other computers and servers
- Lightroom via Samba
- Plex Media Server
As you can see, there are a lot of random reads and writes going on when all these applications are running.

Setup:
The setup only required installing bcache-tools, because bcache itself is built into the kernel.
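For anyone wanting to reproduce something similar, the rough command-line sequence looks like this (a sketch only; the /dev/sd* names are examples, not my actual devices):

```
# 1) Build the 4-SSD far-layout RAID10 that will act as the cache device:
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# 2) Format it as a bcache cache device and note its cache-set UUID:
make-bcache -C /dev/md0
bcache-super-show /dev/md0 | grep cset.uuid

# 3) Format each data drive as a bcache backing device; a /dev/bcacheN
#    virtual device appears for each one, and the btrfs pool is built on
#    those virtual devices:
make-bcache -B /dev/sde /dev/sdf    # ...repeat for the remaining spinners

# 4) Attach each backing device to the cache set and enable write-back:
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
# (readahead is tunable per backing device via
#  /sys/block/bcacheN/bcache/readahead if desired)
```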

Problems:
Only with the web interface. For example, the Rockstor (3.8-14.08) Samba interface cannot mount the shares because bcache volumes do not appear in /dev/disk/by-id/. They also don’t have unique IDs, but I haven’t seen any problems from that yet.
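As a stop-gap until Rockstor handles this natively, a udev rule along these lines can give each bcache virtual device a stable /dev/disk/by-id name. This is purely an illustrative sketch keyed on the UUID of the filesystem the device carries, not necessarily what Rockstor itself will end up using:

```
# /etc/udev/rules.d/99-bcache-by-id.rules -- illustrative sketch only.
# Symlink each bcacheN virtual device under /dev/disk/by-id using the UUID
# of the filesystem it carries (the device must already be formatted).
KERNEL=="bcache[0-9]*", SUBSYSTEM=="block", IMPORT{builtin}="blkid", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-id/bcache-$env{ID_FS_UUID_ENC}"
```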

Info
https://wiki.archlinux.org/index.php/Bcache

What about using an Areca RAID card with a BBU and a hefty amount of onboard cache? You can use JBOD mode for the card but still have perhaps 1 GB+ of onboard cache for the drives.

Trouble is that the arcmsr driver is not included in the kernel. @suman, is it possible to have this driver included by default in Rockstor builds? Or is there an easy way to install it, and keep it installed when upgrading kernels?

Have a look here for the arc-1222 card in JBOD mode.

I suggest this or a similar card, as one would not need a fast RAID CPU: in JBOD mode with BTRFS the card is not doing any parity calculations. This is just to boost the disk cache with loads of memory.
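On keeping an out-of-tree arcmsr module across kernel upgrades, the generic answer is DKMS, assuming Areca’s driver source is available. A rough sketch only; the version number and paths below are placeholders:

```
# Check whether the running kernel already ships the module:
modinfo arcmsr

# Otherwise, unpack the vendor source to /usr/src/arcmsr-<version> with a
# dkms.conf, then register it so it is rebuilt on every kernel upgrade
# (version number is a placeholder):
dkms add -m arcmsr -v 1.40
dkms build -m arcmsr -v 1.40
dkms install -m arcmsr -v 1.40
```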

I’m bumping an old thread to the top here. It looks like some of the initial concerns and issues are old problems. I’m surprised I haven’t seen more on a bcache implementation.

Still on any minds? bcache support would be fantastic!

1 Like

Another BUMP here. I’m getting very close to a production implementation of Rockstor for a new NAS to be used in a media production company. In addition to already well spec’d hardware I’ve just come across a pair of 800GB OCZ R4 PCIe drives that I would love to use as a write/read cache. We’ll be on a 10Gb network so it would probably make a huge difference.

Is there any special configuration advice from those out there who have added Bcache to their setups? I’ll be serving shares over SMB2/3 since we’re 95% Apple-based and Apple is moving away from AFP. Any advice or suggestions are gladly appreciated. And I would love a formal implementation of caching within the Rockstor UI, but I’m not afraid of the command line for the time being.

Thanks in advance.

@bluesprocket Welcome to the Rockstor forum and thanks for your interest in this thread.

Bcache is not currently supported, and when implemented via the command line it will ‘confuse’ Rockstor’s (current) disk management, due in part to its dependency upon available and properly formed /dev/disk/by-id names. However, this element can be addressed using appropriate udev rules (I will link to a pending developer wiki post on this once I’ve written it); thanks to forum member @ghensley for schooling me in the bcache and udev area, we have a working set of bcache udev rules (almost entirely down to @ghensley, this one) which I will post in the above-mentioned wiki post. This however is not the whole story, as there are still changes required to Rockstor’s disk management system so that it can appropriately recognise bcache-designated devices, i.e. the bcache caching devices and backing devices, along with their associated and consequent virtual devices. These virtual devices are what constitute the ‘end result’ block devices that become ‘whole disk’ pool members. Hence the required changes to Rockstor’s disk management system to account for this additional layer.

But we do have a rather large and as yet un-reviewed code change (pull request) pending that has (hopefully) enabled the required changes (given the prior existence of the appropriate udev rules that is):

But note that the focus of the above pull request was very much not enabling bcache compatibility; rather, bcache, along with full-disk LUKS, served as ‘test cases’ for the enhanced capabilities the changes were intended to enable. As such, only rudimentary tests involving disk recognition and management/use were carried out. Enabling LUKS compatibility was more the focus here, but to ensure the mechanism wasn’t overly specific I enabled bcache recognition as a developmental aid.

Before pending code changes but with appropriate udev rules:

and after the changes:

With appropriate explanatory tool tips:
single link icon = Bcache backing drive (UI pending).
dual link icon = Bcache caching drive (UI pending).
lock icon = Full Disk LUKS container (UI pending).

The final test of the subsystem was to have full recognition and use of Open LUKS virtual devices hosted on bcache backed and cached virtual devices as seen in the following:

With the following additional flag:
eye icon = Open LUKS container (UI pending).

No UI work is as yet planned for bcache, although if the above changes end up passing review the underlying capability should be in place. But full UI configuration and management of LUKS is very much in the works and was, as previously stated, the reason for this disk management upgrade in the first place:

The above issue is my intended next main focus, given the stated prerequisites and my available time of course.

I would caution however that these are early days here, on the bcache side particularly. But I just wanted to voice that efforts are being made in this direction, in no small part down to the behind-the-scenes encouragement and help of @ghensley.

Another caveat is that I have only demonstrated correct functioning using a single caching device linked to 2 backing devices. But I expect that a one-to-one arrangement should also work as expected. As stated, this wasn’t the focus of these recent changes. Also @zorrobyte and @tekkmyster, sorry I haven’t gotten as far as testing the mix of mdraid and bcache devices with this new role system. Hopefully all issues can be worked out as we go.

I will endeavour to write the above referenced bcache wiki post and link to it from within this thread.

Hope that helps to inform you of what is in the works.

My previous statement:

still represents my own position on this, however bcache does seem to have a reasonable level of respect and has been part of the kernel ‘proper’ for some time now.

Sounds like you’re building a nice system there. Do please consider doing a forum post on your build as and when seems appropriate.

2 Likes

@bluesprocket As promised I have made a first draft of a developers wiki post which contains the udev rules used so far in bcache integration within Rockstor:

I have linked back to this forum post for general discussion on bcache in Rockstor matters.

Hope that helps anyone interested in contributing to or reviewing the proposed changes and PR.

3 Likes

This does not affect any Rockstor releases but for those experimenting with kernel versions please note that early 4.14 kernels had an issue that affected bcache (with all fs types). But this was apparently fixed in 4.14.2:
from: https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg70825.html
we have:

> There is a known bug in 4.14 that affects all bcache backed file
> systems (maybe more, I think it's not a bcache specific bug). You need
> to downgrade or upgrade your kernel. Fix should be in 4.14.2.
> 
> The bug was found quickly enough ...

I would love to use bcache with one of the new long-lived Optane 900P SSDs, which do 2,500 MB/s reads. Caching is a feature that Unraid does extremely seamlessly and I would love to see it in Rockstor. It would be nice to have the option of a slow RAID 1, a 480 GB Optane cache and 10GBase-T… dump 500 GB of footage on the server, then edit within seconds, not 40 minutes. @Tomasz_Kusmierz yes, there is a reasonable use case… Linus Tech Tips with his 8K cameras runs into this frequently.

I’m not even sure how my name got into the mix, but hey I’ll chip in.

bcache is a fantastic acceleration method … though if something goes wrong, bcache will stab you in the back with a very rusty knife.

About Linus Tech Tips … where do I begin … the guys there have some know-how; unfortunately the difference between them and Rockstor is that:

  • Rockstor is aimed at the “unwashed masses”, where if something goes wrong your collection of family pron and cat pictures will be un-rescuable (I doubt that people who don’t know how to use bash will know how to get out of an imploded bcache setup)
  • Linus Tech Tips is aimed at RGB geeks who spend way too much time perfecting their rigs rather than actually producing anything with them (I love computers and I appreciate good builds, but fixating on the lights in your rig is to me like putting lipstick on a pig)

So, bottom line: bcache is great for disposable data, for example CCTV recordings or temporary storage of post-processing files. It is not a solution where A) you might not have a backup (a lot of private users?!) or B) recovery from failure and restore from backup essentially halts the business for days.

And a final point :slight_smile: even if you manage to get bcache running on a Rockstor rig with your wonderful PCIe SSD array that can pump 5986693485769846 petabytes per second, unfortunately Rockstor will cannibalise all that in the long run due to its stubborn use of qgroups. Qgroups are a reason why I drifted away from Rockstor (they might have stopped using them since). Qgroups are well documented to be a major performance hit on systems with a filesystem that has been in active use for more than a few months; they are an absolute performance killer on snapshots; and they make it virtually impossible to perform a repair on an aged FS, because btrfs repair will walk very long trees of dependencies and simple fix operations are going to take days rather than minutes. I’ve even tried to manually disable qgroups, but every now and then there was a problem with scripts that threw their toys out of the pram due to the lack of qgroups, and it was simply not worth the hassle.
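For what it’s worth, turning quotas off at the btrfs level is itself a one-liner; the hassle described above came from Rockstor’s own scripts expecting qgroups to exist, not from btrfs. A sketch, with an example mount path:

```
# Disable qgroup accounting on a mounted pool (path is an example):
btrfs quota disable /mnt2/main_pool
# Re-enable later if needed:
btrfs quota enable /mnt2/main_pool
```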

I don’t mean to reignite the SMB cache vs BCache debate, but it seems like SMB caching is so much easier that it could actually get off the ground.

Nikki Gordon Bloomfield just built a new NAS, she’s using Unraid with an high speed m.2 drive as cache. For 4K video production the 180TB workload rating WD Reds have becomes a serious show stopper, specially when dealing with moderately sized raids. Without a cache all of the load goes to the drives… Which means the lame SMB caching that Unraid does becomes a money saving god send.