Super.
And:
I’ve not personally tried our prior bcache ‘arrangement’ on the current Rockstor 4, which is, ideally, what you should be running now. But our older CentOS variant was proven to work with bcache from the drive roles addition onwards (read LUKS support, as it also uses roles). We ensured that our then-fledgling disk roles subsystem was flexible by developing the LUKS and bcache capabilities simultaneously. Both need to recognise certain ‘categories’ of ‘special’ devices, and our roles are used to label said devices so that they can be presented accordingly within the Web-UI.

The caveat is that Rockstor expects a very specific configuration regarding the naming of the bcache associated devices. If this is in place then the Web-UI should sanely represent the nature of the various devices and present proper configuration options for them in turn. These very specific naming requirements are set up by udev rules (on boot), and it is these rules that have not been established as existing or compatible within our “Built on openSUSE” variant. And given bcache compatibility within the Web-UI is not a current ‘core feature’, it has not received the attention needed to ensure it even works in the Rockstor 4 version. However, if our base OS of openSUSE Leap 15.2/15.3 does have incompatible udev rules, they could be replaced with the compatible ones we have in our technical doc/wiki here:
So that is basically the document we have for how to set up bcache device naming so that it is compatible with Rockstor 3; and hopefully still Rockstor 4, though with the aforementioned replacement of existing rules (untested). Note that if you do try this on Rockstor 4 you are likely going to be able to install the bcache userland tools from the default repositories rather than having to build them from source, given the years-newer underlying OS.
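Purely as an untested sketch of what that might look like on a Rockstor 4 / Leap base (device names are placeholders and the exact rule filenames and package name may differ from what I assume here), the general flow would be: see what bcache udev rules the OS already ships, install the userland tools from the standard repos, create the backing/cache pair, and then check what names udev actually produced:

```
# See what bcache udev rules, if any, the base OS already provides (filenames vary):
ls /usr/lib/udev/rules.d/ /etc/udev/rules.d/ | grep -i bcache

# Install the bcache userland tools from the standard Leap repos
# (package name assumed to be bcache-tools, as on most distros):
zypper install bcache-tools

# Create a backing device and a cache device.
# /dev/sdX (backing, e.g. spinning rust) and /dev/sdY (the ssd cache) are placeholders:
make-bcache -B /dev/sdX
make-bcache -C /dev/sdY

# Attach the cache set to the backing device using the cache set UUID
# reported by make-bcache (or found via bcache-super-show /dev/sdY):
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Confirm what names/symlinks udev has given the resulting devices,
# as this naming is what Rockstor's role detection keys off:
udevadm info --query=symlink --name=/dev/bcache0
```

If the resulting names don’t match what the above doc’s rules produce, that is where replacing the shipped rules with ours would come in.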
Remember that more complexity inevitably brings with it more fragility (more moving parts), so take care with ‘experimenting’, especially with important data. And if you end up scrambling a pool and then btrfs send its broken subvols, you may end up scrambling the received subvols as well. Bcache has a very good reputation, but it does introduce another layer and can in turn weaken an otherwise complex enough setup. At least we have dropped a number of layers in adopting btrfs in the first place, i.e. no partitions, LVM (physical/logical layers) or mdraid (physical/logical), with all of these wrapped up in the same project/layer. But bcache still introduces another failure point, and a single point at that if you have only a single ssd. And if you do, as some have, use mdraid to beef up that single ssd, you then add mdraid’s inability to return exactly what it was given (no checksumming), so you still have a single point of failure. I’d go for a writethrough config at first if you try this, unless of course you really need the apparently more instant writes of writeback.
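For reference, and assuming the bcache device has shown up as /dev/bcache0 (a placeholder, yours may differ), the cache mode can be inspected and pinned to the safer writethrough via sysfs:

```
# Show the available cache modes; the current one is shown in [brackets]:
cat /sys/block/bcache0/bcache/cache_mode

# Stick with the safer writethrough mode (the bcache default) rather than
# writeback, which acknowledges writes sooner but at more risk:
echo writethrough > /sys/block/bcache0/bcache/cache_mode
```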
Hope that helps.