Setup Sanity Check


Hi all. I have been lurking and reading threads both here and on the FreeNAS forums for a while. I have finally decided to order my new hardware for a home server and am leaning towards using Rockstor as my NAS solution. The only things that have me leaning towards FreeNAS are its /slightly/ more modern web GUI and iSCSI support. But I would really prefer to stay Linux based rather than FreeBSD, which is where Rockstor is winning me over. Before I get into the specifics: does anyone know if there are plans on the horizon to modernize (or sexy-up) the Rockstor GUI, or is the desire to leave it more utilitarian? (Nothing wrong with that either, just asking.)

That said, I have already purchased a full version of Rockstor because, as a software dev, I like to support these projects even if I don’t end up using them. I think there is some great work being done here, and a good community to boot.

Here is the setup I am looking at:


  • 1x Cisco UCS C240 M4SX chassis w/ 2x E5-2650 v3 CPUs - 20 cores / 40 threads / 64 GB RAM
  • 1x 12G SAS hardware RAID card
  • 24x 600GB 10k RPM Drives in Raid 6 for a total of ~13TB

[Note: Since this is an HW Raid 6 it should work just fine with BTRFS which will just see it as contiguous storage - right?]

  • ESXi running as the Main OS
  • RockStor inside a VM
  • Serving NFS/SMB to the house - Windows Snapshot enabled.
  • ESXi spinning up VMs for the kids that can be nuked when they completely screw up their machines.

Primary Use Cases

  • Plex Transcoding
  • Sonos Music Hosting
  • VM Storage Share
  • PC Storage Share
  • Local Backups


[Q0: How much CPU/Ram should I allocate to RockStor - not looking for minimums here but a good solid baseline to make things run smooth.]

[Q1: Is there a guide on optimal RockStor/VM setup? I have seen a few threads but many are dated and what I would really like to know is the best way to expose the drives to RockStor / current status of network bonding support].

[Q2: Is there an optimal way to expose the drives to Rockstor? Create a normal thin-provisioned VMDK on the disk and let it fill up - or create it in thick mode - or link directly to the block device?]

[Q3: Network bonding - Supported? Worth it? How many NICs / what type?] : Supported and working; unit tests missing. @Flox to confirm — confirmed, but untested.

[Q4: Will anything in this setup severely negatively impact my disk performance?]

[Q5: Best way to share media from the Rockstor VM to a Plex Media Server VM running on the same physical hardware - should I just increase the compute on the RS VM and run PMS alongside, or should I use NFS with 2 different purpose-built VMs? (I prefer the second, but happy to hear other opinions.)]

[Q6: Anything else I missed or should be thinking about?]

Thanks, everyone.


@nandor Welcome to the Rockstor community.

That looks like a nice build on the way.

I can chip in on this one:

There are no plans currently, mainly as we are a small but, I like to think, effective team. Our resources are currently engaged in getting our base functionality as bug free as we can, and we have a little way to go on that front. We are also working on improving our ‘delivery’ (read CI/CD) and automated testing. However if, as has happened in the past, we get pull requests concerning this, and sufficient testing has been done to demonstrate no regressions, then I don’t see why they would be turned down. That said, being consistent across the entire UI would be a massive task, bar CSS-type changes of course, so I think it’s unlikely this is going to happen any time soon. Plus, personally, I like the ‘utilitarian’ approach. But something like a dark theme or the like wouldn’t harm of course.

We also have quite a bit of technical debt: we are still on Python 2 and need to move to Python 3, which would in turn allow us to upgrade our now old Django version. Those upgrades would then allow us to employ Django Channels 2 (Python 3 async/await based), which should in turn allow our UI to be a lot more interactive, i.e. fewer (if any) requirements to refresh and the like. And once those goals are met, those wishing to spruce up UI elements would have more tools to hand, so it’s all part and parcel of progress, I guess.

Yes, this has been one of our most requested features. I think once we have our new distribution / automated test system in place, have moved to Python 3 / a newer Django, revamped our other backend ‘services’, and moved to our planned openSUSE base with an image-based installer and system pool rollback capabilities (inherited from default openSUSE), we should definitely look at adding a nice iSCSI Web-UI component.

So in short, I think it’s fair to say the current focus is on establishing a system to ensure fewer regressions (we had a few too many in recent times) and a higher release cadence. I’ve been busy with the hopefully improved build systems, which will for a while have to account for both our existing CentOS users and the proposed new openSUSE-based users. And I think we still have way too many ‘paper cut’ issues to consider adding too many large features such as iSCSI. But it would be a nice-to-have and so is likely to appear in due course. Again, we are a small team, but pull requests are always welcome.

Re your proposed ESXi / VMWare base, remember that Rockstor absolutely has to have ‘device’ serial numbers so remember to enable this ‘feature’: see the Minimum system requirements doc section for the VMware configuration note.
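For anyone searching later: on ESXi the serial-number requirement usually comes down to one advanced setting on the VM. A minimal sketch (the exact UI path varies by ESXi version, so treat this as an illustration of the .vmx entry rather than a step-by-step guide):

```ini
; In the VM's .vmx file (or ESXi: VM Options -> Advanced -> Edit Configuration),
; enable disk UUIDs so the guest sees per-disk serial numbers:
disk.EnableUUID = "TRUE"
```

With this set, Rockstor can track each virtual disk by a stable serial rather than a device name that may change between boots.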

Also, both ZFS (FreeNAS) and btrfs (Rockstor) favour direct disk access. Presenting a single ‘virtual’ hardware raid device loses some of the advantages of software raid and will in some cases remove its capability to ensure data integrity: i.e. the hardware raid can change things underneath without the multi-copy (checksumming) knowledge that is known only to those file systems. Others here will be able to chime in more knowledgeably than me on this, but it’s just a pointer, and really up to your own design criteria how you go, especially given the at-least-dual use of this hardware. You could consider, for example, passing several of the ‘raw’ devices through to Rockstor. Also, stick for the time being to btrfs raid1 or raid10 if you do go the software raid way.
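To make the “pass raw devices through” suggestion concrete, here is a minimal sketch of what a btrfs raid1 pool over two passed-through disks looks like at the command line (device names and mount point are placeholder assumptions; in practice Rockstor’s Web-UI drives this for you):

```shell
# Hypothetical device names - check yours first with: lsblk -o NAME,SIZE,SERIAL
# Create a btrfs pool with raid1 for both data (-d) and metadata (-m)
# across two whole disks, then mount it and confirm the profiles:
mkfs.btrfs -f -L datapool -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/datapool
btrfs filesystem df /mnt/datapool   # should show Data, RAID1 and Metadata, RAID1
```

The point is that btrfs itself places the two copies on two distinct devices, which is exactly the knowledge a hardware raid layer would otherwise hide from it.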


Last I checked it worked as intended (bar paper cuts), but we have yet to establish proper unit tests for this. However, @Flox has recently made some nice progress in this area, and I think there is only a little way left to go before we have our teaming/bonding subsystems under proper unit tests. I believe this is awaiting my input on unit test data, as I have a couple of bonding setups here that I can use to generate that data. All divided time, unfortunately. So, along with the ‘in progress’ build systems (my current task), we should going forward be able to ensure these more advanced network functions with a greater degree of confidence.

And on the same subject:

Should be supported (as mentioned), although our documentation is lagging unfortunately. As to whether it’s worth it: that depends on how many simultaneous clients / network streams you need to serve. Bonding is often only of benefit in multi-stream scenarios, as each individual network stream will still only run at the speed of its individual hardware Ethernet link: that is, without some fairly involved messing about. And of course your switch will have to support the same.
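For reference, outside of Rockstor’s Web-UI, a bond on a NetworkManager-based Linux system is typically set up along these lines (interface and connection names here are assumptions, and LACP mode requires matching switch-port configuration):

```shell
# Create an 802.3ad (LACP) bond; the switch ports must be in a matching LACP group.
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100"

# Enslave two physical NICs (hypothetical names) to the bond:
nmcli connection add type ethernet con-name bond0-port1 ifname eth1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eth2 master bond0

# Bring the bond up and inspect kernel-side state:
nmcli connection up bond0
cat /proc/net/bonding/bond0
```

Note this illustrates the single-stream caveat above: even with two bonded NICs, one client connection is hashed onto one physical link, so a lone transfer still tops out at that link’s speed.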

Again much more informed opinions available from others in this forum but just chipping in, in the hope of being corrected.

Hope that helps and thanks for helping to support Rockstor development via a stable channel subscription. Much appreciated. If Rockstor ends up fitting your bill then great, but otherwise I hope FreeNAS, or possibly OpenMediaVault (linux based like Rockstor), work out for you as they are both doing fantastic work.


Thanks @phillxnet for the reply. I am updating the OP with answers as they come, so that anyone looking at this thread can get the info quickly without having to scan the whole reply chain.

Can you explain this:

Presenting a single ‘virtual’ hardware raid device loses some of the advantages of software raid and will in some cases remove its capability to ensure data integrity: i.e. the hardware raid can change things underneath without the multi-copy (checksumming) knowledge that is known only to those file systems.

It seems to me that if I use a virtual disk of, say, 6 TB of my datastore exposed via VMware, then Rockstor can set up any software raid/checks it wants without issue? Maybe I am missing the mark though.

@nandor Re:

If that virtual “disk” is singular then there are 2 issues:
1: It has hardware raid underneath the btrfs. This means the hardware raid can end up changing stuff underneath btrfs in its less informed attempt to maintain data integrity. Btrfs, in its default config, checksums all data, so it can give a guarantee of sorts that what is retrieved from its volumes is what was saved. Hardware raid is generally considered to be less effective at this process.
2: If there is only one disk then there is only one copy (there is dup mode in btrfs, but to simplify here), whereas if, for instance, 2 or more real disks are passed through to Rockstor, then in btrfs raid1 there will be 2 copies stored on the 2 available disks. Note that with btrfs raid1 there are currently only 2 copies, even if you have btrfs raid1 with 3 or more disks. So if there is a checksum error with one copy, btrfs can look to its other copy and retrieve/present that copy. During scrubs, if a duff copy is found, it will be replaced with a known good (via checksum) copy.

I think the possible confusion on my or your side is that btrfs requires multiple devices to do its software raid (ignoring dup, which can do 2 copies on the same device). So presenting multiple disks will enable its ability to ‘self heal’ if a data fault is found on one of its drives/copies.

So in short, btrfs raid profiles work across devices (generally), so if there is only 1 device there is no raid (bar dup) and thus no self-heal capability or fail-over to a known good copy with the consequent bad-copy overwrite/repair. You need a minimum count of drives to establish a pool (btrfs volume) of a certain btrfs raid level. And putting hardware raid underneath btrfs undermines its ability to maintain data integrity, as you are then relying on the hardware raid for this. This also applies to ZFS (FreeNAS).
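The scrub-driven repair described above is driven by a couple of commands (the mount point here is a placeholder; Rockstor exposes the same operation in its Web-UI per pool):

```shell
# Start a scrub on the mounted pool: btrfs re-reads every block, verifies it
# against its checksum, and - where a raid1/raid10 profile provides a second
# copy - rewrites any bad copy from the known-good one.
btrfs scrub start /mnt/datapool

# Check progress and the counts of corrected/uncorrectable errors:
btrfs scrub status /mnt/datapool
```

On a single-device pool (or one backed by a hardware raid virtual disk) a scrub can still *detect* checksum errors, but without a second btrfs-managed copy it has nothing to repair them from.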

There is also a third complexity involving S.M.A.R.T data. It can be quite tricky to get this through a raid controller, and that often still stands even in ‘pass through’ JBOD (Just a Bunch Of Disks) mode. Hence some folks reflashing their raid controllers to get as close as possible to HBA presentation. It is possible, but it’s tricky, can take quite a bit of configuration, and depends on your particular hardware. See the smartmontools docs for this. The Rockstor S.M.A.R.T doc section has a subsection for this:
S.M.A.R.T through Hardware RAID Controllers - and Rockstor’s side of this is still young.
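As an illustration of the kind of configuration involved: smartmontools can address individual physical disks behind many raid controllers via the `-d` option. For LSI/Avago MegaRAID-family controllers the form looks like this (the controller type and disk numbers depend entirely on your hardware, so check the smartmontools docs for your card):

```shell
# Query the first two physical disks behind a MegaRAID-family controller.
# /dev/sda is the controller's block device; the ",N" selects the physical disk.
smartctl -a -d megaraid,0 /dev/sda
smartctl -a -d megaraid,1 /dev/sda
```

Rockstor lets you supply these custom `-d` device options per drive, which is what the doc subsection above walks through.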

Hope that helps to clarify my prior writings a little.


To confirm, yes, here’s the related pull request:

As @phillxnet mentioned, and as far as I know (I unfortunately do not have the hardware to test that), bonding is working as intended, and only the related test data is missing from these unit tests. I did find some minor UI quirks (only minor cosmetic edge cases that do not alter functionality) in the network bond creation page, but I have related minor fixes in an upcoming PR as well.

Hope this helps.