High-speed dual 10G on an HP ProLiant SE326M1 G6

Hello,

I am looking for some info on setting up a rock solid, fast storage server for our small post production company.
Until now we were using local storage (SSD stripes, along with mirrored HDD backups) and aggregated Intel X540-T2 10G NICs to share our media and do our work.
We have 3 identical workstations and we mounted all the local storage to specific drive letters so everyone could see and open every project without losing the speed of the local stripes. Windows 10 is our OS.
Recently we purchased a refurbished HP ProLiant SE326M1 G6 with dual Xeons, 96 GB of RAM and 25 × 300 GB 10K SAS drives, in order to use it (and expand it with larger HDDs and SSDs) as a centralized fast storage system.
Testing out a couple of NAS products, we found a major issue with the 10G read transfer rate.
Both QuantaStor and FreeNAS capped the read speed at about 250 MB/s while the write speed could go up to 600 MB/s; both unacceptable, since our RAID0 ZFS setup (for testing only) with those 25 drives could deliver a massive 2.6-3 GB/s read/write speed.
The network speed, measured by iperf, wasn't that great either: a single thread gave 4.5 Gb/s and with 5 threads it went up to 8 Gb/s, which made me think that it wasn't a raw network problem, but probably a TCP/CIFS/Samba problem.
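For anyone who wants to reproduce the test, a multi-stream run looks roughly like this (the server IP is the one shown in the iperf output later in the thread):

# on the storage box (server side)
iperf -s
# on a workstation (client side): 5 parallel TCP streams for 10 seconds
iperf -c 192.168.0.50 -P 5 -t 10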
That's how I started searching for viable alternatives and stumbled upon Rockstor.

Tomorrow I will install it on one of the 25 drives, which are currently controlled by a Smart Array P410, and see how it goes.
My concerns are the RAID controller and the network speed.

The specific SAS controller has no JBOD support, so I have done what HP recommends and created a RAID0 on every single drive, in order for the filesystem to recognize them, and I use the hpacucli utilities for Red Hat to monitor them.
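For reference, the per-drive RAID0 and the monitoring look roughly like this with hpacucli (the slot number and drive addresses are only examples, yours will differ):

# create a single-drive RAID0 logical drive (repeat for each physical drive)
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
# check the controller, logical and physical drive status
hpacucli ctrl slot=0 show config
hpacucli ctrl slot=0 pd all show status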
I also have no problem with the inability to hot-swap the drives in case of failure, since I have never seen any IT person hot-swap mission-critical drives, and I can live with 20 minutes of downtime for a server reboot.
So hopefully the controller, one way or the other, will work under CentOS, and BTRFS will have no problem managing a bunch of single-drive RAID0 disks, although I am open to suggestions.
And the NIC speed, also hopefully, will be at least as fast as it is over Windows shares.
My first try will be with a non-aggregated port, and then, with a little help from the community and the holy CLI, I might be able to bond the two ports and reach an even higher bandwidth, so at least 2 workstations can use all of the available network bandwidth.

I will report any problems and any weird findings here, but I would really appreciate any insights, tactics, tips and tricks, gotchas, and no-nos from the community and the experts, since I am mainly a Windows user with a medium Linux background. I have no problem with the CLI (and it makes me feel cool and nerdy-trendy typing instead of clicking), so hit me with your worst!!

PS: The RAID0 setup is only for speed testing. I will be doing a safer RAID5 or RAID6 setup once I optimize the speed, even if it brings it down to 75%.

Happy to join the Rockstor bandwagon, and I hope to stay here and help.

So,
I quite easily managed to install Rockstor, create a pool and a share, so I started testing speeds.

Here are some results:

Disk Speed Test
dd if=/dev/zero of=/mnt2/DemoShare/testfile bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.29433 s, 830 MB/s

Disk speed for RAID0 seems appropriate for now at 830 MB/s.
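For completeness, the matching read test would be something like this (reading back the file just written while bypassing the page cache):

dd if=/mnt2/DemoShare/testfile of=/dev/null bs=1G count=1 iflag=direct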

Raw Network
[root@rockstore ~]# iperf -s

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)

[ 4] local 192.168.0.50 port 5001 connected with 192.168.0.11 port 54541
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 9.28 GBytes 7.97 Gbits/sec
[root@rockstore ~]#

Pretty good, and solid at 8 Gbits/sec

So let's try a simple copy from Windows Explorer.
Read from Share:

Write to Share:

So, any official solution?
Is Samba really that slow?

As seen from the file copy of a single 6 GB file, the read/write speed is at most 360 MB/s, which is unacceptable for ANY kind of real work.

Between the two Windows 10 workstations the throughput continues to run at almost 760 MB/s.

Is there a UI way to enable Jumbo Frames on the Rockstor side?

Hi @npittas, welcome to the Rockstor community, and thanks for sharing your test results!

For now, you'll have to set Jumbo Frames using the CLI, but I've created an issue to keep track of your request.
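As a rough sketch, on the CentOS 7 base that would be something like the following (the connection/interface name is just an example, check yours with nmcli con show):

# persistent, via NetworkManager
nmcli connection modify enp3s0f0 802-3-ethernet.mtu 9000
nmcli connection up enp3s0f0
# or non-persistent, for a quick test
ip link set dev enp3s0f0 mtu 9000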

Thanks for the quick reply, although jumbo frames alone will not do the trick.
Are there any SMB settings I should tweak, or even add SMB3 support? Maybe multipathing can give better results?
After a day of tweaking around, I came to the conclusion that the SMB implementation needs a lot of trial and error on Rockstor's side.

Hopefully you guys have some clues as to what should be tweaked.

For now the maximum achieved read/write, with dual NICs enabled on both sides, was under 650 MB/s with an average of 450 MB/s, almost ¼ of the available bandwidth.

So after a couple of days of testing, I am pleased to say that everything works as it should

I have a stable read speed of 550-650 MB/s from any Windows machine, with writes at 350-500 MB/s, depending on the file size.

What I’ve done to achieve those speeds:

**Enable Jumbo Frames (9014)** on all the NICs involved, and on the switch.

Windows side tweaks:
Maximum Number of RSS Queues = Maximum Number of Physical Cores (8 in my case)
Performance Options (Driver Specific for the X540-T2):
Flow Control : Disabled
Interrupt Moderation Rate : Off
Low Latency Interrupts : Not using
Receive/Transmit Buffers : Maxed out (4096/6384 respectively)
Receive Side Scaling : On

On the Rockstor side, I searched and googled and found these settings to work:
smb.conf:
max protocol = SMB2
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=262140 SO_RCVBUF=262140
strict allocate = yes
read raw = yes
write raw = yes
strict locking = No
use sendfile = true
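
If anyone copies these settings, a quick sanity check and reload on the CentOS side would be roughly:

# verify the smb.conf syntax and show the effective settings
testparm -s
# restart Samba so the changes take effect
systemctl restart smb nmb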

Now, Windows 10 does not have NIC Teaming as of yet; although on my W10 rig I have the option to team the adapters, on the other 2 rigs I can't.
So I just used different IPs for each NIC, and left one of them without a gateway and DNS so it won't mess up my internet access.
This hasn't affected the performance though: I do not see any speed benefit, but I am afraid that is due to a disk/SAS/PCIe bottleneck.

I would love to team the Rockstor server NICs, but I do not think I can do it without some help from you, so if you can advise me on how to proceed without breaking everything, that would be great.

Finally, I would like to ask if there is a way to add the Active Directory Domain Controller feature. I would love to be able to add a DC server to my network using Samba4, but for now CentOS 7 only provides a stripped-down version.

First of all, thanks a bunch for giving back to the community by sharing your performance tuning config.

The NIC bonding (and perhaps teaming) feature is on the roadmap, but perhaps someone can collaborate with you to set it up manually. This document could be useful.
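As a starting point, a manual bond with nmcli on the CentOS 7 base could look roughly like this (interface names and the address are only examples; 802.3ad/LACP also has to be configured on the switch):

# create the bond (LACP); balance-alb is an option if the switch can't do LACP
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
# enslave the two 10G ports (example interface names)
nmcli con add type bond-slave ifname enp3s0f0 master bond0
nmcli con add type bond-slave ifname enp3s0f1 master bond0
# give the bond a static address and bring it up
nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.168.0.50/24
nmcli con up bond0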

Regarding the domain controller, are you asking if Rockstor can act as a domain controller (using Samba4)?

To increase the write speed, have you tried turning on async I/O?

You do this by setting:

aio write size = 1024

(or whatever size beyond which you want Samba to go async). This will allow multiple threads to simultaneously write SMB2 write requests to disk. Let me know if it helps!
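
As a sketch of where that goes in smb.conf (the read-side line is an extra suggestion on my part, Samba also has an aio read size parameter):

[global]
    # go asynchronous for write (and read) requests larger than 1024 bytes
    aio write size = 1024
    aio read size = 1024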

FYI. “read raw” and “write raw” are old SMB1 config parameters and will have no effect on SMB2 calls.

I didn't see it mentioned anywhere, but on the Windows boxes turn off Large Send Offload. I've not played with 10 GbE, but on 1 GbE it's often the difference between 40 MB/s and 90 MB/s writes.