Thought I would post this up for some fun.
I run a 40Gb InfiniBand network between my servers and recently needed to copy a lot of data to my RockStor server for backup. So I kicked off an SCP of the data between the two servers and watched the graph.
So looking at that graph I was left wondering why it was only transferring 76MB a sec. (can you believe I used the word only in that sentence lol)
That is less than 1/8 the total bandwidth of the network.
Turns out, 76MB is about 6Gb or the max speed of the single SATA HDD in the RockStor pool on the server.
I'm loving the 40Gb InfiniBand and it's cheap on the used market. It was cheaper than 10Gb networking.
Now it’s quite possible I’m being extremely stoopid, but I’d calculate 76MB/s to be 608Mb/s, which is 0.6Gb/s. I’ve always multiplied by 8 to go from Bytes to bits.
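A quick sanity check of that conversion (shell arithmetic, nothing fancy):

```shell
# Bytes to bits: multiply by 8. 76 MB/s observed on the graph:
mb_per_sec=76
mbits=$((mb_per_sec * 8))
echo "${mbits} Mb/s"   # 608 Mb/s, i.e. ~0.6 Gb/s -- nowhere near 6 Gb/s
```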
Help me either way please - before everyone starts laughing at me…
The 600MB/s would be the maximum for SATA III:
SATA III (revision 3.x) interface, formally known as SATA 6Gb/s, is a third-generation SATA interface running at 6.0Gb/s. The bandwidth throughput, which is supported by the interface, is up to 600MB/s (as per SanDisk's KB article)
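In case anyone wonders why a 6.0Gb/s link tops out at 600MB/s rather than 750MB/s: SATA uses 8b/10b line encoding, so only 8 of every 10 bits on the wire are payload. The arithmetic:

```shell
# SATA III line rate is 6.0 Gb/s, but with 8b/10b encoding only
# 8 of every 10 line bits carry data: 6000 * 8/10 = 4800 Mb/s payload,
# and 4800 Mb/s divided by 8 bits per byte = 600 MB/s
line_rate_mbps=6000
payload_mbps=$((line_rate_mbps * 8 / 10))
echo "$((payload_mbps / 8)) MB/s"   # 600 MB/s
```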
You're right, my math was off yesterday morning.
I was also wrong about where the bottleneck was. Seems 76MB/s is a limit of scp.
I'm retesting with other protocols and methods.
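For anyone wanting to separate the disk limit from the protocol limit, a rough sequential-write test on the pool itself takes the network out of the picture entirely. The path below is just an example; point it at a file on the pool:

```shell
# Rough sequential-write benchmark. conv=fdatasync forces the data to
# disk before dd reports a speed, so the page cache doesn't inflate
# the number. Writes 64MB of zeros to an example path, then cleans up.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

If dd reports well above what scp achieves, the bottleneck is the protocol, not the drive.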
For those following along at home.
The drive is a Seagate Constellation ES.2, which has a write transfer speed of 133MB/s or (using Google this time) 1.06Gb/s
Using NFS and an rsync to the share I'm hitting that limit.
Now, from the network side of things, the chart looks like this
The spikes appear to be Linux caching the incoming data to memory; when a limit is hit, it pauses incoming data while it flushes the cache to disk.
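That fill-then-flush behaviour is governed by the kernel's dirty-page thresholds, which you can inspect directly (values are percentages of RAM; defaults vary by distro):

```shell
# Writes are buffered in RAM as "dirty" pages until these thresholds
# are hit; background writeback starts at the first, and writers are
# throttled/blocked above the second -- hence the sawtooth on the graph.
cat /proc/sys/vm/dirty_background_ratio   # background flush starts here
cat /proc/sys/vm/dirty_ratio              # writers stall above this
```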
And this is where the Linux kernel is so adept at memory management. Taking a look at my NAS memory chart, it's usually around 10% used with the rest cached/buffered, making the most of the 8GB available by using it to cache incoming or outgoing data transfers.
Looks like the network spikes and drops were more NFS related than cache related.
I set up the rsyncd daemon on the rockstor server and did an rsync from my SSD NAS over to rockstor. It gives a much nicer network profile.
Still hitting the limit of the drive write speed but it is a much nicer flow.
Wish/feature request: add rsyncd as a service with an editor for the /etc/rsyncd.conf and /etc/rsyncd.secret files. (Note: if you use "ionice -c3 rsync --daemon" to start the rsyncd service, the transfer does not clobber your IO)
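For anyone setting this up by hand in the meantime, here is a minimal sketch of what such an editor would manage. The module name, path, and user below are placeholders, not anything Rockstor ships:

```
# /etc/rsyncd.conf -- minimal example; adjust module name, path
# and auth user to suit your pool layout
uid = nobody
gid = nobody
use chroot = yes

[backup]
    path = /mnt/backup_pool
    read only = no
    auth users = backupuser
    secrets file = /etc/rsyncd.secret
```

The secrets file holds one `user:password` line per auth user and should be chmod 600, or rsync will refuse to use it. With the daemon running, a push from the other NAS looks something like `rsync -av /source/dir/ backupuser@server::backup/` (hostname and module are placeholders here too).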