Testing LAN replication speeds. Curious what others are getting

I only saw one other post related to replication speed, and it was WAN-focused. So I decided to reach out and see how fast others’ replication is working, and I did some testing while writing this up.

This is my first attempt: I was syncing my music share from one Rockstor to another (Box 1 to Box 3…see specs below). I am seeing about 34-38MB showing up in the network graphs of each Rockstor box (is that MB/s? Not sure, as the units aren’t stated…future tweak?). Not bad…but not as good as I was expecting. I let it complete before my next tests.

Second test: Box 1 to 3 (MusicVideos folder/share sync…very large), same speed of 34-38MB on the graph.

Third test: 5 min later, while the second sync was still running, I started another folder sync (Movies folder…also very large). Speed jumped up to 56-66MB on the graph. CPU usage on the Core 2 Duo was also over 50% per core (50-60% range).

Update EDIT: I added another folder to sync (three folder syncs running at once). Throughput jumped to 68-74MB, and CPU went up just a hair, I think, on the old Core 2 Duo (55-65% each core).

Edit 2: I learned that one folder sync seems to read off only one drive (raid 1), while multiple folder syncs seem to use both drives, per the Dashboard HDD activity graphs.

Cool that more shares/syncs equals more combined throughput. However, I am not sure about the internals of the sync process, or why the throughput seems almost capped per instance.
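For anyone wanting to double-check the single-drive-read behaviour beyond the Dashboard graphs, watching per-device throughput from the shell gives more detail. A minimal sketch (assumes the sysstat package is installed; the device names are from my Box 1 and will differ on yours):

```
# Refresh extended per-device stats every second; the rkB/s column
# should show only one raid1 member busy during a single sync,
# and both members busy once parallel syncs are running:
iostat -x 1 /dev/sdc /dev/sdd
```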

Below are some background details for reference.

I have 3 Rockstor instances: 2 are Proxmox VMs and 1 is a physical machine.

Hardware list:
Proxmox host is a Dell T3500 with 20GB ECC DDR3 and a Xeon E5630 (4c + HT = “8 cores”), onboard Intel Gigabit NIC, and ZFS mirrored SSDs that host the VMs’ OS “drives/partitions”
-RockstorVM (Box 1) has 4 cores and 4GB RAM allocated, Rockstor v3.8-10.19, with storage drives/shares (2x WD 3TB Red passed to the VM directly, e.g. virtio0 /dev/sdc, virtio1 /dev/sdd)

-RockstorTest (Box 2) has 2 cores and 2GB RAM allocated, running the latest testing update (v3.8-11.01 atm); its SMB shares were on the OS drive, which is set to (KVM) SATA mode in Proxmox

Physical PC (Box 3)
-Intel Core 2 Duo E6300 with 6GB DDR2-800, using a PCIe Intel GbE NIC. SSD OS drive and 1x 2TB SATA drive atm.

For reference: SMB share testing to/from all 3 boxes, from a gigabit-connected Win 10 desktop (with an SSD), gets 108-110MB/s transfers of large ISO files. Excellent! The bottleneck is the LAN, not the virtual, passed-through, or physical drives, it seems.
I haven’t specifically tested much with multiple small files yet (i.e. music folders). Saturating the gigabit link is not an issue, though.
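If anyone else wants to compare apples to apples, a raw network test takes the disks and the SMB stack out of the picture entirely. A quick sketch with iperf3 (assuming it is installed on both boxes; the hostname is a placeholder):

```
# On the receiving box (e.g. Box 3), start a listener:
iperf3 -s

# On the sending box (e.g. Box 1); -P 3 opens three parallel
# streams, roughly mirroring three concurrent folder syncs:
iperf3 -c box3.lan -P 3
```

If iperf3 reports a full ~940Mbit/s while a single replication sits at 34-38MB/s, the cap is in the replication process itself, not the network.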


Tried replication of a 600GB (edit: 600TB? I wish) data share to a VM as well, in preparation for doing that with a remote Synology in a datacenter.

Got my full local LAN speed of 111MB/s to that specific target.

Over both NFS/SMB I can saturate a shorter network path just fine, at 119MB/s. Very happy about that, especially considering it is just a RAID0 of WD Red drives with a T1840 CPU, aka the typical low-power home-built NAS setup.

The good news is that north of 70MB/s WAN replication seems to work just fine too.

I really wonder what is up with your first two tests being stuck at 38-ish MB/s.


Well, I went from VM to physical for replication. Nice WAN speeds you have…double my LAN!
I am still debating which unit to keep as my Production and data backup unit (VM or physical).

What file transfer backbone is the “replication” using? Does it use BTRFS snapshots and btrfs’s internal send/receive?
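For reference, here is roughly what a manual snapshot-plus-send/receive transfer looks like at the btrfs command line. The paths and hostname are made up, and I don’t know whether Rockstor’s replication wraps exactly this…hence the question:

```
# Take a read-only snapshot of the share (send requires read-only):
btrfs subvolume snapshot -r /mnt2/music /mnt2/music-snap1

# Stream it to the other box; btrfs receive recreates the subvolume there:
btrfs send /mnt2/music-snap1 | ssh box3 "btrfs receive /mnt2/"

# Later runs can send only the delta against a parent snapshot:
btrfs send -p /mnt2/music-snap1 /mnt2/music-snap2 | ssh box3 "btrfs receive /mnt2/"
```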

Also note, I got lucky trying to figure out the key creation and the copy/paste into each other’s “appliance” section. The documentation that is up on this is old and only covers user/password authentication. I likely did it wrong…but it works. I’m not sure what the default “cliapp” key in the key section is used for. I made a new key on each box, naming/titling it after the other box, then copy/pasted the key + ID into the box it was named after. Did this on both boxes and it worked…not sure if that is the optimal method…no documentation.

I am suspicious of some RAM caching in my quick Win 10 to NAS SMB testing. I actually picked Rockstor because it beat Nas4Free 10.02 and FreeNAS 9.3, and tied with the new UnRaid 6.1, in a NAS shootout I ran between ZFS and BTRFS units. Rockstor did it with the lowest hardware requirements: no cache drives, more flexible pools (transform raid levels, etc.), and less RAM needed…although I have since bought lots of RAM. Note: I initially tested Nas4Free/FreeNAS with just a single HDD, and they might pull ahead once more drives are in an array.
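To rule the RAM caching in or out between benchmark runs, the page cache on the NAS side can be dropped so the next read has to come off the drives. A sketch (run as root on the Rockstor box; the test file path is hypothetical):

```
# Flush dirty pages, then drop the page cache:
sync
echo 3 > /proc/sys/vm/drop_caches

# Optional: a local sequential read that bypasses SMB entirely
# (point it at any large file on the pool):
dd if=/mnt2/music/big.iso of=/dev/null bs=1M
```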

More testing will be done…and any suggestions for performance tuning are welcome!