Heads Up FYI: EXTENSIVE 10G Network Testing Results

For weeks I have been chasing the transfer speed fluctuations over my 10G fiber network, and I've discovered some things.

To narrow things down, all drives in my NAS were disconnected and only 2 fast SSDs were set up in RAID-0.

  1. The 4.5GHz 3570K and i7-5930K setups were fastest, the 5930K being the most consistent (1+ GB/s). The R9-5950X setup only runs the network up to about 850 MB/s for some reason.

  2. When copying 240GB of files from an NVMe drive OR a RAM disk, the RockNas setup would consistently slow to near zero, as described in another post. (5950X testing, same results as the 5930K.)

  3. Installing Windows 10 on the RockNas setup hurt things rather than helped! Go figure!

  4. Installing Linux Mint 20.4 and setting up a shared 1TB SSD RAID-0 with Ext4, I got much clearer results. These tests were repeated dozens of times in both directions between the NAS and the 5930K/5950X setups, using Rockstor (in previous testing in the other thread) and the same setup with LM 20.4.

In all this recent testing, 3 copies of an 8GB file were used.
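
For anyone who wants to reproduce the numbers, below is a minimal sketch of the kind of repeated-copy timing loop this boils down to. It is not my exact procedure (most runs were plain file-manager copies); the paths are placeholders for a local test file and a mounted NAS share.

```python
# Minimal repeated-copy throughput test (illustrative sketch, not my exact procedure).
# SRC_FILE and NAS_SHARE are placeholders: a local test file and a mounted NAS share.
import os, shutil, time

SRC_FILE  = "/tmp/test_8g.bin"     # local 8GB test file (placeholder path)
NAS_SHARE = "/mnt/nas/testdir"     # mounted NAS share (placeholder path)
COPIES    = 3                      # three copies per run, as in the tests above

os.makedirs(NAS_SHARE, exist_ok=True)
size_mb = os.path.getsize(SRC_FILE) / 1e6

for i in range(COPIES):
    dst = os.path.join(NAS_SHARE, f"copy_{i}.bin")
    start = time.monotonic()
    shutil.copyfile(SRC_FILE, dst)
    os.sync()                      # flush local write-back so RAM caching doesn't inflate the number (Unix only)
    elapsed = time.monotonic() - start
    print(f"copy {i + 1}: {size_mb / elapsed:.0f} MB/s ({elapsed:.1f} s)")
```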

I reached the following results:

Whether a transfer to the NAS box running Rockstor or Linux Mint 20.4 stays CLEAN (for transfers under about 33GB total) depends on whether or not there are already existing files on the destination drives.

EMPTY Destination Drive:
[screenshot]

Overwriting Existing Files:
[screenshot: 24G-Xfer-DF]

When overwriting files on the RAID-0 Ext4 setup, there was a slight lag starting the transfer, then two more dips in speed. On very LARGE transfers, these dips start happening when the 32GB of local RAM fills up.
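
To put rough numbers on when those dips should appear: if the RAM buffers incoming data at network speed while the SSDs drain it more slowly, the cache fills at the difference of the two rates. The 550 MB/s drain figure below is only an assumed illustration, not a measurement.

```python
# Back-of-envelope: when does the write-back cache fill during a big transfer?
cache_gb = 32.0    # local RAM available for buffering writes (GB)
ingress  = 1.0     # incoming network rate (GB/s, roughly 10GbE payload)
drain    = 0.55    # sustained SSD overwrite rate (GB/s) -- assumed for illustration

fill_rate      = ingress - drain           # net rate the cache grows (GB/s)
seconds_to_dip = cache_gb / fill_rate      # time until dirty data catches up
print(f"cache full after ~{seconds_to_dip:.0f} s, "
      f"i.e. ~{seconds_to_dip * ingress:.0f} GB into the transfer")
```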

The results show there is nothing wrong with Rockstor using Btrfs, or with the same hardware running Linux with Ext4 or Windows 10 (W10 being the worst). The fact is, there is significant overhead involved in overwriting files on SSDs during long sustained transfers, no matter which OS is in question.

It may be better or worse with other SSDs, but I saw this same behavior using two smaller SSDs of another brand.

Furthermore, the slower 850 MB/s speed on the AMD 5950X was “conditionally” traced to the bus interface of the Intel 10G NICs I am using. The Intel cards use PCIe 2.0 x8, and the R9-5950X motherboard can only run that slot at PCIe 2.0 x4. The solution will probably be an upgrade to a PCIe 3.0 x4 card as far as the AMD R9-5950X is concerned. The other 3570K/5930K setups support x8 and run at the full 1000 MB/s speed.
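
For reference, the rough per-direction bandwidth of the link widths in play (standard per-lane approximations; real usable throughput is lower again after protocol overhead):

```python
# Approximate per-direction PCIe bandwidth vs. what 10GbE needs.
per_lane_mb = {"1.0": 250, "2.0": 500, "3.0": 985}   # MB/s per lane after encoding
links = [("1.0", 4), ("1.0", 8), ("2.0", 4), ("2.0", 8), ("3.0", 4)]
ten_gbe_mb = 10_000 / 8                              # 10GbE raw line rate, ~1250 MB/s

for gen, lanes in links:
    bw = per_lane_mb[gen] * lanes
    print(f"PCIe {gen} x{lanes}: ~{bw} MB/s per direction "
          f"(10GbE line rate is ~{ten_gbe_mb:.0f} MB/s before TCP/SMB overhead)")
```

If the card also drops back to PCIe 1.0 in the 5950X's x4 slot (the PS in a later post found these cards negotiating PCIe 1.0 rather than 2.0), a 1.0 x4 link at roughly 1000 MB/s theoretical would line up nicely with the ~850 MB/s ceiling.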

Final thoughts: When the hardware can’t keep up with the network speed, it isn’t necessarily a hardware fault, or the operating system’s fault, or the “other guy’s” fault. It can be a combination of SSD TRIM activity, HD access and read/write speed, cache usage and methods, the overall storage paradigm, all of the above, and more.

My total setup would run without any glitches at 1Gbps and probably fine up to 5Gbps. It is only when I push it hard at 10Gbps that a little “Gotchya!” comes into play. The “Gotchya!” is complex in a large 240GB transfer of many different file types and sizes, which is why the intermittent full-speed slowdowns seem so unpredictable. BUT, I have proven at least two significant and repeatable setups where Mr. Murphy is paying close attention! That is enough for me to support the hypothesis.

Still can’t wait for Rockstor 4 to be released as an ISO file!

:sunglasses:

PS: Notable slowdowns also occur when directories and sub-directories are being created and when numerous small and tiny files are transferred, but at least we know the SSDs themselves have something to do with it as well. It’s complex, but now understandable to me.


Set up 3 SSDs (fast ones) in RAID-0 and tested at 850 MB/s with great results! The combined write speed of the 3-SSD setup is about 1.2 GB/s.

Then I switched to the i7-5930K setup (Asus TUF Sabertooth X99) and got the full 1 GB/s speed, with the same slowdowns as before with the 2-SSD setup.

Sooo, in theory, the SSD/HD setup needs to be somewhat faster than the input data rate. My guess is 35% to 45% faster to allow for system overhead.
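
Putting numbers on that guess with the rough figures from the two runs above (the 3-SSD array writes at about 1.2 GB/s combined):

```python
# Headroom of the 3-SSD RAID-0 (approx. 1200 MB/s combined write) over the two
# incoming rates tested above; the outcome labels come from the posts, not the math.
array_write_mb = 1200

for incoming_mb, outcome in [(850, "great results"), (1000, "same intermittent slowdowns")]:
    headroom = (array_write_mb / incoming_mb - 1) * 100
    print(f"incoming {incoming_mb} MB/s: {headroom:+.0f}% headroom -> {outcome}")
```

The clean run had roughly 40% headroom and the marginal one only about 20%, which is where the 35% to 45% guess comes from.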

I’ve ordered a PCIe-to-M.2 NVMe card to see if that will support full-speed transfers in my last x4 slot on the Rockstor motherboard. It’s either that or a 4-SSD RAID setup will be required.

We shall see.

9-)

PS: I also tested each SSD individually as a single disk and saw nothing unexpected, so that eliminated any SSD-specific glitchy behavior.


With the upgrade to 4.09 and the 5950X LAN card upgrade, 10Gbps speeds back and forth are pretty much attained across the board, with only slight slowdown points here and there after the cache fills up ahead of the SSDs. The problems with random directories being set to System/Hidden/Read-Only were traced back to various Windows upgrades and have been solved.

Direct writes from the LAN to the HDs in the NAS setup, although much improved, are still limited to roughly 140 MB/s after cache saturation. This is normal and expected.
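
A rough model of why that is expected, using only the approximate figures from these posts (32GB of cache, ~1 GB/s network rate, ~140 MB/s sustained HD rate); it's a simplification, but it shows how quickly a big transfer settles at the HD speed.

```python
# Two-phase model of a large LAN-to-HD transfer: network speed until the
# cache fills, then the sustained HD rate for the remainder.
transfer_gb = 240      # size of the big test transfer (GB)
cache_gb    = 32       # RAM absorbing writes before saturation (GB)
net_mb_s    = 1000     # network-limited rate before saturation (MB/s)
hd_mb_s     = 140      # sustained HD write rate after saturation (MB/s)

fill_s   = cache_gb * 1000 / (net_mb_s - hd_mb_s)   # the HDs drain the cache while it fills
fast_gb  = net_mb_s * fill_s / 1000                 # data moved during the fast phase
slow_s   = (transfer_gb - fast_gb) * 1000 / hd_mb_s
avg_mb_s = transfer_gb * 1000 / (fill_s + slow_s)
print(f"~{fill_s:.0f} s at full speed ({fast_gb:.0f} GB), then ~{slow_s / 60:.0f} min at "
      f"{hd_mb_s} MB/s -> the whole {transfer_gb} GB averages ~{avg_mb_s:.0f} MB/s")
```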

After much contemplation and reassessment of needs, I think my final configuration will be six 4TB drives in a RAID-10 setup and 4 or 6 SSDs also in a RAID-10 setup. This will pretty much completely stuff the chassis.

I’ll send pics when it is all done. It won’t be pretty inside, but it’s replacing 3 backup setups with one box.

Happy it’s close to being done!

:sunglasses:


Weighing how much data I have against the need for speed tells me six 4TB HDs in a RAID-10 configuration is the best method. I have just over 9TB of data to save in total, so 12TB of usable space will do the job for now.
When copying from the 5950X setup to the NAS, I get very respectable speeds and it can easily saturate the 10Gbps LAN without a hiccup! In fact, for the first time since I started this build, I am totally happy with the way it is working. The 4.09 update SOLVED so many problems for me that now I am super happy!
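
The capacity arithmetic behind that choice (ignoring filesystem overhead and the TB-vs-TiB difference):

```python
# Usable space of a 6 x 4TB RAID-10 vs. the data it needs to hold.
drives, size_tb, data_tb = 6, 4, 9

usable_tb = drives * size_tb / 2        # RAID-10 mirrors everything once
free_tb   = usable_tb - data_tb
print(f"usable: {usable_tb:.0f} TB, data: {data_tb} TB, "
      f"headroom: {free_tb:.0f} TB ({free_tb / usable_tb:.0%} free)")
```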

This started as a single transfer from the 5950X's 3-drive RAID-0 setup to the NAS, then I added a simultaneous transfer from one of the NVMe drives. It works perfectly, with no errors or hiccups after a lot of testing.

The NVMe of course is faster, but the 3-HD RAID-0 setup gets back up to speed quickly.

[screenshot: Xfer-1]

All in all, it's the best compromise for now, and I reiterate that none of this worked well until I updated to 4.09!

:sunglasses:

PS: Turns out the Intel 10Gbps NIC cards are spec'd for PCIe 2.0 x8 but only run at PCIe 1.0 x8!
The new NIC I have in the 5950X setup is PCIe 3.0 x4 and actually runs that way, so full speed was attained!
