[SOLVED] Network Port Bonding

I’m trying to set up port bonding, and I’m not sure whether I have it working or not.

In the Rockstor web interface, I set up a new bond using 802.3ad:

I enabled port bonding on my Unifi Switch:

And if I am reading this command-line output correctly, I think it says my bond is active:

[root@rocknas ~]# cat /proc/net/bonding/bond3
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:13:3b:0f:33:dc
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 3
Actor Key: 9
Partner Key: 66
Partner Mac Address: f0:9f:c2:18:28:f6

Slave Interface: enp7s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:13:3b:0f:33:dc
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:13:3b:0f:33:dc
port key: 9
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: f0:9f:c2:18:28:f6
oper key: 66
port priority: 128
port number: 4
port state: 61

Slave Interface: enp6s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:13:3b:0f:33:db
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:13:3b:0f:33:dc
port key: 9
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: f0:9f:c2:18:28:f6
oper key: 66
port priority: 128
port number: 2
port state: 61

Slave Interface: enp9s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e0:3f:49:19:e5:2d
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 00:13:3b:0f:33:dc
port key: 9
port priority: 255
port number: 3
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: f0:9f:c2:18:28:f6
oper key: 66
port priority: 128
port number: 3
port state: 61

The issue is that if I open up two iperf3 connections (on different ports) and hit the server from two different computers on my network at the same time, the total throughput is only 1 Gbps. I was expecting each machine to get 1 Gbps, for a total throughput of 2 Gbps.
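
From what I’ve read, 802.3ad never splits a single flow across links; traffic is balanced per flow by the transmit hash policy, which the output above shows as layer2 (0), i.e. source/destination MAC. So if both clients happen to hash onto the same slave, I guess I’d see exactly this. I believe the hash policy could be changed to layer3+4 via the bond options, something like the following (untested sketch; the connection name is assumed to be bond3, and this may conflict with what the Rockstor UI manages):

nmcli connection modify bond3 bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
nmcli connection up bond3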

Is there a log or anything I can check to see if things are indeed up and running? Is there a way to see traffic flowing over the various links?
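
I’m guessing the per-slave transmit counters would at least show whether traffic is leaving over more than one link, something like (assuming the standard iproute2 tools):

ip -s link show enp7s0
ip -s link show enp6s0
ip -s link show enp9s0

but I’m not sure that’s the best way to watch it in real time.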

@kupan787 Hello again.
So I’m not that familiar with bonding/teaming, but I believe the team driver is the newer incarnation within the kernel, so it might be worth trying that, as it also supports LACP (802.3ad) on the switch hardware side.
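
For reference, the team equivalent of an LACP bond is the “lacp” runner. I set mine up via the Rockstor UI, but if you were doing it by hand, the nmcli form would be roughly as follows (a sketch only; interface names assumed to match yours):

nmcli con add type team con-name team0 ifname team0 team.config '{"runner": {"name": "lacp", "active": true}}'
nmcli con add type team-slave con-name team0-enp6s0 ifname enp6s0 master team0
nmcli con add type team-slave con-name team0-enp7s0 ifname enp7s0 master team0
nmcli con up team0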

Rockstor Team0 config:

Only 2 NICs, but still:

And the resulting overview (I had to manually select System - Network again, as the above page seems to time out after submitting; a bug I’ll enter soon):

And I went with passive mode on my switch (in this case a TP-Link SG2008), assuming the newer tech in the Rockstor kernel would be better at doing the active bit. I also enabled STP (Spanning Tree Protocol), as advised by the switch’s help text.

The above entry didn’t show up until I’d configured Rockstor as above, as I had enabled the LACP ports on the switch first via its “LACP Config” tab.
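
To double-check which end actually ends up doing the active negotiating, teamdctl can dump the runner state on the Rockstor side (assuming the team interface is named team0, as above):

teamdctl team0 state

The runner section of that output shows whether the lacp runner is in active mode, along with the per-port LACP state.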

Then, on the Rockstor machine (dell4.lan), I attempted to duplicate your test method and ran:

iperf -s

and on each of a desktop (1.09-1.10 GBytes transferred when tested alone) and a laptop (985-1005 MBytes when tested alone) I executed the following command at about the same time:

iperf -i 1 -c dell4.lan

and got roughly the usual results on each (1.09 GBytes and 964 MBytes transferred), with the corresponding info presented by the server instance of iperf running on Rockstor:

[  4] local 192.168.1.144 port 5001 connected with 192.168.1.110 port 51114
[  5] local 192.168.1.144 port 5001 connected with 192.168.1.138 port 38485
[  4]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
[  5]  0.0-10.0 sec   964 MBytes   808 Mbits/sec

For ease of ‘cut and paste’ I had to do some terminal work over the LAN simultaneously, but that looks to be 2 simultaneous 1 Gbit connections (or near enough, given the slightly slapdash test and the fairly large variance I get from that laptop).

I’d try using the team driver’s version of LACP, as it seems to be working here.

Hope that helps.


Thanks for the info!

I switched over to the Teaming driver, but was still seeing the same results. Then I noticed you had tested with iperf and I was using iperf3. Apparently, that made the difference! Once I tested with iperf, I saw the same results as you (confirmed with iftop as well).
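
(The only difference I’m aware of is that a single iperf3 server instance runs one test at a time, whereas the old iperf accepts several clients at once on the same port, so a fair iperf3 version of the test needs one server per client, e.g.:

iperf3 -s -p 5201
iperf3 -s -p 5202

I was already using separate ports, though, so I’m not sure that’s the whole story.)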

I did another test doing SMB transfers on the two machines, and saw the same results.

So, I think the bottom line was my testing method. I am surprised that iperf3 vs iperf would be the issue. I am not a networking guru, so I won’t even try to speculate. I’m just glad to see that it is all up and running.

@kupan787 Thanks for the update and glad you got it sorted. Funny about the iperf3 thing.

No, me neither.