Recent install experiencing network speed issues

Hi Community,

Apologies for spamming the forum; this has been a learning experience for me.

Having resolved all previous issues (I did overwrite root when installing ZoneMinder on my last attempt), I now have Rockstor running in a somewhat stable manner on my Ryzen NAS. However, networking in general appears to be running at a snail's pace.

I have a fairly standard network configuration: no jumbo frames, a TP-Link wireless AC router behind a cable modem. This box and my old NAS are both cabled to the router through a Gigabit switch.

NZBGet on my old NAS downloads a given NZB at 3-3.5 Mbps, whereas the Rock-on on Rockstor downloads the same NZB at 900 Kbps-1.1 Mbps.

Also of concern is network sharing. From my old NAS I can stream a ~20 Mbps movie to my wireless laptop without stuttering, whereas the Rockstor box spends more time buffering than playing the same file.

I have tried uninstalling all Rock-ons and disabling the service (on the assumption this could be Docker-related), but this hasn't improved the SMB stream speed at all.

No matter what I tried, network throughput stayed low (sub-5 Mbps), CPU utilization never peaked a single thread above 10%, and memory usage remained fairly constant at around 14% used, 84% cached, 2% free, and fractions in buffers.

Curiously, after reinstalling all Rock-ons, rebooting mostly resolved the issue for a matter of hours (2.5-3 Mbps in NZBGet, streaming at about the same rate as my HP Gen8); however, it was back again within the day.

I really want to use Rockstor, because I have become a fan of BTRFS, and can find no other NAS distribution with decent BTRFS support built in - but if I can’t resolve these speed issues, it won’t prove very useful.

Any ideas?

@Haioken Hello again.

In that case, could you update the relevant forum thread with this finding? Otherwise there is a 'wake' of issues apparently attributed to Rockstor (as the usual default), whereas if a thread is updated, others can more easily be informed of the context, i.e. the possible effects if they make the same mistakes. It also helps us see where we might improve the user experience to avoid easily made mistakes, e.g. where the UI could guard against or warn about such actions. Thanks for the rich feedback, by the way.

It would be easier to track down the cause if you gave more info and eliminated extraneous variables such as WiFi, which can easily vary over time. There have been no other (unresolved) reports of general network speed issues, and a good way of testing the base performance, i.e. what the IP layer can do, is covered in the following thread, where @kupan787 and I share iperf results; that case concerns bonded interfaces, but the same tool might help with narrowing down this issue (i.e. ruling out network drivers etc.):

In those tests we both, in the end, reached the expected dual-gigabit saturation.

Another thread has details of network card performance differences which may also be of interest:

On a related side note, I don't think it is necessary (on this forum at least) to be alarmist in your post title. There are, as far as I know, no known network performance issues affecting Rockstor outside of torrent downloaders, as we are essentially CentOS with a much newer kernel. I initially suspected that your NZBGet issue was down to insufficient port access (as suspected with another torrent Rock-on), but from the looks of it that Rock-on uses --net=host:

so it should have whatever network access it desires. As for the SMB stream, it would help to know in what context the performance was measured, as then others with more knowledge of such things (than me) would be empowered to assist: i.e. which Rock-on or service was managing the streaming?

Also you may find the following thread of interest:

So it may be of use to also test plain samba performance to help to narrow things down a little more.
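To put a number on plain samba performance, one option is to time a large sequential read from the mounted share. The following is a minimal Python sketch, not anything from Rockstor itself; the demo reads a local temp file, so to test samba you would point `path` at a large file on a cifs mount of the NAS instead:

```python
import os
import tempfile
import time

def read_throughput_mbit(path, block=1024 * 1024):
    """Sequentially read `path` in 1 MiB blocks and return throughput in Mbit/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against a zero interval
    return total * 8 / elapsed / 1e6

# Demo against a local 32 MiB temp file; for a real test, substitute a
# file on the SMB mount (path is an assumption, adjust to your setup).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(32 * 1024 * 1024))
print(f"{read_throughput_mbit(tmp.name):.0f} Mbit/s")
os.unlink(tmp.name)
```

Note that OS page caching will inflate repeated reads of the same file, so the first pass over a freshly mounted share is the meaningful number.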

Hope that helps, and thanks again for the feedback. Let's hope the forum can get to the bottom of this one:

Understood.

I have updated the relevant thread and added 'RESOLVED' to the title, as there was no tag for resolved.

I will perform iperf3 testing on the Rockstor host and my other host tomorrow morning, from both a wired and a wireless client, and see if I can identify any issues. (I'm in Australia; it's getting late here.)
With any luck the issue is at a level low enough to be captured by iperf.

I apologize for the title, it was meant to draw attention, not reflect on Rockstor as a whole.
I have changed the title.

I may have worded this poorly. I had opened a video in a video player on my laptop; the file was on a share hosted by Rockstor's built-in SMB server, not a streaming service. The speed was poor compared with a Debian-based NAS playing the same video over the same network.


I’ve run some tests with iperf2 (I can’t run iperf3 on the Debian 7.x boxes I have scattered around), and they show vast differences between my Debian box and my Rockstor installation.

From my wireless client, the results of iperf against my Debian system on an HP Gen8 MicroServer (1610T/8GB RAM) are as follows:

C:\iperf>iperf.exe -c 192.168.0.6 -p 5001 -i 1
------------------------------------------------------------
Client connecting to 192.168.0.6, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.107 port 53581 connected with 192.168.0.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  20.0 MBytes   168 Mbits/sec
[  3]  1.0- 2.0 sec  21.8 MBytes   182 Mbits/sec
[  3]  2.0- 3.0 sec  22.6 MBytes   190 Mbits/sec
[  3]  3.0- 4.0 sec  23.0 MBytes   193 Mbits/sec
[  3]  4.0- 5.0 sec  22.5 MBytes   189 Mbits/sec
[  3]  5.0- 6.0 sec  22.6 MBytes   190 Mbits/sec
[  3]  6.0- 7.0 sec  22.9 MBytes   192 Mbits/sec
[  3]  7.0- 8.0 sec  23.5 MBytes   197 Mbits/sec
[  3]  8.0- 9.0 sec  23.5 MBytes   197 Mbits/sec
[  3]  9.0-10.0 sec  23.4 MBytes   196 Mbits/sec
[  3]  0.0-10.0 sec   226 MBytes   189 Mbits/sec

Comparatively, my test against the Ryzen 5 1600 on an MSI B350M Mortar Arctic is as follows:

C:\iperf>iperf.exe -c 192.168.0.7 -p 5001 -i 1
------------------------------------------------------------
Client connecting to 192.168.0.7, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.107 port 53543 connected with 192.168.0.7 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  1.38 MBytes  11.5 Mbits/sec
[  3]  1.0- 2.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  2.0- 3.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  3.0- 4.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  4.0- 5.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  5.0- 6.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  6.0- 7.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  7.0- 8.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  8.0- 9.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  9.0-10.0 sec  1.12 MBytes  9.44 Mbits/sec
[  3]  0.0-10.0 sec  11.5 MBytes  9.62 Mbits/sec

The difference as you can see is quite dramatic.
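For scale, the two summary lines can be compared programmatically. This is a quick sketch of my own for parsing iperf2's bandwidth column (the regex and unit table are assumptions based on the output format shown above, not anything shipped with iperf):

```python
import re

def parse_iperf_mbit(line):
    """Extract the bandwidth column (normalised to Mbit/s) from an iperf2 report line."""
    m = re.search(r"([\d.]+)\s*([KMG])bits/sec", line)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    return value * {"K": 1e-3, "M": 1.0, "G": 1e3}[unit]

# The two 0.0-10.0 sec summary lines from the tests above:
rockstor = parse_iperf_mbit("[  3]  0.0-10.0 sec  11.5 MBytes  9.62 Mbits/sec")
debian = parse_iperf_mbit("[  3]  0.0-10.0 sec   226 MBytes   189 Mbits/sec")
print(f"Rockstor is roughly {debian / rockstor:.0f}x slower")  # roughly 20x
```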

Booting the same Ryzen system from an Ubuntu LiveCD and running iperf on that reveals that the issue is likely not hardware related.

------------------------------------------------------------
Client connecting to 192.168.0.7, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.107 port 54211 connected with 192.168.0.7 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  23.0 MBytes   193 Mbits/sec
[  3]  1.0- 2.0 sec  23.4 MBytes   196 Mbits/sec
[  3]  2.0- 3.0 sec  23.2 MBytes   195 Mbits/sec
[  3]  3.0- 4.0 sec  23.5 MBytes   197 Mbits/sec
[  3]  4.0- 5.0 sec  22.9 MBytes   192 Mbits/sec
[  3]  5.0- 6.0 sec  24.4 MBytes   204 Mbits/sec
[  3]  6.0- 7.0 sec  23.5 MBytes   197 Mbits/sec
[  3]  7.0- 8.0 sec  24.2 MBytes   203 Mbits/sec
[  3]  8.0- 9.0 sec  24.1 MBytes   202 Mbits/sec
[  3]  9.0-10.0 sec  24.1 MBytes   202 Mbits/sec
[  3]  0.0-10.0 sec   236 MBytes   198 Mbits/sec

Comparing the hardware info reported by the Ryzen box under Ubuntu and under Rockstor shows the same network driver in use, though an earlier kernel on Ubuntu.

Rockstor:

Rockstor Kernel
4.12.4-1.el7.elrepo.x86_64

Rockstor DMESG, NET device
[    0.850113] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
[    0.853830] r8169 0000:1e:00.0 eth0: RTL8168h/8111h at 0xffffc90000d31000, 30:9c:23:01:b7:70, XID 14100800 IRQ 48
[    0.853833] r8169 0000:1e:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]

Ubuntu:

Ubuntu Kernel
4.10.0-28-generic

Ubuntu DMESG, NET device
[    4.584539] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
[    4.597386] r8169 0000:1e:00.0 eth0: RTL8168h/8111h at 0xffffb0b000d29000, 30:9c:23:01:b7:70, XID 14100800 IRQ 240
[    4.597387] r8169 0000:1e:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
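As a cross-check that doesn't rely on trawling dmesg, the bound driver for each interface can be read straight out of sysfs. A small sketch (Linux-only; interface names will differ per system):

```python
import os

def nic_drivers(sys_net="/sys/class/net"):
    """Map each network interface to its bound kernel driver (None for virtual devices)."""
    drivers = {}
    for iface in os.listdir(sys_net):
        link = os.path.join(sys_net, iface, "device", "driver")
        # The driver entry is a symlink into /sys/bus/...; its basename is the module name.
        drivers[iface] = (os.path.basename(os.readlink(link))
                          if os.path.islink(link) else None)
    return drivers

# On the Ryzen box this would be expected to show something like
# {'eth0': 'r8169', 'lo': None}, matching the dmesg lines above.
print(nic_drivers())
```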

Full lspci of the system:

Ubuntu lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1450
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Device 1451
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1454
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1454
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1460
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1461
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1462
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1463
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1464
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1465
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1466
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1467
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device a804
03:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43bb (rev 02)
03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43b7 (rev 02)
03:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b2 (rev 02)
04:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
04:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
04:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
05:00.0 VGA compatible controller: NVIDIA Corporation Device 128b (rev a1)
05:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
1e:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
20:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
21:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
21:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 1456
21:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 145c
22:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
22:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
22:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Device 1457

Are there any other details you can suggest I provide?

This appears to have resolved itself with no intervention; I’m now seeing comparable speeds across both systems.

I haven’t been able to pin anything solid on the issue, but the speed problem has not recurred for almost 24 hours now, despite pouring a lot of data down the line (migration from the OMV 2.x HP Gen8).

I will continue to monitor for any further troubles, but it seems (embarrassingly) that this may have just been a transient network fault. :confused:
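For that ongoing monitoring, interface throughput can be sampled from the kernel's own counters rather than waiting for a download to feel slow. A minimal sketch reading `/proc/net/dev` (Linux-only; the interface name is an assumption, substitute the NAS's real one):

```python
import time

def rx_tx_bytes(iface):
    """Return cumulative (rx_bytes, tx_bytes) for `iface` from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, rest = line.partition(":")
            if sep and name.strip() == iface:
                fields = rest.split()
                # Field 0 is RX bytes; field 8 is TX bytes (8 RX columns precede TX).
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {iface!r} not found")

# Sample over a one-second window; 'lo' is just for the demo.
r0, t0 = rx_tx_bytes("lo")
time.sleep(1)
r1, t1 = rx_tx_bytes("lo")
print(f"rx {8 * (r1 - r0) / 1e6:.2f} Mbit/s, tx {8 * (t1 - t0) / 1e6:.2f} Mbit/s")
```

Logging these numbers every few minutes would show exactly when (and whether) the slowdown returns.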

While a little cautious, I’m still sticking with Rockstor.

Not sure if I should start a feature request thread for the following…

As share thresholds aren’t currently functional due to BTRFS limitations, is there a chance we could have the option to toggle them off?
I’m not personally interested in thresholds; when space starts becoming an issue, they just mean that two different directories each end up with too little space to fit one large object.

I also like the idea of being able to create shares within shares, mainly for filesystem organization.
For example, I currently have shares set up for different Rock-on configs (cfgSonarr, cfgNZBGet); it would be nice to group these under one location in /mnt2, so I’d have

/mnt2
|- cfgRockons
  |- cfgSonarr
  |- cfgNZBGet
  |- cfgCouchPotato

I don’t know how the current codebase would handle this.

@Haioken Glad you’re at full speed on the network again, but yes, that was strange.

We do already have the following issue with regard to quotas, by forum member @maxhq:

Yes, I'm not sure I like that idea. But we do have a planned enhancement to our existing ‘role’-based system for disks, which I intend to extend to shares in time; that might serve the purpose of, say, grouping/displaying all shares that belong to an arbitrary group such as “rockon_config”. As it stands, all shares (subvols) are already mounted within their respective pools (as well as within /mnt2), but a change to mount one share within another doesn’t seem like a clean approach. However, this same request could be viewed as asking for share (subvol) creation within existing shares, rather than, as currently, only catering for subvols (shares) in pools. Btrfs can do subvols within subvols, but I think we should first continue our existing drive to get everything as it stands into its best shape before introducing this potentially confusing capability.

I’m personally in favour of maintaining an ‘ease of use’ that ultimate flexibility often compromises. That is, ultimate flexibility is had by doing everything ‘with your teeth’ via the command line, where there are no restrictions; ‘ease of use’ / UI access will often necessitate some form of compromise, but the upside is a democratisation of the main parts (and hopefully the majority) of a technology. The usability of Rockstor is a major feature as far as I am concerned, and one that can only be maintained by careful consideration of which of all possible elements are included. Everything and the kitchen sink is, I think, the road to ruin, especially on the quality and usability front.

Thanks for voicing your thoughts; everything, once voiced, can at least be considered.

Damn, missed that one; thanks, I’ll keep an eye out for this in future.

I think this is actually closer to what I’m looking for. That, or being able to arbitrarily set the subvol’s secondary mount location to something other than /mnt2.

That said, I can also see the advantage of not messing with an interface that is simple and easy to use.