Improving Read/Write Speeds

Recently switched to Rockstor and am migrating a lot of data. I installed Rockstor on my old PC and it works well, but it seems to have poor read/write speeds (11MB/sec), much lower than the same system managed with a mirrored Windows Storage Space. Any advice on how to improve this?

Here are my full system specs: pcpartpicker
I suspect the PCIe SATA cards I am using to expand the number of SATA ports on my mobo may not play well with Rockstor.
IO Crest 2 Port SATA III PCI-Express x1 Card (SY-PEX40039): https://www.amazon.com/IO-Crest-Port-PCI-Express-SY-PEX40039/dp/B005B0A6ZS

Thanks in advance for any advice!

@coleberhorst Welcome to the Rockstor community forum.

Your quoted max speed of 11MB/sec is suspiciously close to 100Mbit LAN speeds, so it might be worth checking whether the LAN card has properly switched to its 1Gbit mode. I'd suggest you first double check the effective network speed via something like iperf.

On the Rockstor machine execute the following to install the iperf tool:

yum install iperf

then run this newly installed tool in its server mode (via the '-s' switch):

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

This will then sit there waiting for a client machine to connect and perform tests against it (Ctrl + C to close the program).
You then need the same program installed on a client machine on the same network, i.e. in my case a Fedora 24 machine, and run it there in client mode (via the '-c' switch).
The '-i 2' means report every 2 seconds and is optional.

iperf -i 2 -c cube.lan
------------------------------------------------------------
Client connecting to cube.lan, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.110 port 39888 connected with 192.168.1.142 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec   224 MBytes   941 Mbits/sec
[  3]  2.0- 4.0 sec   224 MBytes   941 Mbits/sec
[  3]  4.0- 6.0 sec   224 MBytes   942 Mbits/sec
[  3]  6.0- 8.0 sec   225 MBytes   942 Mbits/sec
[  3]  8.0-10.0 sec   224 MBytes   942 Mbits/sec
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

At least then you can test the effective network speed to rule that element out.
If you paste your results into this thread as you go, it gives others more info to help diagnose where the bottleneck might be.

As for your SATA PCI-Express x1 cards, they appear to be based on the ASMedia ASM1061 SATA host controller chip. I have something similar here, akin to what is found in the Rockstor Shop as the mSATA adapter, although both of those are actually based on the ASM1062 chip rather than the ASM1061 that a quick look-up of your card suggested:
ie:

lspci | grep SATA
00:13.0 SATA controller: Intel Corporation Atom/Celeron/Pentium Processor x5-E8000/J3xxx/N3xxx Series SATA Controller (rev 21)
01:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
04:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)

And the throughput here to an attached mSATA card is:

hdparm -tT /dev/disk/by-id/ata-KINGSTON_SMS200S330G_50026B724C085CD1

/dev/disk/by-id/ata-KINGSTON_SMS200S330G_50026B724C085CD1:
 Timing cached reads:   2826 MB in  2.00 seconds = 1413.63 MB/sec
 Timing buffered disk reads: 528 MB in  3.00 seconds = 175.77 MB/sec

/dev/disk/by-id/ata-KINGSTON_SMS200S330G_50026B724C085CD1:
 Timing cached reads:   2848 MB in  2.00 seconds = 1424.35 MB/sec
 Timing buffered disk reads: 552 MB in  3.00 seconds = 183.95 MB/sec

Note that the above is also the system drive, so it may be in use during the test.

And testing both the N3700's built-in SATA controller (2 ports) and the additional controller built into the ASRock N3700-ITX board (2 ports):
Note: each drive is on one port of each dual-port controller.

hdparm -tT /dev/disk/by-id/ata-ST3000VN000-1HJ166_W6A0J98V

/dev/disk/by-id/ata-ST3000VN000-1HJ166_W6A0J98V:
 Timing cached reads:   2816 MB in  2.00 seconds = 1408.44 MB/sec
 Timing buffered disk reads: 482 MB in  3.00 seconds = 160.60 MB/sec

/dev/disk/by-id/ata-ST3000VN000-1HJ166_W6A0J98V:
 Timing cached reads:   2668 MB in  2.00 seconds = 1333.80 MB/sec
 Timing buffered disk reads: 484 MB in  3.01 seconds = 160.96 MB/sec

And the same model drive on the other controller:

hdparm -tT /dev/disk/by-id/ata-ST3000VN000-1HJ166_W7305P38

/dev/disk/by-id/ata-ST3000VN000-1HJ166_W7305P38:
 Timing cached reads:   2720 MB in  2.00 seconds = 1360.45 MB/sec
 Timing buffered disk reads: 534 MB in  3.00 seconds = 177.76 MB/sec

/dev/disk/by-id/ata-ST3000VN000-1HJ166_W7305P38:
 Timing cached reads:   2764 MB in  2.00 seconds = 1382.44 MB/sec
 Timing buffered disk reads: 544 MB in  3.01 seconds = 181.01 MB/sec

However, these two drives have also tested differently on the same controller, so there can be a difference from drive to drive anyway.

See what you get on your hardware with the same tests to try and narrow down what's going on.
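
If it helps, here is a minimal sketch for running the same hdparm test against every whole ATA disk the system can see, assuming you run it as root and that your drives appear under /dev/disk/by-id/ata-* in the usual by-id naming convention (adjust the pattern as needed):

# Run hdparm read timings against each whole ATA disk, skipping partition symlinks.
for disk in /dev/disk/by-id/ata-*; do
    case "$disk" in
        *-part*) continue ;;  # partition entries end in -partN, test whole disks only
    esac
    echo "=== $disk ==="
    hdparm -tT "$disk"
done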

Hope that helps.


Thank you for the information!

The iperf test revealed a few things: one of my switches was bottlenecking the transfer. Thanks for that! I removed it and connected directly to the main router, and the speed is now up to 100MB/sec over the network.
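
As a further check of raw write speed on the NAS itself, independent of the network, a direct dd write into a share's mount point should do. A rough sketch, assuming a share mounted at /mnt2/yourshare (substitute the real share path) and around 1GB of free space for a temporary file:

# Write 1GB with direct IO so the page cache doesn't inflate the number.
# Note: if compression is enabled on the share, zeros will compress and skew the result.
dd if=/dev/zero of=/mnt2/yourshare/ddtest.bin bs=1M count=1024 oflag=direct
rm /mnt2/yourshare/ddtest.bin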

As for the disks, thanks for confirming my ASM chip can in theory work with Rockstor; it helped me focus on the network problem and assuaged some of my fears.

Here are the internal disk tests:

6TB
/dev/disk/by-id/ata-WDC_WD60EZRZ-00GZ5B1_WD-WX21DA6JXURL:
Timing cached reads: 17928 MB in 2.00 seconds = 8975.75 MB/sec
Timing buffered disk reads: 62 MB in 3.27 seconds = 18.98 MB/sec

4TB
/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1722950:
Timing cached reads: 21812 MB in 2.00 seconds = 10923.21 MB/sec
Timing buffered disk reads: 254 MB in 3.10 seconds = 81.88 MB/sec

2TB
/dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WCAZA6928458:
Timing cached reads: 20964 MB in 2.00 seconds = 10496.95 MB/sec
Timing buffered disk reads: 50 MB in 3.00 seconds = 16.65 MB/sec
/dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WCAZA6928458:
Timing cached reads: 20416 MB in 2.00 seconds = 10222.04 MB/sec
Timing buffered disk reads: 196 MB in 3.01 seconds = 65.11 MB/sec

2TB
/dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WCAZA6923571:
Timing cached reads: 20116 MB in 2.00 seconds = 10072.69 MB/sec
Timing buffered disk reads: 316 MB in 3.01 seconds = 104.98 MB/sec

2TB
/dev/disk/by-id/ata-ST2000DM001-9YN164_Z1E0Y6HA:
Timing cached reads: 21444 MB in 2.00 seconds = 10737.14 MB/sec
Timing buffered disk reads: 370 MB in 3.00 seconds = 123.20 MB/sec

Just my 2 pence 🙂

If you set up your pool (FS) under btrfs as raid10 you will get a R/W speed-up which is unnoticeable over the network (a single spinning-rust drive will max out a 1Gb connection), BUT with multiple disks in raid10 you will get much better seek performance. This is due to the fact that btrfs can (and will) buffer and schedule reads and writes in such a way that your random IO throughput goes up by roughly N/2 (N = number of disks in the pool).
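
For reference, converting an existing raid1 pool to raid10 can be done online with a btrfs balance; a minimal sketch, assuming the pool is mounted at /mnt2/yourpool (substitute your own pool name) and has at least the four disks raid10 requires:

# Convert both data and metadata chunk profiles to raid10 on the mounted pool.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt2/yourpool

# Check progress and confirm the resulting profiles.
btrfs balance status /mnt2/yourpool
btrfs filesystem df /mnt2/yourpool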

Sure, any advice is appreciated!

I'm currently in raid1, but could switch pretty easily to raid10. I'll try it.