Testing disk performance

Sorry if this has been discussed before. It looks like I need to install the iperf package to make sure my network is not the bottleneck.

I have a RAID 10 array with 6 hard drives. How do I confirm the disk I/O and throughput of the local disks on the NAS?
Once I confirm maximum performance there, I then want to run iperf to test network performance.

Also, I recently switched from a bootable SSD to a bootable SanDisk USB drive. Would this affect performance a great deal when transferring large amounts of data, say 4 TB, to my RAID 10?

At the moment I am getting an average of 27 MB/s via TeraCopy in Windows to my Samba share. This seems low. I should be getting at least 80 MB/s over a gigabit switch, correct?

Thanks
-Mike

Here are my iperf results from my Windows 7 computer to my Rockstor NAS via a gigabit switch, using a single NIC on each side. This is what I expect to see when writing data from my Windows 7 machine to Rockstor.
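
(For anyone wanting to reproduce this: output in this shape comes from iperf 2 with the server on the NAS and the client on Windows. The exact flags aren't in the paste, so the -f M / -i 1 below are a reconstruction from the MBytes/sec units and one-second intervals shown:)

iperf -s                           # on the Rockstor NAS (192.168.2.109)
iperf -c 192.168.2.109 -f M -i 1   # on the Windows 7 client; -f M reports MBytes/sec, -i 1 prints one-second intervals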

However, right now when I use TeraCopy to copy data, I seem to be getting a much lower write speed, about 25 MB/s on average.

Any recommendations on tweaking performance?


Client connecting to 192.168.2.109, TCP port 5001
TCP window size: 0.20 MByte (default)

[ 3] local 192.168.2.100 port 56599 connected with 192.168.2.109 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 82.0 MBytes 82.0 MBytes/sec
[ 3] 1.0- 2.0 sec 83.9 MBytes 83.9 MBytes/sec
[ 3] 2.0- 3.0 sec 83.1 MBytes 83.1 MBytes/sec
[ 3] 3.0- 4.0 sec 83.8 MBytes 83.8 MBytes/sec
[ 3] 4.0- 5.0 sec 81.8 MBytes 81.8 MBytes/sec
[ 3] 5.0- 6.0 sec 85.9 MBytes 85.9 MBytes/sec
[ 3] 6.0- 7.0 sec 81.5 MBytes 81.5 MBytes/sec
[ 3] 7.0- 8.0 sec 84.0 MBytes 84.0 MBytes/sec
[ 3] 8.0- 9.0 sec 79.2 MBytes 79.2 MBytes/sec
[ 3] 9.0-10.0 sec 82.9 MBytes 82.9 MBytes/sec
[ 3] 0.0-10.0 sec 828 MBytes 82.8 MBytes/sec

You should get ~112 MB/s over 1 GbE.
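(Quick math: 1 Gbit/s ÷ 8 bits per byte = 125 MB/s raw; knock off roughly 10% for Ethernet, IP and TCP framing overhead and you land at about 112 MB/s of usable payload.)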
For testing the overall performance of storage, I'd encourage the use of the console tool dd, part of the basic Unix / Linux admin tool belt and available for ~30 years … thanks to its simplicity it eliminates the need for the myriad of other tools that are so common on Windows (disk imaging software being the best example).

In terms of testing write performance:
dd if=/dev/zero of=/location_of_your_filesystem_mount_point/temp.bin bs=10M
then interrupt it after a few minutes with CTRL + C; this is because if not interrupted it will write forever (/dev/zero does not have a size, it's infinite).
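
(A bounded variant, if you'd rather not babysit it — count= caps the run at 1000 × 10 MB ≈ 10 GB, and conv=fdatasync forces the data to disk before dd prints its final rate, so the page cache doesn't inflate the number; the path is still a placeholder:)

dd if=/dev/zero of=/location_of_your_filesystem_mount_point/temp.bin bs=10M count=1000 conv=fdatasync
rm /location_of_your_filesystem_mount_point/temp.bin   # clean up the test file afterwards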

To test read performance:
dd if=/location_of_your_filesystem_mount_point/some_large_file_over_few_gigabytes of=/dev/null bs=10M
Let it run to the end. /dev/null is essentially “nothing”, so this command will read any file / device you give it and write the output to “nothing”.
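
(One caveat: if the file you read was written recently, Linux may serve it straight from RAM and show unrealistic speeds. To be safe, drop the page cache first, as root:)

sync; echo 3 > /proc/sys/vm/drop_caches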

Additionally, always use bs=10M when testing block devices. The bs parameter overrides dd's default block size, which is only 512 bytes; left at the default, dd reads one tiny block and writes one tiny block, chunk by chunk, and the per-call overhead may skew your test results. bs=10M is an arbitrary value that is good for home and small office use; large enterprise or research arrays capable of pumping hundreds or thousands of gigabytes per second will require an adequately larger bs parameter (even in gigabytes).
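
(You can see the per-block overhead without touching a disk at all — the same ~1 GiB copied from /dev/zero to /dev/null, first in 512-byte blocks, then in 10 MB blocks; the first run is dramatically slower purely from the syscall overhead:)

dd if=/dev/zero of=/dev/null bs=512 count=2097152   # 1 GiB in 512-byte blocks
dd if=/dev/zero of=/dev/null bs=10M count=100       # ~1 GiB in 10 MB blocks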