Scrub speed very VERY slow

Hi,

I’ve seen some very very old comments on scrub speed…

I have a new Rockstor install on 5.0.7 (so I can’t access my shares over Samba until that is fixed - but I see that is being worked on, so I eagerly await the next update).

I have a fairly powerful PC for a NAS (it’s a repurposed old PC, 14 cores, 32GB RAM).

Eight 18TB disks in raid6-1c4 (I’ve installed the kernel backports).

Scrub is currently reporting 28MB/s… which means it will finish in 25 days’ time. That just seems insanely slow.

None of the CPU cores is over about 5%, RAM usage is under 4%, and the disks are being read at about 19MB/s. So where is the bottleneck?
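In case it helps anyone suggest something, this is roughly how I’m watching it - a minimal sketch, and the mount point below is just a placeholder for wherever the pool is mounted:

```sh
# Per-device scrub progress and rate
btrfs scrub status -d /mnt2/main_pool

# Confirm which data/metadata profiles are actually in use (raid6 / raid1c4)
btrfs filesystem df /mnt2/main_pool
```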

Wow! That is a lot of storage space!

Just for reference in case it helps, my Rockstor backup setup uses a 4-core Intel CPU with 16GB of ECC DRAM on a Supermicro board, with 5 Seagate IronWolf 4TB drives in Raid-0. Total usable space is about 18TB. The OS is on a fast 256GB SSD.

The last one took about 7 hours for 16.64TB of data. Transfer speeds were about 95-140MB per second per HD according to the Dashboard Disk Activity widget. I typically see 290 to 480 MB/s transfer speeds over a 10Gb/s fiber optic link to my 5950X Win11 box.

I don’t know exactly how your raid setup works, but for my 2 setups, the faster read speeds really help. My other Rockstor setup uses slightly slower WD drives across the board, and the difference in scrub times shows. Also, all my drives are running in AHCI mode in the motherboard BIOS with NO spin-down (if that has anything to do with anything… LOL).

:sunglasses:

PS: I’ll run a scrub on my main Rockstor NAS that uses raid10 on 8 4TB drives, see how it goes, and post here.


Thanks, I think it’s RAID 5/6 that is very slow to scrub, but I don’t know where the bottleneck is.

And I have SERIOUSLY reduced my storage: only this weekend I sold two 20TB drives! (This is still just the archive - my main storage is a pair of 20TB drives in raid 1.)

Being a “Hardware” guy, I’ll note that a simple calculation shows 20TB of data to scrub on a single drive with an average read speed of 125MB/s would take about 44 hours. On a Raid1 setup, there must be some additional overhead as well. I presume you have a fast SSD to hold the OS, but if you don’t, one would help things for sure.
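If you want to sanity-check that arithmetic yourself, a quick one-liner does it (the 20TB and 125MB/s are just the example figures above):

```sh
# 20 TB read at ~125 MB/s, converted to hours
echo "scale=1; (20 * 10^12) / (125 * 10^6) / 3600" | bc
# -> 44.4
```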

If you could hook up to a Windows system and download something VERY LARGE over a few minutes while monitoring the transfer speed, that could give you a close estimate for calculating your expected scrub times.
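If hooking up a Windows box is a hassle, a purely local check of raw sequential read speed works too - just a sketch, assuming hdparm is installed and with your own device names substituted:

```sh
# Rough per-drive sequential read benchmark (run as root; device names are examples)
hdparm -t /dev/sda /dev/sdb
```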

Meanwhile, I started a 15TBish scrub on my main NAS Raid10 setup and we will see how that goes…

:sunglasses:

Lol, yeah the OS is all on SSD, but it’s not my raid 1 giving me issues, it’s my RAID 6, which is still happily taking 25 days to do a scrub… If I scheduled it monthly, it would basically be doing a never-ending scrub.

You mentioned you’ve installed the kernel backports (I overlooked that initially), so that might be as good as it gets for now…
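One quick thing worth confirming is that the backports kernel and tools really are the ones running - just a generic sanity check, nothing Rockstor-specific:

```sh
uname -r          # running kernel version
btrfs --version   # btrfs-progs version
```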

RockNAS: 28TB scrub of 8 4TB disks in Raid10 completed in about 9.5 Hrs.

I have no idea what a Raid 5/6 would look like, though I am willing to try when 5.0.x.x gets the Samba glitch corrected…

:sunglasses:

Yeah, I figured it would be… but I was hoping someone would know why. It’s so strange that every part of the system seems to be running at less than 10%, but it just won’t go faster.

I love speed and try to plan for the highest possible performance… :smile:
All my drives use CMR technology because of the poor performance of SMR in multiple HUGE file writes…

A quote from Buffalo seems to explain it well:

SMR drives’ higher capacity will also affect performance. When data is written to an SMR drive, the write head will write data onto an empty area on the drive instead of overwriting an existing track. Then, when the drive is not in use, it will enter into a “reorganization mode” where old data on the original track is deleted to make space for future use. Because this reorganization is the only way to clear old data, idle time is essential to SMR drives. Constant drive access will give the drive no time to reorganize the magnetic tracks, leading to very poor drive performance.

I re-read your post where you say disk throughput is only about 19MB/s per drive. That sounds to me like super slow write performance, which may be what you are experiencing…
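One way to see whether those 19MB/s are actually reads or writes is to watch the per-disk numbers directly - a rough sketch, assuming the sysstat package is installed:

```sh
# Extended per-device stats in MB/s, refreshed every 5 seconds
iostat -xm 5
```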

Just a thought…

:sunglasses:


None of my drives are shingled - I wouldn’t use them in a NAS.

It’s a scrub - it’s ONLY reading; there is no writing.


Well, I was scraping the bottom of the old knowledge barrel there… Obviously that means it has to be the flux capacitor!

Well, on that note, I’m happy to send you some Rockstor badges for free if you PM me your name/addy!

:sunglasses:


With the beauty of BTRFS… I have rebalanced from RAID 6 to RAID 10…

My scrub speed has gone from 28MB/s to 1.40GB/s

Which is just insane
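For anyone curious, the conversion itself boils down to a balance with convert filters - this is just a sketch with a placeholder mount point, and I’m assuming metadata stays on raid1c4 (the Rockstor UI drives the equivalent change for you):

```sh
# Convert data to raid10, keep metadata on raid1c4 (example mount point)
btrfs balance start -dconvert=raid10 -mconvert=raid1c4 /mnt2/main_pool

# Check progress of the running balance
btrfs balance status /mnt2/main_pool
```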


YEA!!! :smile: :grin:

:sunglasses:

Okay, just for fun, and since Samba is working again with the 5.0.8 update, I’m re-RAIDing my backup setup to raid6-1c4 to see what happens… should be done in like 12 hours or so (I hope, I hope, I hope).

Then I will do a scrub for grins and see what happens…

:sunglasses:


WOOPSY!

Clobbered the whole setup in an unrecoverable way… I think because I didn’t do the backports thing first…

LOL!

It tried to do it, got 4% done, went to Paused, and was dead after that…

Soooo, guess raid5/6 ain’t for me!

Started from scratch, having to unplug all the drives first…

LOL!

:rofl:
