Poor SMB write speeds

Hello,
I have installed Rockstor with two 4 TB Seagate IronWolf drives.
The array geometry is fairly unorthodox but makes sense for my use case.
Write speeds over SMB top out at around 30 Mbps.
There are no Rock-Ons installed and no other network clients besides this write.
I have tested the network with iperf3 and it is OK-ish (5-6 Gbps).
Any advice is more than welcome as I need to move over 6TB of data :slight_smile:

Thank you
A./

@lexxa Hello there.

Could your machine be CPU bound? Btrfs, as you may know, does a ton of checksumming, and it may be that you have not allocated enough CPU for this VM based install. Or it may be that your use case requires quotas to be disabled (currently only available / supported in stable channel updates), as they can also have a fairly large performance impact, though that is usually when there are many snapshots.
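For what it's worth, quotas can also be checked and toggled from the command line; here is a minimal sketch in Python wrapping the stock btrfs tools. The `/mnt2/main_pool` mount point is a hypothetical example, and the qgroup check is only a heuristic, so treat this as illustration rather than Rockstor's own approach:

```python
import subprocess

POOL_MNT = "/mnt2/main_pool"  # hypothetical mount point; adjust to your pool


def quotas_enabled(mnt: str) -> bool:
    """Heuristic check: 'btrfs qgroup show' only succeeds when quotas are enabled."""
    result = subprocess.run(
        ["btrfs", "qgroup", "show", mnt],
        capture_output=True, text=True,
    )
    return result.returncode == 0


def disable_quotas(mnt: str) -> None:
    """Turn off qgroup accounting to remove its write-time overhead."""
    subprocess.run(["btrfs", "quota", "disable", mnt], check=True)


if __name__ == "__main__":
    if quotas_enabled(POOL_MNT):
        disable_quotas(POOL_MNT)
        print(f"Quotas disabled on {POOL_MNT}")
    else:
        print(f"Quotas already appear to be disabled on {POOL_MNT}")
```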

It might also be useful to look into disk read/write performance within the Rockstor instance itself; you may have a bottleneck there somehow.
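To rule out a local disk bottleneck, a crude sequential-write test directly on the pool can help. A minimal sketch, assuming a hypothetical path on the mounted pool and that roughly 2 GiB of free space is acceptable for the test file:

```python
import os
import time

# Hypothetical test file; pick any path on the mounted btrfs pool.
TEST_PATH = "/mnt2/main_pool/throughput_test.bin"
BLOCK = os.urandom(4 * 1024 * 1024)   # 4 MiB of random data, so compression can't flatter the result
BLOCKS = 512                          # ~2 GiB total

start = time.monotonic()
with open(TEST_PATH, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())              # force the data out of the page cache onto the disks
elapsed = time.monotonic() - start

total_mib = len(BLOCK) * BLOCKS / (1024 * 1024)
print(f"Wrote {total_mib:.0f} MiB in {elapsed:.1f} s -> {total_mib / elapsed:.0f} MiB/s")
os.remove(TEST_PATH)
```

If the figure this reports is well above what you see over SMB, the disks themselves are probably not the limiting factor.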

Hope that helps.

Hi,
To set the facts straight, I am not running a VM PoC anymore. This is my production machine (Phenom II X6, some 8 GB RAM, so far using the onboard SATA II controller because I managed to brick my LSI SAS9211-8i :slight_smile: ).
After some tinkering, it seems the bottleneck is the source OS (Windows Server 2012) and the source filesystem (ReFS).
Once I SMB-mounted the Windows drives on a Linux machine on the same VM host, and mounted that Linux machine to Rockstor, I am writing at about 500 MB/s, which is satisfactory to say the least :slight_smile:
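In case it helps anyone following the same route, here is a minimal sketch of that relay setup from the Linux middle-man, using the stock mount.cifs and rsync tools driven from Python. All hostnames, share names, credentials and mount points below are purely hypothetical examples:

```python
import subprocess

# Hypothetical shares and mount points; adjust to your environment.
WIN_SHARE = "//winserver2012/data"
ROCKSTOR_SHARE = "//rockstor/media"
SRC_MNT = "/mnt/win_src"
DST_MNT = "/mnt/rockstor_dst"


def mount_cifs(share: str, mountpoint: str, options: str) -> None:
    """Mount an SMB/CIFS share using the stock mount.cifs helper."""
    subprocess.run(
        ["mount", "-t", "cifs", share, mountpoint, "-o", options],
        check=True,
    )


# Source mounted read-only so the migration cannot touch the originals.
mount_cifs(WIN_SHARE, SRC_MNT, "ro,username=migrator,password=secret")
mount_cifs(ROCKSTOR_SHARE, DST_MNT, "username=migrator,password=secret")

# rsync with -rt (recursive, preserve times) rather than -a, since CIFS
# mounts often cannot preserve ownership; it can also be re-run to resume.
subprocess.run(["rsync", "-rt", "--progress", f"{SRC_MNT}/", DST_MNT], check=True)
```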

Curious thing: the write speed is not at all affected by balancing or scrubbing the FS. I am really impressed with btrfs and Rockstor, and I am convinced I made the right choice to go with it instead of FreeNAS.

@lexxa Thanks for the update and glad you got it sorted.

Running balances can affect performance, but that is definitely more the case once there are more snapshots; however, upstream btrfs has made some major improvements in that area, so we should get them in time. Plus, we have some inefficiencies on our end that have been mostly removed in a pending pull request (due in 3.9.2-49), which should improve responsiveness, though those are only really a problem when slowdowns are already in effect.

You might want to keep in mind that when Rockstor performs convert balances it is rather opinionated with its data / metadata levels, i.e. we enforce set metadata raid levels according to the data raid levels. This helps keep things flexible and avoids complicated corner cases, such as raid1 metadata not allowing a 'single' data level to reduce below 2 disks, whereas the expectation in that case is that if the pool is single it can be shrunk to a single drive. Just a thought: if you specifically wanted raid1 metadata on a multi-disk, single-data pool, you may want to appreciate that it might not, after a balance, still have the same metadata raid level.

The associated code is:

which is called by:

There are plans to surface the metadata raid level in the Web-UI as well and hopefully, in time, to also allow specifying the metadata raid level independently. This would be nice for such things as raid6 data with raid1 metadata (once the parity raids are better anyway), but currently we hard-wire metadata levels according to data levels.
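To make that hard-wiring concrete, here is an illustrative sketch of the idea: deriving the metadata convert level from the chosen data level when kicking off a balance. The mapping table and mount point below are hypothetical examples for illustration only, not Rockstor's actual code or its actual level choices:

```python
import subprocess

# Illustrative mapping in the spirit of the behaviour described above;
# NOT Rockstor's actual table. Metadata level follows the data level
# rather than being picked independently.
DATA_TO_METADATA = {
    "single": "dup",
    "raid0": "raid0",
    "raid1": "raid1",
    "raid10": "raid10",
    "raid5": "raid5",
    "raid6": "raid6",
}


def convert_balance(mnt: str, data_level: str) -> None:
    """Start a convert balance with the metadata level derived from the data level."""
    meta_level = DATA_TO_METADATA[data_level]
    # Note: reducing metadata redundancy may additionally require btrfs's
    # balance force flag on some kernels.
    subprocess.run(
        [
            "btrfs", "balance", "start",
            f"-dconvert={data_level}",
            f"-mconvert={meta_level}",
            mnt,
        ],
        check=True,
    )


# Example: converting a pool mounted at a hypothetical path to single data,
# which under this mapping also forces dup metadata.
convert_balance("/mnt2/main_pool", "single")
```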

Hope that doesn’t spoil your plans; it’s just that with so much flexibility in btrfs itself we have to simplify / abridge somewhat, at least at this stage of our development.

You may also be interested in a recent posting re the current status of our update channels: see the Intro section of:

Hope that helps.


Philip,
thank you for the clarification. It really seems the issue was with the system sending data and there was nothing wrong with Rockstor.
As for your remark about the flexibility of the data/metadata raid levels: I managed to get by with the CLI, and I see nothing wrong with that.