NFS sync vs async

I have an Ubuntu LXC container running Emby, accessing the media via an NFS share.

I set the AppArmor policies on Proxmox to allow NFS mounting from a container, but what I found was that this particular container would seemingly die/hang, and a fair few NFS timeouts were logged in dmesg.
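
Something along these lines in the container config is what I mean by the AppArmor change (the container ID is just a placeholder for mine, and the container needs a restart afterwards):

```
# /etc/pve/lxc/101.conf -- 101 is a placeholder container ID
# Let the container's AppArmor profile permit NFS mounts:
features: mount=nfs

# Alternative (less restrictive) approach: drop AppArmor confinement entirely
# lxc.apparmor.profile: unconfined
```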

I tried setting the number of NFS server threads to 32 and it didn’t help. What does seem to have helped is changing the NFS export to sync instead of async, which is just plain weird.
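
For anyone following along, these were the sort of changes involved on the Rockstor side (the share path and subnet are placeholders for my actual ones):

```
# /etc/sysconfig/nfs -- bump the NFS server thread count (CentOS-style config, as Rockstor uses)
RPCNFSDCOUNT=32

# /etc/exports -- the export line after switching from async to sync
/mnt2/media 192.168.1.0/24(rw,sync,no_subtree_check)
```

Then `exportfs -ra` to re-apply the exports (the thread-count change needs an NFS service restart to take effect).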

The only thing I could think of: because Emby is set to store metadata (images etc.) in the library folders, is it possible it was somehow flooding Rockstor with requests?

If I’ve understood the docs correctly, async will reply that a request has completed before it has actually been written, whereas sync will make the client wait?

I’m still getting timeouts on occasion in the host’s logs (they could be leftover requests from when it was hanging earlier, for all I know, since I’ve not rebooted the host), but more importantly nothing seems to be hanging now.
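
For reference, the timeouts show up on the Proxmox host (the NFS client side) with something like the below; `nfsstat` (from the nfs-common/nfs-utils package) also shows retransmission counts, which is a rough indicator of how often requests are going unanswered:

```
# kernel messages about the NFS server going quiet
dmesg -T | grep -i "not responding"

# client-side RPC statistics, including retransmissions
nfsstat -rc
```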

Yup, async writes to RAM directly and reports “all done”, whereas sync waits for every write to finish on disk.
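
You can double-check which mode an export is actually running with `exportfs -v` on the Rockstor box; it lists the effective options for each export, with sync or async among them:

```
# lists every export with its active options, e.g. (rw,sync,wdelay,root_squash,...)
exportfs -v
```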

This seems very strange; I would have assumed the opposite effect, where sync gives timeouts because the disks are slow, and not the other way round.
I use LXC in Proxmox too, but I figured it’s easier to set up the share once at the host level and bind it into each LXC that needs it. I’m using SMB there, so I can’t reproduce this. Are your permissions set up correctly (suid/guid)?
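
Roughly what I mean, with placeholder paths and container ID (NFS shown here, but an SMB mount on the host works the same way):

```
# on the Proxmox host: mount the share once
mount -t nfs rockstor:/export/media /mnt/media

# then bind-mount it into each container that needs it (101 is a placeholder ID)
pct set 101 -mp0 /mnt/media,mp=/mnt/media
```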

I’ve changed the way resources are allocated to see if that helps; namely, I’ve now given Rockstor 2 CPU cores and set the CPU weighting such that it should pretty much always get to grab them if it needs them (i.e. it’s set to the highest possible value).

I’ve also given it another 512MB of RAM to help it along. Usually the machine is fairly idle, but because of the migration to Proxmox, and the fact that I was stupid and didn’t back up Emby’s database, Emby was having to scan/probe the entire media library, which is quite I/O intensive.
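
For the record, those tweaks are just `qm set` calls on the Proxmox host; the VM ID, memory total and cpuunits value below are placeholders rather than my exact numbers (the maximum cpuunits value depends on the Proxmox/cgroup version):

```
# 100 is a placeholder VM ID; 2560 MB is illustrative, not my actual total
qm set 100 --cores 2 --cpuunits 10000 --memory 2560
```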

Where I’m hoping to get to is being able to stick a 512GB SSD in for the VMs themselves and remove the 2x1TB drives they’re sitting on at the moment (RAID 1). This should help remove some of the I/O bottleneck, and I’m not going to worry about the loss of the RAID, as the other VMs will back up to the Rockstor VM over NFS.

Of course, the other advantage of using a 2.5" SSD for the VMs is that I can put it in one of the non-hotswap bays in the back of the case (the case can take four 2.5" drives in the back, although I only have enough power connectors for three, and I don’t want to start messing with SATA-to-Molex adapters; there are already enough cables in there).

This means I’ll have all eight of the front 3.5" bays available should I need to add extra capacity to Rockstor :wink: