I use Rockstor to host VM image files. These are large files which receive frequent small writes, and I have found that performance suffers (the btrfs-transacti process utilises 100% CPU) unless they are defragmented periodically via the command:
btrfs filesystem defragment -r /path/to/share
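Until a scheduled-task type exists in the GUI, a cron entry is one way to run this periodically. This is only a sketch: the share path, schedule, and file location below are illustrative assumptions, not Rockstor conventions.

```shell
# Hypothetical cron entry, e.g. saved as /etc/cron.d/defrag-vm-share.
# Runs a recursive defragment of the VM share nightly at 03:00.
# /mnt2/vm-images is an example path; substitute your share's mount point.
0 3 * * * root /usr/sbin/btrfs filesystem defragment -r /mnt2/vm-images
```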
Please consider adding a scheduled task to defragment a share, i.e. add it to this list:
BTW, btrfs's autodefrag feature is apparently not recommended for VM images, so it isn't an alternative in this scenario.
But please add this advice in the GUI:
Defragmenting a file or a subvolume that has a copy-on-write copy breaks the link between the file and its copy. For example, if you defragment a subvolume that has a snapshot, the disk usage of the subvolume and its snapshot will increase because the snapshot is no longer a copy-on-write image of the subvolume.
This is unexpected, albeit technically plausible and understandable, but it is very annoying to a user who doesn't expect it, and it is irreversible.
I have disabled CoW for the share I defragment manually. To avoid disk usage becoming a problem, perhaps a NoDataCoW option at share creation should be introduced at the same time; then, when you select the defrag option in Scheduled Tasks, the list of shares would be filtered to those that have the NoDataCoW attribute.
It doesn't bother me that NoDataCoW disables checksumming and compression, since compression affects performance and my hypervisor does integrity checking already. Apparently btrfs doing likewise causes conflicts:
the best thing btrfs can do is simply get out of the way and let the application handle its own integrity management, and the way to tell btrfs to do that, as well as to do in-place rewrites instead of COW-based rewrites is with the NOCOW attrib and that must be done before the file gets so fragmented (and multi-snapshotted in its fragmented state) in the first place.
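For anyone wanting to do this manually today, the usual approach is to set the No_COW attribute on the share's directory before any image files are created, since the attribute only takes effect for newly written data. A minimal sketch, assuming a btrfs mount and an example path of my own choosing:

```shell
# Mark a directory NOCOW *before* creating VM images in it.
# /mnt2/vm-images is an example path on a btrfs filesystem.
mkdir -p /mnt2/vm-images
chattr +C /mnt2/vm-images    # new files created here inherit No_COW
lsattr -d /mnt2/vm-images    # the flags should now include 'C'

# chattr +C has no effect on data already written to an existing
# non-empty file, so create the image fresh inside the NOCOW directory:
truncate -s 20G /mnt2/vm-images/guest.img
```

The key point, matching the quote above, is ordering: the attribute must be in place before the file accumulates fragmented, multi-snapshotted extents.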