Balance still running after 4 days

I recently added a 5th HDD to my BTRFS pool, which is set up as RAID1. Afterwards, a balance kicked off. In the first 2 days it finished about 40%, but in the next 2 days it only finished another 10% (and it seems to be slowing down even more).

I’ve been adding data to the pool while the balance has been running, and I’m starting to bump up against the storage limit even with this new disk.

Looking at ‘btrfs fi show’ (output below), the ‘used’ space on all 5 disks is pretty much even. Can ‘btrfs balance cancel /mnt2/data’ be safely run in this case?
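
For reference, this is the sequence I’m leaning towards if interrupting the balance is indeed safe here, i.e. pause first and only cancel if needed (a rough sketch, assuming ‘btrfs balance pause’/‘resume’ keep the progress made so far):

[root@rockstor ~]# btrfs balance status /mnt2/data    # check progress before interrupting anything
[root@rockstor ~]# btrfs balance pause /mnt2/data     # pause; progress is kept on disk and can be resumed
[root@rockstor ~]# btrfs balance resume /mnt2/data    # pick the balance back up later
[root@rockstor ~]# btrfs balance cancel /mnt2/data    # or stop it for good once the current chunk finishes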

Is this behavior normal for a RAID1 balance, or could continuing to add data to the pool have caused the slowdown?

[root@rockstor ~]# uptime
 14:42:49 up 3 days, 21:38,  3 users,  load average: 4.49, 4.40, 4.15
[root@rockstor ~]# btrfs balance status /mnt2/data
Balance on '/mnt2/data' is running
2633 out of about 5063 chunks balanced (2634 considered),  48% left
[root@rockstor ~]# btrfs fi show /mnt2/data
Label: 'data'  uuid: b95289da-399b-4f0f-a64c-539f0aadb340
        Total devices 5 FS bytes used 5.83TiB
        devid    1 size 2.73TiB used 2.52TiB path /dev/mapper/luks-550ac5fa-5313-49dd-be9a-bdd43581225d
        devid    2 size 2.73TiB used 2.51TiB path /dev/mapper/luks-cc499017-dea0-4d74-a5e9-37c28bb032d5
        devid    3 size 2.73TiB used 2.51TiB path /dev/mapper/luks-fa5969c0-6c0a-4c64-85fd-01b7e58bad5a
        devid    4 size 1.82TiB used 1.61TiB path /dev/mapper/luks-03e613ea-05fc-482a-8408-77523ab3c6ce
        devid    5 size 2.73TiB used 2.52TiB path /dev/mapper/luks-e80251be-c140-4ca9-ad62-72f03855f464