I set up a simple replication:
Both VMs run on the same host machine: the sender is on Leap 15.6, the receiver on Leap 16.0, and both run Rockstor version 5.5.3-0.
Sending system:
btrfs --version
btrfs-progs v6.5.1
Receiving system:
btrfs --version
btrfs-progs v6.14
I was able to get to 72 snapshots (the drives are not full), and then a duplicate-key violation occurred during the 73rd replication:
~# tail -n200 /opt/rockstor/var/log/rockstor.log
[11/May/2026 18:50:07] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_71 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_72
[11/May/2026 18:51:06] ERROR [storageadmin.util:45] Exception: duplicate key value violates unique constraint "storageadmin_snapshot_share_id_name_10142bd3_uniq"
DETAIL: Key (share_id, name)=(4, abs-config_1_replication_73) already exists.
Traceback (most recent call last):
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/psycopg/cursor.py", line 117, in execute
raise ex.with_traceback(None)
psycopg.errors.UniqueViolation: duplicate key value violates unique constraint "storageadmin_snapshot_share_id_name_10142bd3_uniq"
DETAIL: Key (share_id, name)=(4, abs-config_1_replication_73) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 40, in _handle_exception
yield
File "/opt/rockstor/src/rockstor/storageadmin/views/snapshot.py", line 169, in post
ret = self._create(
share,
...<4 lines>...
writable=writable,
)
File "/root/.local/share/pypoetry/python/cpython@3.13.11/lib/python3.13/contextlib.py", line 85, in inner
return func(*args, **kwds)
File "/opt/rockstor/src/rockstor/storageadmin/views/snapshot.py", line 147, in _create
s.save()
~~~~~~^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/base.py", line 902, in save
self.save_base(
~~~~~~~~~~~~~~^
using=using,
^^^^^^^^^^^^
...<2 lines>...
update_fields=update_fields,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/base.py", line 1008, in save_base
updated = self._save_table(
raw,
...<4 lines>...
update_fields,
)
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/base.py", line 1169, in _save_table
results = self._do_insert(
cls._base_manager, using, fields, returning_fields, raw
)
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/base.py", line 1210, in _do_insert
return manager._insert(
~~~~~~~~~~~~~~~^
[self],
^^^^^^^
...<3 lines>...
raw=raw,
^^^^^^^^
)
^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/query.py", line 1873, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/models/sql/compiler.py", line 1882, in execute_sql
cursor.execute(sql, params)
~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 79, in execute
return self._execute_with_wrappers(
~~~~~~~~~~~~~~~~~~~~~~~~~~~^
sql, params, many=False, executor=self._execute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 100, in _execute
with self.db.wrap_database_errors:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/rockstor/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/rockstor/.venv/lib/python3.13/site-packages/psycopg/cursor.py", line 117, in execute
raise ex.with_traceback(None)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "storageadmin_snapshot_share_id_name_10142bd3_uniq"
DETAIL: Key (share_id, name)=(4, abs-config_1_replication_73) already exists.
[11/May/2026 18:51:06] ERROR [smart_manager.replication.sender:79] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. b'Failed to create snapshot: abs-config_1_replication_73. Aborting.'. Exception: 500 Server Error: Internal Server Error for url: http://127.0.0.1:8000/api/shares/4/snapshots/abs-config_1_replication_73
[11/May/2026 18:52:04] ERROR [storageadmin.util:45] Exception: Snapshot (abs-config_1_replication_73) already exists for the share (abs-config).
NoneType: None
After 10 attempts, the replication stopped:
[11/May/2026 19:00:04] ERROR [smart_manager.replication.sender:79] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. b'Failed to create snapshot: abs-config_1_replication_73. Aborting.'. Exception: 500 Server Error: Internal Server Error for url: http://127.0.0.1:8000/api/shares/4/snapshots/abs-config_1_replication_73
[11/May/2026 19:01:02] ERROR [smart_manager.replication.listener_broker:154] Maximum attempts(10) reached for Sender(6f32cb58-f849-4c93-bc65-6ebda422c66d_1). A new one will not be started and the Replica task will be disabled.
[11/May/2026 19:01:03] ERROR [smart_manager.replication.listener_broker:336] Failed to start a new Sender for Replication Task(1). Exception: Maximum attempts(10) reached for Sender(6f32cb58-f849-4c93-bc65-6ebda422c66d_1). A new one will not be started and the Replica task will be disabled.
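For what it's worth, the traceback suggests the snapshot view attempts a plain INSERT and only reports the failure afterwards. Below is a generic sketch of the idempotent-insert pattern that would sidestep the UniqueViolation; this is not Rockstor's code, the table and columns are made up for illustration, and in Django terms it corresponds to `get_or_create()` instead of an unconditional `save()`:

```python
# Sketch only: let the database ignore an insert that hits the
# (share_id, name) unique constraint instead of raising IntegrityError.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE snapshot (share_id INT, name TEXT, UNIQUE(share_id, name))"
)
# The second iteration simulates the retried replication attempt; without
# ON CONFLICT it would raise sqlite3.IntegrityError.
for _ in range(2):
    con.execute(
        "INSERT INTO snapshot (share_id, name) VALUES (?, ?) "
        "ON CONFLICT(share_id, name) DO NOTHING",
        (4, "abs-config_1_replication_73"),
    )
print(con.execute("SELECT COUNT(*) FROM snapshot").fetchone()[0])  # -> 1
```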
When checking on the receiving system, I see only:
# ls -la
drwxr-xr-x 1 root root 108 May 11 18:51 .
drwxr-xr-x 1 root root 108 May 11 17:39 ..
drwx------ 1 2000 2000 428 May 11 18:12 abs-config_1_replication_71
drwx------ 1 2000 2000 428 May 11 18:12 abs-config_1_replication_72
When checking the sending system:
# ls -la
total 0
drwxr-xr-x 1 root root 108 May 11 18:52 .
drwxr-xr-x 1 root root 232 May 11 17:39 ..
drwx------ 1 stalwart stalwart 428 May 11 18:12 abs-config_1_replication_72
drwx------ 1 stalwart stalwart 428 May 11 18:12 abs-config_1_replication_73
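A quick set difference over the two listings (names copied from the `ls` output above) confirms which snapshot exists only on the sender and was never replicated:

```python
# Snapshot names taken from the two ls listings above.
sender = {"abs-config_1_replication_72", "abs-config_1_replication_73"}
receiver = {"abs-config_1_replication_71", "abs-config_1_replication_72"}
print(sorted(sender - receiver))  # -> ['abs-config_1_replication_73']
```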
So I removed snapshot 73 from the sending system using btrfs:
btrfs subvolume delete /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_73
Confirmed on the sending system:
# ls -la
drwxr-xr-x 1 root root 54 May 11 20:32 .
drwxr-xr-x 1 root root 232 May 11 17:39 ..
drwx------ 1 stalwart stalwart 428 May 11 18:12 abs-config_1_replication_72
I then restarted the replication task via the Web UI, and new snapshots are being sent:
[11/May/2026 20:34:05] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_72 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_73
[11/May/2026 20:35:06] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_73 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_84
[11/May/2026 20:36:08] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_84 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_85
[11/May/2026 20:37:07] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_85 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_86
[11/May/2026 20:38:06] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_86 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_87
[11/May/2026 20:39:07] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_87 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_88
[11/May/2026 20:40:08] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_88 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_89
[11/May/2026 20:41:07] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_89 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_90
[11/May/2026 20:42:07] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_90 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_91
[11/May/2026 20:43:08] INFO [smart_manager.replication.sender:335] Id: 6f32cb58-f849-4c93-bc65-6ebda422c66d-1. Sending incremental replica between /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_91 -- /mnt2/rockwurst/.snapshots/abs-config/abs-config_1_replication_92
...
I assume the jump in snapshot numbers is because the counter keeps incrementing for every failed attempt (i.e. 10 of them), hence it picks back up at 84.
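The arithmetic fits (this is an assumption about how the counter behaves, not a reading of Rockstor's source): if the counter advances on every attempt, successful or not, then 10 failed retries after snapshot 73 leave the next successful snapshot numbered 84.

```python
# Back-of-envelope check of the numbering jump seen in the logs, assuming
# the counter advances once per attempt regardless of success.
last_success = 73     # snapshot 73 finally replicated after cleanup
failed_attempts = 10  # retries that each burned a number (74..83)
next_number = last_success + failed_attempts + 1
print(next_number)  # -> 84, matching the jump in the log above
```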
I will continue to observe, but so far there have been no failures like those observed by @riceru.