Replication problems

Brief description of the problem

Inability to delete ‘unmounted’ shares

Detailed step by step instructions to reproduce the problem

Created a replication task, but it failed due to a network interruption. Deleted the main share OK. The sub share (.snapshot/filename…) refuses to be deleted (force delete no good) and now shows ‘unmounted’ in the status column.
There is no record of this ‘snapshot’ in the snapshots table.

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 337, in delete
    force=force)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 800, in remove_share
    toggle_path_rw(subvol_mnt_pt, rw=True)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 546, in toggle_path_rw
    return run_command([CHATTR, attr, path])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/chattr -i /mnt2/nas3_pool1/.snapshots/00020003-0004-0005-0006-000700080009_home/home_23_replication_129. rc = 1. stdout = ['']. stderr = ['/usr/bin/chattr: Read-only file system while setting flags on /mnt2/nas3_pool1/.snapshots/00020003-0004-0005-0006-000700080009_home/home_23_replication_129', '']

I have tried deletion via the console as well, but it just says ‘read only’.
Hoping this isn’t a major problem, as getting replication working has been a time-consuming process.
Many thanks
Paul

@PaulK Hello again.

Normally only the last snapshot needs to be deleted to ‘unstick’ a replication task that has failed due to a network interruption; thereafter it should pick up where it left off. This is a known bug currently.
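
For reference, when that snapshot isn’t visible in the Web-UI snapshots table, it can usually be found and removed from the console; a minimal sketch, with the pool, share and snapshot names here being placeholders only:

    # List snapshot subvolumes on the pool concerned:
    btrfs subvolume list -s /mnt2/<pool_name>

    # Delete the stale replication snapshot that is blocking the next run:
    btrfs subvolume delete /mnt2/<pool_name>/.snapshots/<share_name>/<stale_replication_snapshot>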

But your situation now looks, as you say, to be that your share (subvol) has gone read-only, or else the pool it is a subvol of has gone read-only.

If you have deleted its associated share, that may have affected this (i.e. exposed a Web-UI bug of sorts). Also, the replication process, in its early stages, transitions subvols between share and snapshot status; they are both subvols to btrfs. I’ll do a technical wiki on this soon, as in total it takes, I think, 5 replication task executions to reach the final state that it then maintains. You may have had the network interruption within this ‘strange’ and complicated initial 5 steps.

But to your current state:

From your following log entry:

It looks like that subvol or its associated pool has gone read-only:

So the Rockstor code is trying to run:

/usr/bin/chattr -i /mnt2/nas3_pool1/.snapshots/00020003-0004-0005-0006-000700080009_home/home_23_replication_129

and the filesystem is apparently now read-only.
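
As a quick confirmation (a minimal sketch; the paths are taken from your traceback, so adjust as needed), you can check whether the pool mount or the subvol itself is flagged read-only:

    # Show the mount options for the pool; look for 'ro' among them:
    findmnt -no OPTIONS /mnt2/nas3_pool1

    # Query the read-only property of the stuck subvol:
    btrfs property get -ts /mnt2/nas3_pool1/.snapshots/00020003-0004-0005-0006-000700080009_home/home_23_replication_129 ro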

So I would look first at why this is the case. You may have a poorly pool / subvol: btrfs, when it encounters corruption or another serious issue, will often go read-only as a precautionary measure. So in short this looks like a pool / subvol health issue rather than a Rockstor code issue, bar the initial lack of robustness in picking up where it left off after the network interruption and cleaning up/reusing the required snapshot itself.

So on this one I would look first at the health of your pool and its associated subvol, given the apparent read-only state that the command is reporting as the reason it can’t be executed.
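
A rough sketch of the kind of health checks I’d start with, using the pool mount point from your traceback (adjust to suit):

    # Per-device error counters (read/write/flush/corruption/generation errors):
    btrfs device stats /mnt2/nas3_pool1

    # Kernel log messages usually state why btrfs forced the filesystem read-only:
    dmesg | grep -i btrfs

    # Pool layout and space usage overview:
    btrfs fi show
    btrfs fi usage /mnt2/nas3_pool1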

Hope that helps, and thanks for helping to support Rockstor’s development via a stable subscription, or two in this case, as replication currently requires the more modern Rockstor code on both the sender and the receiver.

Re repair of btrfs pools I find the following from openSUSE to be well presented:

https://en.opensuse.org/SDB:BTRFS

notably the “How to repair a broken/unmountable btrfs filesystem” subsection. It also takes care to advise on which commands are safe and when you may start making things worse and potentially reduce your chances of data recovery from the poorly pool.
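
By way of illustration only (and definitely read that page before acting), the ‘safe first’ style of command it describes is along these lines, with the device and mount point here being placeholders:

    # Read-only check of an unmounted device; reports problems without modifying anything:
    btrfs check --readonly /dev/sdX

    # Attempt a read-only mount using an older tree root before trying anything destructive:
    mount -o ro,usebackuproot /dev/sdX /mnt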

@phillxnet Thank you for the comprehensive reply.
Had a good look through the documentation you linked to. After careful consideration I thought it was easier and less time-consuming to start again (this is a backup server, not a primary repository).

After reinstall and creation of the pools I restarted the replication tasks.
After 3 cycles the new shares (and snapshots) are stable and accessible via samba (if required).

At just over 2 TB it’s not something I’d like to repeat, but it does demonstrate the fragility of the replication process, especially in its early stages, before the ‘consolidation’ process.
I’m hopeful this will be ironed out in future releases.

As always many thanks for your response and useful guidance.

@PaulK Glad you got it sorted.

Yes, but it will take 5 replication events, including the initial one, to fully ‘normalise’.

Agreed. From Rockstor’s perspective we definitely have some work to do there, most notably for when we fail to recover after a network outage and end up with a blocking snapshot on the send machine that simply needs deleting. We used to deal with this, but we had to make some deep changes in our pool / share APIs and I didn’t quite manage to restore the full prior capability; there are notes within the code as to where this breakage/fragility is, and I have notes here that I have yet to present in an issue. But note that btrfs send/receive, upon which our replication system is built, is itself not robust to network interruptions currently, so you are best to ensure there is a good network anyway. Our multi-stage approach is actually an attempt to improve robustness; we just currently fail in a way we needn’t, at least as of when I last looked at this. Definitely room for improvement.

As am I.

Please note that the receive end will be overwritten, so you really should consider that a read-only share. Any changes made to that share will be overwritten by subsequent replications. The send and receive are not to be considered a cluster/distributed file system; the receive is simply a visible backup share of what the send was a few cycles back. If the original were lost then the receive could be re-purposed, but while it remains the receive share of a send it will be repeatedly replaced by a progressively newer version from the send. Hope that’s clear enough. You can take a look at the associated snapshots on both the send and the receive to get an idea of what’s happening.
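
To take that look at the associated snapshots, a simple sketch (pool, share and snapshot names are placeholders):

    # List snapshot subvolumes on a pool, including each received UUID,
    # which helps match up the send and receive sides:
    btrfs subvolume list -s -R /mnt2/<pool_name>

    # Inspect an individual snapshot's details (creation time, flags, received UUID):
    btrfs subvolume show /mnt2/<pool_name>/.snapshots/<share_name>/<snapshot_name>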

I’ll get to writing proper technical docs on this procedure and improve the user docs as well. It seems overly complicated, but there are various constraints / safety concerns that we have to honour in order to use the btrfs send / receive system, and that has informed Rockstor’s own internal procedures in that regard.

Thanks for sharing your findings and hope it works out for you. And thanks for helping to support Rockstor’s development via stable channel subscriptions.