Snapshot removal issue

I just deleted all 360 of my “old” snapshots as part of a maintenance cleanup.
All my rock-ons stopped working afterwards.
The rock-on service itself is still running.

Issue:

Transmission

Current status: exitcode: 137 error: stat /mnt2/Rock_on_Service/btrfs/subvolumes/2529be690b5ee727ad7b804f9557c74aa9c444f446e74554568c268b861ce337: no such file or directory

I thought that a snapshot deletion wouldn’t impact my subvolumes?!
I don’t know what to do from this point on …

Sorry you’re having troubles.
Did you delete snapshots on the Rock-ons_root share? If so, that makes me think of the following issue, due to how docker itself uses snapshots.
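
If it helps to confirm, here’s a minimal Python sketch of how you could check what docker’s btrfs storage driver still has on disk. To be clear on assumptions: the mount path is copied from the error in your original post and may differ on your system, it needs the btrfs-progs tools installed, and it must run as root:

```python
#!/usr/bin/env python3
"""Diagnostic sketch: list the btrfs subvolumes that docker's btrfs
storage driver still has under the rock-ons root. Read-only; nothing
is modified. Run as root."""

import subprocess

# The rock-ons root mount point as it appears in the reported error;
# adjust to match your own share name.
ROCKONS_ROOT = "/mnt2/Rock_on_Service"

# 'btrfs subvolume list <path>' prints one line per subvolume/snapshot.
result = subprocess.run(
    ["btrfs", "subvolume", "list", ROCKONS_ROOT],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    # docker's btrfs driver keeps container layers under btrfs/subvolumes/
    if "btrfs/subvolumes" in line:
        print(line)
```

If the subvolume named in your error message is absent from that output, docker’s record of the container points at a layer that no longer exists.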


Please find a way to identify docker-like snapshots, lock them, and make them non-blocking when dealing with share deletion. I am now in serious trouble due to an issue identified back in March 2017.

I don’t know enough about Docker’s inner mechanisms, but based on a recent post by @phillxnet, it appears the snapshots on the rock-ons root share could simply be the containers themselves.

I believe nothing should be lost regarding your rock-ons, though. Have you tried checking whether your Docker images are still on the system? (That may be what the error in your original post was about.)
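
A minimal sketch of that check, assuming the docker CLI is on the PATH of your Rockstor box; it only lists things and changes nothing:

```python
#!/usr/bin/env python3
"""Sketch: ask docker which images and containers it still knows about.
Purely informational; no state is modified."""

import subprocess

def show(cmd):
    """Echo the command, then run it and let its output print directly."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=False)

# Images that rock-ons were installed from (e.g. the Transmission image).
show(["docker", "images"])

# All containers, including stopped ones; exit code 137 plus a missing
# subvolume suggests the container's backing snapshot is gone.
show(["docker", "ps", "-a"])
```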


@mluetz Hello again.

Thanks to @Flox for referencing the pertinent forum/GitHub issues. Yes, we definitely need to improve the usability side of things here. However, no config or data should be affected, assuming you have followed the Rock-on install guidelines of using discrete shares for each Rock-on’s various config/data requirements; the rock-ons’ docker components are the only ‘snapshot only’ components. So deleting and re-installing each rock-on, picking the same shares for each as you had before, should put you right back where you were prior to the snapshot delete incident.

For context, this is a direct quote from the cited issue, from our very own @Flyer, who was chipping in on my original issue:

"Adding my 2 cents:
while testing share usage/rusage issues noticed every Rock-on / docker snapshot generates same warning when deleting shares.
My suggestion : find a way to identify docker like snapshots, lock them and make them not blocking when dealing with shares deletion"

This relates specifically to a proposed enhancement: on top of identifying rock-on snapshots, blocking their deletion or at least warning about it.

I have updated that issue with a link to this thread as well.

To help avoid this issue in the future, you could sort your snapshots by their Share and then avoid deleting any that are on the rock-ons root share, named “rock-ons-root” in the following pic (note the Share column):

The rock-on/docker-related snapshots also have wacky long names, as indicated.
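
For anyone who wants to spot these programmatically rather than by eye, here’s a hedged sketch of the idea. The snapshot names below are illustrative, not from a real system; the pattern relies on docker’s btrfs driver naming its snapshots with 64-character hex IDs (like the one in the error above), sometimes with an “-init” suffix, whereas your own scheduled snapshots carry the names you gave them:

```python
#!/usr/bin/env python3
"""Sketch: flag snapshot names that look docker-generated so they can be
skipped during a manual cleanup. Example names are hypothetical."""

import re

# 64 hex characters, optionally with docker's '-init' suffix.
DOCKER_LIKE = re.compile(r"^[0-9a-f]{64}(-init)?$")

snapshots = [
    "daily-snapshot-201803",  # hypothetical user-named snapshot
    "2529be690b5ee727ad7b804f9557c74aa9c444f446e74554568c268b861ce337",
]

for name in snapshots:
    if DOCKER_LIKE.match(name):
        print(f"{name}: docker-like (do NOT delete)")
    else:
        print(f"{name}: user snapshot")
```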

Only the rock-ons-root / docker element, and obviously the snapshot history of the various shares associated with the other deleted snapshots, should be affected. The tricky/confusing part is that the docker images (Rock-ons) exist only as snapshots.

So essentially, as @Flox indicated, all your rock-ons’ config and data should be intact and unaffected; only the ‘system’, i.e. the installed/downloaded component of each rock-on/docker image, needs to be re-established. And as stated earlier, if you choose the same shares as before for each, they should pick up where they left off.

Hope that helps and let us know how you get on.