Doing manual send/receive of snapshots, and I’m wondering if there is a way to stop Rockstor from seeing them and mounting them as shares.
Newish user here. I am using the built-in Snapper functionality for snapshots, but I also am doing some of my own send/receive to maintain a backup copy on an external drive. Right now it’s manual, but I will automate this soon. Essentially I am doing something similar to Rockstor’s replication but to another drive instead of to another server.
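For anyone following along, the manual workflow I'm describing looks roughly like this. This is a hedged sketch: the mount points (`/mnt2/main_pool`, `/mnt/backup`), share name (`myshare`), and snapshot names are all placeholders, not my actual setup. Note these commands need root and that `btrfs send` requires read-only snapshots.

```shell
# 1. Create a read-only snapshot of the share (send requires read-only).
btrfs subvolume snapshot -r /mnt2/main_pool/myshare \
    /mnt2/main_pool/.snapshots/myshare_snap1

# 2. Full send of that snapshot to the external backup drive.
btrfs send /mnt2/main_pool/.snapshots/myshare_snap1 \
    | btrfs receive /mnt/backup

# 3. Later runs can be incremental, using the previous snapshot as parent
#    so only changed data crosses to the backup drive.
btrfs subvolume snapshot -r /mnt2/main_pool/myshare \
    /mnt2/main_pool/.snapshots/myshare_snap2
btrfs send -p /mnt2/main_pool/.snapshots/myshare_snap1 \
    /mnt2/main_pool/.snapshots/myshare_snap2 \
    | btrfs receive /mnt/backup
```

The incremental form (`-p`) only works while the parent snapshot still exists on both sides, which is part of why I'll end up keeping a lot of these around.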
I have tried putting these in various locations on the drive and in hidden directories, but Rockstor seems to be quite good at finding them. They are really going to clutter up my shares, as I plan on keeping lots of them.
Is there a way that I can put them somewhere or name them in such a way that Rockstor will ignore them? I’ve done a lot of searching, but I can’t find an answer to this.
I recognize that this is an unorthodox way to use Rockstor, as I’m stepping outside of it a bit. I’d like to be able to maintain my own automated backups until such a point that I set up a 2nd Rockstor server.
First of all: thanks a lot for all your experimentation and your very constructive and helpful feedback. That’s excellent to see and very valuable to the community.
I’m not sure about others, but I don’t see that as particularly unorthodox. I personally would love to have Rockstor better support such “custom”/third-party backup options in the future, and have always been curious about them.
Did some basic testing tonight. I wasn’t entirely sure how to handle it, as each snapshot would have a unique name. I assumed (didn’t test) that if I used my original plan of putting all of the snapshots into a regular folder, then any new subvolume names would get picked up as shares by Rockstor. So I created a btrfs subvolume to hold all the snapshots instead, hoping that it would be smart enough to ignore the entire tree.
It seems to have worked! I created a new subvolume called “basesnaps.” After a reboot, I could see it in the shares list. I edited btrfs.py and added the name to the exclusion list. After a reboot, it disappeared from Rockstor. Then I created a new subvol underneath, and Rockstor also ignored that after a reboot.
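In case it helps anyone else, the container-subvolume part of the workaround is just this (a sketch with placeholder mount points; `/mnt/backup` is an assumed mount for the external drive, and "basesnaps" matches the name above — the btrfs.py edit itself I won't reproduce here, since the exact exclusion-list variable depends on your Rockstor version):

```shell
# Create a parent subvolume on the backup drive to hold all received snapshots.
btrfs subvolume create /mnt/backup/basesnaps

# Received snapshots then land as nested subvolumes under it,
# which Rockstor ignores once "basesnaps" is on the exclusion list.
btrfs send /mnt2/main_pool/.snapshots/myshare_snap1 \
    | btrfs receive /mnt/backup/basesnaps

# Sanity check: list subvolumes to confirm everything nests under basesnaps.
btrfs subvolume list /mnt/backup
```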
I’ll start my actual snapshot backups in a few days, but unless I report further issues, I should be good now. Thanks @Flox for pointing me to those posts, and thanks @phillxnet for implementing the exclusion feature. You have all created a really nice product here. I can see from the forums that it’s been a lot of work to get here.
I appreciate all the help that everyone has given me as I’ve gotten started with Rockstor. Other than my initial hosts issue with Samba, it has actually been pretty smooth and easy to set up, and I’m deep enough into it to see that I can make it work for my needs (and the deeper I get, the less enthused I am about starting over with a fresh Ubuntu build). I have really appreciated having a web GUI on my server for the first time, as it makes basic maintenance and changes pretty painless. I also like that everyone on the forum that I’ve interacted with has been really friendly. None of the Linux attitude or snark that I see elsewhere. I’m just a Linux hack, so I appreciate that!
I just went ahead and subscribed to the stable channel, just because I wanted to show some support for a great open source project. Thanks!
Great news; really glad it fits what you needed!
Thanks a lot for sharing your needs/ideas and for testing that workaround. I really do like @kupan787’s idea of surfacing this ignore/exclusion mechanism in the webUI; we’re bound to see these kinds of needs become more and more common, and now that @phillxnet has indeed implemented such a mechanism in the back-end, the dangerous/risky work may mostly be behind us already. Here’s the corresponding issue to track this feature:
Thanks a lot for the kind words; I completely agree with you. This is also a big part of the reason why I not only stuck with this community but have become more and more involved and willing to contribute since my first days using it at home.
Thanks a lot for your support and your continued feedback!