I’ve used Symform to store my off-site backups for a while. Unfortunately, they are discontinuing their cloud storage service. After looking through the current list of Rock-Ons, I can’t find any backup providers other than Symform.
Now that they are no longer available, is there a clever way to use Syncthing or another Rock-On to perform automated off-site backups of my NAS? If not, is there a Rock-On in development that will fill this gap?
It’s not a rock-on, but I use Crashplan. They have detailed instructions on setting it up on a headless server, and another set of instructions that detail how to administer it from a different machine. I have about 5 TB backed up to Crashplan’s servers. Unlimited storage for $60 a year. I have gigabit fiber, but seem to only get up to about 5 MB/sec upload speed. Download appears to be roughly 20 MB/sec.
Although rclone is pretty cool, it performs a sync operation, so it isn’t suitable for my backup needs at this time. I didn’t know about it, though, so thanks for mentioning it.
I’ve done headless Crashplan before and found it incredibly difficult. The instructions when I set mine up (about a year ago) were not current, and it took me a week to figure out why I couldn’t successfully connect for remote administration. Maybe they’re better now, but I’d rather have a turn-key, community-supported solution. Drobo uses Crashplan and ElephantDrive, as does Netgear; I don’t know about others.
Since we don’t have anything similar, perhaps this should be converted to a feature request. Off-site backup is an essential NAS function.
I second Crashplan. I used ADrive for a long time, but outgrew them. ADrive.com was nice because you got 100 GB for $25/year and they accept FTP, SFTP, rsync, WebDAV, etc. But for ~$60/year with Crashplan you get unlimited storage and snapshots. I have it running on one of my Linux machines with X Windows. I only go into X to start/stop/modify my Crashplan settings. Most settings can be changed through the website, though, and then sync back to your server. I had to limit bandwidth because I was pegging my 10 Mb WAN link and killing all other connections. In download tests I have been able to hit 25–30 Mb without issue.
Those instructions will walk you through installing Crashplan from the command line and then through setting up another computer to administer the system. I use my Windows machine to access Crashplan on my Rockstor installation, and I have full control of Crashplan using this method.
I then used this to bump up my RAM to 2 GB:
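From memory, the change boiled down to raising the Java heap in CrashPlan’s service config and restarting the service. The path and variable name below are from my install at the time, so verify them against your version:

```
# /usr/local/crashplan/bin/run.conf  (path on my install; check yours)
# In the SRV_JAVA_OPTS line, raise the -Xmx value, e.g. from the default
#   -Xmx1024m
# to
#   -Xmx2048m
# then restart the CrashPlan service for it to take effect.
```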
I then used this to change my dedupe settings (though I don’t recall what I changed them to).
I did not set up X on my Rockstor box, though the standard setup for CentOS 7 should work. I have multiple devices, and one is a CentOS box running X.
You can administer SOME things from the CrashPlan site, like adding a file system to back up and a few other settings. You will need Java and X on the box to install CrashPlan and configure the initial settings. I run my server headless and only go into X when I need to make a major change to CrashPlan; when finished, I just log out of X back to the console and CrashPlan keeps running. I also have my default login set to the shell instead of X.
One more very related question: If you use Rockstor’s baked-in ability to clone to a second Rockstor box, is there any way to configure that to work across the web, or is that feature limited to LAN functionality (thus requiring a VPN between the two locations to work out-of-the-box)?
If you use Rockstor’s baked-in ability to clone to a second Rockstor box, is there any way to configure that to work across the web
Yes, but it’s unencrypted, there’s no rotation, there’s little documentation, it’s flaky (in my experience) when copying large subvolumes, it’s slow (doesn’t use full bandwidth), and it can’t do an initial bulk replication locally (then ship the disk(s) to the destination and continue replicating).
Instead, I use this: Merlin’s BTRFS Subvolume Backup shell script. The only modification is that I removed the shlock section, since Rockstor/CentOS doesn’t have shlock in any package manager. I also removed the last line, “rm $lock”, since it caused an error. It has been working well for many months.
On the source NAS, save it to /root/merlins_btrfs_ssh.sh. Initially, the backup script needs to be run with the --init switch:
~/merlins_btrfs_ssh.sh --init --keep <quantity_of_backups_to_keep> --dest <FQDN_of_backup_NAS> <share_you_want_to_backup> /mnt2/<pool_on_destination_NAS>
Then to schedule, add a line to the crontab on the source Rockstor:

vi /etc/crontab

0 3 * * 0 root (cd /mnt2/<pool_containing_share_you_want_to_backup> && exec ~/merlins_btrfs_ssh.sh --keep <quantity_of_backups_to_keep> --dest <FQDN_of_backup_NAS> <share_you_want_to_backup> /mnt2/<pool_on_destination_NAS>)
The above will run every Sunday at 3 am; see the cron documentation for adjusting the schedule to suit.
Then, to allow the cron job to work unattended, enable passwordless ssh login from the source to the destination NAS. There are plenty of instructions explaining how to set up ssh key login.
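For example, one common way to do it (the destination hostname below is a placeholder, and the public-key copy step is printed rather than executed, since it needs to be run once interactively):

```shell
# Create a dedicated, passphrase-less key so the cron job can log in unattended
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/backup_ed25519"
[ -f "$KEY" ] || ssh-keygen -t ed25519 -N "" -C "rockstor-backup" -f "$KEY"

# One-time, interactive step: install the public key on the destination NAS
# (placeholder hostname). Printed here rather than run:
echo "ssh-copy-id -i ${KEY}.pub root@backup-nas.example.com"
```

If you use a non-default key name like this, add a matching Host entry with IdentityFile in ~/.ssh/config so the script’s plain ssh calls pick it up.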
You’ll need to forward TCP port 22 from the destination router to the destination Rockstor (assuming it doesn’t have its own external IP), and set up a dynamic DNS service, e.g. No-IP, if necessary, so name resolution continues to work if the destination NAS is on a dynamic external IP address.
This approach:
is encrypted in transit (via ssh)
has customisable rotation
is incremental: only deltas are copied
uses btrfs and Rockstor: no add-ins, cloud subscription fees or extra licences
resumes from where it left off if interrupted (e.g. due to a network outage), according to my testing
can do the initial bulk copy locally on a fast network, then ship the disk(s) to the destination, import the btrfs pools and shares, and continue replicating
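For anyone curious, the script boils down to read-only snapshots plus btrfs send/receive over ssh. A rough sketch of the idea — pool, share and hostname are placeholders, the commands are printed rather than executed so you can review them, and the real script adds locking, rotation and the incremental parent bookkeeping:

```shell
# Placeholders -- substitute your own pool, share and destination
SRC=/mnt2/mypool/myshare
DST=root@backup-nas.example.com
DST_POOL=/mnt2/backuppool

# btrfs send requires a read-only snapshot as its source
SNAP="${SRC}_ro.$(date +%Y%m%d)"
echo "btrfs subvolume snapshot -r $SRC $SNAP"

# First run: full send. Later runs add "-p <previous_snapshot>" so only
# the delta crosses the wire.
echo "btrfs send $SNAP | ssh $DST 'btrfs receive $DST_POOL'"
```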