Considering that their main tasks are archiving old media, storing backups of other machines, and facilitating file transfers, Network-Attached Storage systems may not seem all that complicated at first glance. However, most NAS distributions include tons of network share options, security features, and data protection settings to ensure your files remain in tip-top shape.
Snapshot replication fits in the last category, and despite sounding like a mouthful, it’s a great way to ensure your data remains recoverable when disaster strikes your storage server.
This NAS setting most people skip protects against silent data corruption
You should enable it the next time you log into your NAS
It essentially creates backups of your snapshots
When you’ve got a centralized storage server housing your essential files, you’ve got a single point of failure that, if compromised, can render all your data inaccessible in an instant. Besides dedicated backups, snapshots are a solid way to protect your data.
Since they’re essentially point-in-time images (or checkpoints, if you will) of your files, creating them doesn’t take long – and the same holds true for restoring them. They also don’t hog much space, making them far more efficient when storage real estate is tight. Thanks to their incremental nature, only the changes since your last snapshot are saved in each new one, so you can schedule them as often as you like.
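On TrueNAS, these mechanics boil down to a handful of ZFS commands that the web UI drives for you. A minimal sketch of the underlying operations, assuming a hypothetical pool and dataset named tank/documents:

```shell
# Create a point-in-time snapshot of a dataset (name is a placeholder):
zfs snapshot tank/documents@manual-2024-01-15

# List snapshots with the space each consumes; USED reflects only the
# blocks that changed since the previous snapshot, which is why
# frequent snapshots stay cheap:
zfs list -t snapshot -o name,used,creation tank/documents

# Roll the dataset back to a snapshot (discards changes made after it):
zfs rollback tank/documents@manual-2024-01-15
```

The Periodic Snapshot Tasks section of the TrueNAS UI schedules the first command for you; the rollback is what a restore performs.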
Replication tasks take this a step further by letting you create redundant copies of your snapshots. So, if things go wrong and your original snapshots become inaccessible, you can rely on these redundant copies to recover your files. The best part? You can use replication tasks to send snapshots from your local machine to a remote server, and that’s what makes them ideal for a 3-2-1 backup setup.
Replication tasks mesh well with a remote destination server
When you save copies of snapshots on your local machine, your data is still vulnerable to a bunch of problems. If the drive housing your datasets and snapshots kicks the bucket, your files will be lost forever. Likewise, floods, fires, theft, and other untoward incidents can rid you of your precious data, and recovery will be impossible when all you’ve got is a primary storage server with a bunch of local snapshots. A cheap offsite NAS can serve as a solid countermeasure to these problems.
Keeping a secondary server that houses only your most essential files is the perfect way to add more redundancy to your setup without spending a lot of money, while replication tasks synchronize the snapshots between the two nodes. If you’re on a ZFS-powered distro like TrueNAS, snapshot replication works at the block level, making it a lot faster than rsync and other file-based synchronization tools. Once the replication task finishes writing the original data as the first snapshot on the offsite server, subsequent runs transfer only the incremental changes. This makes replication tasks extremely quick, and it also saves a lot of bandwidth on your local and remote networks. As long as your snapshots remain on the remote NAS, you can use them to recover the entire dataset even if the primary node breaks down.
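Under the hood, this is ZFS’s send/receive pipeline running over SSH. A rough sketch of the equivalent manual commands, with hypothetical dataset names (tank/documents locally, backup/documents remotely) and a placeholder admin@backup-nas host:

```shell
# First run: send the initial snapshot in full to the remote NAS.
zfs send tank/documents@weekly-1 | \
    ssh admin@backup-nas zfs receive backup/documents

# Every run after that: send only the blocks that changed between
# two snapshots (-i = incremental), which is why replication stays
# fast and bandwidth-friendly.
zfs send -i @weekly-1 tank/documents@weekly-2 | \
    ssh admin@backup-nas zfs receive backup/documents
```

TrueNAS replication tasks automate exactly this handoff, including picking the right pair of snapshots for each incremental run.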
TrueNAS makes snapshot replication fairly straightforward
You can even automate the replication tasks
With the theory part out of the way, it’s time to go over the actual setup process for replication tasks. I’m a TrueNAS guy through and through, and since both my local and remote NAS run this distro, configuring snapshot replication was a piece of cake. Synology also supports this feature, but you’ll need a Synology NAS running DSM at the remote end if you go that route. Considering Synology’s track record, though, I’d rather stick to my TrueNAS nodes – but that’s a topic for another time.
On TrueNAS, the Data Protection tab houses both the Periodic Snapshot Tasks and Replication Tasks sections. Clicking the Add button within the latter opens the Replication Task Wizard, where you’ll have to choose the Source and Destination nodes. For my setup, I’ve chosen my local NAS as the Source node, while my remote backup server acts as the Destination Location. If you haven’t already paired them, you’ll need to create a new SSH connection on the local system using the TrueNAS URL and admin credentials of the remote NAS.
Once that’s done, you can choose the datasets (and snapshot directories) you want to sync with the remote node, as well as the method of transferring them. I rely on the pull method to initiate snapshot requests from my remote TrueNAS server, as it’s more secure. You can also pick a frequency for your snapshot replication tasks or go with a one-time execution. Personally, I recommend a weekly (or bi-weekly) schedule, and it’s a good idea to adjust the snapshot lifetime to avoid the performance and storage issues caused by overloading your nodes with thousands of old snapshots. When you’re done, you can run the replication manually to confirm that everything works as expected.
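The snapshot-lifetime setting prunes old snapshots for you; for a sense of what that amounts to, here’s a hedged sketch of the equivalent manual cleanup, keeping only the newest 14 snapshots of a hypothetical tank/documents dataset with an assumed "auto-" naming prefix (the `head -n -14` form requires GNU coreutils):

```shell
# List snapshots oldest-first, keep only auto-generated ones,
# drop the newest 14 from the list, and destroy the rest.
zfs list -H -t snapshot -o name -s creation tank/documents \
    | grep '@auto-' \
    | head -n -14 \
    | xargs -r -n1 zfs destroy
```

Letting TrueNAS handle this via the task’s lifetime field is safer, since it won’t prune a snapshot that a replication task still needs as an incremental base.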
Tailscale can make this setup even more painless
Since you’ve got two NAS units sitting on entirely different networks, you can’t rely on their local IP addresses to connect them. And since a NAS houses essential files, I can’t recommend exposing it to the Internet, either. Self-hosting a local VPN can help your replication tasks run securely, but if your network is afflicted with the curse of CGNAT, Tailscale is the next best thing. TrueNAS includes a Tailscale container in the App Store, so you don’t have to scramble around trying to configure it within VMs, either.
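Once both nodes are on the same tailnet, the SSH connection behind the replication task can simply target the remote node’s Tailscale address instead of a public IP or port forward. A quick sanity check, with placeholder machine and tailnet names:

```shell
# Confirm the remote NAS is reachable over the tailnet:
tailscale ping backup-nas

# With MagicDNS enabled, the replication task's SSH connection can use
# the tailnet hostname directly (name below is a placeholder):
ssh admin@backup-nas.your-tailnet.ts.net zfs list -t snapshot
```

Because Tailscale punches through CGNAT and encrypts the tunnel end to end, neither NAS ever needs an open port on the public Internet.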
