Using rsync over ssh as an efficient backup tool on Linux
March 3, 2011. Posted by Tournas Dimitrios in Linux.
Although it is possible to use gzip and FTP to make a local copy of a remote directory, that approach has a couple of drawbacks. The data is transferred unencrypted, and we will most likely retransfer files that we already copied the day before. We could of course use scp to transfer the data over an encrypted SSH channel, but we would still transfer duplicate data. To stop transferring duplicate data we can use rsync. Combining rsync with ssh, compression, bash, and cron gives us an efficient backup tool.
Read also : 5 Free Linux Backup Solutions
We need to set up public-key-based (passwordless) SSH authentication, so that we can avoid entering a password when running our backup. This way the whole backup process is completely automatic. Read my earlier article “SSH Public Key Based Authentication – Howto”.
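The key-based login from the linked article boils down to two commands, run on the local box (the hostname and user here are placeholders; adapt them to your setup):

```shell
# Generate an RSA key pair on the local box. Accepting an empty passphrase
# allows fully unattended backups -- weigh that against your security policy.
ssh-keygen -t rsa

# Append the public key to ~/.ssh/authorized_keys on the remote machine
ssh-copy-id user@remote_server
```

After this, `ssh user@remote_server` should log in without prompting for a password.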
Three packages are needed for this scenario:
- openssh-server (on the remote machine)
- openssh-client (on the local box)
- rsync (on the local box)
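On Debian/Ubuntu-style systems these can be installed as follows (package names may differ slightly on other distributions):

```shell
# On the remote machine
sudo apt-get install openssh-server

# On the local box
sudo apt-get install openssh-client rsync
```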
Let’s create the bash script with the name “remote-backup.sh”:
#!/bin/bash
# First create the database backup on the remote machine
/usr/bin/ssh user@remote_server "mysqldump --password='pass' \
    mydatabase > ~/public_dir/mydatabase.sql"
# Use rsync to transfer the database dump and the files from public_dir
/usr/bin/rsync -zave ssh --delete user@remote_server:~/public_dir /backup/
First, the script remotely executes the mysqldump command over ssh to make a database backup and store it in a public directory. Next, the script creates a local copy of the remote ~/public_dir directory and stores it under /backup. The --delete option deletes from the local directory any files that no longer exist in the remote source directory, keeping both directories in complete sync. rsync’s -z option compresses the data during transfer, and -e ssh runs the transfer over the encrypted SSH channel.
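Because rsync only sends changed data, the script is cheap enough to run from cron every night. A sample crontab entry (the script path, log path, and schedule are just examples) that runs the backup at 02:30:

```shell
# Edit the crontab with: crontab -e
# m  h  dom mon dow  command
30 2 * * * /home/user/remote-backup.sh >> /home/user/remote-backup.log 2>&1
```

Redirecting stdout and stderr to a log file makes it easy to check later whether the nightly run succeeded.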
One step before we test our new backup script: chmod 700 remote-backup.sh
Test the script with: ./remote-backup.sh
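Since --delete removes local files, it is worth previewing what rsync would do before the first real run. The --dry-run (-n) option lists the planned transfers and deletions without touching anything (same host and paths as in the script above):

```shell
# Preview only: nothing is copied or deleted
/usr/bin/rsync -zav --dry-run -e ssh --delete user@remote_server:~/public_dir /backup/
```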