Smart ways to transfer massive files between servers

With the help of pipes, we can copy files between servers very easily.

What we need to do is establish a connection between the servers, and we usually create such a connection with OpenSSH. Once the connection is established, transferring files between servers is much like a local copy.
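
Before anything else, make sure you can actually reach the remote machine. Here is a quick sanity check (PORT and the host name are just placeholders, as in the examples below); setting up key-based authentication first will also spare you from typing a password on every transfer:

ssh -p PORT user@remote.admon.org echo connected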

I’ll give you some examples; they should help if you ever need to copy a massive number of small files between servers.

Copy a large number of small files (without compression)

tar cf - --exclude='logs/*' * | ssh -p PORT user@remote.admon.org "cd /node1 && tar xvf -"
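
The same pipeline works in the opposite direction if you need to pull files from the remote server instead of pushing them; here is a sketch using the same placeholders:

ssh -p PORT user@remote.admon.org "cd /node1 && tar cf - ." | tar xvf -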

Copy a large number of small files (with compression)

tar zcf - --exclude='logs/*' * | ssh -p PORT user@remote.admon.org "cd /node1/ && tar zxvf -"
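
As an alternative to compressing inside tar, ssh itself can compress the whole stream with its -C flag, which keeps the remote command simpler; a sketch with the same placeholders:

tar cf - --exclude='logs/*' * | ssh -C -p PORT user@remote.admon.org "cd /node1/ && tar xf -"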

Dump a PostgreSQL database and copy the backup to a remote server on the fly:

pg_dump -U USER DATABASE-NAME | ssh -p PORT user@remote.admon.org "dd of=/backup/pgsql-$(date +'%d-%m-%y')"
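
If you want the backup compressed on the remote disk, you can insert gzip into the pipeline; note that the $(date ...) part is expanded by the local shell before ssh runs, so the file is named with the local date:

pg_dump -U USER DATABASE-NAME | gzip | ssh -p PORT user@remote.admon.org "dd of=/backup/pgsql-$(date +'%d-%m-%y').gz"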

Dump a MySQL database and copy the backup to a remote server on the fly:

mysqldump -u USER -p'PASSWORD' DATABASE-NAME | ssh -p PORT user@remote.admon.org "dd of=/backup/mysql-$(date +'%d-%m-%y')"
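
The same idea can feed a dump straight into a database on the remote server instead of writing a file; this sketch assumes the target database already exists on the remote side:

mysqldump -u USER -p'PASSWORD' DATABASE-NAME | ssh -p PORT user@remote.admon.org "mysql -u USER -p'PASSWORD' DATABASE-NAME"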
