When you have to download a large file over a bad network connection, it is very useful to enable transfer resume. Most browsers don't support resuming transfers, though some may offer it through plugins.
Here our examples are all command-line based.
Enable transfer resume for wget
wget has a parameter for this; from its man page:
-c
--continue
    Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:

        wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
So you just need to specify the -c option to enable transfer resume.
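The mechanism behind -c is simple: wget asks the server for the bytes starting at the size of the partial file (an HTTP Range request) and appends them. A toy simulation of that logic with local files (all file names here are hypothetical):

```shell
# Simulate a resumed download using only local files.
printf 'HelloWorld' > remote_file        # the full file on the server
printf 'Hello'      > local_file         # a partial download (5 of 10 bytes)
offset=$(( $(wc -c < local_file) ))      # bytes we already have on disk
# wget -c effectively requests "Range: bytes=$offset-" and appends the result:
tail -c +$(( offset + 1 )) remote_file >> local_file
cat local_file                           # the file is now complete
```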
Enable transfer resume for curl
As with wget, we can find this in its manual page:
-C/--continue-at Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that will be skipped, counting from the beginning of the source file before it is transferred to the destination. If used with uploads, the FTP server command SIZE will not be used by curl. Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.
$ curl -C - -O http://remote.server/large.file
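Besides "-C -", curl also accepts an explicit byte offset. A sketch that derives the offset from the partial file's size and prints the resulting command (the file name and URL are hypothetical; the echo just shows what you would run):

```shell
# Compute the resume offset from the partial file already on disk.
printf '12345' > large.file                  # pretend 5 bytes were already fetched
offset=$(( $(wc -c < large.file) ))          # exact number of bytes to skip
echo "curl -C $offset -O http://remote.server/large.file"
```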
Enable transfer resume for SCP/Rsync
When you are copying a big file with scp and the transfer fails partway (you see "stalled"), you may already have 80% of the file on disk and don't want to start from the beginning. scp itself cannot resume a transfer, but rsync can pick up where scp (or a previous rsync) left off:
rsync --partial --progress --rsh=ssh user@host:/path/remote_file local_file