Fetching a remote file in Linux

Downloading a file to your computer, only to upload it to a server, is time-consuming, especially if it’s a large file. Why not fetch it directly on the server, using the command line? It’s a lot easier than you might think.

Sometimes you need to download a file from a remote location. I can think of a couple of quick examples:

  1. Fetching a data feed published by an external source;
  2. Downloading a package (e.g. WordPress) straight to the server, rather than downloading it to your computer and uploading it again from there.

In the first instance, being able to download the file from your server brings you a step closer to automating the process (e.g. if the feed is updated, say, once an hour). In the second instance, it’s much faster, as it’s a one-step rather than a two-step process. And it’s really easy, using a package called wget, which supports downloads over HTTP, HTTPS and FTP:

wget -O /path/to/filename url

The -O switch controls where the resulting file will be saved, and the URL is the file to be fetched. If only a filename (no path) is given, the file will be saved to the terminal’s current working directory. For example:

wget -O latest-wordpress.tar.gz https://wordpress.org/latest.tar.gz

This command would fetch the latest copy of WordPress from the WordPress servers and save it at latest-wordpress.tar.gz in the current directory.
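Having fetched the archive, you would typically unpack it on the server as well. A quick sketch of both steps together (assuming tar is available, as it is on virtually every Linux system):

```shell
# Fetch the latest WordPress release and save it under a descriptive name
wget -O latest-wordpress.tar.gz https://wordpress.org/latest.tar.gz

# Unpack the archive; WordPress extracts into a directory named "wordpress"
tar -xzf latest-wordpress.tar.gz
```

This keeps the whole job on the server: nothing ever touches your own machine.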

One thing to be aware of: the switch (-O) is case sensitive and must be upper case. The lower-case -o does something entirely different: it writes wget’s log output to the named file.

The wget command supports a large array of other options, but for simple cases like fetching and saving a single file, this is all you need.
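For the data-feed use case mentioned earlier, the same command dropped into a cron entry gets you the hourly automation. This is just a sketch; the feed URL and destination path below are placeholders:

```shell
# Example crontab entry (added via `crontab -e`): fetch the feed at the top
# of every hour. -q suppresses wget's progress output, which would otherwise
# be mailed to you by cron. URL and path are placeholders.
0 * * * * wget -q -O /var/data/feed.xml https://example.com/feed.xml
```

If the feed publisher updates on a different schedule, adjust the cron expression to match rather than polling more often than needed.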
