Fetching a remote file in Linux

Downloading a file to your computer, only to upload it to a server, is time-consuming, especially if it’s a large file. Why not just download it straight onto the server from the command line? It’s a lot easier than you might think.

Sometimes you need to download a file from a remote location. I can think of a couple of quick examples:

  1. Fetching a data feed published by an external source;
  2. Downloading a package (e.g. WordPress) to avoid downloading it to your computer and uploading it again from there.

In the first instance, being able to download the file from your server makes you a step closer to automating the process (e.g. if the feed is updated, say, once an hour). In the second instance, it’s much faster as it’s a one-step, rather than a two-step, process. And it’s really easy, using a package called wget, which supports downloads over HTTP, HTTPS and FTP:

wget -O /path/to/filename url

The -O switch controls where the resulting file will be saved, and the URL is the file to be fetched. If only a filename (no path) is entered, the file will be saved to the current directory set in the terminal. For example:

wget -O latest-wordpress.tar.gz https://wordpress.org/latest.tar.gz

This command would fetch the latest copy of WordPress from the WordPress servers and save it as latest-wordpress.tar.gz in the current directory.
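
For the data-feed scenario mentioned earlier, the same command drops neatly into a cron job. As a rough sketch (the URL and path here are just placeholders), a crontab entry like this would fetch the feed at the top of every hour:

0 * * * * wget -q -O /var/www/data/feed.xml https://example.com/feed.xml

The -q switch suppresses wget’s normal progress output, so cron doesn’t email it to you after every run.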

One thing to be aware of: the -O switch is case sensitive and must be upper case. A lower-case -o does something different (it writes wget’s log messages to a file rather than setting the output filename).

The wget command has a large array of other options, but for a simple case like fetching and saving a single file, this is all you need.
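
That said, a couple of other switches crop up often enough to be worth knowing (both are standard wget options; the URL is a placeholder):

wget -c https://example.com/large-file.iso
wget --limit-rate=500k https://example.com/large-file.iso

The -c switch resumes a partially completed download instead of starting again, and --limit-rate caps the transfer speed so a big download doesn’t saturate the connection.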

Truncating a file in Linux

Linux has a very simple command for when you need to keep a file but empty its contents. It’s particularly useful when developing or testing and you only need to look at data in real time.

When I’m working on websites, I often need to empty an existing file.

The most common case is error logs. When I’m working on something, I always keep a close eye on the error logs, and when I find a problem, I fix it. I then clear the error logs to see if the error reappears. I have no real need to archive or rotate the log; I just want it cleared so that the next time I open the file I can see anything new, without wading through what was there previously.

It’s very easy to do this using the truncate command:

truncate -s 0 /path/to/file

If the logged-in user doesn’t have permission to modify the file, you’ll need to run the command as the superuser:

sudo truncate -s 0 /path/to/file

The truncate command is used to shrink or expand a file to a specified size, and the -s 0 switch tells it to empty the file.
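
The -s switch also accepts non-zero sizes, with the usual suffixes (K, M, G and so on). As a quick illustration (the paths are placeholders):

truncate -s 1M /path/to/file
truncate -s 100M /path/to/testfile

The first command cuts the file down to exactly 1 MB, discarding anything beyond that point; the second expands (or creates) a 100 MB file, which is a handy way of generating a file of a known size for testing.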

It’s very useful for files like logs, where you don’t need to keep the data once you’ve looked at it.
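
As an aside, you can usually achieve the same thing with plain shell redirection, provided your user can write to the file:

> /path/to/file

Note that this doesn’t combine with sudo in the obvious way, because the redirection is performed by your own shell before sudo ever runs; for files you don’t own, stick with sudo truncate.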