Setting a static IP address in Ubuntu Server

Networking takes care of itself in Windows, and static IPs aren’t very hard. But how do you set a static IP address in Ubuntu Server?

I’ve used Linux regularly for more than ten years. For the majority of that time, I’ve used Ubuntu, although I’ve also dabbled in Fedora (without much success) and CentOS (with marginally more).

I’m generally comfortable with Linux and when I’m not, I usually know where to look. However, if there’s one thing I always seem to have issues with on Linux, it’s networking.

When I first used Ubuntu, it was often a struggle to get any usable networking. After a while, it was just wireless (mainly WPA) which caused an issue. Latterly, the main issue I’ve encountered is networking with virtual machines (and most of those issues have been with VirtualBox – VMware doesn’t seem to encounter the same issues).

If you’re working with a server, you’ll want to set a static IP address. Once it’s working, a static address makes things much easier than a dynamic one.

If you use Ubuntu Desktop, you can edit these settings very easily from your control panel, but on Ubuntu Server you’ll need to use the command line.

In Ubuntu, your network settings are stored in /etc/network/interfaces. To edit the file, enter sudo nano /etc/network/interfaces in a command line (you’ll need root privileges to save your changes; if you prefer a different text editor, like vi, use that instead of nano). This will open the file in a text editor. A sample file might look like this (comments removed):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

So, what does that mean? Well, there are two interfaces. The first is the loopback interface. Leave that be – if you mess with it, things will break. Ubuntu uses that to communicate with itself.

The second is the eth0 interface. This is your network card. You may have more than one; if you have a second, it might be labelled eth1. Your network card may have a completely different designation – on my VMware virtual machine, it’s labelled ens33.
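
If you’re not sure what your machine’s interfaces are called, you can list them before you edit anything. Either of these should do it (ifconfig -a on older releases, ip link show on newer ones):

ifconfig -a
ip link show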

The second line in each of those blocks defines how the interface is configured. The first lines (auto lo and auto eth0) tell Ubuntu to bring those interfaces up on boot. If eth0 wasn’t started on boot, the machine wouldn’t have any networking outwith itself, which would cause you problems!

In the example above, and by default, Ubuntu uses DHCP, so the machine will allow your DHCP server (usually your router) to give it an IP address and any other settings along with it.

So, how do you set a static IP address? Well, it goes something like this:

auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
gateway 192.168.1.254
dns-nameservers 8.8.8.8 8.8.4.4

Line-by-line, that configuration does the following:

  • Sets Ubuntu to configure the eth0 interface on boot.
  • Sets the interface as a static interface (rather than DHCP).
  • Gives the interface an address (in this case, 192.168.1.10).
  • Sets a 24-bit subnet mask (so usable addresses on the network run from 192.168.1.1 to 192.168.1.254), and sets the network address itself in the line underneath.
  • Sets the gateway (router) address. The address displayed is the one used by BT routers – others may use 192.168.0.1.
  • Sets the DNS nameservers to use. I’ve used Google nameservers in this example. You only need one for things to work, but it’s better to have at least two. You can mix them too – for better redundancy, you could use one Google nameserver and one OpenDNS nameserver.

A few quick notes:

  • Once set, you need to reboot the server for the changes to take full effect. Stopping and starting the networking interface won’t cut it.
  • If you forget to add the auto eth0 line and networking doesn’t automatically come up on boot, run sudo ifup eth0. That will manually bring up the interface. Remember to fix your configuration afterwards!
  • If you don’t set your DNS nameservers, you’ll still have network access, but you won’t be able to resolve domain names. In other words, pinging an IP address would work, but pinging an FQDN would not. You’ll eventually notice if this doesn’t work, because things like updates will fail.
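
Once the server is back up, it’s worth checking the new settings have actually taken effect. A quick check might look something like this (assuming the interface is eth0):

# confirm the interface has the static address
ip addr show eth0
# confirm you can reach the outside world by IP
ping -c 3 8.8.8.8
# confirm DNS resolution works
ping -c 3 google.com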

This should be everything you need to know to set up a static IP address in Ubuntu.

Extracting archive files in Linux

So, you know how to fetch a remote file in Linux, but what do you do if you fetch an archive? How do you extract the files and place them in a directory to work with them?

I previously wrote a how-to guide to fetching a remote file in Linux. That’s great, but once you’ve got the file, sometimes you’ll need to do some more work with it before you can use it.

This is especially true with archive files. Archives generally come in one of two forms:

  1. ZIP files – more commonly used on Windows platforms;
  2. TAR files – more commonly used on Linux and UNIX platforms.

Platforms such as GitHub and WordPress often offer both ZIP and TAR formats for their downloads.

Once you have the archive file, you need to extract it into a directory. Assuming your file is called wordpress.tar.gz, and you want to extract it to an existing directory called my-site in the same directory, you can use the following command:

tar -xvf wordpress.tar.gz -C my-site/

The -x switch tells tar to extract the files (as you can also use tar to create compressed archives). The -v switch puts tar into verbose mode, so it prints everything it’s doing to the command line, and the -f switch is used in conjunction with the filename to set the file to extract the files from. The -C switch then tells tar to place the extracted files in the directory named at the end of the command.
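
Going the other way – creating a compressed archive rather than extracting one – looks much the same. A quick sketch (the names are just examples):

# create a gzip-compressed archive of the my-site directory
tar -czvf my-site-backup.tar.gz my-site/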

However, in most cases this will extract the files, but leave the extracted files in a subdirectory. So, in the case of a WordPress archive, the files would be located in my-site/wordpress/ – what if you don’t want the files extracted to a subdirectory?

Not a problem. You can use --strip-components=1 at the end of the command:

tar -xvf wordpress.tar.gz -C my-site/ --strip-components=1

This strips out the top-level directory and leaves the files directly in the my-site folder.
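
For ZIP files, the equivalent tool is unzip (on Ubuntu you may need to install it first with sudo apt-get install unzip). Assuming the same sort of layout as above:

unzip wordpress.zip -d my-site/

Note there’s no direct equivalent of --strip-components for unzip, so if the archive has a top-level directory you may still need to move the files up a level afterwards.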

Automating Linux website creation

I create a lot of test websites. Sometimes it’s for testing new things, other times it’s for testing upgrades. I’ve always found it’s a hassle to set a new website up, so I’ve tried to automate it.

Back in the days of shared hosting, setting up a new website was easy. It was as simple as:

  1. Set your nameservers to those of your web host;
  2. Log in to their system (usually cPanel) and click something like “New domain” or “New subdomain”.

Since I moved to using a VPS, it’s not as easy as that any more. Firstly, I use CloudFlare, so I use their nameservers. Secondly, I don’t run cPanel or Plesk (which is similar). There are two reasons I don’t use them:

  1. They make hosting more expensive, as they require licences;
  2. They further increase cost as they require much more powerful servers to run.

Currently I can run my VPS for about $5 per month. With cPanel or Plesk, it would cost about $50 per month. So, if I can do without, it’s a big saving.

Setting up a new website

There are a few things required in setting up a new website (I won’t cover setting up a VPS – that’s another series I’m working on):

  1. You need a user to own the files;
  2. You need to create the required directory structure and set permissions on it;
  3. You need to create a MySQL user and password;
  4. You need to create a MySQL database and control permissions to it;
  5. You need to set up the web server to make the address reachable.

I’ll assume the DNS is set elsewhere, or using a hosts file. I use Nginx as my web server.
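
Done by hand, those steps look roughly like this. This is only a sketch – the domain, username, database name and password below are placeholders, and the paths match my setup (described later):

# 1. a user to own the files
sudo adduser example-user

# 2. the directory structure, owned by that user
sudo mkdir -p /usr/share/nginx/example.com/{public,private,logs,backups}
sudo chown -R example-user:example-user /usr/share/nginx/example.com

# 3 and 4. a MySQL user, a database, and permissions on it
mysql -u root -p -e "CREATE DATABASE example_db;
  CREATE USER 'example-user'@'localhost' IDENTIFIED BY 'changeme';
  GRANT ALL PRIVILEGES ON example_db.* TO 'example-user'@'localhost';
  FLUSH PRIVILEGES;"

# 5. the web server configuration – sketched further down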

That’s quite a few things to set up, and it would be easy to miss something or get a step wrong. Even if you get it right, it will take a few minutes. The longest part is usually setting up the web server – especially with Nginx, where the configuration is set centrally and isn’t easily overridden outside of that.

One day I got bored of doing it all manually, so I wrote a Bash script to take care of it. It’s not perfect, but it does a decent job. I decided I’d make it available via GitHub.

Currently, I make a folder in /opt/ and call it something like website-creator. I can run it as follows:

sudo bash /opt/website-creator/create-website.sh

When run, it does the following:

  • Prompts for the fully qualified domain name (FQDN) which will be used to access the website when it’s live;
  • Checks that a folder for that FQDN doesn’t already exist (in a pre-determined parent folder);
  • Prompts for the Linux user who should own the folder for that website;
  • Checks to see if the user exists, and creates the user if it doesn’t;
  • Prompts for MySQL admin credentials;
  • Once authenticated, prompts for a database name;
  • Creates the database;
  • Checks for a MySQL user with the same name as the Linux user which should own the files, and creates a user with that username if it does not exist;
  • Gives that user access to the database which has been created;
  • Creates the folder structure (I use public, private, logs and backups);
  • Copies index.html and 404.html files to the web root for that site;
  • If necessary, creates a file with the MySQL password for the newly-created user and places it in the private directory, with read-access to the owner only;
  • Changes the owner of the website folder structure to the Linux user previously specified (including the file with the MySQL password);
  • Copies a template Nginx configuration to the sites-available directory;
  • Runs a search and replace on that file to insert the domain name and root directory;
  • Activates the site (placing a symbolic link in sites-enabled);
  • Reloads the Nginx configuration;
  • If necessary, prints to the screen the password of the new Linux user.
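
The web server part of that – the search and replace, the symbolic link and the reload – boils down to something like this. Again, a sketch only: the template filename and the placeholder tokens are assumptions, and the script in the repository may do it slightly differently:

sudo cp /etc/nginx/sites-available/template /etc/nginx/sites-available/example.com
# swap the placeholder tokens for the real domain name and web root
sudo sed -i "s/SERVER_NAME/example.com/g; s|WEB_ROOT|/usr/share/nginx/example.com/public|g" /etc/nginx/sites-available/example.com
# activate the site and reload Nginx
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
sudo nginx -t && sudo service nginx reload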

It’s not perfect, but it does mean I can set up a working website in five seconds, rather than five minutes. There are, of course, some things that may need to be changed once it’s done, like modifications to the site’s Nginx configuration, but without changing anything, it will happily serve static and PHP content.

There’s also a file where runtime variables can be stored. At the moment, it has two:

  • WEBROOTFOLDER – this is the folder the website directories will be created in (I use /usr/share/nginx/);
  • NGINXCONFIG – this is where the Nginx configuration is stored (in Ubuntu, that’s /etc/nginx/, but it may be different on other systems).
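
In other words, the variables file is nothing more exotic than something along these lines (the exact contents in the repository may differ):

WEBROOTFOLDER="/usr/share/nginx/"
NGINXCONFIG="/etc/nginx/"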

It makes my life a lot easier. Maybe it will help a few other people too.

How to screw up your Ubuntu server

Ever wondered how people learn to solve things? Well, sometimes you just have to break things first. Pro tip: just make sure you don’t break something important. Lucky for you – and me – I remembered that bit.

I know, I know, this isn’t really what you’re meant to do. The idea of a guide is to do something useful, to solve a problem, to help people do something quicker, or better… but sometimes, it doesn’t work like that. Sometimes, things happen that give you problems.

Well, I managed that today, so I thought I’d share it. It was – strictly speaking – a work thing, but it was on a test server that had nothing on it and already had a slight problem. I had a feeling it might not go to plan, but I thought I’d try it anyway.

We have an Ubuntu virtual machine which is used for occasional testing for two websites we look after internally, and it turned out the virtual machine’s network connection had stopped working because of an issue on the host. Thus, the server was suddenly not a whole lot of use.

The easiest thing to do, given there was next-to-nothing on the server, would be to boot from the Ubuntu 16.04 ISO (the server was there to test PHP 7 on 16.04) and start again. It would take no more than 20 minutes to get the server back to where it was. However, the alternative was to remove the virtual network card from the virtual machine and assign it a new one. Well, with nothing to lose, it was worth trying, out of curiosity, to see what would happen…

Ubuntu kernel panic

It turns out it caused a kernel panic and the machine wouldn’t finish booting. It would definitely be quicker to reinstall, but this is essentially what would happen on a real box if the NIC failed and you had to install a new one.

I’m currently debating how curious I am, and if it’s worth seeing how easy it is to resolve an issue like that. I guess it would be nice to know I’d be able to fix it, but I’m currently tempted to take the lazy route.

Fetching a remote file in Linux

Downloading a file to your computer, only to upload it to a server, is time consuming, especially if it’s a large file. Why not just download it from the server using the command line? It’s a lot easier than you might think.

Sometimes you need to download a file from a remote location. I can think of a couple of quick examples:

  1. Fetching a data feed published by an external source;
  2. Downloading a package (e.g. WordPress) to avoid downloading it to your computer and uploading it again from there.

In the first instance, being able to download the file from your server makes you a step closer to automating the process (e.g. if the feed is updated, say, once an hour). In the second instance, it’s much faster as it’s a one-step, rather than a two-step, process. And it’s really easy, using a package called wget, which supports downloads over HTTP, HTTPS and FTP:

wget -O /path/to/filename url

The -O switch controls where the resulting file will be saved, and the URL is the file to be fetched. If only a filename (no path) is entered, the file will be saved to the current directory set in the terminal. For example:

wget -O latest-wordpress.tar.gz https://wordpress.org/latest.tar.gz

This command would fetch the latest copy of WordPress from the WordPress servers and save it at latest-wordpress.tar.gz in the current directory.

One thing to be aware of: the switch (-O) is case sensitive. It must be upper case.

The wget command has a large array of other options at runtime, but for simple cases like fetching and saving a single file, this is all you need.
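
And for the data-feed example mentioned earlier, you could take the automation a step further and schedule the fetch with cron (run crontab -e to edit your crontab). This is only a sketch – the URL, path and schedule are placeholders:

# fetch the feed at the top of every hour, quietly
0 * * * * wget -q -O /var/data/feed.xml https://example.com/feed.xml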

Truncating a file in Linux

Linux has a very useful and simple command for when you need to keep a file but empty its contents – handy when you’re developing or testing and only need to look at data in real time.

When I’m working on websites, I often have a need to empty an existing file.

The most common need for this is when looking at error logs. When I’m working on something, I always keep a close eye on the error logs and when I find a problem, I fix it. I then clear the error logs to see if the error reappears. I have no real need to archive or rotate the log. I just want it cleared so next time I open the file, I can see anything that’s new without the need to wade through what was there previously.

It’s very easy to do this, using the truncate command:

truncate -s 0 /path/to/file

If the logged-in user doesn’t have permission to modify the file, you’ll need to run the command as the super user:

sudo truncate -s 0 /path/to/file

The truncate command is used to shrink or expand a file to a specified size, and the -s 0 switch tells it to empty the file.
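
For example, to clear an Nginx error log in its default Ubuntu location, it would be something like:

sudo truncate -s 0 /var/log/nginx/error.log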

Very useful for situations where you have files like logs where you don’t need to keep the data once you’ve looked at it.