Setting a static IP address in Ubuntu Server

Networking takes care of itself in Windows, and static IPs aren’t very hard. But how do you set a static IP address in Ubuntu Server?

I’ve used Linux regularly for more than ten years. For the majority of that time, I’ve used Ubuntu, although I’ve also dabbled in Fedora (without much success) and CentOS (with marginally more).

I’m generally comfortable with Linux and when I’m not, I usually know where to look. However, if there’s one thing I always seem to have issues with on Linux, it’s networking.

When I first used Ubuntu, it was often a struggle to get any usable networking. After a while, it was just wireless (mainly WPA) which caused an issue. Latterly, the main issue I’ve encountered is networking with virtual machines (and most of those issues have been with VirtualBox – VMware doesn’t seem to encounter the same issues).

If you’re working with a server, you’ll want to set a static IP address. Once working, this makes things much easier than using a dynamic address.

If you use Ubuntu Desktop, you can edit these settings very easily from your control panel, but on Ubuntu Server you’ll need to use the command line.

In Ubuntu, your network settings are stored in /etc/network/interfaces. To edit the file, enter sudo nano /etc/network/interfaces at the command line (or, if you prefer a different text editor, like vi, use that instead of nano). This will open the file in a text editor. A sample file might look like this (comments removed):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

So, what does that mean? Well, there are two interfaces. The first is the loopback interface. Leave that be – if you mess with it, things will break. Ubuntu uses that to communicate with itself.

The second is the eth0 interface. This is your network card. You may have more than one; if you have a second, it might be labelled eth1. Your network card may have a completely different designation – on my VMware virtual machine, it’s labelled ens33.
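
If you’re not sure what your machine calls its interfaces, you can list them first – either of these should work on a stock Ubuntu Server install:

ip link show     # lists every interface and its state
ifconfig -a      # the older tool, if net-tools is installed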

The second line in each of those blocks (the iface line) defines how the interface is configured. The first lines (auto lo and auto eth0) tell Ubuntu to bring those interfaces up on boot. If eth0 wasn’t started on boot, the machine wouldn’t have any networking outwith itself, which would cause you problems!

In the example above, and by default, Ubuntu uses DHCP, so the instance will allow your DHCP server (usually your router) to give the machine an IP address and set any other settings with it.

So, how do you set a static IP address? Well, it goes something like this:

auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
gateway 192.168.1.254
dns-nameservers 8.8.8.8 8.8.4.4

Line-by-line, that configuration does the following:

  • Sets Ubuntu to configure the eth0 interface on boot.
  • Sets the interface as a static interface (rather than DHCP).
  • Gives the interface an address (in this case, 192.168.1.10).
  • Sets a 24-bit subnet mask (255.255.255.0), so the usable host addresses run from 192.168.1.1 to 192.168.1.254, and sets the network address (192.168.1.0) on the line underneath.
  • Sets the gateway (router) address. The address shown is the default used by BT routers – other routers commonly use 192.168.1.1 or 192.168.0.1, so adjust the address, network and gateway lines to suit your own network.
  • Sets the DNS nameservers to use. I’ve used Google nameservers in this example. You only need one for things to work, but it’s better to have at least two. You can mix them too – for better redundancy, you could use one Google nameserver and one OpenDNS nameserver.

A few quick notes:

  • Once set, you need to reboot the server for the changes to take full effect. Stopping and starting the networking interface won’t cut it.
  • If you forget to add the auto eth0 line and networking doesn’t automatically come up on boot, run sudo ifup eth0. That will bring the interface up manually. Remember to fix your configuration afterwards!
  • If you don’t set your DNS nameservers, you’ll still have network access, but you won’t be able to resolve domain names – pinging an IP address will work, but pinging an FQDN won’t (the commands after this list show a quick way to check). You’ll eventually notice if this is broken, because things like updates will fail.
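
Once the server is back up, a few quick checks will confirm that the address, the gateway and DNS resolution are all behaving. The interface name and addresses below are just the ones from the example configuration, so swap in your own:

ip addr show eth0          # should show 192.168.1.10 on the interface
ping -c 3 192.168.1.254    # can we reach the gateway?
ping -c 3 8.8.8.8          # can we reach the wider internet by IP?
ping -c 3 www.google.com   # does DNS resolution work?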

This should be everything you need to know to set up a static IP address in Ubuntu.

WordPress updates with SSH2 in Ubuntu 16.04

I’ve had a few issues upgrading WordPress recently. I thought I was going mad, but it turned out to be a really annoying bug.

Recently, I upgraded my virtual private server (VPS) to the latest Ubuntu release (Ubuntu 16.04). I did this for two reasons.

First, it’s good to be on a recent release, and Ubuntu 16.04 is a long term support (LTS) release, meaning it’s supported for longer than other Ubuntu releases, and aimed more at stability for things like servers, rather than all-new features which might be a bit sharp around the edges.

Second, Ubuntu 16.04 came with PHP 7 out of the box, which provides drastic performance improvements – very useful on a VPS, where CPU resources can be a little constrained.

It also gave me a chance to burn the old install to the ground: over the past three years or so I’d tested a lot of things out on it, and I wanted everything to be nice and uniform, putting into practice some of the things I’d come up with along the way, like my automated website creation script.

Anyway, on my Ubuntu 14.04 install, which used PHP 5.6, I had automatic updates set up on my WordPress installs. They worked nicely and provided good security, as I used SSH2 for the updates, meaning the files and folders the WordPress install lived in were not modifiable by the web server itself.

I host multiple websites on my server, and to enhance security, each site is owned by a different local user, and each database has its own user, so if one site is compromised, it’s harder to compromise the rest.

The web server shouldn’t really have write access to local files, but it needs write access to update WordPress automatically. By using SSH2 for WordPress updates, the web server can get the access it requires without having direct rights. It works well.

The problem is, it’s broken in PHP7. No matter what I did, I could not get updates to work. I came across various error messages, and after a lot of hunting around and double-checking, I was sure it wasn’t because I was doing anything wrong.

And it turns out I was right to be sure. A problem with the php-ssh2 package breaks these updates for WordPress. If you have this problem, you’re probably not going mad. Fortunately, I can offer a solution: the SSH SFTP Updater Support plugin.

This plugin uses a different library, and I found that once I’d uploaded it manually and activated it, my updates worked perfectly, first time (because my settings were correct, obviously!).

Once I’d fixed this, I decided to see if I could find any more information about this package, so I had a look at the information attached to the package in my installation:

[Package details for php-ssh2, as reported by APT – the description notes that it is built from an unreleased git snapshot.]
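
If you want to look the package up yourself, the standard APT query will show the same details – there’s nothing specific to my setup here:

apt-cache show php-ssh2    # prints the package version and description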

So this is an unreleased git snapshot and should be used with caution? Doesn’t seem like the kind of package that should appear in a long term support release…

Automating Linux website creation

I create a lot of test websites. Sometimes it’s for testing new things, other times it’s for testing upgrades. I’ve always found it’s a hassle to set a new website up, so I’ve tried to automate it.

Back in the days of shared hosting, setting up a new website was easy. It was as simple as:

  1. Set your nameservers to those of your web host;
  2. Log in to their system (usually cPanel) and click something like “New domain” or “New subdomain”.

Since I moved to using a VPS, it’s not as easy as that any more. Firstly, I use CloudFlare, so I use their nameservers. Secondly, I don’t run cPanel, or Plesk (which is similar). There are two reasons I don’t use them:

  1. They make hosting more expensive, as they require licences;
  2. They further increase cost as they require much more powerful servers to run.

Currently I can run my VPS for about $5 per month. If I used cPanel or Plesk, they would cost about $50 per month. So, if I can do without, it’s a big saving.

Setting up a new website

There are a few things required in setting up a new website (I won’t cover setting up a VPS – that’s another series I’m working on):

  1. You need a user to own the files;
  2. You need to create the required directory structure and set permissions on it;
  3. You need to create a MySQL user and password;
  4. You need to create a MySQL database and control permissions to it;
  5. You need to set up the web server to make the address reachable.

I’ll assume the DNS is set up elsewhere, or that you’re using a hosts file. I use Nginx as my web server.
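
For reference, doing those steps by hand looks roughly like this. The site name, user and database below are placeholders, and the paths match the defaults mentioned later in this post, so adjust them to suit:

# 1. Create a user to own the files
sudo adduser examplesite

# 2. Create the directory structure and hand it to that user
sudo mkdir -p /usr/share/nginx/example.com/{public,private,logs,backups}
sudo chown -R examplesite:examplesite /usr/share/nginx/example.com

# 3 and 4. Create a MySQL database and user, and grant access
mysql -u root -p <<'SQL'
CREATE DATABASE examplesite_db;
CREATE USER 'examplesite'@'localhost' IDENTIFIED BY 'use-a-strong-password';
GRANT ALL PRIVILEGES ON examplesite_db.* TO 'examplesite'@'localhost';
FLUSH PRIVILEGES;
SQL

# 5. Add an Nginx server block for the site, enable it and reload Nginx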

That’s quite a few things to set up, and it would be easy to miss something or get a step wrong. Even if you get it right, it will take a few minutes. The longest part is usually setting up the web server – especially with Nginx, where the configuration is set centrally and can’t easily be overridden from outside it.

One day I got bored of doing it all manually, so I wrote a Bash script to take care of it. It’s not perfect, but it does a decent job. I decided I’d make it available via GitHub.

Currently, I put it in a folder in /opt/ called something like website-creator. I can then run it as follows:

sudo bash /opt/website-creator/create-website.sh

When run, it does the following:

  • Prompts for the fully qualified domain name (FQDN) which will be used to access the website when it’s live;
  • Checks that a folder for that FQDN doesn’t already exist (inside a pre-determined parent folder);
  • Prompts for the Linux user who should own the folder for that website;
  • Checks to see if the user exists, and creates the user if it doesn’t;
  • Prompts for MySQL admin credentials;
  • Once authenticated, prompts for a database name;
  • Creates the database;
  • Checks for a MySQL user with the same name as the Linux user which should own the files, and creates a user with that username if it does not exist;
  • Gives that user access to the database which has been created;
  • Creates the folder structure (I use public, private, logs and backups);
  • Copies index.html and 404.html files to the web root for that site;
  • If necessary, creates a file with the MySQL password for the newly-created user and places it in the private directory, with read-access to the owner only;
  • Changes the owner of the website folder structure to the Linux user previously specified (including the file with the MySQL password);
  • Copies a template Nginx configuration to the sites-available directory;
  • Runs a search and replace on that file to insert the domain name and root directory (see the sketch after this list);
  • Activates the site (placing a symbolic link in sites-enabled);
  • Reloads the Nginx configuration;
  • If necessary, prints to the screen the password of the new Linux user.
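
The web-server steps at the end of that list boil down to something like this – the template file name and the placeholder tokens are invented here for illustration, and the real script on GitHub differs in the details:

# Copy the template config, fill in the domain and web root, then enable the site and reload Nginx
sudo cp /opt/website-creator/nginx-template.conf /etc/nginx/sites-available/example.com
sudo sed -i 's/%FQDN%/example.com/g; s#%WEBROOT%#/usr/share/nginx/example.com/public#g' \
    /etc/nginx/sites-available/example.com
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
sudo nginx -t && sudo service nginx reload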

It’s not perfect, but it does mean I can set up a working website in five seconds, rather than five minutes. There are, of course, some things that may need to be changed once it’s done, like modifications to the site’s Nginx configuration, but without changing anything, it will happily serve static and PHP content.

There’s also a file where runtime variables can be stored. At the moment, it has two (a sample is shown after the list):

  • WEBROOTFOLDER – this is the folder the website directories will be created in (I use /usr/share/nginx/);
  • NGINXCONFIG – this is where the Nginx configuration is stored (in Ubuntu, that’s /etc/nginx/, but it may be different on other systems).
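
With the defaults above, that file is just a couple of shell assignments along these lines (I won’t guess at the exact file name here – it’s in the repository):

WEBROOTFOLDER="/usr/share/nginx/"   # parent folder that each website's directories are created in
NGINXCONFIG="/etc/nginx/"           # where the Nginx configuration lives (Ubuntu's default)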

It makes my life a lot easier. Maybe it will help a few other people too.

How to screw up your Ubuntu server

Ever wondered how people learn how to solve things? Well, sometimes you just have to break it first. Pro tip: just make sure you don’t break it on something important. Lucky for you – and me – I remembered that bit.

I know, I know, this isn’t really what you’re meant to do. The idea of a guide is to do something useful, to solve a problem, to help people do something quicker, or better… but sometimes, it doesn’t work like that. Sometimes, things happen that give you problems.

Well, I managed that today, so I thought I’d share it. It was – strictly speaking – a work thing, but it was on a test server that had nothing on it and already had a slight problem. I had a feeling it might not go to plan, but I thought I’d try it anyway.

We have a Ubuntu virtual machine which is used for occasional testing for two websites we look after internally, and it turned out the virtual machine’s network connection stopped working because of an issue on the host. Thus, the server was suddenly not a whole lot of use.

The easiest thing to do, given there was next-to-nothing on the server, would be to boot from the Ubuntu 16.04 ISO (the server was there to test PHP 7 on 16.04) and start again. It would take no more than 20 minutes to get the server back to where it was. However, the alternative was to remove the virtual network card from the virtual machine and assign it a new one. Well, with nothing to lose, it was worth trying, out of curiosity to see what would happen…

Ubuntu kernel panic

It turns out that this caused a kernel panic, and the machine wouldn’t finish booting. It would definitely be quicker to reinstall, but this is essentially what would happen if it were a real box, the NIC failed, and you had to install a new one.

I’m currently debating how curious I am, and if it’s worth seeing how easy it is to resolve an issue like that. I guess it would be nice to know I’d be able to fix it, but I’m currently tempted to take the lazy route.