Constraining images with CSS

Small details make a big difference to websites. It’s never a good look when an image is larger than its container. Fortunately, it’s very easy to fix.

Responsive websites are great; mobile-first websites are even better. However, they do sometimes present frustrating issues. One issue which crops up on a fairly regular basis concerns images – more specifically, a question along the lines of:

Why don’t my images fit on the page?

It’s a simple question really – if text can adapt to fit on a smaller screen, why do images sometimes insist on maintaining their original size? The answer is very simple.

Here’s a fairly average piece of code for an image:

<img src="image.png" alt="Image" width="800" height="200">

When displayed on a website, that image will be rendered at 800 pixels wide by 200 pixels high. On a narrower screen, the text will reflow to fit, but the image will keep its full 800-pixel width and dwarf the rest of the content. That’s because, without any other information, the browser doesn’t know any better.

So, to get around it, you need to add the following CSS:

img {
  max-width: 100%;
  height: auto;
}

This does two things. First, it constrains every image to the width of its container. If that container is only 320 pixels wide, that’s the size the image will be scaled down to. Second, it scales the height proportionally, so the image doesn’t look like it’s been squished.

So, what if you need to constrain not just images, but videos, and possibly more besides? Easy. Change your CSS to this:

.constrain {
  max-width: 100%;
  height: auto;
}

Now, you can use the .constrain class on any element and it will be constrained to the width of its container.
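For example, the class can be applied to images and videos alike (the file names below are just placeholders):

<img src="image.png" alt="Image" width="800" height="200" class="constrain">
<video src="video.mp4" class="constrain" controls></video>

Both elements will now scale down to fit their container while keeping their original aspect ratio.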

Setting a static IP address in Ubuntu Server

Networking takes care of itself in Windows, and static IPs aren’t very hard. But how do you set a static IP address in Ubuntu Server?

I’ve used Linux regularly for more than ten years. For the majority of that time, I’ve used Ubuntu, although I’ve also dabbled in Fedora (without much success) and CentOS (with marginally more).

I’m generally comfortable with Linux and when I’m not, I usually know where to look. However, if there’s one thing I always seem to have issues with on Linux, it’s networking.

When I first used Ubuntu, it was often a struggle to get any usable networking. After a while, it was just wireless (mainly WPA) which caused an issue. Latterly, the main issue I’ve encountered is networking with virtual machines (and most of those issues have been with VirtualBox – VMware doesn’t seem to encounter the same issues).

If you’re working with a server, you’ll want to set a static IP address. Once set up, it makes things much easier than using a dynamic address.

If you use Ubuntu Desktop, you can edit these settings very easily from your control panel, but on Ubuntu Server you’ll need to use the command line.

In Ubuntu, your network settings are stored in /etc/network/interfaces. To edit the file, enter sudo nano /etc/network/interfaces at the command line (or, if you prefer a different text editor, like vi, use that instead of nano). This will open the file in a text editor. A sample file might look like this (comments removed):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

So, what does that mean? Well, there are two interfaces. The first is the loopback interface. Leave that be – if you mess with it, things will break. Ubuntu uses that to communicate with itself.

The second is the eth0 interface. This is your network card. You may have more than one; if you have a second, it might be labelled eth1. Your network card may also have a completely different designation – on my VMware virtual machine, for example, it’s labelled ens33.

The second line in each of those blocks configures the interface. The first lines (auto lo and auto eth0) tell Ubuntu to bring those interfaces up on boot. If eth0 wasn’t started on boot, the machine wouldn’t have any networking outwith itself, which will cause you problems!

In the example above, and by default, Ubuntu uses DHCP, so the machine will let your DHCP server (usually your router) give it an IP address and any other settings along with it.

So, how do you set a static IP address? Well, it goes something like this:

auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
gateway 192.168.1.254
dns-nameservers 8.8.8.8 8.8.4.4

Line-by-line, that configuration does the following:

  • Sets Ubuntu to configure the eth0 interface on boot.
  • Sets the interface as a static interface (rather than DHCP).
  • Gives the interface an address (in this case, 192.168.1.10).
  • Sets a 24-bit subnet mask (so usable host addresses run 192.168.1.1 – 192.168.1.254), and the network address on the line underneath.
  • Sets the gateway (router) address. The address shown is the one used by BT routers – others commonly use 192.168.0.1 or 192.168.1.1.
  • Sets the DNS nameservers to use. I’ve used Google nameservers in this example. You only need one for things to work, but it’s better to have at least two. You can mix them too – for better redundancy, you could use one Google nameserver and one OpenDNS nameserver.

A few quick notes:

  • Once set, you need to reboot the server for the changes to take full effect. Stopping and starting the networking interface won’t cut it.
  • If you forget to add the auto eth0 line and networking doesn’t automatically come up on boot, run sudo ifup eth0. That will bring the interface up manually. Remember to fix your configuration afterwards!
  • If you don’t set your DNS nameservers, you’ll still have network access, but you won’t be able to resolve domain names. Thus, pinging an IP address will work, but pinging an FQDN won’t. You’ll eventually notice if this is the case, because things like updates will fail – see the quick check below.
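A quick way to test both connectivity and name resolution once the server is back up (the addresses below are just the examples used above):

ping -c 3 8.8.8.8
ping -c 3 google.com

If the first command gets replies but the second can’t resolve the name, your dns-nameservers line is the likely culprit.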

This should be everything you need to know to set up a static IP address in Ubuntu.

Changing directories in the Windows command prompt

Everyone knows how to change directories in the command prompt, right? Sometimes you know less than you think.

I’ll admit, this one does seem a little basic. However, it’s something that’s often struck me as odd, yet I’d never bothered to look into it further until this week, when my curiosity got the better of me.

In Windows, as in Linux, it’s really easy to change directories at a command prompt. Say you’re in C:\Users\User and you want to change to C:\Users\User\Desktop; you would put in the following:

cd Desktop

If you wanted to switch to C:\Windows\System32, you would put in the following:

cd C:\Windows\System32

Simple, right? So, if you store files on another drive (as I always do, because it makes backups and sharing much easier for me), and you want to switch to, say, D:\Websites\Website\, you would put in the following:

cd D:\Websites\Website

The trouble is, it doesn’t work. I would always have to do the following:

D:
cd \Websites\Website

Until now, I’d never been bothered enough to look at what I needed to do to make it work. It turns out that when switching drive letters as well as directories, you need to use a switch, /D. So, to do it in one line, you need the following:

cd /D D:\Websites\Website
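Incidentally, the cd without /D isn’t quite a no-op. The command prompt keeps a separate current directory for every drive, and a switch-less cd updates D:’s current directory without moving you there. You can see this with the same example paths as before:

cd D:\Websites\Website
D:

After typing the bare drive letter, the prompt lands in D:\Websites\Website – the earlier cd had already recorded it as that drive’s current directory.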

To be honest, I’d always assumed the switch was unnecessary, since nothing appears to happen if you don’t use it – though, as shown above, the directory change is quietly remembered against the other drive. Although come to think of it, I’ve never really understood why Windows still uses drive letters, as the Linux model of folders relative to root seems much more logical to me!

Installing PHP Manager on IIS 10

Have you tried to install PHP Manager on IIS 10? It doesn’t seem to be compatible. But don’t fret – the solution is straightforward.

This is a bit of a niche issue on the whole, but an issue all the same.

Microsoft have a useful utility called the Web Platform Installer, which is a repository of products which plug in to Internet Information Services (IIS) for people hosting sites on Microsoft servers (yes, some people do that, despite what you might read).

One useful utility is PHP Manager. It installs an easily accessible utility into IIS Manager giving you a shortcut to PHP settings. It’s also useful if you run sites on different PHP versions and/or configurations.

One problem. If you try to install it on IIS 10 (Windows 10 / Server 2016), it will fail. Why? It seems to check a registry entry for the Windows version, and doesn’t take into account that the version number might change in the future. Awkward.

The key in question is in:

HKLM\System\CCS\Services\W3SVC\Parameters

The entry in question is MajorVersion. In Windows 10, it’s set to 10 (decimal). Change it to a lower number (e.g. 8). PHP Manager will then successfully install. Once you’ve installed PHP Manager, don’t forget to change it back!

CCS is shorthand for Current Control Set.
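If you’d rather make the change from an elevated PowerShell prompt than from regedit, something along these lines should work (a sketch of the same two steps described above, not an official fix):

# Temporarily report an older version so the installer's check passes
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters' -Name MajorVersion -Value 8

# ...install PHP Manager via the Web Platform Installer...

# Restore the real version number afterwards
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters' -Name MajorVersion -Value 10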

Extracting archive files in Linux

So, you know how to fetch a remote file in Linux, but what do you do if you fetch an archive? How do you extract the files and place them in a directory to work with them?

I previously wrote a how-to guide to fetching a remote file in Linux. That’s great, but once you’ve got the file, sometimes you’ll need to do some more work with it before you can use it.

This is especially true with archive files. Archives generally come in one of two forms:

  1. ZIP files – more commonly used on Windows platforms;
  2. TAR files – more commonly used on Linux and UNIX platforms.

Platforms such as GitHub and WordPress often offer both ZIP and TAR formats for their downloads.

Once you have the archive file, you need to extract it into a directory. Assuming your file is called wordpress.tar.gz, and you want to extract it to an existing directory called my-site in the same directory, you can use the following command:

tar -xvf wordpress.tar.gz -C my-site/

The -x switch tells tar to extract the files (as you can also use tar to create compressed archives). The -v switch puts tar into verbose mode, so it prints everything it’s doing to the command line, and the -f switch is used in conjunction with the filename to set the file to extract the files from. The -C switch then tells tar to place the extracted files in the directory named at the end of the command.

However, in most cases this will leave the extracted files in a subdirectory. So, in the case of a WordPress archive, the files would end up in my-site/wordpress/ – what if you don’t want the files extracted to a subdirectory?

Not a problem. You can use --strip-components=1 at the end of the command:

tar -xvf wordpress.tar.gz -C my-site/ --strip-components=1

This strips out the top-level directory and leaves the files directly in the my-site folder.
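Putting this together with the earlier post on fetching a remote file, getting a fresh copy of WordPress into place might look something like this (the directory name is just an example):

mkdir -p my-site
wget -O wordpress.tar.gz https://wordpress.org/latest.tar.gz
tar -xvf wordpress.tar.gz -C my-site/ --strip-components=1

The mkdir -p simply makes sure the target directory exists before tar tries to extract into it.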

Printing to a file with PowerShell

I need to print a large number of files to PDF. I don’t have time to open each file individually and do it manually, so I needed to find a way to automate the process. PowerShell provided me with a solution, with a little ingenuity.

I’ve been trying to tackle a very tricky issue recently. I have a set of PDF files sitting in a folder, and on a regular basis, the files in that folder need to be password protected and then moved elsewhere.

Acrobat Pro has a great feature called Actions, where you can set a workflow for documents. Normally, I would pick the folder containing these files and use a saved workflow to encrypt each file with a pre-determined password.

Unfortunately, the contents of some of these files (fillable forms) prevent that workflow from applying properly, meaning some files can’t be password protected in this manner. Since they need to be encrypted, it was important to find another way to do this.

I discovered that if I used a PDF printer to print the file to PDF again, the resulting file could be encrypted without issue. This was a good start, but not of much use if I needed to open each file and print the file manually. I needed something that could be automated. Specifically, I needed to be able to:

  • Automatically select the correct printer;
  • Specify a folder and work through each PDF in that folder;
  • Print each PDF to a file without any interaction during the process (so the output path must also be automatic);
  • Avoid interfering with the default print device beyond this task.

So, I turned to PowerShell and Adobe’s PDF printer (which comes with Adobe Acrobat’s licensed products).

Adobe’s PDF printer is useful because it’s possible to specify a default output directory in the printer preferences, and thus avoid a pop-up appearing for each PDF prompting for a save location. Free PDF printers like CutePDF don’t have this option (although I believe it is available in their paid-for products).

After that, everything can be done in PowerShell. Here’s the code I wrote:

$defaultprinter = Get-WmiObject -Query "SELECT * FROM win32_Printer WHERE default=$true"
$PDFprinter = Get-WmiObject -Query "Select * From Win32_Printer Where Name = 'Adobe PDF'"
$PDFprinter.SetDefaultPrinter()
Dir C:\test\*.pdf | Foreach-Object { Start-Process -FilePath $_.FullName -Verb Print }
$defaultprinter.SetDefaultPrinter()

The first line retrieves the current default printer. This is so that, at the end of the task, the default printer is restored (it will be changed for the duration of this task).

The second line retrieves the Adobe PDF printer. On my computer, it’s called “Adobe PDF”. If the named printer doesn’t exist, the script will return a horrible looking error and fail (I’ve no real need for error handling in this script).
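If you did want the script to fail a little more gracefully when the printer is missing, a minimal check (just a sketch) could sit after that second line:

# Stop with a readable message if the Adobe PDF printer isn't installed
if ($null -eq $PDFprinter) {
    Write-Error "Printer 'Adobe PDF' not found."
    return
}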

The third line of this script sets the Adobe PDF printer from the second line as the default printer.

The fourth line of the script takes a specified directory and, for each PDF in that directory, prints the document. As the default printer is the Adobe PDF printer, that’s the printer that is used. In the Adobe PDF printer, I’ve already set a couple of options:

  • Disabled the option to show the PDF as it’s created (I want the task to run in the background as far as possible);
  • Specified an output directory where the PDFs will be saved to;
  • Set the PDF quality settings (in this case, “Standard”, for compatibility).

The final line restores the default printer, as it’s not likely I would want the Adobe PDF printer set as the default printer.

For me, that’s not the end of the process, as I then need to go back into Acrobat and encrypt the files, but the PowerShell script makes this a much quicker process than it would otherwise be.

I’d love to be able to encrypt the files from the same script, but unfortunately I’ve not found a way to do that. Nevertheless, a time-consuming problem made easier.

Fetching a remote file in Linux

Downloading a file to your computer, only to upload it to a server, is time consuming, especially if it’s a large file. Why not just download it from the server using the command line? It’s a lot easier than you might think.

Sometimes you need to download a file from a remote location. I can think of a couple of quick examples:

  1. Fetching a data feed published by an external source;
  2. Downloading a package (e.g. WordPress) to avoid downloading it to your computer and uploading it again from there.

In the first instance, being able to download the file from your server makes you a step closer to automating the process (e.g. if the feed is updated, say, once an hour). In the second instance, it’s much faster as it’s a one-step, rather than a two-step, process. And it’s really easy, using a package called wget, which supports downloads over HTTP, HTTPS and FTP:

wget -O /path/to/filename url

The -O switch controls where the resulting file will be saved, and the URL is the file to be fetched. If only a filename (no path) is entered, the file will be saved to the current directory set in the terminal. For example:

wget -O latest-wordpress.tar.gz https://wordpress.org/latest.tar.gz

This command would fetch the latest copy of WordPress from the WordPress servers and save it at latest-wordpress.tar.gz in the current directory.

One thing to be aware of: the switch (-O) is case sensitive. It must be upper case – the lower-case -o switch tells wget to write its log messages to a file instead.

The wget command has a large array of other options at runtime, but for simple cases like fetching and saving a single file, this is all you need.
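As an aside, because it’s a single command, wget also drops neatly into a cron job for the data-feed scenario mentioned earlier. A hypothetical hourly fetch might look like this in your crontab (the URL and path are placeholders; -q keeps wget quiet so cron doesn’t email you the progress output):

0 * * * * wget -q -O /var/data/feed.xml https://example.com/feed.xml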

Truncating a file in Linux

Linux has a very simple command for when you need to keep a file but empty its contents. Very useful when developing or testing and you only need to look at data in real time.

When I’m working on websites, I often have a need to empty an existing file.

The most common need for this is when looking at error logs. When I’m working on something, I always keep a close eye on the error logs and when I find a problem, I fix it. I then clear the error logs to see if the error reappears. I have no real need to archive or rotate the log. I just want it cleared so next time I open the file, I can see anything that’s new without the need to wade through what was there previously.

It’s very easy to do this, using the truncate command:

truncate -s 0 /path/to/file

If the logged-in user doesn’t have permission to modify the file, you’ll need to run the command as the superuser:

sudo truncate -s 0 /path/to/file

The truncate command is used to shrink or expand a file to a specified size, and the -s 0 switch sets that size to zero bytes, emptying the file.
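Incidentally, if truncate isn’t available, plain shell redirection does the same job (the path is just an example):

> /path/to/file

One caveat: the redirection is performed by your own shell, so prefixing it with sudo won’t help for root-owned files – in that situation, sudo truncate is the more reliable option.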

Very useful for files like logs, where you don’t need to keep the data once you’ve looked at it.