Website Load Speed Optimization - Part 1

April 2017 · 11 minute read

One of the biggest user experience snags on every web developer’s mind is page load speed. You can use Pingdom’s free speed test tool to get a report on your website’s load speed and suggestions on how to improve it. It builds the test report using Google’s PageSpeed Insights rules. Here is what mujz.ca’s report looks like:

mujz.ca Test Report

In this post, I’ll explain 4 essential solutions—image optimization, caching, minifying and compressing, and enabling HTTP2—that every website should implement. In part 2 of this post, I’ll show you the code to implement these solutions.

1. Image Optimization

Media files, especially images and videos, constitute some of the most unnecessarily large files on the internet. The main optimizations you can make to reduce image file sizes are:

  1. Stripping metadata: such as camera manufacturer, the date the photo was taken, etc.
  2. Resizing: serving a resized image no bigger than the viewport.
  3. Compression: using the smallest file type possible. There are 2 kinds of compression algorithms:
    • Lossy compression - reduces file size by throwing away some image data, at the cost of quality.
    • Lossless compression - reduces file size without any loss of quality.

To convince you that this is worth your time, we’ll start with the image below and see how its size changes as we apply each step. Also, note that if you use a CDN to host your static assets (and you totally should), you still need to know these optimizations; you’ll just have to apply them at some point before deploying your images to the CDN.

Original
Original (2629×1964) 5.9MB

And yes, Cory is an acceptable word in Scrabble.

1.1 Stripping meta data

If you open your image file’s properties, you’ll see a bunch of metadata stored with it, such as the camera model, date taken, etc.

$ file photo.jpg
photo.jpg: JPEG image data, JFIF standard 1.01, aspect ratio, density 72x72, segment length 16, Exif Standard: [TIFF image data, big-endian, direntries=6, PhotometricIntepretation=RGB, manufacturer=Canon, model=Canon EOS REBEL T5, orientation=upper-left, datetime=2017:02:07 07:39:47], baseline, precision 8, 2629x1964, frames 3

If you think that this data doesn’t use up much space, then you are mistaken. Allow me to illustrate:

$ # Strip metadata from the image
$ convert photo.jpg -strip photo-stripped.jpg
$ file photo-stripped.jpg
photo-stripped.jpg: JPEG image data, JFIF standard 1.01, aspect ratio, density 72x72, segment length 16, baseline, precision 8, 2629x1964, frames 3
$ ls -lh
total 56576
-rw-r--r--  1 mujtaba  staff    5.5M 25 Apr 17:33 photo-stripped.jpg
-rw-r--r--@ 1 mujtaba  staff    5.9M 25 Mar 10:33 photo.jpg

You can see above that the difference in size between the stripped image and the original is 0.4 MB. With some other images, I was able to get a difference of about 1 MB! That is quite a lot.

Stripped
Metadata Stripped (2629×1964) 5.5MB

1.2 Resizing Images

You can either

  1. save multiple sizes of your images on the server and serve the appropriate one based on the request (your only option with CDNs),
  2. or let your web server resize the image when requested and cache it for future requests. (My recommended option when self-hosting.)

Option (1) means that you create duplicates of each image in different sizes, like so: img265.png, img512.png, etc., and have the client request the size it needs.
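
For example, here’s a minimal sketch of option (1) using ImageMagick; the widths and file names are just an illustration:

$ # Generate 3 widths of photo.jpg, preserving the aspect ratio
$ for w in 700 1400 2100; do convert photo.jpg -resize "${w}x" "photo-${w}.jpg"; done
$ ls -lh photo-*.jpg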

With option (2), however, you only store the best size of your image on the server, and when a client asks for a specific size, your server resizes the image and sends it back. Your server should also cache the resized image, so that when that size is requested again, you spare it the overhead of resizing the same image over and over. Here’s a figure showing what I mean:

Responsive Image Server with Cache

To implement this architecture in nginx, follow this article (which is where I got this figure from).

In your HTML, you can specify the proper size for each screen size using the srcset attribute or the picture element. More on this in part 2.

Resized
Resized (width depends on your screen)

Depending on the size of your screen, you see one of these widths:

700px (61KB), 1400px (184KB), 2100px (360KB), or 2629px (5.5MB)

1.3 Compression

| Format          | GIF                          | PNG                          | JPEG                             | WebP                                    | BPG                                      |
|-----------------|------------------------------|------------------------------|----------------------------------|-----------------------------------------|------------------------------------------|
| Compression     | Lossless                     | Lossless                     | Lossy                            | Lossy/lossless                          | Lossy/lossless                           |
| Browser Support | All                          | All                          | All                              | Chrome, Opera & Android                 | None                                     |
| Developer       | CompuServe                   | PNG Development Group        | Joint Photographic Experts Group | Google                                  | Fabrice Bellard                          |
| Initial Release | 1987                         | 1996                         | 1992                             | 2010                                    | 2014                                     |
| Latest Release  | 1989                         | 2004                         | 2012                             | 2017                                    | 2016                                     |
| Alpha Channel   | Yes (1-bit)                  | Yes                          | No                               | Yes                                     | Yes                                      |
| Best for        | simple drawings & animations | drawings with lots of colors | photographs                      | all (if you’re willing to do the setup) | all (if you don’t mind the JS polyfill)  |

There are many image compression formats, the most common of which are JPEG, PNG, and GIF. To learn the differences among the 3, head over to this wonderful Stack Overflow answer. The main takeaway from it is to use GIF for animations and simple drawings with few colors, PNG for more complex drawings with lots of colors, and JPEG for photographs.
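
If you want to see the difference for yourself, here’s a quick sketch using ImageMagick; photo.jpg is the photograph from earlier, and you should expect the PNG and GIF versions of a photograph to come out considerably larger:

$ # Save the same photograph as PNG and GIF, then compare file sizes
$ convert photo.jpg photo.png
$ convert photo.jpg photo.gif
$ ls -lh photo.jpg photo.png photo.gif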

If you’re trying to squeeze the size down as low as possible, you should consider the new image formats optimized for the web, such as WebP and BPG. The main caveat with these new formats is that they are not supported by all web browsers. WebP is made by Google and doesn’t work on Firefox, IE, or Safari, and BPG is not supported by any browser on its own. However, you can use JavaScript polyfills for the unsupported browsers. If you decide to use WebP and don’t want to use a polyfill, you can configure your web server to serve WebP to the browsers that support it and other formats to the browsers that don’t. For more information, check out the comparison between BPG, WebP, and other formats, and the comparison between 4 implementations of WebP vs JPEG while maintaining compatibility.

The bottom line, unfortunately, is that it’s not so simple. BPG seems to be awesome, but having to rely on a JavaScript polyfill can be a deal breaker for some people. If you’re one of them, you can use WebP without a JavaScript polyfill using nginx or other web servers.
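
To give you an idea of the tooling, here’s a minimal sketch using Google’s cwebp encoder and Fabrice Bellard’s bpgenc; the quality values are just examples you’d tune for your own images:

$ # Encode a lossy WebP at quality 80 (cwebp's scale is 0-100, higher is better)
$ cwebp -q 80 photo.jpg -o photo.webp
$ # Encode a BPG at quantizer 25 (bpgenc's scale is 0-51, lower is better)
$ bpgenc -q 25 -o photo.bpg photo.jpg
$ ls -lh photo.webp photo.bpg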

Compressed
Compressed BPG (2629×1964) 165KB

The sizes at the same widths as the previous image are:

700px (30KB), 1400px (75KB), 2100px (122KB)

As you can see, we’ve achieved a very significant size reduction with these optimizations. In part 2, you can see the code and use it on your websites.

2. Caching

You want to keep the number of requests your origin server receives to a minimum. This can be achieved by saving cached copies of your content in the browser, on your server, and on intermediate proxy servers (such as those run by ISPs). You are in charge of configuring these caches to work for you.

The first thing you should do is set an expiry date on the files you serve based on their type, so that the client knows whether its cache should be updated or not. This can be done by specifying an Expires header. For example, since images aren’t likely to ever change, you might set their expiration to a month or a year, but HTML pages might get an expiration of only 1 hour, or even 0.

You should also set up some cache-busting solution to make sure that the latest version is always served. For example, say you set your CSS files’ expiry to 1 year. Then you make changes to a CSS file, but you don’t want your users to wait a whole year before fetching your new CSS. You can bust the cache by adding a hash, which changes after every build, to the end of your CSS file’s name, for example my_style_83n2h1a.css instead of just my_style.css. Additionally, you can configure your server to direct all requests of the format file_name.[hash].css to file_name.css itself. This way, you only change the hash on the client and not on the server.
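
Here’s a minimal sketch of the hashing step; the file names are hypothetical, and in practice your build tool would do this for you:

$ # Compute a short content hash and append it to the file name
$ HASH=$(shasum my_style.css | cut -c 1-7)
$ cp my_style.css "my_style_${HASH}.css"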

Finally, we need to specify whether the content we’re serving is specific to the user who requested it (private) or the same for everybody (public). This is done with the Cache-Control header, which is mainly used by proxy servers that sit between the user and the origin server (for example, ISP proxy servers). Consider a dynamically generated HTML page that contains the name of the user: it doesn’t make sense to cache this page at the ISP’s proxy server, since it’s different for every user, so it should be private. Image files, on the other hand, are one and the same for everybody, so they should be public.
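
You can check what caching headers your server currently sends with curl (the URL below is just a placeholder):

$ # Inspect the caching headers of a response
$ curl -sI https://example.com/img/logo.png | grep -iE 'cache-control|expires'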

3. Minifying and Gzipping

Minifying gets rid of white space, comments, extra semicolons, and other useless characters in your code. For example, consider the hello_minify.js code below:

function HelloMinify() {
  // Logs "Hello Minify" to the console
  console.log("Hello Minify");
};

Here’s what it will look like after minification:

$ uglifyjs hello_minify.js
function HelloMinify(){console.log("Hello Minify")}

Gzipping replaces repeated text with back-references (pointers) to reduce file size. Julia Evans visualizes how gzip works in her post and video:

Once upon a midnight dreary, while I {pon}dered weak an{d wea}{ry,}
Over many{ a }quaint{ and }curious volume of forgotten lore,
W{hile I }nodded, n{ear}ly napping, su{dde}n{ly }th{ere} ca{me }a t{apping,}
As{ of }so{me o}ne gent{ly }r{apping, }{rapping} at my chamb{er }door.
`'Tis{ some }visitor,'{ I }mu{tte}r{ed, }`t{apping at my chamber door} -
O{nly th}is,{ and }no{thi}{ng }m{ore}.

This CSS-Tricks post explains gzipping and minification really well with an example. It shows how bootstrap.css starts out at 147 KB, goes down to 22 KB after gzipping, and to 20 KB (about 14% of the original) after minification and gzipping.

Some argue that minification is not worth it, since gzip does a much better job compressing code files. Additionally, minification can introduce some annoyingly hard-to-debug problems if not done properly. For example, if you have a function that you reference in an onload attribute of your HTML and the minifier changes the name of that function to make it shorter, your code breaks. This can easily be fixed by either telling your minifier not to shorten function names or by having it minify your HTML and JS code together so that the function name gets shortened the same way in both places. But if you didn’t know that’s where the issue lies, you might end up wasting hours figuring it out. On the other hand, you can minify and have no problems at all.

In the end, it’s your call whether to minify or not; check the size of your code with and without minification and decide whether it’s worth it. But you should definitely enable gzipping no matter what.
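
If you want to reproduce the comparison yourself, here’s a quick sketch, assuming you have a copy of bootstrap.css on hand:

$ # Gzip at maximum compression; -k keeps the original file around
$ gzip -k -9 bootstrap.css
$ ls -lh bootstrap.css bootstrap.css.gz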

4. HTTP2

HTTP2 is supported by most browsers and can make a real difference for your users. HTTP2 fixes important problems that didn’t exist when HTTP1.x was designed. Enabling HTTP2 is an easy win in most cases and won’t cost you anything; clients that don’t support HTTP2 will automatically fall back to HTTP1.x. Also, HTTP2 still has the key-value headers and all the content types that you’re used to from HTTP1.x. Therefore, you can simply enable it and it will work.
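
Once you’ve enabled it, you can verify it from the command line, assuming your curl was built with HTTP/2 support (the URL is a placeholder); you should see HTTP/2 in the status line:

$ # Request the headers over HTTP/2 and check the protocol
$ curl -sI --http2 https://example.com | head -n 1
HTTP/2 200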

There’s a lot to say about what’s cool about HTTP2, and since people smarter than me have already explained it beautifully, I’ll link to them instead. Watch this Google Chrome Developers video or read this sexy post by Kinsta to get a full understanding of HTTP2. I’ll attempt to give you a short summary of each feature:

  • Multiplexing - many requests and responses share a single TCP connection, instead of the handful of connections browsers open for HTTP1.x.
  • Header compression (HPACK) - repetitive headers such as cookies are compressed instead of being re-sent in full with every request.
  • Server push - the server can proactively send assets (like CSS files) it knows the page will need.
  • Binary framing - messages are exchanged as binary frames instead of text, which is more efficient to parse.
  • Stream prioritization - the client can tell the server which resources matter most.

The bottom line is that using HTTP2 is an easy win and you should definitely enable it.


Now that you (hopefully) understand the reasoning behind web page load speed optimization, you can go ahead to part 2.

Have questions, comments, or suggestions? Please do post them in the comments section below and I’ll make sure to put your name on my website ;)