Is It Worth It to Optimize Images for Your Site?


Yes, but it depends on how you define the verb “to optimize”. For any image-conversion heavy lifting I rely on the trusty ImageMagick, yet I’ve been wondering whether my argument preset is correct: should it be more or less optimized?

The problem with questions is that they lead to other questions, such as: how many assets is this blog actually generating each year? Is my image optimization technique sustainable enough, or will I end up with terabytes full of nonsense in ten or twenty years?

When it comes to size, opening up the handy gdu disk analyser in the assets/post folder is enough to get a first impression:

Gdu summarizing how much disk space the assets on this blog take up each year, in MiB.

As I maintain the same folder structure for both content/post and assets/post—this post lives under /post/2025/10/is-it-worth-it-to-optimize-images-for-your-site/, for example—generating an overview of asset sizes per year becomes trivial. Not taking the earlier Brain Baking years into account, the total amount of data that gets added each year is on average 12.5 MiB. Let’s make that between thirteen and fourteen as 2025 isn’t finished yet.
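Since the per-year folders sit directly under assets/post, a plain du pipeline reproduces gdu’s overview without the interactive UI. This is a sketch that assumes the assets/post/&lt;year&gt;/ layout described above:

```shell
#!/bin/sh
# Summarize asset sizes per year, assuming an assets/post/<year>/ layout.
# du -sk reports KiB per directory; awk converts that to MiB.
for year in assets/post/*/; do
    [ -d "$year" ] || continue          # glob matched nothing: skip
    du -sk "$year"
done | awk '{ printf "%-28s %6.1f MiB\n", $2, $1 / 1024 }'
```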

That means in twenty years, I’ll have accumulated an extra 260 MiB. That’s not even half a classic CD-ROM. Is it really worth it, then, to think twice about every MiB that gets checked in? Well, yes, since all those bytes need to leave one server and make an appearance at another in order to serve these pretty images to you, the visitor. Besides, as a proud member of The 512KB Club, I should keep my promise of reducing file sizes as much as possible.

Of course, not all posts have assets attached to them: the average number of assets linked to a post here is 1.038, with each post carrying about 147.24 KiB of data. That’s quite optimized! Yet can I do better? Or should I stop over-compressing those images up to the point that they’re losing their vivid shine? More questions! No wait, those were the same.

Here’s the default ImageMagick command I rely on:

mogrify -sampling-factor 4:2:0 +profile '!icc,*' -quality 80 -interlace JPEG -format jpg -colorspace sRGB screenshot.png

What exactly does this do?

  • -sampling-factor 4:2:0: the sampling factor used for the JPEG encoder. If Colin Bendell tells me to use 4:2:0 claiming a +/- 17% image size reduction, then I believe him.
  • +profile '!icc,*': Removes all profiles except for the ICC colour profile; gets rid of EXIF data. See What Exif Data Reveals About Your Site.
  • -quality 80: The compression quality of the image. With 90 or less, chroma channels are downsampled (which I instruct it to do anyway with the sampling factor argument).
  • -interlace JPEG: explicitly tasks ImageMagick to create a progressive JPEG allowing for the browser to show a lower-resolution version of the image whilst data is still being transferred. Perceived download speed is also important!
  • -format jpg: Why on earth would you want to export JPEG files when the era of WebP is here? That’s an easy one to answer: because my retro hardware knows JPEG. Because I believe we should build websites that last.
  • -colorspace sRGB: the default and recommended option for the WWW for images that, like JPEG, do not carry any extra colorspace information. Other options provide slightly better compression.
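Batching that preset over a whole folder is a one-liner away. The sketch below wraps it in a hypothetical helper, optimize_pngs, and echoes each invocation as a dry run rather than executing it; remove the echo (and have ImageMagick’s mogrify on your PATH) to actually convert:

```shell
#!/bin/sh
# optimize_pngs (hypothetical helper): print the preset invocation for every
# PNG in a directory. Dry run only: each command is echoed, not executed.
optimize_pngs() {
    dir="${1:-.}"
    for png in "$dir"/*.png; do
        [ -e "$png" ] || continue       # no PNGs: the glob stayed literal
        echo mogrify -sampling-factor 4:2:0 +profile "'!icc,*'" \
             -quality 80 -interlace JPEG -format jpg -colorspace sRGB "$png"
    done
}
optimize_pngs "${1:-.}"
```

Keeping the echo makes it easy to review what will be touched before committing; note that -format jpg writes new .jpg files next to the originals rather than overwriting them.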

I urge you to read Colin’s old but relevant post on chroma (colour detail) and luma (lightness and darkness) and how to optimize for the web/mobile. It even includes a regression analysis, concluding that:

Resizing images matters most: it multiplies the size by a little more than the square root of the total pixels. More pixels, many more bytes. Compression matters somewhat: for quality=80 the bytes multiply by about 23; for quality=100, by about 50. Subsampling at 4:2:0 can further reduce the bytes by 17%.

What I did not realize until now, by testing and comparing images, is that -strip does something else besides stripping GPS Exif data. I noticed the export became washed out, as if a portion of the colour profile information was lost. Take a close look at the macOS dock screenshots re-rendered in Firefox:

Above: using -strip; without ICC. Below: using +profile '!icc,*'; with ICC.

Can you find the difference by inspecting the saturation of the red Firefox fox or the yellow wings of the NetNewsWire satellite? The difference is very subtle—and very difficult to showcase in a screenshot—but very annoying.

Inspecting the images using ImageMagick’s identify reveals that the ICC profile is removed in the process:

identify -verbose original.jpg | grep Profile
  Profiles:
    Profile-exif: 62 bytes
    Profile-icc: 4032 bytes
identify -verbose stripped.jpg | grep Profile
(no output)

The embedded ICC profile is there to make sure the image looks the same on any computer and in any piece of software; without it, browsers are free to render it however they want. The result is a flat-looking image, as you can see in the above screenshot (which itself does have an embedded profile). The -colorspace option does not solve this: it tells ImageMagick to convert the colorspace, not to attach a profile. Instead of using -strip, use +profile '!icc,*' to throw away all profiles but the ICC one.
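To verify the result, identify’s %[profiles] escape lists whatever profiles ended up embedded. Here is a small sketch; the helper name check_icc is made up, and it assumes ImageMagick is installed:

```shell
#!/bin/sh
# check_icc (hypothetical helper): report whether an image still carries
# an embedded ICC profile. %[profiles] expands to the list of embedded
# profiles, e.g. "icc" or "exif,icc".
check_icc() {
    if ! command -v identify >/dev/null 2>&1; then
        echo "identify not found; install ImageMagick"
        return 1
    fi
    profiles=$(identify -format '%[profiles]' "$1" 2>/dev/null)
    case "$profiles" in
        *icc*) echo "$1: ICC profile present ($profiles)" ;;
        *)     echo "$1: no ICC profile, colours may render washed out" ;;
    esac
}
```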

Also, be sure to add a -resize, as this obviously has the highest impact on file sizes. But wait, what about providing a higher-resolution image to desktop browsers and a lower-resolution version to mobile browsers? For me, that’s a hassle I don’t want to bother with at all. It requires saving the assets in their original format and providing a couple of alternatives, greatly increasing the total size of the source repository, the total size of the deployable folder, and the total bandwidth for my humble server.
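Putting everything together, the preset with a resize step up front might look like the sketch below. The 1000-pixel bound is my assumption, not a value from this blog; the > flag makes ImageMagick shrink only images that exceed it:

```shell
#!/bin/sh
# Full-preset sketch: resize first (the biggest size win), then compress.
# '1000x1000>' only shrinks images larger than 1000 pixels in either
# dimension; smaller images pass through untouched.
cmd="mogrify -resize '1000x1000>' -sampling-factor 4:2:0 +profile '!icc,*' \
-quality 80 -interlace JPEG -format jpg -colorspace sRGB screenshot.png"
echo "$cmd"     # shown as a dry run; execute with: eval "$cmd"
```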

For mobile users, that’s not a problem, as downloading 147.24 KiB of data is less than the copious megabytes that get slurped in when you visit your average newspaper site. For ultra-widescreen 4K nerds, the max width on the container wrapping this <article/> will keep things somewhat in check.

The biggest takeaway for me is that in twenty years I’ll have filled half a CD-ROM which is significantly less than I expected. Should this incentivize me to bump the quality to 90%, reduce downsampling, or instead increase the usage of assets in general?

Maybe I should be less worried about the file size and more about the content.

webdesign  
