When I decided to move The Servitor to a new CMS, I wasn't looking for the new hotness. I just wanted a break from WordPress, Google, and frameworks. Something I could understand completely. Something that wouldn't bury me under layers of abstraction every time I needed to change a single line of code.
So I built a static site generator using Server Side Includes. SSI. A technology from 1996 that most people forgot about.
The whole system works like this: articles are HTML files with SSI variables at the top for metadata (title, description, date), and the build process assembles everything into a working site. There's no database; it's all just files on disk and nginx doing what it does.
Why SSI instead of a modern static site generator?
Ehh... why anything? ... For fun and speed. There are excellent static site generators out there that I've played with but far from mastered: Jekyll, Hugo, etc. They all do basically the same thing: take content files, run them through templates, output HTML. But they all require learning their specific templating language, their build system, their way of thinking about content.
Minus whale build my own! SSI is simpler. It's just HTML with a few special comments that tell the web server to include other files or echo variables.
Here's what an article looks like:
```html
<!--# set var="title" value="Article Title" -->
<!--# set var="description" value="Article description" -->
<!--# set var="date" value="2025-10-18T12:00:00+0000" -->
<!--# include virtual="/fragments/header.html" -->

<article>
  <h1><!--# echo var="title" --></h1>
  <p>Article content here...</p>
</article>

<!--# include virtual="/fragments/footer.html" -->
```
That's pretty much it. The web server processes those SSI directives and outputs regular HTML. nginx handles it natively.
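For reference, turning SSI on in nginx is a couple of directives. This is a minimal sketch with hypothetical paths and domain, not my exact config:

```nginx
server {
    server_name example.com;      # hypothetical
    root /var/www/servitor;       # hypothetical docroot

    location / {
        ssi on;                   # tell nginx to process SSI directives in responses
        ssi_types text/html;      # the default; shown here for clarity
    }
}
```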
Because it's just HTML, I can edit articles directly. Open the file, change the HTML, save it. It does mean I occasionally have to propagate a change to all files when I didn't anticipate it in my build template. For that, I use known strings (like the SSI includes) with sed to insert whatever is needed across the codebase, as in the sketch below. (And if it's something I might want to do again, I add it to my build script.)
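For example, slipping a new tag into every article right after the header include looks something like this (GNU sed; the meta tag is hypothetical, and I'd run it without -i first to check the output):

```sh
# append a (hypothetical) meta tag after the header include in every article
sed -i '/fragments\/header.html/a <meta name="robots" content="index, follow">' articles/*.html
```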
Although I can't say I engineered the system for agents, it is fairly agent-friendly, and that's a plus. Modern AI coding assistants understand HTML perfectly because it's in their training data going back to their nematode ancestors. SSI hasn't meaningfully changed since its introduction. Compare that to explaining the quirks of some JavaScript framework's component system that was released 3 months ago.
Getting away from WordPress
WordPress runs about 40% of the internet. It's a fine system if you want plugins for everything and don't mind the constant update treadmill. Or the massive security pitfalls of being one of the most targeted ecosystems. Commercial malware could crack WordPress like a nut that didn't have a shell to start with. F that noise.
The performance difference is dramatic. WordPress needs PHP execution, database queries, caching layers, and optimization plugins just to serve static content quickly. The Servitor now serves plain HTML files. nginx reads a file from disk and sends it.
I measured it. WordPress averaged 600+ms response times for The Servitor's content (up to multiple seconds at P95+ :-/). The SSI version serves the same pages in < 50ms. That's a big ol' honking improvement from eliminating PHP and database calls.
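If you want a rough version of that measurement yourself, curl's timing variables are enough (one sample proves nothing, but it's a start; the URL is hypothetical):

```sh
# total time for one request, in seconds
curl -o /dev/null -s -w '%{time_total}\n' https://example.com/articles/some-article.html
```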
Security is simpler too. Static HTML files can't be exploited much - there's nothing to execute. The worst someone can do is deface the files, which is fixed by restoring from git. There is a little JS on the site, but if it can be used by one user to attack another, I don't see how (... yet?).
Learning HTTP and nginx
Building this system forced me to better understand how web servers work instead of treating them like magic boxes, and that's been part of the fun. I learned more about MIME types, HTTP headers, cache control, and redirects by doing them manually.
The Google problem is that SEO requirements keep piling up. Structured data markup, Open Graph tags, Twitter cards, XML sitemaps, RSS feeds with specific formatting, canonical URLs, proper meta descriptions. None of this is technically required to serve a webpage, but if you want search engines to properly index your content, you need all of it.
I've tried to jettison what I don't want to deal with and build in the rest. The build script generates the sitemap, the RSS feed, and ensures all the meta tags are present. But I get the ultimate say via the simple Python that is my "build system".
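The convenient part is that the SSI metadata doubles as front matter: a script can regex it out of each article and feed it to the sitemap and RSS generators. A sketch of the idea (not my actual script; the paths and URL are illustrative):

```python
import re
from pathlib import Path

# Matches nginx-style SSI directives like: <!--# set var="title" value="..." -->
SET_VAR = re.compile(r'<!--#\s*set\s+var="(\w+)"\s+value="([^"]*)"\s*-->')

def read_metadata(article: Path) -> dict:
    """Pull the SSI 'set var' lines from an article into a dict."""
    return dict(SET_VAR.findall(article.read_text(encoding="utf-8")))

def build_sitemap(articles_dir: Path, base_url: str) -> str:
    entries = []
    for article in sorted(articles_dir.glob("*.html")):
        meta = read_metadata(article)
        entries.append(
            "  <url>\n"
            f"    <loc>{base_url}/{article.name}</loc>\n"
            f"    <lastmod>{meta.get('date', '')}</lastmod>\n"
            "  </url>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(entries)
        + "\n</urlset>\n"
    )

if __name__ == "__main__":
    print(build_sitemap(Path("articles"), "https://example.com"))
```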
Moving from janky to slightly less janky
The early version of this system was held together with duct tape. Python scripts scattered around. Permission issues between my user account and the web server. Encoding bugs. Image processing that worked differently depending on which script uploaded the file. Meh.
Every time I tried to fix something, I'd break something else. Also all part of the learning and fun. Ever played Dwarf Fortress? See their definition of "fun".
The recent modernization fixed this. I created a unified CLI tool (./servitor) that handles all site operations. I built infrastructure modules for configuration, permissions, and encoding. I extracted the image processing into a clean class instead of 445 lines of spaghetti code scattered across functions.
The admin interface is a simple HTML form that posts to a Python CGI script. That script processes the submission, saves the file with correct permissions, converts images to WebP, generates responsive sizes, and triggers a rebuild.
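The WebP conversion itself is the sort of thing Pillow makes nearly trivial. A sketch of what that step might look like (the widths, quality, and permissions are illustrative, not my exact values):

```python
from pathlib import Path
from PIL import Image  # Pillow

RESPONSIVE_WIDTHS = (480, 800, 1200)  # illustrative breakpoints

def process_upload(src: Path, out_dir: Path) -> list[Path]:
    """Convert an uploaded image to WebP at several responsive widths."""
    out_dir.mkdir(parents=True, exist_ok=True)
    outputs = []
    with Image.open(src) as img:
        for width in RESPONSIVE_WIDTHS:
            if img.width < width:
                continue  # never upscale
            resized = img.copy()
            resized.thumbnail((width, width * 10))  # bound width, preserve aspect ratio
            dest = out_dir / f"{src.stem}-{width}w.webp"
            resized.save(dest, "WEBP", quality=80)
            dest.chmod(0o644)  # readable by the web server user
            outputs.append(dest)
    return outputs
```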
At some point, I did have to ask myself why I knocked down this fence only to rediscover I wanted most of the features.
What I learned
Building your own CMS is educational but inefficient. If I just wanted a blog, I would have used WordPress or Ghost or Medium and been done with it. But I wanted to understand how websites work, and you don't learn that by installing plugins.
The modern web is complex once you get past serving HTML. HTTP/2, SSL certificates, Content Security Policy headers, CORS, blah blah blah. Running your own server forces you to deal with all of it.
Google and the search ecosystem deserve a little credit and a lot of blame. They've created standards that make the web better (mobile responsiveness, fast loading), but they've also made everything sort of same-y, and created requirements that make simple sites harder to build (structured data to make us easier to read for AI overlords, Core Web Vitals, mandatory SSL even for simple stuff).
The right amount of abstraction
I keep coming back to being "close to the bone" with web development. Not no abstraction, but just enough to be useful without hiding how things work.
SSI provides just enough templating to avoid repeating the header and footer in every file. Python provides just enough structure to handle file operations and encoding without needing compiled code.
Compare this to a modern JavaScript framework where you need Webpack, Babel, React, Next.js, a dozen build plugins, and a mental model of client-side hydration just to display static content. It works, and it enables complex interactions, but it's not simple.
Why this approach won't work for everyone
Static site generators aren't suitable for dynamic content. If you need user accounts, real-time updates, etc., you probably need server-side execution.
The build step is theoretically a limitation. With thousands of articles it might become a big deal: every change requires regenerating the site, and that time would go up. In reality, for The Servitor's tiny collection, it's < 1.5 seconds to regenerate because everything is text and HTML.
You do need to know HTML and Python (or be real trusting of LLMs). No visual theme customizer. No plugin marketplace. If I want a feature, guess who has to design it. *waves* But for a personal site, a blog, or anything where you're the main editor and you want to understand how it works, this approach has benefits.