09 Jul, 2025
Most REST APIs aren’t RESTful. And that’s okay — because most software doesn’t need to be. But if you’ve ever hesitated before naming an endpoint /terminate, or wondered why hitting F5 sometimes brings up that "Confirm Form Resubmission" popup, this post is for you.
This might be surprising to some of you, but the truth of life as an academic is that you're measured on the quality and number of papers you publish. It doesn't really matter all that much if you engineer the best system on the planet; to complete your PhD, you need to publish a dissertation. And that dissertation needs to read a lot like an academic paper and not so much like a quick technical summary.
And so this is where we meet our hero, Roy Fielding. The World Wide Web and the HTTP protocol were invented some time around 1990. And by 1993, Roy Fielding was helping on the project. At this point, it was a very, very young project — practically unrecognizable from what the web looks like today.
It was around this time that he also started his PhD program. And seven years later, he published his dissertation, entitled "Architectural Styles and the Design of Network-based Software Architectures". Most importantly, Chapter 5 is titled "Representational State Transfer (REST)". And that chapter is the topic of this blog post.
The reason I bring up history and academic life is that in order to even read this fairly, you must understand the nuance. It's written in academic language because it's written for an academic audience. In plain and simple terms, Roy Fielding needed to graduate. And he had to get the approval of the panel to do so.
In 1990, the first browser was written by Tim Berners-Lee. And what it was, was basically a file explorer for your co-worker's computer. They'd put some HTML files where they wanted you to see things, then they could link to other HTML files where you might see some more notes. And then some of those HTML files might link to an image or some data dump.
The thing is that the World Wide Web was born at CERN. It was a tool for scientists to share their data and their findings with each other. The original World Wide Web was Wikipedia for scientists — built in an era of Windows 3.0 and giant floppy disks.
The first version of the web was not made for Myspace and Newgrounds. And it was certainly not made for Instagram and Roblox. Those things came later, as the technology matured and you could do more and more things with it.
But from 1990-1993, the World Wide Web was a file-sharing app with links and text and tables. The links are very important; they're what made this "hyper"media. Because these pages were inspired by an old 1987 Macintosh app called HyperCard.
The next iteration of the web was driven by someone saying "Wait a minute. What if, instead of just sharing files, we put a program at /var/www/bob/dynamic_page.exe ?" And if anyone ever tried to load that file, the computer could run it and give the output to the browser. That would let you do all kinds of things like count how many times a page has been viewed! And the best part of all? The browser wouldn't even need to know that /var/www/bob/dynamic_page.exe wasn't just a plain-old HTML file. Everything could happen server-side and it would seem like it was just sending plain, innocent HTML files.
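If you want to picture it, here's a minimal sketch of that idea in modern Python. The real early dynamic pages were CGI programs, usually written in C or Perl, and the counter file path here is made up; the point is just that the server runs a program, the program prints a header and some HTML, and the browser is none the wiser.

```python
#!/usr/bin/env python3
# A minimal CGI-style "dynamic page": the web server runs this program and
# streams whatever it prints back to the browser as the response.
# (Sketch only; the counter file path is hypothetical.)

COUNTER_FILE = "/var/www/bob/view_count.txt"

def read_and_bump_count() -> int:
    try:
        with open(COUNTER_FILE) as f:
            count = int(f.read().strip() or "0")
    except FileNotFoundError:
        count = 0
    count += 1
    with open(COUNTER_FILE, "w") as f:
        f.write(str(count))
    return count

count = read_and_bump_count()

# CGI convention: print headers, then a blank line, then the body.
print("Content-Type: text/html")
print()
print(f"<html><body><p>This page has been viewed {count} times.</p></body></html>")
```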
Pretty soon, people would take this idea and build bigger systems off of this. PHP would be born within two years. And then the first web-based online stores would follow.
So that was 1993-1995. In 1995, you would have been really, really lucky to have even a 28.8 kbps modem in your house; 56 kbps modems didn't show up until a couple of years later. Even in the year 2000, dial-up was still the norm.
In short, the internet was slow as heck back then.
There was a problem back then called the "C10K Problem". It was really difficult to maintain over 10,000 live connections. This was true for MMORPGs like Ultima Online just as much as it was true for web servers.
The big fundamental reason why was poor operating system APIs. We just hadn't invented epoll or kqueue yet.
Back then, all you had was select and poll. And that was basically the equivalent of handing the OS your whole list of active connections over and over and asking, "Have any of these sent me anything yet?" All that work, just to get back 10,000 answers of "Nope".
The newer things I mentioned like epoll and kqueue allow you to tell the OS the equivalent of "Hey Linux, if any of these 1,000,000 things ever sends me anything, wake me up and tell me which one, so I don't need to check the others." So now you can have millions of simultaneous connections if you optimize really, really well.
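As a rough sketch of the difference, this is what the readiness-notification style looks like with Python's selectors module, which sits on top of epoll or kqueue where the OS provides them. It's an illustrative echo server, not a production design.

```python
import selectors
import socket

# Readiness notification: register many sockets once, then sleep until the
# OS tells us which ones actually have data. No per-connection polling loop.
sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

listener = socket.socket()
listener.bind(("0.0.0.0", 8080))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    # Blocks until at least one registered socket is ready.
    for key, _events in sel.select():
        sock = key.fileobj
        if sock is listener:
            conn, _addr = listener.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)  # echo it back, just for illustration
            else:
                sel.unregister(sock)
                sock.close()
```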
Again, this is just more historical context. What I want to get at is that in the year 2000, the internet was slow and you couldn't have many connections active.
How could you run a website like Amazon back then?
Well, it's pretty simple. First you pay for the biggest internet connection you can to solve the problem of speed. Second, you just don't keep connections alive. Open them, send the data you need, close them. If you send someone a catalog of things for sale, they're going to be reading that for a minute or two before they decide to view a few product pages. And then once they're on those product pages, that's a few more minutes before they click "Buy". You don't need those connections alive all the time.
The years 1993 to 2000 were a rich time for the web. Things grew, people wrote better web browsers. People wrote better web servers. PHP was invented to dynamically generate HTML pages better. HTML got images. HTML got styling. JavaScript was invented.
Lots and lots and lots of things happened.
Behind some of that stuff was Roy Fielding. He was one of many guys attending the meetings between web browser companies, web server companies, ISPs, universities, and whoever else was a stakeholder in these discussions, contributing to standards questions like "What kind of stuff should people be allowed to put in HTTP headers?" and "What should we do when the user refreshes the page?"
Web 1.0 was very much a static file sharing web. But Web 1.1 (my terminology) allows arbitrary programs to run in order to generate web pages dynamically. It was in this context that REST was born.
Finally, it's time to tackle the main topic of this post. What exactly is Representational State Transfer?
Well... It's a way to transfer state... by representation.
So what exactly is "state"? It's whatever your system stores on the server. Amazon's state is the product catalog and the current inventory. A bank's state is the balance of all of its customers' accounts.
And how do web servers transfer that state? HTML, CSS, and JavaScript.
REST is about transferring state to web browsers. And that state is represented by "Hypermedia" in the form of "Hypertext Markup Language" or HTML for short.
If you've ever encountered HATEOAS (Hypermedia as the engine of application state), this should sound eerily familiar. Because what those academic words mean is: Users should be able to click around on HTML pages and affect things on the server.
You can't persist a single connection forever. So you make the protocol stateless. Connect, send some payload, get a response, disconnect.
Bandwidth was very limited. So what do you do? Cache. If you're in a university and Professor A downloads a paper? Maybe Professor B might need it too. Many universities set up HTTP caches in this time period. And browsers and web servers needed to be aware of this and support these types of things.
Uniform Interface? That was him saying "I got Mosaic and Netscape to agree to use HTTP 1.1. I also got Apache and Microsoft IIS to agree to this standard too."
This entire dissertation as Roy Fielding wrote it? It was an after-the-fact description of the web as he helped design it and what he managed to get all the stakeholders to agree on and implement. The dissertation itself was a formality in order to satisfy his graduation requirements.
When he talks about a "client" he means literally a web browser. Sure, curl is technically a client too, but nobody is going around curling web pages and manually following links around. Not even Richard Stallman.
When he talks about a "resource" he means a web page. Or a downloadable image. Or a script. He does NOT mean "a computer-understandable data payload for your bank account". He means a web page that displays your bank details and all the supporting stuff in there like scripts and stylesheets that might get cached by web browsers or university proxies.
As far as I can tell, this paper is not meant for us humdrum terrestrial software engineers. The best people to take advantage of this might be aliens who need to reinvent their own world wide web.
I'm not trying to discredit his work. The World Wide Web is an amazing piece of technology and Roy Fielding did an amazing job to push that entire ecosystem forward.
But how he designed the World Wide Web should not be the basis for how software engineers like us design everyday APIs for the banks and schools that we work for. And a misreading of his doctoral thesis shouldn't be the cause of endless architectural bike shedding.
There's really no good reason to try to be a "RESTful"-purist. The only truly RESTful apps in the world? Google Chrome, Firefox, Opera, and Safari. That's it.
So let's take the bike shedding away. In the real world in 2025, REST has become a shorthand for the following architecture:
- JSON over HTTP
- Routing handled by URLs (sometimes with arguments parsed out of the URL path)
- Some attempt to follow HTTP error codes
All of these things are supported out of the box by pretty much every major server-side web framework.
So here's where I put forth my ideas.
Misunderstanding 1: JSON over HTTP
JSON over HTTP is pretty good. Single request - single response protocols are really nice in general. Under the hood, this might be implemented through multiplexed TCP sockets, reordered packets, and who knows what other cursed things. But as far as our applications are concerned? We send one JSON payload, the server receives it, routes the data to the correct function through the URL, then sends us back one JSON payload.
It's not HATEOAS, but it's great. I like it a lot. It's human readable. You can debug it with curl or the network tab in Chrome. 10/10.
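For illustration, here's roughly what that one-request, one-response cycle looks like from the client side using Python's requests library (the endpoint and payload are made up):

```python
import requests  # third-party library: `pip install requests`

# One JSON payload out, one JSON payload back. The URL does the routing.
resp = requests.post(
    "https://api.example.com/accounts/123/transfers",  # hypothetical endpoint
    json={"to_account": "456", "amount_cents": 5000},
    timeout=10,
)

print(resp.status_code)  # e.g. 200
print(resp.json())       # e.g. {"transfer_id": "abc-789", "status": "pending"}
```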
Misunderstanding 2: Endpoints as "Resources"
Routing requests via URL? Fantastic. The alternative is that you pack the route inside the message itself and you ignore what the frameworks provide out of the box. I don't see a reason to fight against it.
Here’s where RESTful purism starts to clash with real-world constraints:
Nouns only for resources. Sometimes you really just need POST /accounts/123/supercharge. It's a business requirement. Renaming it to /accounts/123/superchargeness doesn't improve anything.
Also, the browser gods gave us only a handful of verbs. Sometimes you just need to POST to /accounts/123/deactivate because you had already used DELETE for actual account deletion.
In fact, maybe to remove ambiguity altogether, you're better off writing two URL handlers:
POST /accounts/123/deactivate
POST /accounts/123/delete
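A minimal sketch of those two handlers, assuming Flask (any framework with URL routing looks much the same):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Two explicit, unambiguous POST endpoints instead of overloading DELETE.
@app.post("/accounts/<int:account_id>/deactivate")
def deactivate_account(account_id):
    # ... mark the account as inactive in the database ...
    return jsonify({"account_id": account_id, "status": "deactivated"})

@app.post("/accounts/<int:account_id>/delete")
def delete_account(account_id):
    # ... actually delete (or schedule deletion of) the account ...
    return jsonify({"account_id": account_id, "status": "deleted"})
```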
There's really no technical reason why DELETE is better than POST. In fact, in some cases, DELETE might be worse than POST. Some really old and misconfigured enterprise firewalls might block "weird" HTTP verbs. Don't ask me why. There's a lot of hardware and software manufacturers out there.
8/10. Route to the correct handlers by URL. But don't obsess too much about verbs or nouns. Use what feels right.
On HTTP Verbs
Here's what actually matters with HTTP verbs though.
GET requests
- Sometimes the responses are cached by CDNs like Cloudflare (see the caching sketch after this list)
- Sometimes the responses get cached by the browser
- Sometimes they are preloaded from another page
- If you end up on a page from a GET request, you can refresh the page and the browser won't warn or complain
- They can't usefully carry a request body; query params only. (A GET body isn't strictly forbidden by the spec, but servers and proxies will generally ignore or reject it.)
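Here's a sketch, again assuming Flask, of how a server opts in or out of that caching with the standard Cache-Control header. The browser, a university proxy, and a CDN like Cloudflare all read the same hint (the endpoints and data are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# A read-only GET endpoint. The Cache-Control header tells browsers, proxies,
# and CDNs that they may reuse this response for up to an hour.
@app.get("/products/<int:product_id>")
def get_product(product_id):
    resp = jsonify({"id": product_id, "name": "Example Widget"})  # made-up data
    resp.headers["Cache-Control"] = "public, max-age=3600"
    return resp

# The opposite hint: never cache this one.
@app.get("/me/balance")
def get_balance():
    resp = jsonify({"balance_cents": 123456})  # made-up data
    resp.headers["Cache-Control"] = "no-store"
    return resp
```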
POST requests
- The responses are (almost) never cached
- "Confirm Form Resubmission" on refresh
- Cross-origin, they can trigger the browser's CORS preflight checks (a JSON body does; a plain form post usually doesn't)
PUT, PATCH, and DELETE are pretty much identical to POST. (Though there are some libraries that don't let you send request bodies on DELETE. But that's not a hard requirement of the HTTP protocol.)
All the things everyone says about GET idempotency? That's on engineers to implement. It's a great rule of thumb if you're designing a system. But it's not enforced by any web framework.
If you have a form for a user to transfer money from his bank account to someone else, please make that form do a POST request. Don't do it through GET. Otherwise, someone's going to have a bad internet connection, refresh the page five times, and accidentally send money five times.
But also, if it's hidden behind an API call triggered by a JavaScript HTTP fetch? Honestly, it really doesn't matter at all. It's really up to your personal feelings if you want to pack the data into query parameters. GET vs POST do have slightly different security implications though (query parameters tend to end up in server logs, proxies, and browser history).
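For the money-transfer form above, here's a minimal server-side sketch, again assuming Flask and made-up endpoints. The write only goes through POST, so a refresh triggers the browser's "Confirm Form Resubmission" warning instead of silently repeating the transfer, while the read-only GET stays safe to refresh and cache.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Money movement only via POST: refreshing the result page makes the browser
# warn before re-sending, and the response won't be cached along the way.
@app.post("/transfers")
def create_transfer():
    payload = request.get_json()  # e.g. {"from": "123", "to": "456", "amount_cents": 5000}
    # ... validate and perform the transfer ...
    return jsonify({"status": "ok"}), 201

# A GET on the same URL only reads state, so caching or refreshing it is harmless.
@app.get("/transfers")
def list_transfers():
    # ... fetch recent transfers for the logged-in user ...
    return jsonify({"transfers": []})
```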
Misunderstanding 3: HTTP Error Codes
Some people say the default HTTP error codes are all you need. That's probably not true.
There are only a few error codes out there: fewer than 600 possible values, and far fewer actually registered. And some of them are April Fools' jokes, like HTTP 418 I'm a Teapot.
HTTP error codes were made for all the middlemen. They're not for your app. Cloudflare needs to know whether it should email the web admin because the site is down, or whether the user just submitted a bad form.
That's it. That's what the default HTTP error codes are for.
"Wrong Password" on a login form? I guess that's not so bad to be 401 Unauthorized. Try to access something not yours? Sure, 403 Forbidden.
But what about other weird application-specific errors like trying to withdraw too much money from a bank account? There's just no dedicated error code for that. So you just have to roll your own.
I think it's fine to use HTTP error codes. But you should probably plan to add a proper application-specific error code system. How you do it is up to you, but anyone who says that 600 standards-compliant error codes should be enough is making the same type of mistake as claiming 640K ought to be enough RAM for anybody.
Bill Gates never said 640K of RAM is enough, and Roy Fielding never said 600 error codes are enough. Those are for the internet infrastructure. Your app deserves better: translatable, traceable error codes.
The worst apps are the ones that just show popups saying "Error 400: Something went wrong!"
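Here's a sketch of what a better error payload might look like, again assuming Flask. The payload shape and the error catalogue are just assumptions; the point is that a stable application error code and a trace ID ride alongside the HTTP status.

```python
from flask import Flask, jsonify
import uuid

app = Flask(__name__)

# Hypothetical application-level error catalogue. The HTTP status is for the
# infrastructure (proxies, CDNs, monitoring); the app code is for your UI,
# your translations, and your logs.
INSUFFICIENT_FUNDS = "BANK-4001"

@app.post("/accounts/<int:account_id>/withdrawals")
def withdraw(account_id):
    trace_id = str(uuid.uuid4())
    # ... suppose the balance check fails ...
    return jsonify({
        "error_code": INSUFFICIENT_FUNDS,  # stable, translatable key
        "message": "Withdrawal exceeds available balance.",
        "trace_id": trace_id,              # lets support staff find the logs
    }), 422  # a 4xx tells the middlemen "client problem, don't page anyone"
```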
5/10. Good as a first approximation. Add more error codes if your app is going to get big.
The world today is very different from the one in 2000. We have faster chips, better operating system APIs, and better networks now.
Many of these ideas still hold a lot of weight today. If it worked on worse hardware, why wouldn't it work on better hardware? There's much wisdom in trying to build around stateless APIs and caching locally when you can.
But there are also plenty of things you can do now that they couldn't even dream of before. REST isn't a solution for everything. It's a product of its time that still holds some great ideas.
Roblox is a multiplayer game. On a browser. Not built on stateless REST-like protocols. They could never have built that in 2000 (without Flash or Java Applets).