Most people in IT have at least heard of assembly language. Some even encountered it at university - just enough to know it exists, but not enough to use it for anything real.
Assembly used to be the secret behind high-performance code. Games like Quake, graphics engines, and device drivers often had big chunks written in assembly for one simple reason: speed.
But times changed. Modern developers rarely touch assembly anymore, and for good reasons. Until recently, it simply didn't make sense for most projects.
Well, at least, that's what I thought - until I tried something on my Mac a few days ago.
Why Assembly Disappeared
Assembly slowly vanished from mainstream software development because it was too much work for humans to handle.
- It's not portable - an assembly routine that runs on x86 won't work on ARM, RISC-V, or a GPU.
- It's not maintainable - every line depends on the quirks of a specific CPU.
- It's also not forgiving - one wrong instruction and you crash the process or corrupt memory.
Higher-level languages like C, Java, or Python fixed that. They gave us portability, readability, and safety. We stopped thinking about registers and flags, and started thinking about business logic.
Compilers got so good that we forgot there was assembly underneath. For human developers, this was a big win. For LLMs, though, those constraints don't really apply.
Humans Have Limits. LLMs Don't - at Least Not the Same Ones
Humans forget things. We can't hold every detail of a large project in our heads. That's why complex systems end up consuming too much memory, leaking resources, or deadlocking.
LLMs, on the other hand, can "see" entire codebases at once. With enough context tokens, they can process hundreds of files, reason about dependencies, and never lose focus.
So I started thinking: if an LLM can understand requirements directly in English, why do we still need to go through a high-level programming language at all?
Why not just ask the model to write assembly code directly for the target hardware?
From English to Assembly on My Mac
Every processor architecture - x86, ARM, RISC-V, even GPUs - has its own instruction set. But that's not really a problem anymore. You can simply be explicit in your prompt.
In my case, I told Copilot something like this:
"Write an assembly routine for my Mac (Apple Silicon / ARM64) that adds two numbers and returns the result."
That's it. No compilers, no C, no libraries. Just plain English. Copilot produced a short, valid assembly routine for my Mac. Then I asked it to write a test harness to verify it. It chose Python automatically, wrote the code to load and call the routine, and checked that 3 + 5 returned 8. It worked.
It was simple - just adding two numbers - but it proved the point: I didn't need to know the registers, calling conventions, or even the assembler syntax. The AI handled all of it.
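If you're curious what that looks like end to end, here's a minimal sketch of the whole flow - not the exact code from my experiment, just an illustration. The routine name add_numbers and the exact instructions are my own placeholders, and the sketch assumes an Apple Silicon Mac with the command-line developer tools installed:

```python
# Minimal sketch: write an ARM64 routine, assemble it, and test it from Python.
# Illustrative only - the names and instructions are placeholders, not the repo's code.
import ctypes
import os
import subprocess
import tempfile

ASM = """
.globl _add_numbers            // leading underscore: macOS symbol convention
.p2align 2
_add_numbers:
    add x0, x0, x1             // AArch64 ABI: args arrive in x0/x1, result goes in x0
    ret
"""

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "add.s")
lib = os.path.join(workdir, "libadd.dylib")
with open(src, "w") as f:
    f.write(ASM)

# Assemble and link a shared library with the system toolchain.
subprocess.run(["cc", "-shared", src, "-o", lib], check=True)

add_numbers = ctypes.CDLL(lib).add_numbers
add_numbers.argtypes = [ctypes.c_long, ctypes.c_long]
add_numbers.restype = ctypes.c_long

assert add_numbers(3, 5) == 8
print("3 + 5 =", add_numbers(3, 5))
```

The assembly itself is two instructions: the AArch64 calling convention passes the first two integer arguments in x0 and x1 and returns the result in x0, so a single add does all the work.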
Why Assembly Might Make Sense Again
For humans, assembly was too low-level to manage. For LLMs, it's just another language - no harder than Python or JavaScript.
They could:
- Generate code directly optimised for the target CPU or GPU.
- Adapt automatically to architecture differences.
- Explore micro-optimisations that compilers might miss.
Imagine asking:
"Generate an ARM64 routine for matrix multiplication optimised for Apple's M-series cache layout."
The LLM could write it, benchmark it, refine it, and repeat - all automatically. That's something no human could realistically do across multiple hardware targets.
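In rough pseudocode, that loop might look like the sketch below. To be clear, llm_generate and build_and_load are hypothetical placeholders - one stands for a model call, the other for the assemble-and-load step from the earlier example - not real APIs:

```python
import time

def refine_kernel(prompt, workload, rounds=5):
    # Hypothetical generate-benchmark-refine loop (placeholder APIs).
    best_asm, best_time = None, float("inf")
    feedback = ""
    for _ in range(rounds):
        asm = llm_generate(prompt + feedback)   # placeholder: ask the model for a candidate
        kernel = build_and_load(asm)            # placeholder: assemble + load, as in the add example
        start = time.perf_counter()
        kernel(workload)                        # time the candidate on a fixed workload
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_asm, best_time = asm, elapsed
        feedback = f"\nYour last version took {elapsed:.6f}s. Make it faster."
    return best_asm, best_time
```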
Of course, there are risks. LLM-generated assembly might not be deterministic or reproducible. It may fail silently or produce slightly different code for the same request. And debugging machine code generated by a neural network sounds like a nightmare.
But then again, we already trust compilers we don't fully understand. This is just one level deeper.
Requirements in Plain English
One of the most exciting parts of all this is how LLMs turn requirements into code.
It's like Behaviour-Driven Development (BDD), but without the intermediate steps. You describe what you want:
"The function should return the sum of two integers and store the result in memory."
The LLM translates that into assembly, or C, or Python - whatever fits the goal - and can even test it immediately. In theory, product managers could write requirements that are executable from day one, without developers writing scaffolding or boilerplate.
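As a toy illustration, that requirement maps almost word for word onto a test. Here's a pytest-style sketch reusing the add_numbers routine from the earlier example (skipping the "store the result in memory" part for brevity):

```python
# "The function should return the sum of two integers."
def test_returns_the_sum_of_two_integers():
    assert add_numbers(3, 5) == 8
    assert add_numbers(-4, 4) == 0
    assert add_numbers(0, 0) == 0
```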
Languages would become more like intermediate representations - still useful for humans, but optional for machines.
What Comes Next
Here's where I think this could go in the next few years:
- LLMs trained per architecture - ARM, x86, RISC-V, CUDA, etc.
- AI-driven compilers - translating English or pseudocode straight into optimised machine code.
- Semantic debugging - where you ask "why is this logic wrong?" instead of inspecting registers.
If that happens, the boundaries between design, coding, and compiling could blur completely. Developers would focus on describing intent, not syntax. LLMs would handle the low-level reality. And maybe, just maybe, assembly will make sense again - not because we understand it better, but because our tools finally do.
If you'd like to see the tiny "add two numbers" experiment I mentioned, I've shared it on GitHub: github.com/ionionascu/llm-to-assembly
This is not a production-ready project - it's a simple proof of concept. The code covers only the happy path and is not designed to be fault-tolerant or to handle every edge case. Invalid inputs or edge conditions may cause segmentation faults or crashes, and that's expected. The goal was never to build robust software, but to explore whether an LLM could generate functional assembly code directly from plain English.
TL;DR: This project isn't about "vibe coding" or skipping engineering discipline. It's a small, controlled experiment meant to explore what happens when large language models can reason directly at the hardware level - not a suggestion that we should replace careful software design with AI prompts.
Note: As I'm not a native English speaker, I used ChatGPT to review and refine the language of this article while keeping my original tone and ideas.
Originally published on Medium.