We Can't Name Variables. Now We're Writing Prompts?


For years, you could get away with terrible variable names: data2, temp, x. The compiler didn’t care, but your teammates grumbled. The code ran. You shipped features. You got promoted. Writing skills? Those were for the people writing documentation or apologizing to customers when your code caused a server to self-destruct.

Now, to build software, you write prompts. Alas, prompts don’t compile. They don’t throw syntax errors. They just… generate something. That something could be correct. It could be wrong. It could be a mind-boggling interpretation of what you thought you said but didn’t actually say.

As a skilled programmer, you optimized for a world where machines enforced clarity. Compilers, type systems, and linters were the grammar police. You could write sloppy sentences in code comments, but the code itself had to be precise. Now, those sloppy sentences are about to become the “code”. And English sentences—sentences in any natural language for that matter—just don’t have compilers. Natural language is inherently ambiguous, and now you, poor programmer, are suddenly responsible for the difficult task of writing clearly without a compiler to save you.

Abstraction Has Come Full Circle

We’ve spent decades raising the abstraction level of programming. When we wrote code in Assembly, we told the CPU exactly what to do, register by register. C abstracted away registers so we could think in functions and pointers. Python abstracted away memory so we could think in objects and logic. Frameworks abstracted away boilerplate so we could think in configurations.

Now we’ve reached natural language. We’ve abstracted so far that we’re back to the least precise medium of communication humans have ever invented. Natural language evolved for storytelling, persuasion, and ambiguity. It was never meant for specifying computational behavior. A sentence can mean twelve different things depending on context, tone, and what the listener ate for breakfast.

Apparently, this has happened before, even though I’m way too young to have experienced it first-hand. In the 1970s, SQL was designed as an English-like query language so non-programmers could access databases more easily. The vision was simple: just ask the computer what you want in almost-English, and it would figure it out. Ultimately, SQL turned out to be structured thought disguised as English. SELECT * FROM users WHERE active = true still requires you to know what “active” means, what you’re actually asking for, and what it should look like. Precision still mattered.

Prompting AI is the same situation, but scaled up. You’re writing queries against a model that will happily interpret your words in ways you never intended. It will fill in gaps with assumptions. It will confidently misunderstand you. And unlike SQL, most AI prompts don’t come with a fixed schema to constrain interpretation.

Naming Things at the Scale of Entire Programs

There’s an old joke in Computer Science: “There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors.” We’ve repeated this joke for decades, usually while laughing and then immediately turning around to name a variable temp2. Most of us treated naming as a minor annoyance: something you could get wrong and fix later during code review, if anyone bothered to call it out.

A good name like authenticatedUserSessionMetadata tells you what it is, where it lives, and how to think about it. A bad name like data tells you almost nothing. Naming was always hard because it required three things:

  • Compression: distill meaning into a few words.
  • Context: make it clear to someone else who doesn’t have access to your brain.
  • Precision: rule out misinterpretation, or at least significantly reduce its likelihood.

It turns out that these three things are also exactly what a good AI prompt requires.

Some engineers still can’t name variables properly. getUserData() could mean fetching from a database, reading from a cache, validating a session token, or deserializing JSON from an API response. Now we’re asking those same engineers to “name” entire programs in paragraphs of English. What could possibly go wrong?
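To make that ambiguity concrete, here’s a small TypeScript sketch; the function names are invented for illustration, not taken from any real codebase. The first name could mean any of those four things; each of the others rules the rest out.

  // Ambiguous: database read? cache hit? token check? API response parse?
  async function getUserData(id: string) { /* ... */ }

  // Precise: each name survives being read by someone without access to your brain.
  async function fetchUserFromDatabase(id: string) { /* ... */ }
  async function readUserFromSessionCache(id: string) { /* ... */ }
  async function validateSessionToken(token: string) { /* ... */ }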

Consider what’s wrong with a bad variable name like temp. What kind of temporary thing? How long does it live? What is its purpose? Now consider a bad prompt: “Make this code better.” Better how? Faster? More readable? Fewer lines? Different algorithm? AI will pick an interpretation, and that interpretation may not match yours.

One could argue that the skill required to name variables properly is identical to the skill required to write good AI prompts. Both require you to externalize your thoughts. The difference is that the compiler used to force that externalization through syntax: types, function signatures, and module boundaries were the forcing functions for clarity. In contrast, AI won’t force you at all. It will just guess, generate, and wait for you to realize that the output doesn’t even come close to what you meant.

The point I’m trying to make here is: prompting isn’t an entirely new skill. It’s the same skill engineers have always struggled with, just without the training wheels we’ve always had.

Logic is Cheap Now. Clarity is Expensive.

For decades, one of the primary bottlenecks in software development was how fast you could translate logic into code. Typing speed, library knowledge, and debugging skill all determined throughput. Engineers who could crank out logic faster were more valuable. This is why many technical interviews evolved into algorithm speed-runs:

Reverse a linked list and invert a binary tree, concurrently, in constant time and in 20 minutes, while hanging upside down like a bat.

The assumption was that fast logic production was a skill that mattered. But if AI can now solve these problems in seconds, what are we actually interviewing for? I guess that’s an article for another day.

For many coding tasks, AI can now generate logic faster than most humans. The constraint is no longer generation; it’s specification. If you can’t describe what you want, AI will happily build the wrong thing at incredible speed. And because it builds fast, you’ll have an avalanche of wrong code to debug by the time you realize what’s going on.

Debugging code you wrote yourself is hard enough. You retrace your logic, find the flaw, and fix it. Debugging code generated by AI from your vague prompt is worse. You’re debugging two things at once: the logic and your own shitty prompt. You have to ask yourself:

Did the AI misunderstand me? Did I mis-specify? Or is it both?

Often, neither you nor the AI is wrong. Natural language just has an annoying penchant for allowing multiple valid interpretations of the same thing.

Some may have thought that AI would save us from having to invest a lot of energy into communication. No more meetings, no more writing docs. Just describe it to the machine and ship it. Instead, AI has probably made communication the most important skill you need. The machine can generate logic. YOU have to explain what logic to generate.

The game has changed. Logic is abundant. Clarity is scarce. Engineers who can communicate intent precisely are now the bottleneck, and the competitive advantage. Which one are you?

What Does this Mean for You?

Stop thinking of prompting as “talking to AI”. Think of it as writing a specification that a very literal, very fast, slightly confused entry-level engineer will implement without asking clarifying questions.

Three mental models help. First, prompts are compressed context. AI doesn’t know what you know. It doesn’t have your product requirements or your team’s conventions in its head. Every assumption must be stated. If you assume the AI knows that “optimize” means “reduce the number of database queries,” you may get code that’s optimized for readability instead.
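Here’s a hedged illustration of the difference; the function name, file path, and database are all hypothetical. The first prompt leans on context the AI doesn’t have, while the second compresses that context into the prompt itself.

  // Relies on unstated context: which kind of "optimized" do you get back?
  const vaguePrompt = "Optimize getOrderHistory().";

  // States the assumptions instead of assuming they're known.
  const compressedContextPrompt = `
  Optimize getOrderHistory() in src/orders/service.ts.
  "Optimize" here means: reduce the number of database queries (currently one per order line).
  Constraints: keep the public signature unchanged; we're on PostgreSQL;
  readability matters less than query count for this one function.
  `;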

Second, ambiguity is your enemy. Words like “better,” “clean,” “simple,” and “optimize” mean nothing without criteria. Better for whom? Clean by what standard? Uncle Bob’s? Simple in terms of lines of code, cognitive load, or execution time? AI will pick a definition. It probably won’t be yours.

Third, you have to iterate. Just as you wouldn’t expect code to work on the first try, don’t expect prompts to work on the first try either. Write a prompt, see what it generates, refine your language. The process is identical to writing a function, running it, and fixing the bugs. Except now, the bugs are in your description of the function, not the function itself.

This is about externalizing thought. Code reviews, documentation, and system design docs all require the same skill: taking what’s clear, or not so clear, in your head and making it clear to others. Engineers have always struggled with this. We write comments that say // fix later and commit messages that say “updated stuff”. We avoid writing comments and design docs because:

The code is the documentation. 🤡

News flash! AI is about to make communication skills non-optional. If you can’t explain your intent clearly to AI, you probably couldn’t explain it clearly to a human either.

How to Get Better

Getting better at prompting means getting better at communication. Some call it “prompt engineering”. I hate that phrase almost as much as I hate the phrase “vibe coding”. Anyway, here’s how to get better at prompting AI.

Be specific about constraints and criteria. Don’t say “optimize this function.” Say “reduce time complexity from O(n²) to O(n log n) without increasing memory usage.” Don’t say “make this API call more robust.” Say “add retry logic with exponential backoff, timeout after 30 seconds, and log failures to Sentry.”
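To show what that second kind of prompt is actually asking for, here’s a minimal TypeScript sketch, assuming a fetch-based HTTP call; fetchWithRetry and the error-tracker comment are illustrative, not an existing API.

  // Sketch only: retries with exponential backoff, a hard 30-second budget overall,
  // and a hook where failures get reported.
  async function fetchWithRetry(url: string, maxAttempts = 5): Promise<Response> {
    const deadline = Date.now() + 30_000; // 30-second timeout across all attempts
    let lastError: unknown;

    for (let attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        const response = await fetch(url, {
          signal: AbortSignal.timeout(Math.max(deadline - Date.now(), 1)),
        });
        if (response.ok) return response;
        lastError = new Error(`HTTP ${response.status}`);
      } catch (err) {
        lastError = err;
      }
      const backoff = 500 * 2 ** attempt; // 0.5s, 1s, 2s, 4s, ...
      if (Date.now() + backoff >= deadline) break;
      await new Promise((resolve) => setTimeout(resolve, backoff));
    }

    // Report the failure here, e.g. Sentry.captureException(lastError) if you use Sentry's SDK.
    throw lastError;
  }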

Provide examples of success and failure. Show the AI what “good” looks like. If you’re asking for a function, give it sample inputs and expected outputs. This is test-driven development for prompts. You’re specifying behavior through examples because prose alone is too ambiguous.
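Here’s a sketch of that idea; slugify and the expected outputs are invented for illustration. The same examples you hand to the AI double as the acceptance check for whatever it generates.

  // The spec, as input/output pairs instead of prose.
  type Slugify = (title: string) => string;

  const examples: Array<{ input: string; expected: string }> = [
    { input: "Hello, World!", expected: "hello-world" },
    { input: "  Extra   spaces  ", expected: "extra-spaces" },
    { input: "100% Coverage?", expected: "100-coverage" },
  ];

  // Whatever the AI produces gets run against the same examples.
  function passesSpec(candidate: Slugify): boolean {
    return examples.every(({ input, expected }) => candidate(input) === expected);
  }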

Iterate in small steps. Don’t ask AI to “build a REST API with authentication, rate limiting, Redis caching, and a pathway to heaven.” Ask it to scaffold routes. Then add input validation. Then add error handling. Then integrate Redis. Incremental prompts enable you to verify results incrementally. You’ll have a better chance at catching misunderstandings early, when they’re still relatively cheap to fix.
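As a sketch of what step one alone might produce (assuming Express; the routes are placeholders), you stop at bare scaffolding, verify it, and only then prompt for the next layer.

  // Step one only: scaffolded routes. No validation, error handling, or Redis yet.
  import express from "express";

  const app = express();
  app.use(express.json());

  app.get("/items", (_req, res) => res.json([]));                        // list
  app.get("/items/:id", (req, res) => res.json({ id: req.params.id }));  // read one
  app.post("/items", (req, res) => res.status(201).json(req.body));      // create

  app.listen(3000, () => console.log("listening on 3000"));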

Learn to name things properly. Start small. Review your variable names, function names, and commit messages. Are they clear? Contextual? Unambiguous? Apply that same rigor to prompts. If you can’t name a variable well, you won’t describe a program well either.

Read more. Write more. Clarity comes from practice. Write technical documentation. Write design docs. Write blog posts explaining your work. Each of these activities trains the same muscle: taking what’s in your head and making it clear to others. Engineers who write regularly are better prompters, better communicators, and better thinkers.

These aren’t “AI skills”. They’re engineering communication skills that were always valuable but are now essential.

What Now?

Maybe this is poetic justice. For years, many engineers dismissed writing as a “soft skill”, something less important than the “real work” of slinging code. We weren’t just bad at naming things. We were bad at externalizing intent. Technical writing? That was for people who couldn’t ship code 🙄. Communication was overhead and a distraction from the work that mattered.

Now it turns out that writing IS coding. The prompt is the program. The clearer your language, the better your software. Engineers who can communicate intent precisely will leverage AI to ship faster, iterate faster, and think more clearly. Engineers who can’t will generate mountains of plausible-looking garbage and spend their days debugging their own ambiguity. Communication clarity is now a competitive advantage.
