From Idea to Launch in 2 Weeks


Everyone’s been talking about LLMs, and I didn’t want to be too late to the party. While everyone else debated doomsday scenarios, I just wanted to see for myself. As an engineering leader, I spend a lot of time thinking about productivity, tooling, and how engineers work best. But it’s easy to lose touch with the day-to-day of building. So I rolled up my sleeves and built a few things.

In the past couple of months, I launched three projects with the help of LLMs: aGentDPR, Caccepted, and Okutac. I made LLMs a primary part of the development process because I wanted to understand how far you can really get with today’s AI-driven workflow. Most importantly: how useful LLMs actually are, where they break down, and what it means for real-world engineering work. Here’s a short description of each project:

  • aGentDPR: A GDPR audit tool that generates compliance reports.
  • Caccepted: A daily challenge tracker focused on self-improvement and discipline.
  • Okutac: A micro-article publishing platform powered by curated, AI-assisted summaries for Turkish audiences.

I also talked to other engineers, some skeptical, some already using LLMs as copilots. A common theme: “It helps, but only if you already know what you’re doing.” That rhymes well with my overall experience. These tools don’t replace thinking. They multiply it. If you’re clear on your direction, you can move incredibly fast. If not, you’ll just generate better-written confusion. If I’m writing a bash script to do X, Y, and Z, there’s honestly no way I can do it faster or even better than an LLM. And that’s fine. I’m not writing it to prove anything; I’m writing it to solve a problem.

Anyway, let’s talk about my stack and how I got here.

My Stack

All three projects ran on the same basic setup: Next.js, Supabase, and TypeScript. It’s the stack I default to when I don’t want to waste time thinking about infrastructure. Just enough abstraction to move fast, just enough flexibility to handle changes later.

Tooling-wise, I kept it simple:

  • VSCode Pro with Claude, OpenAI, and Gemini wired in
  • Cursor for context-aware editing and refactoring
  • Warp as my terminal. It makes decent suggestions.

But tooling was only half of it. The real advantage came from how I used the LLMs. I didn’t just pick one model and stick with it; I used them like a relay team. Overall, Claude models were solid for writing and editing code: structured, logical, and rarely off the rails. OpenAI handled things like landing page copy or writing descriptions. Gemini was there, but honestly, I didn’t lean on it much. A few friends tell me Gemini is good at specification and architecture work, but I didn’t explore that much.

I was casually coding with them, but it really started to click when I began cycling between models. I’d start writing something with one, improve it with another, and sometimes run the same problem through each of them just to see the contrast. I wasn’t chasing the perfect result; I was exploring different paths quickly and seeing which one held up.

This pattern (draft, evaluate, revise) ended up looking a lot like a manual version of agentic AI. One model generates. Another critiques. I guide the loop. It’s not some polished, automated system, but it works. And it works fast.
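If you squint, the loop is simple enough to sketch in code. Here’s a minimal TypeScript version; the `complete` helper is a made-up stand-in for whatever SDK you’d actually call each model through, so the shape of the loop is the point, not the plumbing.

```typescript
// Hypothetical model-call signature: prompt in, text out.
// The real implementation (Anthropic/OpenAI/Gemini SDKs) is up to you.
type Complete = (model: string, prompt: string) => Promise<string>;

// One draft -> critique -> revise cycle. One model generates,
// another critiques, the first revises. A human still decides
// when the result is good enough to keep.
async function relay(complete: Complete, spec: string): Promise<string> {
  const draft = await complete("claude", `Implement this spec:\n\n${spec}`);

  const critique = await complete(
    "gpt",
    `Review the code against the spec. List concrete problems only.\n\n` +
      `Spec:\n${spec}\n\nCode:\n${draft}`
  );

  // Feed the critique back for a revision pass.
  return complete(
    "claude",
    `Revise the code to address this review.\n\nCode:\n${draft}\n\nReview:\n${critique}`
  );
}
```

In practice I ran this by hand, pasting between tools, but the structure was exactly that.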

It’s basically pair programming where your pair is three different LLMs. They’re inconsistent, sometimes wrong, but also surprisingly helpful when you direct them properly. You’re still the one steering, deciding what to keep and what to throw out, but they cut the noise and help you move. That’s where the leverage is. I know they produce bullshit solutions at times, especially when you hit a semi-unique problem, but overall they do the job.

The New Leverage

If you’re clear on what you want to build, these models give you insane leverage. You don’t need to touch every file, every function, every line. You just need to know what you’re trying to say and the model fills in the rest.

You can go from idea to working MVP in two weeks. I’m not exaggerating. You don’t need a cofounder, a designer, or even another engineer. If you can define the logic, structure the app, and catch obvious bugs, you can get something real off the ground fast. Not half-baked-demo fast; actually-usable fast. Take a look at each of the applications I built. They aren’t earth-shattering, but they’re fine, and I built all of them with these models.

This is what most people still underestimate. The productivity boost isn’t in writing “faster code.” It’s in skipping over 80% of the boilerplate, setup, and edge-case glue code that normally burns your evenings. I remember how many times some stupid configuration was off and I wasted hours just tracking it down. It gives you the speed to build and the space to think. That’s the new leverage.

One thing I didn’t use, but should have, is defining rules upfront to guide the model. Not prompt hacks; I mean actual persistent instructions. VSCode, for example, supports a .github/copilot-instructions.md file where you can write things like “use snake_case,” “always import from ‘utils/’ for helper functions,” or “always return JSON that matches this schema.” I only found out about that after I’d shipped all three projects. It would’ve saved me a ton of rework. Next time, I’ll set those guardrails early. That way, the model behaves more like a teammate who remembers the context, not someone guessing in the dark every time.
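For the curious, such a file is just plain markdown. It might look something like this; the rules below are illustrative, built from the examples above, not an actual config of mine:

```markdown
<!-- .github/copilot-instructions.md -->
# Project conventions

- Use snake_case for variable and function names.
- Always import helper functions from 'utils/'.
- API handlers always return JSON matching the agreed schema.
- Ask before adding new dependencies.
```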

Where Things Get Messy

Of course, it wasn’t all butterflies and rainbows. The early phase is where this works best: greenfield, isolated features, no legacy decisions in the way. Once a codebase accumulates history, LLMs hallucinate outdated APIs, miss dependencies, or produce logic that compiles but breaks in edge cases. They don’t track interdependencies across files or modules, so what feels fast becomes fragile fast.

Models don’t understand your full context. They forget what they said 5 prompts ago. They make changes that break edge cases you’ve mentally accounted for but never documented. You still need to think in systems. You still need to hold the architecture in your head.

I think that’s where you want to make the distinction: they seem like peer programmers, but honestly they aren’t. So if you break things down into smaller tasks, give them clearer direction, and tighten the loop, it still works. You just have to shift from “co-builder” to “task solver.”

For aGentDPR, once the app was working, the hard part wasn’t building new features; it was refactoring without breaking the reporting logic. The model made plenty of messy assumptions, like pushing report-generation logic into the front end. The branching questionnaire at the core of the app was something I designed, not the LLM, though it was able to fill in the implementation. Even then, I used LLMs to rewrite pieces, validate behavior, and cross-check assumptions. It wasn’t glamorous, but it saved time. And it wrote amazing unit tests, I’m not going to lie.
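To give a flavor of the split between my design and the model’s fill-in work, here’s a hypothetical sketch of how a branching questionnaire like that can be modeled. This isn’t aGentDPR’s actual code, just the shape of it:

```typescript
// Each answer either ends a branch or routes to the next question.
// The branching structure was the human-designed part; the model
// filled in question wording and report fragments.
interface Question {
  id: string;
  text: string;
  answers: {
    label: string;       // e.g. "Yes" / "No"
    next: string | null; // id of the next question, or null to end the branch
    finding?: string;    // optional fragment that feeds the compliance report
  }[];
}

const questionnaire: Question[] = [
  {
    id: "data-collection",
    text: "Do you collect personal data from EU residents?",
    answers: [
      { label: "Yes", next: "legal-basis" },
      { label: "No", next: null, finding: "GDPR is likely out of scope." },
    ],
  },
  {
    id: "legal-basis",
    text: "Do you have a documented legal basis for each processing activity?",
    answers: [
      { label: "Yes", next: null },
      { label: "No", next: null, finding: "Missing legal basis: high-risk gap." },
    ],
  },
];
```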

The Real Work Is the Specification

After working with LLMs across multiple projects, I’ve realized the most valuable thing I created wasn’t the code. It was the spec: what is it that I want this application to do, and how do I want it to do it? Sure, a model can generate some fluff of a spec itself, but it’s not that useful.

You realize the code is just a lossy projection of your actual intent. The real artifact, in fact, is the specification: a structured expression of what the system should do and why. It’s funny, because I teach my students that functional and non-functional requirements are the most important part of software development. Get them right and you’re building the right thing; get them wrong and the projection only gets lossier. I think LLMs make this even more prominent.

Anyway, that’s what I spent most of my time doing on these projects: breaking the idea down, deciding the edge cases, clarifying flows, and writing it all out in clean, readable language for myself and for the model. The better I wrote that, the better the results. Simple as that.

Most of what slowed me down wasn’t typing code, obviously. It was vague requirements, mostly my own. The second I clarified the spec, the model could generate 80-90% of what I needed. And when I didn’t? The model failed. So did I. That was to be expected, but it drove the point home.

Many of us take pride in writing the code. But when AI can generate functional implementations, your value becomes how well you think, how clearly you define the system, and how effectively you communicate that structure. Sure, let’s take pride in writing good software, but when LLMs can generate good-enough code, you need to understand where your value actually lies.

Programmers, Redefined

I think we will have to redefine what programming means and what a programmer is. I’ve already seen this happening. Building aGentDPR, I barely wrote any code. I wrote the structure, logic, and flows. I described behaviors. I reviewed the output. The model typed. Same with Okutac: I described how content should be tagged and surfaced, the model implemented it, and I debugged and refined. That was programming.

We’re heading toward a shift where engineers won’t do coding as we know it. They’ll be architects of clarity. Writing a good spec becomes the core skill. It requires systems thinking, product intuition, edge-case awareness, and user empathy. And that applies whether you’re a PM, designer, engineer, or marketer. It doesn’t matter who you are: if you can write a tight spec, you can ship working software.

Does it scale? In the right hands, yes. In the wrong hands, no. But that’s always been the case.

Many startups started out with something that barely worked. Once they raised enough money, they threw it at the problem and hired the best engineers. I don’t see how this is any different: you ship something fast, and once it reaches an inflection point, you bring in people to make it efficient, scalable, and so forth.

Going back to specs: just like code, specs can be versioned, tested, and composed. That’s not theoretical. I’ve already started treating my prompt docs like living specs. They describe how the system should behave, with examples and constraints.
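As an invented example of what such a spec entry can look like (not one of my actual docs), take a streak feature for something like Caccepted:

```markdown
## Feature: daily challenge streaks

Behavior:
- A streak increments when a challenge is completed before local midnight.
- Missing a day resets the streak, unless the user has a "freeze" available.

Constraints:
- Timezones come from the user profile, never the server clock.

Example:
- Given a 6-day streak and one freeze, missing Tuesday keeps the streak at 6
  and consumes the freeze.
```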

Are We Getting Replaced?

No. But engineers who learn to work with LLMs will most probably outpace those who don’t.

Across the three projects I built, I kept testing how far I could push LLMs. And what I’ve seen is this: they’re not coming for my job, but they are changing the way I build.

On greenfield code, they’re a massive accelerator. You can go from nothing to a working flow in a few hours. You skip setup, boilerplate, scaffolding, all the stuff that normally slows you down. For prototyping and early-stage features, they make solo-building feel like you’ve got a team. I was shipping working features in days instead of weeks.

But once you’re inside an existing codebase, especially something you already understand, the gains flatten out. In fact, I noticed that I was sometimes slower with the LLM. You spend more time rephrasing prompts, fixing context errors, undoing weird assumptions. You think you’re moving fast, but really you’re babysitting. It looks productive, but your velocity drops if you’re not careful.

Despite that, I kept using it. Why? Because it made the work more enjoyable. It took care of the repetitive stuff and freed me up to focus on harder problems. When I hit a blocker or needed to refactor something ugly, I’d have it draft a few alternatives. Sometimes it helped me think through the solution. Other times it just helped me get started.

Now, some people say this is a “mental shift” from coder to orchestrator. But let’s be honest: that’s always been the job. Especially for senior engineers. The hard part was never typing. It was problem framing, architecture, trade-offs, reviews, edge cases. We were already doing that. What LLMs do is surface this reality earlier. Now even mid-level engineers can start working this way if they know how to use the tools right.

But here’s the deeper truth, and it became obvious as I iterated faster: the real work isn’t the code at all. It’s the spec, as I discussed earlier. The clearer I was in describing what I wanted, the faster everything moved. Prompts became specs. Specs became the product. And the person who writes the clearest spec? That’s the actual builder now. Maybe not immediately, but in the future.

However, I still think writing code for enterprise software remains hard even with all these advancements. The complexity isn’t just technical. It’s organizational: multiple teams, shifting scopes, security policies, compliance, monitoring, rollout strategies, internal politics. LLMs don’t have access to that full context, and without it, their output is always partial at best.

Maybe they’ll get there someday. Maybe we’ll start feeding them architecture docs, incident history, and policy constraints in real time. But for now, the same problems that make enterprise software messy for humans make it even harder for models to navigate without tight human supervision. The concerns (security, scale, accountability) are still very real.

So no, you’re not getting replaced.

But if you’re still measuring your value by how much code you write, then times are tough.
Because now, the most valuable engineers are the ones who can think clearly, write intentionally, and turn a rough idea into a structured, testable, executable specification.

And that’s not just engineering anymore. It’s product. It’s design. It’s leadership.

The model can write the code.
You write the system.
