AI-Assisted Software Development

AI-assisted programming has reached a tipping point in 2025. LLMs have been applied to coding since before ChatGPT, but mostly for toy, one-off projects, simple web pages, or scripts. In the past several months, however, AI systems (both LLMs and the tooling around them) have become good enough for serious use.

This shift has created a predictable divide. On one side are the evangelists: AI startup executives and pundits promising a 10x productivity revolution in which AI writes the vast majority of code and most developers lose their jobs. On the other are engineers on the ground warning of "AI slop": a future drowning in unmaintainable, insecure code that no one truly understands or owns.

Both camps are right, to an extent. AI is a power tool, but its value depends on the person using it. Your new AI assistant isn't magic and it isn't flawless. It's a brilliant, tireless, eager, but profoundly naive intern. It can churn through a staggering amount of work, but without constant, expert direction, it will create a mess. If you let it run unsupervised, you're just building technical debt at an accelerated rate.

The Spectrum of AI-Assisted Coding

"AI coding" isn't one thing. It's a spectrum of tools and workflows; knowing when to use each is half the battle.

  • App Builders: These are the "zero to done" tools like Replit, Lovable, or Bolt.new. The user provides a prompt, and they generate a complete, simple application. They can be prompted further to tweak the result, but they're not designed for long-term work on a large system. Aimed at "citizen developers" building internal tools, basic websites, or simple apps, they're great for quick prototypes but break down once real-world complexity shows up. This style of coding has been popularized as vibe coding.
  • Smarter Tab Completion: This is the original GitHub Copilot model, since adopted by other editors and IDEs such as Cursor, IntelliJ, and Zed (there's even a Vim plugin). The AI lives in the editor, suggesting code as you type. It cuts boilerplate busywork and completes common patterns, but remains purely reactive.
  • Inline Assist: Most modern AI-enabled IDEs have this as an option alongside tab completion. The developer can highlight code and ask the AI to explain, refactor, or find bugs. It's a focused way to get help without context-switching out of the editor.
  • Chat-Based Help: This is simply chatting with an AI in a separate window to ask for explanations, code snippets, or debugging ideas, then manually copy-pasting information or code between the chat and the editor.
  • Agentic Coding (hands-on): An AI agent lives in a sidebar inside the code editor, and can be tasked with explaining code, brainstorming, or completing a well-defined task. Current versions of GitHub Copilot, Cursor, aider, and other popular coding assistants focus on this type of interaction. The developer gives detailed context and instructions to the AI agent. The agent then proposes a plan, writes the code, and can even run commands to test it. The developer reviews every step, provides feedback, and owns code quality. Once the small task is done and committed, the developer wipes the context and starts fresh on the next one.
  • Agentic Coding (autonomous): Popularized by Claude Code, this gives the AI agent more freedom and access to additional tools to achieve its job. The agent is given a high-level goal (for example, "fix ticket 123") and runs until done. Along the way, it can access tools such as the browser, database, and internet search, usually configured using the Model Context Protocol (MCP). It can also run tests or shell commands as needed. While by default it asks for permission every time, many tools also support a "dangerous" (or "yolo") mode where the agent executes commands without waiting for permission, allowing it to work fully autonomously without the developer being present. When done, the developer performs a code review, tests the result, and sends it back for rework if needed.

Why AI-Generated Code Goes Wrong

Traditional computer software either works correctly or it doesn't. AI, in any domain, is different: its results are unreliable, sometimes slightly off, sometimes entirely hallucinated.

This is especially noticeable in coding, where tiny errors easily propagate and small quality issues snowball into an avalanche of unmaintainable mess. The problem is compounded because developers often don't have the full picture of all the constraints and details in their head before they start coding, which means the AI often lacks the information it needs to do the work correctly.

This doesn't mean AI is useless for coding, but it does mean it needs careful guidance, guardrails, testing and review. Developers can delegate to AI agents, but can't abdicate responsibility for code correctness and quality.

How to Get High-Quality Results: Pair Programming with AI

Success with AI coding comes from a disciplined workflow that uses AI for speed but relies on human expertise for quality. The developer is still in charge as the architect, reviewer, and mentor:

Plan First, Code Second

Never start by asking the AI to write code. Start by asking it to think.

  • Give it context: Provide a clear task description or specification (PRD, ERD, or ticket details).
  • Ask for a plan: Prompt the AI to create a detailed implementation plan. Tell it to list the files it will modify, the new functions it will create, and the overall approach. Insist on no code in this step.
  • Iterate on the plan: Review the plan. Is it sound? Does it violate any of your architectural principles? Did you forget something? Correct its assumptions. Refine the approach until you, the expert, are satisfied.

This forces the AI to align with your project's structure before it generates a single line of code. Only then should you ask it to start implementing the required changes.

Manage the Context

An AI is only as good as the information you give it.

  • Be specific: Use features like @file or @folder in tools like Cursor to provide only the relevant files for a given task. Never just point it at your whole repository.
  • Keep it clean: Start a new chat session for each distinct task or feature. This prevents context from previous tasks from bleeding over and causing confusion.
  • Create a "rules file": Maintain a rules file or files for your project. Include core instructions: libraries used, architectural patterns, code style guides, and things to avoid. A good way to bootstrap this is to ask the AI: "Analyze this codebase and create a set of onboarding instructions for a new junior developer." Include this file in the context for every task (Claude does that automatically for Claude.md, Cursor looks at .cursorrules , other agents have similar conventions).
  • If stuck, start over: If the AI gets stuck on a problem, don't let it keep spinning; looping will poison its context with irrelevant, buggy, or desperate (non)solutions as it tries to fix the problem. You're usually better off starting from a clean slate, rewording the task, or explicitly calling out the problem and the solution you want.
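
To make the rules-file idea concrete, here is a minimal sketch of what such a file might contain. The file name, stack, and conventions below are illustrative assumptions, not a standard; adapt them to your project and tool:

```markdown
# Project rules (e.g. CLAUDE.md or .cursorrules)

## Stack
- Python 3.12, FastAPI, SQLAlchemy, PostgreSQL
- Tests: pytest; lint and formatting: ruff

## Architecture
- Three layers: api/ (routes), services/ (business logic), repositories/ (database access)
- Routes never touch the database directly; always go through a service

## Conventions
- Type hints on all public functions; docstrings in Google style
- Do not add new dependencies without asking first
- Never commit secrets; never edit existing migration files
```

Short and specific beats long and generic: the agent reads this file on every task, so every line should pull its weight.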

Follow a Disciplined Coding Loop

Don't just ask the AI to "build the feature." Guide it through a tight, iterative loop, and ensure you have full understanding of what's going on:

  1. Define a single, concrete task. The very next small change.
  2. Ask for approaches. Let the AI suggest a few ways to solve it. You pick one.
  3. Draft the code. Tell the AI to implement the chosen approach.
  4. Review and learn. This is the critical step. Read the code. Do you understand it? Check the API docs for what it used. Ask the agent to explain its reasoning. If the code is garbage or the reasoning is deeply flawed, throw it away and try another approach.
  5. Test. Use your existing guardrails. Does it compile? Any linter warnings? Do all tests pass?
  6. Commit. Once you are satisfied that the code is correct and well-written, commit it.
  7. Repeat. Wipe the context and move to the next small task.

This process ensures you remain in control, understand every change, and have the safety net of version control. By starting with fresh (empty) context for each task, you avoid context rot and keep the agent focused.
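
The testing step in that loop works best when all of your guardrails run as a single command that both you and the agent can invoke after every change. Here is a minimal sketch in Python; ruff and pytest are assumptions, so substitute whatever linter and test runner your project actually uses:

```python
#!/usr/bin/env python3
"""Run the project's guardrails (lint + tests) and stop at the first failure.

ruff and pytest are example tools; swap in your project's own checks.
"""
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # static analysis / lint
    ["pytest", "-q"],        # full test suite
]

def main() -> int:
    for cmd in CHECKS:
        print(f"$ {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)} -- fix this before committing.")
            return result.returncode
    print("All checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Pointing the agent at a single entry point like this ("run the check script after every change") gives it fast, unambiguous feedback and keeps failing code out of your commits.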

Build Strong Tooling Guardrails

Your existing engineering discipline is your best defense against AI slop.

  • Automated checks keep AI on track: Linters, static analysis tools, and automated tests provide fast feedback to the AI agent without involving a human, minimizing hallucinations and mistakes.
  • Use automated reviewers: Tools like Copilot for PRs can act as a first-pass filter, catching obvious mistakes and freeing up human reviewers to focus on architectural and logical issues. However, they are not replacements for human reviewers.
  • Give it tools: Connect your agent to a tool like Playwright (control the browser to check front-end code or look up something on the internet), or expose your database or other subsystems through MCP or command-line tools. This makes it easier for the agent to inspect the environment, look up information, or check the results of the code it has written.
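
As a concrete sketch of that last point, the official MCP Python SDK makes it straightforward to expose a read-only view of a local database as a tool the agent can call. The server name, database path, and query restrictions below are illustrative assumptions, not part of any particular agent's setup:

```python
"""Minimal MCP server exposing read-only SQLite queries to a coding agent.

Assumes the official MCP Python SDK (pip install mcp); the database path and
tool behaviour are examples -- adapt them to your own environment.
"""
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("db-inspector")
DB_PATH = "app.db"  # hypothetical local development database

@mcp.tool()
def run_query(sql: str) -> list[dict]:
    """Run a read-only SELECT query and return the rows as dictionaries."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Only SELECT queries are allowed.")
    conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
    conn.row_factory = sqlite3.Row
    try:
        return [dict(row) for row in conn.execute(sql).fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; register it in your agent's MCP configuration
```

Once registered, the agent can inspect the actual schema and data instead of guessing, which noticeably cuts down on hallucinated assumptions about the system.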

A paradigm shift

AI won't replace developers, but it will fundamentally change software development. Writing boilerplate code is now a cheap, commoditized task. The valuable skills are system design, architectural thinking, rigorous testing, and the ability to expertly direct these new tools.

Software developer responsibilities will shift from primarily writing code to reviewing, directing, and integrating it. To keep up, developers will need to become expert pilots of these AI assistants: writing precise instructions, managing context, and critically evaluating AI output with a skeptical eye. This is a significant lever on programmer productivity, but only for those who use it responsibly.

There's still work to do before AI-assisted coding scales to large projects and teams:

  • A messy codebase is the biggest blocker. AI can't fix a chaotic environment; it will only add more chaos, faster. Before you scale AI usage, you should have a stable, well-documented system and a robust test suite.
  • The cost of the tool is trivial compared to the cost of the mess. Don't fixate on the monthly subscription fee. Focus on the cost of an engineer's time spent cleaning up bad AI-generated code or fixing a production outage caused by an unvetted security flaw.
  • Find what actually works. Experiment. Document what works and what doesn't. Share practical playbooks. Allow and encourage people to share both successes and, more importantly, failures and near-misses.

A recent talk by Gergely Orosz of The Pragmatic Engineer, who interviewed people at big tech companies and AI startups, independent developers, and some of the luminaries of the software development field, sums it up nicely: "It's time for developers to experiment more with LLMs."

The future of development isn't fully automated. It's a partnership between human expertise and machine speed. The engineers and companies that master this collaboration, grounded in discipline and a healthy dose of skepticism, are the ones who will succeed. The rest will be stuck cleaning up the mess.
