Agent Assisted Coding Taxonomy


A Phrasebook for Building Real Software with LLMs

Bill

This taxonomy organizes key concepts, patterns, and practices for AI-assisted software development. This is meant to be a fun and maybe practical vocabulary for anyone navigating the strange new workflows that emerge when humans and coding assistants build software together.

The taxonomy covers several main areas:

  • Prompt Engineering & Context Management
  • Development Workflows & Patterns
  • Security, Privacy & Alignment
  • Testing & Validation
  • Debugging & Refactoring
  • Traceability & Documentation

noun

The practice of writing a concise, scoped, and edge-case-aware instruction before you unleash an LLM.

Example:

Implement login() in auth.py

• Stack: Python 3.12, FastAPI, standard library + passlib[bcrypt] only.
• Behaviour:
─ Accepts JSON: {"email": str, "password": str}
─ Look up user in SQLite (users table: id, email, pwd_hash, failed_count, locked_until).
─ On success: reset failed_count, return {"token": <JWT>} (HS256, exp 15 min).
─ On failure: increment failed_count; after 5 failures within 30 min, set locked_until
and return HTTP 423.
• Edge cases to handle:
─ Email not found → generic 401 to avoid user enumeration.
─ Account already locked → 423 without touching failed_count.
• Tests (pytest):
─ test_login_success() → returns 200 + valid JWT header
─ test_login_bad_pw() → returns 401, failed_count +=1
─ test_lockout() → 5 bad tries then 423, stays locked 30 min
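For contrast, here is one hedged sketch of the lockout bookkeeping such a prompt should produce. It is standard library only (the FastAPI, passlib, and JWT wiring from the prompt are omitted) and simplified to a cumulative failure counter rather than a true 30-minute sliding window:

```python
import sqlite3
import time

LOCK_AFTER = 5          # failed attempts before lockout
LOCK_SECONDS = 30 * 60  # lockout duration

def record_attempt(db, email, password_ok, now=None):
    """Return an HTTP-style status for one login attempt.

    401 - bad credentials (or unknown email, to avoid enumeration)
    423 - account locked
    200 - success (failed_count reset)
    """
    now = now if now is not None else time.time()
    row = db.execute(
        "SELECT failed_count, locked_until FROM users WHERE email = ?",
        (email,),
    ).fetchone()
    if row is None:
        return 401                      # generic: no user enumeration
    failed, locked_until = row
    if locked_until is not None and locked_until > now:
        return 423                      # locked: leave counters alone
    if password_ok:
        db.execute("UPDATE users SET failed_count = 0, locked_until = NULL "
                   "WHERE email = ?", (email,))
        return 200
    failed += 1
    locked = now + LOCK_SECONDS if failed >= LOCK_AFTER else None
    db.execute("UPDATE users SET failed_count = ?, locked_until = ? "
               "WHERE email = ?", (failed, locked, email))
    return 423 if locked else 401
```

The generic 401 for an unknown email is deliberate: a distinct error would let an attacker enumerate accounts.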

noun

A prompt that asks the model to improve or clarify the prompt itself before coding.

Example:

My draft request:
"Build a to-do CLI in JavaScript."

Before writing any code, please:

1. List what’s unclear or missing (e.g., storage choice, command names, error handling).
2. Rewrite the request to fill those gaps.
3. Return only the improved prompt.

noun

Always-on instructions encoding standards & guardrails.

Example: https://docs.anthropic.com/en/release-notes/system-prompts#nov-22nd-2024

verb

Pruning the prompt / IDE state to only the files & data the model needs.

Example: Using @file:models/User.js @file:controllers/auth.js when asking about user authentication flow.

noun

The seasoned engineer’s knack for steering an AI coding assistant, knowing when to trust its suggestions, when to override them, and how to nudge the model with just-enough context or constraints to get high-quality output on the first try. It blends sharp technical taste, clear communication, and fast pattern-recognition, much like an experienced auto mechanic who can diagnose an engine by sound and tighten the right bolt with half a turn.

noun

A systematic approach to AI interaction where each prompt builds on previous responses to gradually refine and improve the solution, using feedback loops to steer the AI toward the desired outcome.

verb, slang; derived from vibecoding

To transform raw vibecode into production-grade software by applying a disciplined cycle of testing, refactoring, security review, and architectural fit-out.

Autopilot Atrophy

noun

The gradual loss of a developer’s hard-earned coding, debugging, and architectural judgment that occurs when they over-rely on an AI assistant, accepting suggestions uncritically and delegating all “thinking” work to the model.

Diff Amnesia

noun

When an LLM is told to “revert” code but instead re-generates an approximate previous version that still contains stitched-in remnants of the discarded logic, leaving a patchwork result that isn’t a true rollback.

Prompt Hygiene

noun

The practice of removing or redacting secrets, PII, and proprietary data from prompts and model responses before they are logged, persisted, or re-used.
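A minimal sketch of what such a scrubbing pass might look like. The patterns are illustrative only; real hygiene tooling needs far broader coverage than two regexes:

```python
import re

# Illustrative patterns: a secret-assignment shape and an email address.
PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def scrub(text):
    """Redact obvious secrets and PII before a prompt or response is logged."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```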

Guardrail

noun

Deterministic check (policy, linter, schema) that blocks bad output.

Example: ESLint rules that prevent common security vulnerabilities, or schema validation that ensures API responses match expected format.
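As a sketch, here is a deterministic schema guardrail for a model that was asked to return a JSON object (the field names are hypothetical):

```python
import json

# Expected shape of the model's JSON reply (hypothetical fields).
SCHEMA = {"title": str, "priority": int, "done": bool}

def passes_guardrail(raw):
    """Deterministic check: reject output that is not valid JSON,
    has missing or extra keys, or has wrongly typed values."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict) or set(data) != set(SCHEMA):
        return False
    return all(isinstance(data[k], t) for k, t in SCHEMA.items())
```

Because the check is deterministic, it can sit in CI or in an agent loop and reject bad output every single time, with no model in the loop.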

Agent Sandboxing

noun

Isolated runtime where an agent can execute code/tests safely.

Example: A micro VM with limited permissions where AI-generated code can be tested without affecting the main system.
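A micro VM cannot be shown in a few lines, but a much weaker stdlib stand-in conveys the idea: run the generated code in a separate interpreter with a wall-clock timeout and a stripped environment.

```python
import os
import subprocess
import sys
import tempfile

def run_sandboxed(code, timeout=5):
    """Run untrusted code in a separate interpreter with a timeout and an
    empty environment. NOTE: this is NOT a real sandbox; true isolation
    needs OS-level containment (containers, micro VMs, seccomp)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],   # -I: isolated mode
            capture_output=True, text=True, timeout=timeout, env={},
        )
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None, ""                     # killed: ran past the budget
    finally:
        os.unlink(path)
```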

Alignment Check

noun

Step ensuring LLM output meets security, ethics, license & style bars.

Example: Automated review that checks for:

- Security vulnerabilities (SQL injection, XSS)
- License compliance (no GPL code in proprietary projects)
- Code style consistency
- Performance implications
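A toy version of such a gate, with regex patterns as illustrative stand-ins for real SAST tools and license scanners:

```python
import re

# Illustrative red-flag patterns; production checks use proper scanners.
FLAGS = {
    "string-built SQL": re.compile(r"execute\([^)]*[%+]"),
    "GPL license header": re.compile(r"GNU General Public License"),
    "hardcoded secret": re.compile(r"(?i)password\s*=\s*['\"]"),
}

def alignment_check(code):
    """Return the list of bars the generated code fails to clear."""
    return [name for name, pattern in FLAGS.items() if pattern.search(code)]
```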

Generator–Verifier Cycle (Gen-Ver Cycle) [5]

noun

A fast, iterative workflow in which an LLM (Generator) produces a draft solution and a separate agent (Verifier: a test suite, critic-LLM, or human reviewer) immediately checks that draft for correctness, safety, and style. The feedback from the Verifier feeds back into the next prompt, and the loop repeats until the draft passes all checks.
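The shape of the loop can be sketched in a few lines, with a stub generator standing in for the LLM call and a test function as the Verifier:

```python
def gen_ver_cycle(generate, verify, max_rounds=5):
    """Generator-Verifier loop: request a draft, check it, feed the
    failure back into the next request until it passes or budget runs out."""
    feedback = None
    for _ in range(max_rounds):
        draft = generate(feedback)        # in practice: an LLM call
        ok, feedback = verify(draft)      # tests, critic-LLM, or human review
        if ok:
            return draft
    raise RuntimeError("no passing draft within budget")

# Stub generator: the first draft is wrong; it "learns" from feedback.
def stub_generate(feedback):
    return (lambda x: x * 2) if feedback else (lambda x: x + 2)

# Verifier: a tiny test suite for a doubling function.
def verify_double(fn):
    if fn(3) == 6 and fn(0) == 0:
        return True, None
    return False, "double(3) should be 6 and double(0) should be 0"
```

In a real setup, `generate` would append the Verifier's feedback to the prompt, and `verify` would run the project's test suite.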

Rules-gated

noun

A repo-level contract that encodes stack norms for the agent.

Example: .cursorrules file specifying:

- Framework conventions (React hooks, Express middleware patterns)
- Testing requirements (unit tests for all functions)
- Security standards (input validation, authentication checks)
- Performance constraints (no N+1 queries, proper indexing)
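Spelled out as an actual file, that contract might look something like this (contents illustrative, not a recommended ruleset):

```
# .cursorrules (illustrative)
- Use React function components and hooks; no class components.
- Route handlers go through the shared Express error middleware.
- Every exported function ships with a unit test.
- Validate and sanitize all user input at the route boundary.
- No N+1 queries: batch, join, or preload instead.
```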

Test-Driven Prompting

verb

Write, or have the LLM write, failing tests before asking the LLM to generate code; then iterate prompts until every test turns green.

Testability Blindness

noun

The oversight that occurs when engineers or their AI coding assistant focus exclusively on implementing a feature and ignore how (or whether) that code can be verified.

Coverage-Hallucination

noun

The cycle in which an AI assistant obediently writes unit tests for code that was never designed for injection or observation, creating the illusion of coverage while leaving the core logic effectively unverified.
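A contrived Python contrast (function names hypothetical):

```python
import random

# Code never designed for observation: the dependency is hardwired.
def roll_and_log():
    n = random.randint(1, 6)     # cannot be controlled from a test
    return f"rolled {n}"

# A hallucinated "test": it runs, it passes, coverage reports 100% --
# but it asserts nothing about the actual logic.
def test_roll_and_log():
    assert roll_and_log().startswith("rolled")

# The injectable version the tests should have forced into existence:
def roll_and_log_injectable(rng=random.randint):
    return f"rolled {rng(1, 6)}"

def test_roll_and_log_injectable():
    assert roll_and_log_injectable(rng=lambda a, b: 4) == "rolled 4"
```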

Fix Prompt Spiral | Prompt Ping-Pong

noun

A runaway feedback loop in which an LLM, unable to solve the current problem, repeatedly proposes confident but incorrect “fixes.” Each failed attempt prompts the user to supply a new or increasingly verbose instruction, only for the model to generate yet another broken patch.

Prompt De-escalation

noun

A systematic process to break out of Prompt Ping-Pong by reducing urgency, expanding context, and changing the interaction approach.

Example: Instead of repeatedly asking “fix this bug,” de-escalate by:

- “Explain what you think the problem is”
- “Here’s the full error message and stack trace”
- “Show me 3 different approaches to solve this”
- “Let me debug this manually while you analyze the root cause”

Context Collapse

noun

When an AI assistant loses track of the debugging context and starts suggesting fixes for problems that don’t exist.

Agentic Complexity Ceiling

noun

The point at which an AI-driven (agentic) workflow stops coping with a growing codebase, signalling it’s time for a deliberate human-led refactor before further changes cause regressions or thrashing.

Prompt Traceability

noun

Logs linking each commit to its originating prompt & response.

Example: Git commit messages that include the original prompt, or a separate log file tracking AI interactions with commit hashes.

# Example commit message
git commit -m "Add token-bucket rate limiter
Implements a thread-safe bucket (10 req/min per user) and returns (allowed, retry_after)." \
-m "# Prompt Colophon
prompt-id: 8f3c2a
model: claude-sonnet-2025-05-02
date: 2025-06-22T14:27Z
conversation-url: https://chatlogs.example.com/8f3c2a
human-edits: fixed retry_after cast; added docstring"

Copilot Guilt

noun

The faint feeling of embarrassment a developer gets when admitting that a shipped feature, bug fix, or clever refactor was co-crafted with an AI assistant rather than typed entirely by their own hands, even though the human still planned the work, judged the suggestions, and steered the final outcome.

In this article we mapped the core patterns of AI-assisted coding, from prompt craft to governance guardrails. Which terms, tactics, or horror stories did we miss? Drop your own phrases and field notes so the next revision reflects real-world experience.

[1] Andrej Karpathy (April 2025). 'Vibe Coding MenuGen'.
https://karpathy.bearblog.dev/vibe-coding-menugen/

[2] Simon Willison (March 2025). 'Here's How I Use LLMs to Help Me Write Code'.
https://simonwillison.net/2025/Mar/11/using-llms-for-code/

[3] Atharva Raykar (May 2025). 'AI-Assisted Coding for Teams That Can't Get Away with Vibes'.
https://simonwillison.net/2025/Jun/10/ai-assisted-coding/

[4] Armin Ronacher (June 2025). 'Agentic Coding Recommendations'.
https://lucumr.pocoo.org/2025/6/12/agentic-coding/

[5] Shah Mahdi Hasan (May 2025). 'An Optimistic, Yet Opinionated Take on Vibe Coding'.
https://smhasan.com/2025/05/28/an-optimistic-yet-opinionated-take-on-vibe-coding/
