AI Is Changing How New Programmers Learn – For Better and Worse

Every year, I mentor a batch of new graduates as they start their engineering careers in our training program. At the start of each training cycle, I gear up for the usual rituals: walking them through foundational concepts, catching anti-patterns in their code, nudging them to think beyond syntax, helping them internalize best practices and understand what clean, purposeful programming looks like, and sharing industry insights.

But this batch was different. In one of the early sessions, they submitted a frontend module that looked strikingly polished. The code structure was clean, variable names were meaningful, and even the comments read like something from a mid-level developer. This wasn’t something I had seen in previous batches—not this early, and not with this level of consistency. My first instinct was admiration, followed immediately by suspicion. Did these rookies really write this? A little probing revealed the answer: they had leaned heavily on AI tools, mostly ChatGPT, Claude and GitHub Copilot.

Now, I’m not against AI-assisted coding. In fact, I see enormous potential in it. But as the weeks progressed, I noticed a pattern. Many of them couldn’t explain how the code worked under the hood. Some misunderstood DOM manipulation or basic cascading in CSS. Others confidently submitted near-perfect solutions—yet stumbled when asked to tweak behavior or debug something slightly out of scope. It became clear that the code had quality, but the comprehension behind it often didn’t.

This experience gave me a front-row seat to a growing phenomenon: AI-assisted learning is transforming how new developers learn to code. But it comes with a cost if left unguided.

Metacognition is being aware of your own problem-solving process. In coding, this means knowing how to break down a problem, plan a solution strategy, monitor whether your approach is working, adjust when you hit a dead end, and judge whether AI-generated code is even valid. Research has long shown that novice programmers lack metacognitive strategies, and AI is now amplifying the problem by making working code feel like understanding when it isn’t [1].

The 2024 study “The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers” [1] took a deep dive into this phenomenon. Students who had decent problem-solving skills used AI to accelerate their work and filter out bad suggestions. But those who were still forming basic mental models of programming? They became more confused. They often finished tasks without understanding what they had done, falling into what the researchers call an illusion of competence.

The study observed 21 students solving a programming task with AI tools. Nearly all completed it. But how they got there differed dramatically.

  • Accelerators: Students with strong foundations used AI sparingly and finished in 5–13 minutes. They leveraged it to automate routine code, while staying in control. They understood when a suggestion was wrong and why.

  • Strugglers: These students misinterpreted AI output, skipped problem-solving stages, and were led off track by misleading suggestions. Worse, they often thought they had done well. Their code passed tests, but they lacked insight into how or why it worked.

This dichotomy, the widening gap, is what mentors must grapple with. AI can widen the achievement divide between prepared and unprepared students.

When used intentionally, AI tools can act as accelerators for learning—not just in getting code to run, but in nudging novices toward industry-aligned habits.

  • Speed and Structure: AI tools offer instant scaffolding. For example, when a learner types a function name, Copilot often proposes a complete, well-structured block of code, sometimes even with helpful comments or basic test stubs (see the sketch after this list). This shortens the feedback loop and introduces consistent structure, which helps juniors internalize conventions like readable naming, modular design, and test coverage standards—habits that usually take much longer to develop.

  • Bridging Gaps Between Ideas and Implementation: New programmers often know what they want to build, but not how to translate that into code. With AI, they can express intent in natural language and receive suggestions that help them break down problems into meaningful steps. This reinforces early skills like reading functional requirements and mapping them to user-facing behavior or technical logic.

  • Hands-On Exposure to Tools and Patterns: AI-generated code frequently includes concepts that would otherwise require weeks of experience to discover—like how to handle user input validation, construct API requests, or organize components within a frontend framework. While learners may not grasp all the details immediately, repeated exposure helps build familiarity with common libraries, design patterns, and performance-conscious coding styles.

  • Raising the Floor for Code Quality: Even when juniors don't know best practices, AI suggestions often reflect them. Suggestions may include checks for edge cases, more accessible markup, or cleaner CSS hierarchies—nudging learners toward better quality, even before they know why it's better.

  • Confidence Through Progress: Seeing things work on screen, when a bug gets fixed or a component renders correctly, can be incredibly motivating. AI helps learners reach those moments faster and more frequently, reinforcing confidence and encouraging deeper experimentation. This effect can compound: the more progress a learner sees, the more inclined they are to keep exploring and pushing their limits.
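
To picture that scaffolding, here is a sketch of a Copilot-style completion in JavaScript. This is not an actual Copilot transcript; the `isValidEmail` function, its regex, and the test stubs are all invented for illustration. The learner types the first line, and the assistant proposes everything after it.

```javascript
// What the learner types:
//   function isValidEmail(

// A plausible assistant completion: structured, commented, and tested
// beyond what most beginners would produce unaided.
function isValidEmail(email) {
  // Reject non-strings before attempting any pattern matching.
  if (typeof email !== "string") return false;
  // Simple pattern: one "@", no whitespace, and a dot in the domain part.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// Basic test stubs of the kind assistants often append.
console.assert(isValidEmail("ada@example.com") === true);
console.assert(isValidEmail("not-an-email") === false);
```

The value for a junior isn't the regex itself but the shape: a guard clause, intent-revealing comments, and tests, all arriving before the learner has formed the habit of writing them.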

But the same capabilities that make AI so helpful also carry risks—especially when learners accept its outputs uncritically.

  • Knowing the Code, But Not the Concept: A common trap is that learners produce syntactically correct, even elegant-looking code—yet can’t explain how it works. They may include automated tests, but not know what they’re actually testing for. They might set up a loop or recursion pattern because the AI did, not because they understood the logic.

  • Skipping the Thinking Process: AI can short-circuit planning and problem decomposition. Novices may jump straight to code generation without clarifying requirements or constraints, leading to mismatches between what’s built and what was needed. This undermines foundational habits like planning before coding or evaluating multiple design options before committing to one.

  • False Sense of Progress: When AI makes suggestions that “look right” but are subtly off—e.g., they work only in ideal scenarios, use deprecated methods, or break under different screen sizes—novices often miss the flaws. And because the code runs, they assume it's correct. This creates a false sense of mastery, which is particularly hard to detect without structured review or testing.

  • Overengineering: AI can introduce unfamiliar constructs that learners adopt without truly grasping them. For instance, a suggestion might use an advanced hook in a React component or introduce a timing function in JavaScript that the student hasn’t seen before. Without pausing to investigate whether the suggested code is even necessary, they inherit complexity without understanding it.

  • When It Works, But Isn’t Safe: One overlooked danger of AI-generated code is the tendency to skip over crucial security considerations—especially when the suggestion looks correct and runs without error. For example, a beginner building a login form might accept a suggestion that directly interpolates user input into a query string or stores passwords in plaintext—without understanding the implications (a sketch of exactly this follows the list). Because AI optimizes for syntactic correctness and completion, not for threat modeling, it may omit input validation, fail to escape user-provided data, or bypass critical checks (like CSRF tokens or secure headers). Novices rarely spot these omissions because they’re still focused on getting the code to work, not on defending it.

  • Phantom Packages: In a recent article, “The Dark Side of AI Coding: How Hallucinated Packages Create Security Blind Spots,” I described how malicious actors publish packages with names closely resembling hallucinated ones suggested by AI tools. For beginners who are still learning to differentiate between official, maintained packages and unknown ones, this creates a perfect storm. They trust the AI, the install works, and no red flags are raised—until something goes wrong.

  • Over-Reliance and Passive Learning: Perhaps the most concerning trend is dependency: using AI for every task, even trivial ones. Instead of building mental models for solving problems, learners develop prompting habits. This inhibits creative problem-solving and makes learners less capable in environments without AI support—like whiteboard interviews, understanding legacy systems, or supporting applications that demand high reliability and security.
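
To make the security bullet concrete, here is a hedged sketch of that login scenario in JavaScript. The `db.query` call is a stand-in for whatever parameterized-query API your database driver provides (mysql2-style placeholders are shown), and the hashing uses Node's built-in crypto module; every name here is illustrative rather than taken from a real AI transcript.

```javascript
const crypto = require("crypto");

// RISKY: the kind of suggestion a novice accepts because it "works".
// User input is interpolated straight into the SQL text (injection risk)
// and the password is stored and compared in plaintext.
function loginUnsafe(db, email, password) {
  return db.query(
    `SELECT * FROM users WHERE email = '${email}' AND password = '${password}'`
  );
}

// SAFER: salted hashing with Node's built-in scrypt...
function hashPassword(password) {
  const salt = crypto.randomBytes(16).toString("hex");
  const hash = crypto.scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

function verifyPassword(password, stored) {
  const [salt, hash] = stored.split(":");
  const candidate = crypto.scryptSync(password, salt, 64).toString("hex");
  // Constant-time comparison avoids leaking information via timing.
  return crypto.timingSafeEqual(
    Buffer.from(hash, "hex"),
    Buffer.from(candidate, "hex")
  );
}

// ...plus a parameterized query that keeps user input out of the SQL text.
async function loginSafe(db, email, password) {
  const rows = await db.query(
    "SELECT password_hash FROM users WHERE email = ?",
    [email]
  );
  return rows.length === 1 && verifyPassword(password, rows[0].password_hash);
}
```

Both versions pass a happy-path test, which is exactly why the unsafe one slips past a novice; only the second survives hostile input.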

The way forward isn’t avoiding AI. It’s using it deliberately and reflectively. Here’s what that looks like:

  • Stay in control: Learners must remain the primary problem solvers. AI is a helper, not a driver.

  • Plan before prompting: Write pseudocode or outline your approach before asking AI for help, then compare its suggestions to your plan (see the sketch after this list).

  • Struggle first: Use the 15-minute rule—try first, fail, then ask. Productive struggle builds understanding, and with it, metacognition.

  • Trust but verify: Treat AI like an overconfident peer. Test its output. Cross-check explanations.

  • Teach metacognition explicitly: Questions like “Could I do this without AI?” and “What did I learn from this AI suggestion?” build self-awareness.
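
As a sketch of "plan before prompting" in practice: the learner writes the plan as comments first, then audits the AI's suggestion against it. The cart-total function, its `threshold` parameter, and the test cases below are hypothetical, invented purely to show the comparison step.

```javascript
// Step 1: the plan, written before any prompt.
//   a. Sum item.price * item.quantity across the cart.
//   b. Apply the discount only when the subtotal exceeds the threshold.

// Step 2: the AI's suggestion, checked line by line against the plan.
// A version that skips the threshold check "works" on happy-path carts
// but contradicts step (b); catching that mismatch is the learning moment.
function cartTotal(items, discountRate = 0.25, threshold = 100) {
  const subtotal = items.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0
  );
  return subtotal > threshold ? subtotal * (1 - discountRate) : subtotal;
}

// Step 3: verify the cases the plan implies, not just the demo input.
console.assert(cartTotal([{ price: 60, quantity: 2 }]) === 90); // 120 with 25% off
console.assert(cartTotal([{ price: 30, quantity: 1 }]) === 30); // below threshold, no discount
```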

Advice for Mentors and Educators

If you’re a mentor, don’t just review the code. Review the thinking behind the code. Ask juniors to walk you through it. If it came from AI, treat that as a teaching opportunity: “Do you understand why this approach was taken?”

If you’re in a classroom, require reflective prompts or AI usage logs: What was the prompt? Why did you accept that code? This fosters metacognitive habits.

AI is reshaping how novice programmers can code. But clean code is not the same as clear thinking.

My recent experience mentoring graduates confirmed what the research warns us about: AI can make novices look competent, while keeping them novices. The next generation of engineers will likely enter the workforce with portfolios full of AI-enhanced projects. But what will matter is whether they can explain that code, adapt it, debug it, and most importantly—learn from it.

Used thoughtfully, AI can produce stronger learners. Used passively, it risks producing a generation of surface-level coders. The challenge and opportunity for mentors is to close that widening gap, ensuring that the next batch of “unusually good code” from new grads comes with well-earned competence behind it!

References:

  • [1] Prather, J., et al. (2024). “The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers.” ICER 2024. https://arxiv.org/pdf/2405.17739
