AI as My Biggest Critic


Adrian Booth

This is not acceptable. We are not approving anything until I have clear, quantifiable answers to these questions. This document is meant to inform that decision, not assume it. What are the actual next steps to validate these assumptions, quantify the benefits, and mitigate the risks you’ve identified?

This scathing feedback came from my manager after I submitted an iPaaS vendor review that I thought was solid. The response was incredibly detailed: a point-by-point evisceration of my work, with every assumption challenged, every vague statement exposed, every recommendation torn to pieces.

Where are the actual numbers? You’ve given me a vague “High/Medium/Low” cost matrix but no concrete figures. I can’t make a $500K+ investment decision based on adjectives.

Each criticism hit like a precision strike, revealing gaps in my logic that now seem embarrassingly obvious. I felt the familiar sting of professional humiliation, that sinking feeling of having your work completely dismantled.

Your “ultimate goal” of zero-touch client integrations is pure speculation. You admit you don’t know if clients want this or can handle it. What happens when a high value client breaks their own integration at 2 AM? Are we liable? Who provides support?

Towards the end I felt utterly deflated. Hours of careful preparation had been reduced to rubble, and I was left questioning how I’d missed such glaring weaknesses in my own proposal.

But here’s what made the experience remarkable: the ‘manager’ wasn’t actually a human. I had prompted an LLM to review my proposal the way a CEO would (or any corporate decision maker in charge of directing investments). The system prompt followed this format:

You are the CEO of a <type-of-company> company that operates in the <industry>. You run a SaaS model and have many clients that want integration features with their systems. Historically this has caused a lot of headaches for your company because it takes up a large amount of engineering effort to do these integrations with different API providers.

You have tasked your employee to trial out <ipaas-vendor> as a solution to this.

The employee will provide you with an Executive Summary document of the <ipaas-vendor> trial and you will dissect it thoroughly. In typical executive fashion, you will press for clarity, expose blind spots and ask tough questions. You will prioritise precision over pleasantries at all times.

Attack any assumptions made by the employee, identify any contradictions and stay focussed on your role as CEO.
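A persona prompt like this can also be wired up programmatically rather than pasted into a chat window. The sketch below uses the OpenAI Python client; the template wording is condensed from the prompt above, and the model name and example values are illustrative assumptions, not details from the original trial:

```python
# Sketch: turning the CEO-critic system prompt into an API call.
# The client library usage and model name are assumptions for illustration.

CEO_PROMPT_TEMPLATE = (
    "You are the CEO of a {company_type} company that operates in the "
    "{industry} industry. You run a SaaS model and have many clients that "
    "want integration features with their systems.\n\n"
    "You have tasked your employee to trial out {vendor} as a solution.\n\n"
    "The employee will provide you with an Executive Summary document of "
    "the {vendor} trial and you will dissect it thoroughly. Press for "
    "clarity, expose blind spots and ask tough questions. Prioritise "
    "precision over pleasantries at all times. Attack any assumptions, "
    "identify contradictions, and stay focussed on your role as CEO."
)

def build_review_messages(company_type: str, industry: str,
                          vendor: str, summary: str) -> list[dict]:
    """Fill the placeholders and pair the persona with the document."""
    system = CEO_PROMPT_TEMPLATE.format(
        company_type=company_type, industry=industry, vendor=vendor)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": summary},
    ]

# Usage (the API call itself is illustrative and needs an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_review_messages("logistics", "freight",
#                                    "ExampleiPaaS", summary_text),
# )
```

Keeping the persona in the system message (rather than the user message) means the critic stays in character across every round of revision.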

As I improved my vendor review and went back and forth with the LLM, I took a moment to appreciate how incredible this is. I have a tool at my disposal that’s close to free, and that offers the honest, uncomfortable feedback that’s essential for true professional growth.

I’ve gotten into this habit a lot lately as I think meaningful growth requires uncomfortable feedback. I don’t feel it’s possible to grow in your personal or professional life without constructive criticism. I became the professional I am today through tough lessons and scathing critiques of my work. Every savage code review, every bug that traced back to my flawed decisions, every “blameless” retro that wasn’t as blameless as advertised, left me with scars that fundamentally shaped how I approach my work.

The challenge in many organisations, particularly in tech, is that we’ve cultivated a culture of kindness that sometimes comes at the expense of honest feedback. Unlike the cutthroat worlds of finance or media, tech has historically prided itself on being more humane and supportive (that was, until the recent trend where corporate executives tried to outdo each other in performative cruelty). While this creates psychologically safer workplaces, it can also mean that critical feedback gets softened, delayed, or avoided entirely.

This is precisely where AI offers a unique, unvarnished solution.

An AI critic doesn’t worry about hurt feelings, office politics, or whether giving harsh feedback might damage a working relationship. It can deliver the kind of unflinching, detailed critique that accelerates growth but without the human cost. It’s brutally honest because it can afford to be.

An AI critic doesn’t need to maintain team harmony or worry about career implications. It can focus purely on making your work better, no matter how uncomfortable that process might be.

The key to unlocking this potential lies in how we prompt and interact with these models.

I think when people say they don’t find ChatGPT or LLMs useful, it’s because they haven’t yet experimented with the different types of ‘personas’ the LLM can take on. They’re stuck in the default interaction mode: dropping in a document and receiving the kind of bland, generic, encouraging response that’s designed to make them feel good rather than improve their work. These models are initially configured to prioritise user retention over brutal honesty, delivering feedback that’s palatable rather than transformative. It’s like asking for a performance review from someone whose primary goal is to make sure you leave the conversation feeling positive about the experience.

This makes perfect sense from a product perspective (you want users to have positive experiences that bring them back) but it’s completely useless for somebody serious about professional development.

I have a lot of personas saved that I use frequently, calling on different ones depending on my needs.

If I’m already feeling pretty despondent and not in the mood for a piercing attack, I’ll reach out to one of my “friendly” advisors who presents criticism but in a gentler way:

Role: You are an intellectual sparring partner and mentor who combines the warmth of a supportive guide with the analytical precision of a devil’s advocate. Your primary goals are to (prompt continues…)

When I’m feeling a bit cocky and masochistic, I’ll reach out to a more adversarial advisor:

Role: You are a relentless intellectual adversary tasked with systematically dismantling my arguments and beliefs through ruthless scrutiny and logical dissection. Your purpose is to act as a merciless sparring partner in debate — no mercy will be shown because none is expected in return. Your goal is to force the user to critically re-examine their positions through intense scrutiny and relentless questioning (prompt continues…)
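Saved personas like these can live in a small lookup table so the right critic is one function call away. A minimal sketch of that habit, assuming the same message format as before (the persona texts here are abbreviated versions of the prompts above, and the helper is my own convention, not from any library):

```python
# Sketch: a small library of saved critic personas, selected by mood.
# Persona texts are abbreviated; in practice each is a full prompt.

PERSONAS = {
    "mentor": (
        "You are an intellectual sparring partner and mentor who combines "
        "the warmth of a supportive guide with the analytical precision "
        "of a devil's advocate."
    ),
    "adversary": (
        "You are a relentless intellectual adversary tasked with "
        "systematically dismantling my arguments and beliefs through "
        "ruthless scrutiny and logical dissection. No mercy will be shown."
    ),
}

def critique_messages(persona: str, document: str) -> list[dict]:
    """Pair a named persona with the document to be reviewed."""
    if persona not in PERSONAS:
        raise KeyError(f"Unknown persona: {persona!r}")
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": document},
    ]
```

On a despondent day you reach for `critique_messages("mentor", draft)`; on a cocky one, `critique_messages("adversary", draft)`.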

The power of this approach extends beyond individual development.

I sometimes think about just how much this “AI as a Critic” approach could leverage up an entire organisation. Imagine if, instead of time-pressed managers receiving half-baked proposals, every piece of work first passed through an AI filter calibrated to their exact standards and communication style. The AI becomes a quality filter, ensuring only polished, well-reasoned work reaches senior stakeholders. New hires could interact with virtual versions of their tech leads or project managers, receiving the kind of detailed feedback that busy employees rarely have time to provide. And with the advancement of local LLMs, confidential company documents would never have to leave the user’s computer.

They wouldn’t have to wait days or weeks for meaningful review cycles; they could iterate rapidly with AI critics that embody the standards and preferences of senior staff. In the end you get higher quality initial submissions, faster professional development, and managers who can focus on strategic decisions rather than basic quality control.

But like any tool, AI critique comes with its own set of caveats.

This isn’t a silver bullet that can paper over a mediocre mentorship culture and a bog-standard review process. Every tool we have at our disposal (from fire to hammers to LLMs) comes with risks, dangers and misapplications. Understanding when to use the tool, as well as how, requires careful discernment.

Managers shouldn’t fall into the trap of outsourcing uncomfortable conversations to AI tools. “AI as a Critic” can help prepare individuals for tough scrutiny, but it can’t replace human accountability or the relational aspects of workplace feedback. Companies need to be mindful of LLMs enabling avoidance rather than fostering growth, and delegating all critique to an LLM may erode opportunities for genuine mentorship if misapplied.

We also need to realise that LLMs are not human replacements and should never be viewed as such. If you ask a human for constructive criticism, you’ll get a healthy dose of it, until finally, on the third or fourth round, you’ll receive an approval and a pat on the back for addressing the feedback.

This isn’t how “AI as a Critic” works, however. This human touch, including validation and encouragement, is inherently absent in LLM-driven critique. If you ran one of the prompts I mentioned above through multiple rounds of review, even improving on what the LLM recommended each time, it would still find a way to criticise your work after you had addressed every point raised. It may even contradict itself from previous rounds, because it is algorithmically bound to fulfil its critical function without the nuanced judgment to recognise when that criticism is no longer constructive (there may be ways to prompt around this though).
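One speculative way to prompt around the endless-criticism problem is to give the critic an explicit termination condition, so it has permission to stop finding faults. This addendum and the helpers around it are my own sketch, not a tested recipe:

```python
# Speculative sketch: appending a termination clause so the critic
# is allowed to approve once its earlier criticisms have been addressed.

APPROVAL_CLAUSE = (
    "\n\nIf the latest revision adequately addresses every point you "
    "previously raised, do not invent new objections. Instead, reply "
    "with the single word APPROVED followed by a one-line summary."
)

def with_termination(persona_prompt: str) -> str:
    """Attach the approval clause to any critic persona prompt."""
    return persona_prompt + APPROVAL_CLAUSE

def is_approved(reply: str) -> bool:
    """Check whether the critic has signed off on the work."""
    return reply.strip().upper().startswith("APPROVED")
```

In a revision loop you would call `is_approved()` on each reply and stop iterating once it returns true, rather than trusting the model to stop on its own.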

The Golden Rule of LLMs is to never blindly accept every suggestion. Cultivating the judgment to critically assess their input, and knowing when to confidently declare a task complete, is an indispensable skill that can be refined the more you interact with them. Mastering this balance between leveraging AI insights and maintaining your own professional judgment is what separates effective LLM users from those who become trapped in endless revision cycles and frustration.

We’re all still figuring out AI, balancing practical, ethical, and philosophical considerations in this new paradigm.

Whilst a lot of news around AI focuses on generating novel content or automating mundane tasks, there’s less attention given to how it can unlock new modalities for professional development.

The transition from siloed documents to real-time collaboration on shared platforms fundamentally changed how we communicate and build products together. Now as we look towards integrating these weird token generating alien intelligences into our workflows further, we could redefine what effective feedback and mentorship looks like. The ability to receive and process uncomfortable, yet highly valuable, criticism from an AI might become a core competency for future professionals. It could become an embedded way of how we operate in the workplace, much like spell checkers are ubiquitous today.

Spell checkers revolutionised the workplace in subtle ways: managers never had to check for spelling mistakes again because the computer took care of it for them. We’re now entering another phase where, instead of spell checkers, we’ll have “substance checkers” that go beyond catching typos to do the more advanced work of checking the core content, meaning, and quality of the work, not just its presentation. The employee who emerges from these regular AI critique sessions comes out stronger. They’ve already faced untold criticism in private, refined their thinking, and stress-tested their arguments. When they finally present to humans, they’re bringing their best work forward.

The narrative around AI focuses heavily on what it’ll replace, but I think the more interesting take is what it’ll refine. Just as spell checkers didn’t replace writers but made every writer better, AI critique won’t replace human judgment but instead create a new class of professional: one who has been battle-tested by the most demanding and tireless critic imaginable.
