I ran my latest essay through an AI detector called Pangram. The results came back. 100% AI-generated.
It felt like I had been punched in the stomach. Not because the detector was right, but because I wasn’t sure it was wrong. I stared at the screen long enough for the indictment to burrow into me. Something I thought was solid cracked. The verdict had questioned my authorship.
If software can declare the words aren’t mine, then whose words are they?
The essay it analyzed is “The Great Compression,” Part II of “The Winner Constant.” TL;DR: I devised a formula to measure overcapitalization of the startup ecosystem. If you’re still awake, I was happy with how it turned out. Proud even.
These essays proved to me I could give readers a different way to see the world. Everything I wrote in those articles, and everything I’ve written on Substack — the ideas, the analysis, the opinions — all mine. But I wanted more than just cogent analysis. I wanted it to be the best analysis anyone reads all week.
To be honest, I’m a perfectionist. I doubt the quality of my writing. I want people to see a writer worth reading. Problem is, I know nothing about editing. Or writing, for that matter. So I relied on AI for polish. Harmless enough. Then I read an NBER paper titled “Artificial Writing and Automated Detection” and decided to plug my writing into Pangram to see what it said.
The result? I’m a fraud.
Is there a worse feeling in the world than feeling like a fraud? The result feels so final. The words are no longer yours. The ideas irrelevant. Tossed aside because a tool said so. And if a detector says it’s 100% AI-written, then what am I? A writer who uses AI tools to polish his ideas, or a hack who uses a machine to write for him? Does it even matter? The verdict has been given. Fraud. A life sentence.
I’ve tried to rationalize away the judgment. The research is mine. The analysis is mine. The tone unmistakably me. The desire to yank the curtain back on the system and show the absurdity lurking beneath? Also mine.
That’s editing, right? Isn’t that what editors do?
But why do I feel dejected? Do published authors feel this way after their manuscript has been edited so much it’s not even their words anymore? Why is it acceptable to say work was edited by a human, but unacceptable to say it was edited by AI?
I thought about adding a disclaimer to every article. “I use AI as an editor. The research, arguments, and analysis are mine.” But it’s defensive. And I’m not sure what I’d be apologizing for.
Something doesn’t sit right with me. There are questions that need answering. Where does authorship end? Where does editing begin?
I don’t think I’m wrong. I think I’m pissed for being told my words don’t belong to me. What gives AI the right to take them away from me?
So I decided to fight back.
You want proof of authorship? Fine. Let’s play.
If you think writers have ownership over every word in their novel, you’re wrong.
CEOs slap their names on ghostwritten books. Full credit. No consequences. The James Patterson literary-industrial complex cranks out ten books a year. His co-writers do the actual writing. His name gets 72-point font.
Politicians deliver speeches written by professionals and pretend the words are theirs. Magazine editors transform pieces so thoroughly, the final version isn’t even a second cousin twice removed.
Yet none of this triggers moral outrage. Ghostwriting is just business as usual, and editorial collaboration is how books get made. The ends justify the means. What matters is who said it and how well it was said.
I use AI the same way a published author uses an editor, and somehow, I’m the fraud.
Human editors have been restructuring prose for centuries, and last time I checked, the public wasn’t burning books that had a little help. AI does the same thing, and suddenly it’s taboo. The only difference is this: You can see one but not the other.
We want to believe lone geniuses exist. The writer who makes our hearts ache with beautifully written prose. The scientist who changes how we see the world. The singer who always makes us dance. We don’t want to believe they had help. So the publishing industry does its part by selling the illusion.
I’m not against AI detectors. But they measure the wrong thing. Statistical patterns. Rhythm. Transitions. Word choice.
What they can’t tell you is whether AI thought for you or just cleaned up your mess. You know, the part that actually matters.
Let’s talk about Raymond Carver, the short story writer everyone pretends to have read. (If you’ve actually read Carver, congratulations. You’re lying.) After he died, scholars discovered his editor Gordon Lish had gutted his work, cutting thousands of words, restructuring narratives, and entirely transforming his prose style.
The difference between Carver’s situation and mine? His editor was human and undetectable. Mine is algorithmic and public. Same collaborative process. Different tool. One gets a Pulitzer nomination, the other gets called a fraud by a Python script.
OriginalityAI claims anything edited more than five percent counts as AI-written. No one can explain why the line sits at five percent rather than four or ten. The threshold appears to have been pulled directly from someone’s ass, presumably the same place they keep the methodology. “Heavy editing” remains conveniently undefined. Does it mean restructuring sentences? Changing word choice? Smoothing transitions? I’m sure they have a very scientific reason they’d be happy to share if you paid for the enterprise plan.
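For what it’s worth, the whole methodology, as far as anyone outside the paywall can tell, reduces to a few lines of Python. To be clear, this is a caricature, not their actual code: the function name, the labels, and everything except the five-percent figure are my own invention.

```python
def classify(edit_ratio: float) -> str:
    """Toy caricature of the detector's verdict: one opaque threshold,
    zero notion of who did the thinking. The 0.05 cutoff mirrors
    OriginalityAI's claimed five-percent rule; the rest is guesswork."""
    if edit_ratio > 0.05:  # "heavily edited" -- undefined in practice
        return "AI-Generated Text"
    return "Original Human-Generated"

print(classify(0.04))  # light polish: still human
print(classify(0.06))  # one comma too many: fraud
```

Four percent of your sentences touched, you’re a writer. Six percent, you’re a machine.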
But here’s the tell. Compare these two lines in OriginalityAI’s methodology:
· AI research and human written → “Original Human-Generated”
· Human written, heavily AI edited → “AI-Generated Text”
So which is it? Does thinking matter or doesn’t it?
According to OriginalityAI, AI can do your research, the actual intellectual work, and you’re still the author, as long as you write the sentences yourself. But if you do all the research and thinking, then AI heavily edits your writing? That’s AI-generated.
It’s a purity test written by people who still think spellcheck is cheating.
If intellectual work is what counts, then using AI for research should disqualify you. But it doesn’t. If execution is what counts, then using AI to edit shouldn’t matter.
But it does.
My analysis doesn’t lose value because Claude helped shape the sentences. The logic stands whether it’s written in ink or code.
Detection technology says polish itself is suspicious. And optimization is evidence of fraud. So we get perverse incentives: write worse to seem more human. Keep you’re typos two prove you wrote it. Sacrifice precision so some algorithm believes you worked alone. Apparently being mediocre is how you prove you’re real.
Detection also penalizes collaboration. It creates a false binary: 100% human-written or AI-generated, with no recognition of the middle category where most professional writing actually lives. I can’t be the only person who thinks that doesn’t feel right.
I need an editor as much as, if not more than, a NYT bestselling author. Whether or not AI touched my prose doesn’t matter. What matters is what work the AI actually did.
If AI generated the analysis, formulated the arguments, and structured the piece while I just signed my name, that’s fraud. If AI helped me tighten sentences, smooth transitions, and cut messy phrasing while I provided the words, research, arguments, and intellectual framework, that’s editing. Period.
It’s the same thing humans have done for centuries. Different tool. Unfortunately, this one doesn’t come with an MFA and a loft in Brooklyn, so it doesn’t count.
Here’s what nobody wants to admit: editorial input has always been part of professional writing. The romantic myth of the lone genius creating in isolation is bullshit.
The industry just agreed to maintain the fiction because it sells better. Every great writer you’ve heard of had an editor you haven’t, unless you read the acknowledgements section. You sicko.
The detection industry hasn’t solved the attribution problem. They decided human collaboration is legitimate and algorithmic collaboration is fraud, then built technology to enforce that distinction. It’s gatekeeping pretending to be fraud detection. The same people who think CEOs write their own books are now very concerned about the originality of my Substack.
Published authors work with human editors who restructure their prose, tighten arguments, smooth transitions. I work with Claude. Both processes turn chaos into form, until what’s left finally sounds like something worth saying.
If my VC analysis is wrong, argue with my logic. Challenge my data. Question my conclusions. But don’t dismiss the work because I used an AI editor to make my work slightly less laughable.
Using AI to edit isn’t fraud. Fraud would be pretending I didn’t.
So here I am. Not pretending.
I’m sorry I’m a fraud. I’m sorry I’m not a published author with a human editor on retainer. I’m sorry I don’t have a ghostwriter with an airtight NDA and a mortgage to keep them quiet. I’m sorry I want to make my writing enjoyable for you to read.
Actually. No. I’m not sorry at all.
You want to hide behind a detector that can’t tell the difference between authorship and sentence structure? Go ahead. Let an AI wrapper think for you. But that makes you lazier than anything you’re accusing me of.
Because here’s what actually happened. I did the research. I formed the arguments. I wrote the analysis. Those are my ideas. My opinions. My work. And I’m fucking proud of it. I apologize to nobody. I have guilt for nothing.
You? You outsourced your judgment to an algorithm and called it thinking. So, who’s the fraud here?
I wrote The Pledge. AI wrote The Turn. If you couldn’t tell the difference, maybe the detector is measuring the wrong thing.
Or did I mix that up? Hard to say.