First up, a small bit of Counter Craft housekeeping. I recently melted my eyes going through my entire archive—several hundred posts at this point—and organizing them into a few categories that you can now see on the homepage. I’ll likely add more categories in the future. These should make it easier for any interested readers to browse the (ever-growing) Counter Craft archives.
A reminder that I post all articles for free initially, but I do paywall some older posts. After four years of Counter Craft, the paid subscriber archive is pretty large! So, if you’re considering a subscription upgrade…
Your regular reminder that my comic-autofiction-meets-space-opera novel Metallic Realms was released a couple weeks ago. I recently wrote about the dwindling review space for new books and how authors can expect fewer reviews than even a couple of years ago. It’s hard times for selling books. So, I’ve been especially thrilled to see some Metallic Realms reviews appear post-publication. This week Tobias Carroll wrote a thoughtful and lovely review in Tor’s Reactor. Other excellent post-publication reviews have appeared in The Bulwark (“if Ignatius J. Reilly wrote Pale Fire about an issue of Clarkesworld”), Shelf Awareness (“sly, intricate, genre-hopping”), Crime Reads (“succeeds resoundingly”), and The Chicago Tribune (“a total blast”). I can only thank the reviewers for taking the time to read and consider the novel. I deeply appreciate it.
Anyway, you can find more information here. And if you’ve read and enjoyed the novel, please consider reviewing on Amazon or Goodreads. Lastly, a reminder that there is a FREE SUPER SECRET EXTRA BONUS STORY for any interested Counter Craft readers. Just put an email address into the linked Google Form if you’d like the PDF.
AI writing has been in the news again since I last wrote about the AI-hallucinated book recommendation list. There was yet another AI-hallucinated book list, multiple self-published authors found leaving ChatGPT prompts in their novels that apparently they couldn’t be bothered to even read before publishing, and the return of disgraced “memoirist” James Frey with an AI-assisted book. (For younger readers who don’t know Frey, he got famous for selling a couple of fictionalized memoirs in which he presented himself as a “bad boy”—basically self-fanfic—then was roasted by Oprah on live TV. Afterwards, he started a fiction factory where he got MFA students to sign bad contracts to ghostwrite YA novels.)
Jane Friedman’s The Bottom Line newsletter mentioned a writer (or group of writers) using the pen name “Sindo Hane” to publish an endless stream of “low-quality, AI-generated genre fiction.” This pen name has published more than 150 titles this year. And there is no reason to assume this writer (or writers) doesn’t have countless other pen names going, each spamming Amazon’s Kindle Unlimited store with unlimited and unedited AI outputs. One of my first predictions when ChatGPT came out was that it would probably eat up the self-pub space. Not because self-publishing can’t be legitimate but simply because there are no guardrails and few costs. I fear this is what we can expect for much of the internet going forward unless platforms institute serious countermeasures.
I expect most AI use in writing to be like the above: slop, scams, and gimmicks. I also believe that writers should mostly avoid using GenAI outside of rote tasks like spellchecking, because there is a risk that by outsourcing your creativity you will begin to lose it—just as many students are noting that GenAI use has degraded their thinking. But. I do not think that this is inevitable. AI can be used in useful ways, including in art. A few readers challenged me on this the last time I wrote about AI, so I figured I should be more explicit here.
As GenAI tools improve, they will likely become better spell and grammar checkers than current word processing tools. GenAI already can be a useful research tool, if used correctly and fact-checked. Those are obvious good uses. And I do think there are plenty of ways to incorporate actual AI-generated text thoughtfully and artistically.
I know of at least three recent novels that have successfully used GenAI programs to help craft dialogue for fictional artificial intelligence characters/programs: Sympathy Tower Tokyo by Rie Qudan, After World by Debbie Urbanski, and Do You Remember Being Born? by Sean Michaels. There are probably others. In nonfiction, Vauhini Vara’s recent Searches: Selfhood in the Digital Age uses GenAI text and images to examine and critique the ways AI (and other tech products) are affecting us. (See my interview with Vara here). And other authors, such as Sheila Heti, have used GenAI text to create new work with varying degrees of success.
What separates these works from the slop overflowing the Kindle store? Three things. First, they aren’t using AI for corner cutting or trying to pump out books as quickly as possible. These authors might even tell you that working with AI took more time. Secondly, they are using AI in intentional ways with thematic or conceptual purpose. These writers are not using AI to write scenes they are too lazy to write or to “rewrite the text in the style of [some popular author]” or anything like that. There is an artistic point to the use. But thirdly, and perhaps most importantly, they are open about what they are doing.
This leads me to what I think is a pretty good rule of thumb for how you can think about using AI in your work. Let me stress here that I am only talking about the artistic ethics of using AI and not about the many legitimate ethical and political questions AI brings up: environmental destruction, biases encoded into the programs, how LLMs are trained on stolen work, how they were made with exploited cheap labor, and so on. Those issues may or may not be solved in the future (my guess is “not” for the most part) but either way the use of AI is already widespread and we need to come to some agreements about the guidelines in contexts like academia and publishing.
Anyway, I proposed a simple rule on Substack Notes that seemed to resonate:
By the above, I do not mean that AIs are “like humans.” They are not sentient or meaningfully intelligent. Maybe some future tech will produce Data-from-Star-Trek-style artificial intelligence. It won’t be LLMs. The rule I’m proposing is not about AI. It is about the human artist. When does your art contain so little of your own work that it isn’t really yours? How much change does an artist need to make to an existing image or text for it to be considered a new work of art? When does a musician’s contribution rise to the level of credit for a song? What is the line between inspiration and plagiarism? Between a beta reader and a co-creator?
My contention is that most of these questions about artistic individuality, appropriation, and incorporation of others’ work into your own work have… been figured out already. At least as much as they ever can be. There is always debate, of course, but we have largely agreed on the outlines of these boundaries in different art forms. From the standpoint of artistic integrity, I don’t see why AI changes these questions. Sometimes AI will be a mere tool like spellcheck, sometimes a legitimate collaborator, and sometimes artists will simply be passing off work they didn’t do as their own.
For example, we do not expect authors to give co-writing credit to every person (or computer tool) that spell-checked, grammar-checked, or suggested a research idea. It is good practice to thank all the humans in your acknowledgements! I make sure to. But we don’t thank Microsoft Word’s spell and grammar check tool or Google search for research help. By the same token, you don’t need to thank ChatGPT for those tasks. However. If much of your book’s text was generated by ChatGPT then you should be upfront about that. Just as you would if half your book was written by another person. Perhaps there will be a future in which all authors are always working with LLMs, and then these questions may be moot. We certainly aren’t there yet. If an author is using AI-generated text in their book and not telling the reader, the reason is almost certainly that they want to trick the reader into thinking the novel was written entirely by the human author. They know they are being dishonest.
(Side note: Yes, many celebrities, politicians, and name-brand commercial fiction authors like James Patterson use ghostwriters to write their books. But no one thinks they have artistic integrity, right?)
We should apply the same rule to academia IMO. AI hasn’t really raised new questions of academic integrity; what it has caused is a detection problem. From the standpoint of academic integrity, the rules of plagiarism haven’t changed. Again, apply the “what if it was a human?” rule. It is not plagiarism for a student to get advice on research, revision, or spellchecking from a friend. That is encouraged in academia. Similarly, it isn’t plagiarism for an LLM to do those tasks. But if a student turns in a paper where some or all of the text was taken from another source—whether a classmate’s paper, an online essay-writing service, a Wikipedia page, or anything else—the student has plagiarized. Why would turning in text an LLM generated as your own work be any different?
The detection issue AI poses is a real problem in academia. Although even that is perhaps not as novel as the discourse pretends. It was always easier to detect some forms of plagiarism (e.g., copy-and-pasting text from the internet) than others (e.g., paying an essay-writing service). The ease of internet access, combined with the rise of online courses, also caused massive detection issues with students cheating on exams. The response was not to say, “Oh well. The internet is the future. Now Googling every answer isn’t cheating.” Likewise, I see no reason to say, “AI is the future. Now, it doesn’t matter if you did any work on any assignment yourself.”
So, my rule here is that if you want to have any artistic integrity you should be upfront about your AI use beyond tasks like spellchecking or research. I think this is the ethical thing and also the smart thing. People really don’t like feeling tricked by artists. Long before GenAI, artistic careers were tanked by plagiarism, lip-syncing, and other forms of artistic dishonesty. We’re likely to see at least one big AI scandal in traditional publishing within the next year. Why risk having that be you?
Anyway, this is the rule of thumb I’m going by until I see something better.