- AI turns Amazon coders into Amazon warehouse workers: A reverse-centaur in the loop.
- Hey look at this: Delights to delectate.
- Object permanence: 2005, 2010, 2015, 2020, 2024
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
On a recent This Machine Kills episode, guest Hagen Blix described the ultimate form of "AI therapy" with a "human in the loop":
https://soundcloud.com/thismachinekillspod/405-ai-is-the-demon-god-of-capital-ft-hagen-blix
One actual therapist is just having ten ChatGPT windows open where they just like have five seconds to interrupt the ChatGPT. They have to scan them all and see if it says something really inappropriate. That's your job, to stop it.
Blix admits that's not where therapy is at…yet, but he references Laura Preston's 2023 N Plus One essay, "HUMAN_FALLBACK," which describes her work as the backstop to a real-estate "virtual assistant" that masqueraded as a human. Preston handled the queries that confused the bot, in a bid to keep customers from figuring out that they were engaging with a chatbot:
https://www.nplusonemag.com/issue-44/essays/human_fallback/
This is what makes investors and bosses slobber so hard for AI – a "productivity" boost that arises from taking away the bargaining power of workers so that they can be made to labor under worse conditions for less money. The efficiency gains of automation aren't just about using fewer workers to achieve the same output – they're also about the fact that the workers you fire in this process can be used as a threat against the remaining workers: "Do your job and shut up or I'll fire you and give your job to one of your former colleagues who's now on the breadline."
This has been at the heart of labor fights over automation since the Industrial Revolution, when skilled textile workers took up the Luddite cause because their bosses wanted to fire them and replace them with child workers snatched from Napoleonic War orphanages:
https://pluralistic.net/2023/09/26/enochs-hammer/#thats-fronkonsteen
Textile automation wasn't just about producing more cloth – it was about producing cheaper, worse cloth. The new machines were so easy a child could use them, because that's who was using them – kidnapped war orphans. The adult textile workers the machines displaced weren't afraid of technology. Far from it! Weavers used the most advanced machinery of the day, and apprenticed for seven years to learn how to operate it. Luddites had the equivalent of a Master's in Engineering from MIT.
Weavers' guilds presented two problems for their bosses: first, they had enormous power, thanks to the extensive training required to operate their looms; and second, they used that power to regulate the quality of the goods they made. Even before the Industrial Revolution, weavers could have produced more cloth at lower prices by skimping on quality, but they refused, out of principle, because their work mattered to them.
Now, of course weavers also appreciated the value of their products, and understood that innovations that would allow them to increase their productivity and make more fabric at lower prices would be good for the world. They weren't snobs who thought that only the wealthy should go clothed. Weavers had continuously adopted numerous innovations, each of which increased both their productivity and the quality of their wares.
Long before the Luddite uprising, weavers had petitioned factory owners and Parliament under the laws that guaranteed the guilds the right to oversee textile automation to ensure that it didn't come at the price of worker power or the quality of the textiles the machines produced. But the factory owners and their investors had captured Parliament, which ignored its own laws and did nothing as the "dark, Satanic mills" proliferated. Luddites only turned to property destruction after the system failed them.
Now, it's true that eventually, the machines improved and the fabric they turned out matched and exceeded the quality of the fabric that preceded the Industrial Revolution. But there's nothing about the way the Industrial Revolution unfolded – increasing the power of capital to pay workers less and treat them worse while flooding the market with inferior products – that was necessary or beneficial to that progress. Every other innovation in textile production up until that time had been undertaken with the cooperation of the guilds, who'd ensured that "progress" meant better lives for workers, better products for consumers, and lower prices. If the Luddites' demands for co-determination in the Industrial Revolution had been met, we might have gotten to the same world of superior products at lower costs, but without the immiseration of generations of workers, mass killings to suppress worker uprisings, and decades of defective products being foisted on the public.
So there are two stories about automation and labor. In the dominant narrative, workers are afraid of the automation that delivers benefits to all of us, stand in the way of progress, and get steamrollered for their own good, as well as ours. In the other narrative, workers are glad to have the boring and dangerous parts of their work automated away, are happy to produce more high-quality goods and services, and stand ready to assess and plan the rollout of new tools; when workers object to automation, it's because they see it being used to crush them and worsen the outputs they care about, at the expense of the customers they care for.
In modern automation/labor theory, this debate is framed in terms of "centaurs" (humans who are assisted by technology) and "reverse-centaurs" (humans who are conscripted to assist technology):
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
There are plenty of workers who are excited at the thought of using AI tools to relieve them of some drudgework. To the extent that these workers have power over their bosses and their working conditions, that excitement might well be justified. I hear a lot from programmers who work on their own projects about how nice it is to have a kind of hypertrophied macro system that can generate and tweak little automated tools on the fly so the humans can focus on the real, chewy challenges. Those workers are the centaurs, and it's no wonder that they're excited about improved tooling.
But the reverse-centaur version is a lot darker. The reverse-centaur coder is an assistant to the AI, charged with being a "human in the loop" who reviews the material that the AI produces. This is a pretty terrible job to have.
For starters, the kinds of mistakes that AI coders make are the hardest mistakes for human reviewers to catch. That's because LLMs are statistical prediction machines, spicy autocomplete that works by ingesting and analyzing a vast corpus of written materials and then producing outputs that represent a series of plausible guesses about which words should follow one another. To the extent that the reality the AI is participating in is statistically smooth and predictable, AI can often make eerily good guesses at words that turn into sentences or code that slot well into that reality.
But where reality is lumpy and irregular, AI stumbles. AI is intrinsically conservative. As a statistically informed guessing program, it wants the future to be like the past:
https://reallifemag.com/the-apophenic-machine/
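To make "spicy autocomplete" concrete, here's a toy sketch – my own illustration, not anything from the piece or from any real model – of statistical next-word prediction: count which word follows which in a corpus, then always continue a prompt with the most common follower. Real LLMs use enormous contexts and neural networks, but the core move is the same plausible-next-token guessing.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": learn which word most often follows each word in a
# tiny corpus, then continue any prompt with the most frequent follower.
# This illustrates statistical next-word prediction; it is not how any
# production LLM is actually implemented.

corpus = (
    "the weaver wove the cloth and the weaver sold the cloth "
    "the machine wove the cloth faster and worse"
).split()

followers: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def continue_text(prompt: str, length: int = 5) -> str:
    words = prompt.split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # no historical pattern to continue
        words.append(options.most_common(1)[0][0])  # most plausible guess
    return " ".join(words)

print(continue_text("the"))  # replays the most common historical pattern
```

The toy model can only ever replay the most common historical continuation – which is exactly the conservatism that trips it up when reality deviates from the pattern.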
This means that AI coders stumble wherever the world contains rough patches and snags. Take "slopsquatting." For the most part, software libraries follow regular naming conventions. For example, there might be a series of text-handling libraries with names like "text.parsing.docx," "text.parsing.xml," and "text.parsing.markdown." But for some reason – maybe two different projects were merged, or maybe someone was just inattentive – there's also a library called "text.txt.parsing" (instead of "text.parsing.txt").
AI coders are doing inference based on statistical analysis, and anyone inferring what the .txt parsing library is called would guess, based on the other libraries, that it was "text.parsing.txt." And that's what the AI guesses, and so it tries to import that library into its software projects.
This creates a new security vulnerability, "slopsquatting," in which a malicious actor creates a library with the expected name, which replicates the functionality of the real library, but also contains malicious code:
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
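The attack works because no one checks whether the statistically "obvious" name is the real one. Here's a minimal sketch of one guardrail, assuming a curated allowlist of approved dependencies – the library names are the hypothetical examples from above, not real packages.

```python
# A minimal sketch of one defense against slopsquatting: before installing
# any dependency an AI assistant suggests, check the name against a curated
# allowlist instead of trusting the statistically "plausible" guess.
# The names below are the article's hypothetical examples, not real packages.

APPROVED_LIBRARIES = {
    "text.parsing.docx",
    "text.parsing.xml",
    "text.parsing.markdown",
    "text.txt.parsing",  # the real library with the irregular name
}

def vet_dependency(name: str) -> None:
    """Refuse any dependency that isn't on the allowlist."""
    if name not in APPROVED_LIBRARIES:
        raise ValueError(
            f"{name!r} is not an approved dependency -- it may be a "
            "hallucinated (and therefore slopsquattable) package name"
        )

# The AI's statistically "obvious" guess fails the check:
try:
    vet_dependency("text.parsing.txt")
except ValueError as err:
    print(err)
```

In practice teams get a similar effect from lockfiles and vetted internal package mirrors; either way, something other than the model has to distrust the plausible guess.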
Note that slopsquatting errors are extremely hard to spot. As is typical with AI coding errors, these are errors that are based on continuing a historical pattern, which is the sort of thing our own brains do all the time (think of trying to go up a step that isn't there after climbing to the top of a staircase). Notably, these are very different from the errors that a beginning programmer whose work is being reviewed by a more senior coder might make. These are the very hardest errors for humans to spot, and these are the errors that AIs make the most, and they do so at machine speed:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
To be a human in the loop for an AI coder, a programmer must engage in sustained, careful, line-by-line and command-by-command scrutiny of the code. This is the hardest kind of code to review, and maintaining robotic vigilance over long periods at high speed is something humans are very bad at. Indeed, it's the kind of task we try very hard to automate, since machines are much better at being machinelike than humans are. This is the essence of reverse-centaurism: when a human is expected to act like a machine in order to help the machine do something it can't do.
Humans routinely fail at spotting these errors, unsurprisingly. If the purpose of automation is to make superior goods at lower prices, then this would be a real concern, since a reverse-centaur coding arrangement is bound to produce code with lurking, pernicious, especially hard-to-spot bugs that present serious risks to users. But if the purpose of automation is to discipline labor – to force coders to accept worse conditions and pay – irrespective of the impact on quality, then AI is the perfect tool for the job. The point of the human isn't to catch the AI's errors so much as it is to catch the blame for the AI's errors – to be what Madeleine Clare Elish calls a "moral crumple zone":
https://estsjournal.org/index.php/ests/article/view/260
As has been the case since the Industrial Revolution, the project of automation isn't just about increasing productivity – it's about weakening labor power as a prelude to lowering quality. Take what's happened to the news industry, where mass layoffs are being offset by AI tools. At Hearst's King Features Syndicate, a single writer was charged with producing the entire summer guide package – more than 30 articles.
That is an impossible task, which is why the writer turned to AI to do his homework – and then, infamously, published a "summer reading guide" full of nonexistent books hallucinated by a chatbot.
Most people reacted to this story as a consumer issue: they were outraged that the world was having a defective product foisted upon it. But the consumer issue here is downstream from the labor issue: when the writers at King Features Syndicate are turned into reverse-centaurs, they will inevitably produce defective outputs. The point of the worker – the "human in the loop" – isn't to supervise the AI, it's to take the blame for the AI. That's just what happened, as this poor schmuck absorbed an internet-sized rasher of shit flung his way by outraged social media users. After all, it was his byline on the story, not the chatbot's. He's the moral crumple-zone.
The implication of this is that consumers and workers are class allies in the automation wars. The point of using automation to weaken labor isn't just cheaper products – it's cheaper, defective products, inflicted on the unsuspecting and defenseless public who are no longer protected by workers' professionalism and pride in their jobs.
That's what's going on at Duolingo, where CEO Luis von Ahn created a firestorm by announcing mass firings of human language instructors, who would be replaced by AI. The "AI first" announcement pissed off Duolingo's workers, of course, but what caught von Ahn off-guard was how much it pissed off Duolingo's users.
But of course, this makes perfect sense. After all, language-learners are literally incapable of spotting errors in the AI instruction they receive. If you spoke the language well enough to spot the AI's mistakes, you wouldn't need Duolingo! I don't doubt that there are countless ways in which AIs could benefit both language learners and the Duolingo workers who develop instructional materials, but for that to happen, workers' and learners' needs will have to be the focus of AI integration. Centaurs could produce great language learning materials with AI – but reverse-centaurs can only produce slop.
Unsurprisingly, many of the most successful AI products are "bossware" tools that let employers monitor and discipline workers who've been reverse-centaurized. Both blue-collar and white-collar workplaces have filled up with "electronic whips" that monitor and evaluate performance:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
AI can give bosses "dashboards" that tell them which Amazon delivery drivers operate their vehicles with their mouths open (Amazon doesn't let its drivers sing on the job). Meanwhile, a German company called Celonis will sell your boss a kind of AI phrenology tool that assesses your "emotional quality" by spying on you while you work:
https://crackedlabs.org/en/data-work/publications/processmining-algomanage
Tech firms were among the first and most aggressive adopters of AI-based electronic whips. But these whips weren't used on coders – they were reserved for tech's vast blue-collar and contractor workforce: clickworkers, gig workers, warehouse workers, AI data-labelers and delivery drivers.
Tech bosses tormented these workers but pampered their coders. That wasn't out of any sentimental attachment to tech workers. Rather, tech bosses were afraid of tech workers, because tech workers possess a rare set of skills that can be harnessed by tech firms to produce gigantic returns. Tech workers have historically been princes of labor, able to command high salaries and deferential treatment from their bosses (think of the amazing tech "campus" perks), because their scarcity gave them power.
It's easy to predict how tech bosses would treat tech workers if they could get away with it – just look how they treat workers they aren't afraid of. Just like the textile mill owners of the Industrial Revolution, the thing that excites tech bosses about AI is the possibility of cutting off a group of powerful workers at the knees. After all, it took more than a century for strong labor unions to match the power that the pre-Industrial Revolution guilds had. If AI can crush the power of tech workers, it might buy tech bosses a century of free rein to shift value from their workforce to their investors, while also doing away with pesky Tron-pilled workers who believe they have a moral obligation to "fight for the user."
William Gibson famously wrote, "The future is here, it's just not evenly distributed." The workers that tech bosses don't fear are living in the future of the workers that tech bosses can't easily replace.
This week, the New York Times's veteran Amazon labor reporter Noam Scheiber published a deeply reported piece about the experience of coders at Amazon in the age of AI:
https://www.nytimes.com/2025/05/25/business/amazon-ai-coders.html
Amazon CEO Andy Jassy is palpably horny for AI coders, as evidenced by investor memos boasting of AI's returns in "productivity and cost avoidance" and pronouncements about AI saving "the equivalent of 4,500 developer-years."
Amazon is among the most notorious abusers of blue-collar labor, the workplace where everyone who doesn't have a bullshit laptop job is expected to piss in a bottle and spend an unpaid hour before and after work going through a bag- and body-search. Amazon's blue-collar workers are under continuous, totalizing, judging AI scrutiny that scores them based on whether their eyeballs are correctly oriented, whether they take too long to pick up an object, whether they pee too often. Amazon warehouse workers are injured at three times the national average. Amazon AIs scan social media for disgruntled workers talking about unions, and Amazon has another AI tool that predicts which shops and departments are most likely to want to unionize.
Scheiber's piece describes what it's like to be an Amazon tech worker getting the reverse-centaur treatment that has heretofore been reserved for warehouse workers and drivers. The coders he interviewed describe "speedups" in which they are moved from writing code to reviewing AI code, their jobs transformed from solving chewy intellectual puzzles to racing to spot hard-to-find AI coding errors as a clock ticks down. Amazon bosses haven't ordered their tech workers to use AI, just raised their quotas to a level that can't be attained without getting an AI to do most of the work – just like the King Features writer who was expected to produce the entire summer guide package on his own. No one made him use AI, but he wasn't going to turn out more than 30 articles on deadline without a chatbot.
Amazon insists that it is treating AI as an assistant for its coders, but the actual working conditions make it clear that this is a reverse-centaur transformation. Scheiber discusses a dissident internal group at Amazon called Amazon Employees for Climate Justice, who link the company's use of AI to its carbon footprint. Beyond those climate concerns, these workers are treating AI as a labor issue.
Amazon's coders have been making tentative gestures of solidarity towards its blue-collar workforce since the pandemic broke out, walking out in support of striking warehouse workers (and getting fired for doing so):
https://pluralistic.net/2020/04/14/abolish-silicon-valley/#hang-together-hang-separately
But those firings haven't deterred Amazon's tech workers from making common cause with their comrades on the shop floor:
https://pluralistic.net/2021/01/19/deastroturfing/#real-power
When techies describe their experience of AI, it sometimes sounds like they're describing two completely different realities – and that's because they are. For workers with power and control, automation turns them into centaurs, who get to use AI tools to improve their work-lives. For workers whose power is waning, AI is a tool for reverse-centaurism, an electronic whip that pushes them to work at superhuman speeds. And when they fail, these workers become "moral crumple zones," absorbing the blame for the defective products their bosses pushed out in order to goose profits.
As ever, what a technology does pales in comparison to who it does it for and who it does it to.
- The CIA Secretly Ran a Star Wars Fan Site https://www.404media.co/the-cia-secretly-ran-a-star-wars-fan-site/
- Atomic Highway – Post Apocalyptic Roleplaying https://archive.org/details/atomic-highway-ebook-v-2/mode/1up
- THE SILICON VALLEY EFFECT https://law.stanford.edu/wp-content/uploads/2025/04/SJIL_61-1_Arun.pdf
#20yrsago Bulk of American calories comes from sweet drinks https://www.sciencedaily.com/releases/2005/05/050527111920.htm
#20yrsago Chicago’s Bean sculpture is free to photograph, at last https://memex.craphound.com/2005/05/27/chicagos-bean-sculpture-is-free-to-photograph-at-last/
#15yrsago Man single-handedly building a metro rail https://englishrussia.com/2010/05/24/the-most-unusual-metro-in-the-world/
#15yrsago Canada’s copyright minister: superinfringer https://memex.craphound.com/2010/05/27/canadas-copyright-minister-superinfringer/
#15yrsago Pinkwater’s ADVENTURES OF A CAT-WHISKERED GIRL, sequel to Neddiad and Yggyssey https://memex.craphound.com/2010/05/27/pinkwaters-adventures-of-a-cat-whiskered-girl-sequel-to-neddiad-and-yggyssey/
#10yrsago Real estate bubble drives urban blight https://www.newyorker.com/business/currency/why-are-there-so-many-shuttered-storefronts-in-the-west-village
#10yrsago IRS leaks 100K taxpayers’ data to identity thieves https://theintercept.com/2015/05/27/data-breaches-wreak-havoc/
#10yrsago Swiss cops’ dawn raid snags top FIFA officials https://www.nytimes.com/2015/05/27/sports/soccer/fifa-officials-face-corruption-charges-in-us.html
#5yrsago The Toronto Star's new owners donated to far-right Tories https://pluralistic.net/2020/05/27/literal-gunhumping/#plutewatch
#5yrsago How to pay artists while fighting censorship and Big Tech https://pluralistic.net/2020/05/27/literal-gunhumping/#pay-artists
#5yrsago Ammosexuals point their guns at their crotches https://pluralistic.net/2020/05/27/literal-gunhumping/#youll-shoot-your-eye-out
#5yrsago Twitter's porn filters are dampening discussions of "cumgate" https://pluralistic.net/2020/05/27/literal-gunhumping/#cumgate
#5yrsago West Virginia's governor Jim Justice: billionaire, deadbeat https://pluralistic.net/2020/05/27/literal-gunhumping/#injustice
#5yrsago Hertz's bankruptcy was caused by private equity looting https://pluralistic.net/2020/05/27/literal-gunhumping/#hertz-uranus
#5yrsago Facebook shelved research that showed they were sowing division https://pluralistic.net/2020/05/27/literal-gunhumping/#joel-kaplan
#5yrsago Youtube is automatically blocking criticism of the Chinese Communist Party https://pluralistic.net/2020/05/27/literal-gunhumping/#communist-bandit
#1yrago Against Lore https://pluralistic.net/2024/05/27/cmon-do-it-again/#better_to_remain_silent_and_be_thought_a_fool_than_to_speak_and_remove_all_doubt
- Seattle: Cascade PBS Ideas Festival, May 31
https://www.cascadepbs.org/festival/speaker/cory-doctorow
- Virtual: Fediforum, Jun 5
https://fediforum.org/2025-06/
- PDX: Teardown 2025, Jun 20-22
https://www.crowdsupply.com/teardown/portland-2025
- PDX: Picks and Shovels with bunnie Huang at Barnes and Noble, Jun 20
https://stores.barnesandnoble.com/event/9780062183697-0
- Tualatin Public Library, Jun 22
https://www.tualatinoregon.gov/library/author-talk-cory-doctorow
- London: How To Academy with Riley Quinn, Jul 1
https://howtoacademy.com/events/cory-doctorow-the-fight-against-the-big-tech-oligarchy/
- Manchester: Picks and Shovels at Blackwell's Bookshop, Jul 2
https://www.eventbrite.co.uk/e/an-evening-with-cory-doctorow-tickets-1308451968059
- Manchester: Co-operatives UK Co-op Congress keynote, Jul 4
https://www.uk.coop/events-and-training/events-calendar/co-op-congress-2025-book-your-place
- New Orleans: DeepSouthCon63, Oct 10-12, 2025
http://www.contraflowscifi.org/
- Kick 'Em In the Dongle (Understood)
https://www.cbc.ca/listen/cbc-podcasts/1353-the-naked-emperor/episode/16148346-kick-em-in-the-dongle
- The Big Story
https://www.seekyoursounds.com/podcasts/the-big-story/cory-doctorow-explains-who-broke-the-internet
- Keynote (PyCon)
https://www.youtube.com/watch?v=ydVmzg_SJLw
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/).
- "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. Signed copies at Dark Delicacies (US) and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution (https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b). Signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
- Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
- Enshittification: Why Everything Suddenly Got Worse and What to Do About It (the graphic novel), FirstSecond, 2026
- The Memex Method, Farrar, Straus, Giroux, 2026
Today's top sources:
Currently writing:
- Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
- A Little Brother short story about DIY insulin. PLANNING
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X