Can a leftist position on the use of generative AI be materially held?

Women working on SodaStream devices at the SodaStream factory in Israel, shot during a press trip in 2019. The labor that's not seen in production, via Unsplash.

I posted a question on Bluesky about the position that leftists can occupy when it comes to generative artificial intelligence. Since 2023, I've passively explored the use cases of generative AI, specifically in software development, my primary field of work. My background has pushed me toward a collectivist perspective when looking at its impact on the balance of labor's power, and at how the tech industry, despite its brief support of progressive campaigns, has a knack for falling back to its roots of putting the levers of finance over everything else. Put more plainly, I'm less interested in the shiny things that come out of the industry I'm in and more focused on their immediate and planned impact. I've also taken some time to observe the opinions and stances of folks who have more sway and influence than I do in this industry, as well as folks who've existed in my orbit for some time, to juxtapose against the position I'm forming here.

First things first: if the following things don't click, then I don't think much can be said going forward that you'd be able to glean from what I'm expressing here:

This also won't be the last thing I write on this topic, unfortunately, because it's a complex one. With that said, let's get into it.

When Rich Workers and Executives Post on Hacker News

Despite what someone who probably makes more than half of Americans mentioned about the weakness of tech labor, the scene for tech labor organizing has been swelling since 2018 — something I've helped contribute to. That rise has me looking at the opportunities that management in this industry can, will, and already has taken with its levers of control. This became important after Jack Dorsey and Twitter's board sold the company to Elon Musk, notably when we saw how the company kept running at a moderately okay pace despite firing 6,000 people. The company dipped in market valuation for some time, but capital management, especially when in balance with the State, helped bring it back up, according to the Wall Street Journal. Fortunately, a lot of the workers who had the ability to find new work ended up in places like TikTok, Facebook and Google — all places (sans TikTok?) that have also experienced waves of layoffs. Keep in mind that as generative AI gets better at convincing management, there's more need to put in levers against hot-swapping folks in favor of it. Although Musk didn't declare this with his move, as far as we know, that kind of behavior proved something that Thomas Ptacek of fly.io and Elon Musk agree on (emphasis mine):

LLMs really might displace many software developers. That’s not a high horse we get to ride. Our [software developers] jobs are just as much in tech’s line of fire as everybody else’s have been for the last 3 decades. We’re not East Coast dockworkers; we won’t stop progress on our own.

Despite being a statist (which Musk would also consider himself to be), Thomas seems more in favor of private governance than public. Their lack of depth on what tech workers have been fighting toward, something even ChatGPT can poorly reproduce in a query about tech worker organizing, highlights this disconnect — to the point that tech workers in the organizing space supported (fiscally and otherwise) the East Coast dockworkers and their strikes. You won't learn that if you treat places like Hacker News as the sole perspective of the tech industry. I invite them to reevaluate this position after reading this piece in full (if they ever do). Statists are conventionally folks in favor of big governance, and it seems like an oversight on Thomas's part to ignore how Musk relied on strong (capitalist-centric) governance, the same way a lot of American tech companies do.

Steve Klabnik has written about their dismay with the generative AI discourse, which reads as a desire for "both sides" to do better in how they approach conversations around the topic. He linked to another piece, by James Dennis, that takes a perspective on art and creativity to highlight that humans (people?) will continue to create and produce novel things in spite of generative works. Another is more specific to software engineering: the eventual decay of the "craft" of software engineering that books and conferences have formed around peoples' cleverness, seen through the lens of the software engineer's identity crisis. These sit closer to the "center-right" (bear with me) position on how one can look at this technology and how it impacts the craft. Unfortunately, it ends in a way that reinforces the notion that the forced evolution of a field is necessary for growth. As someone who's worked in public consulting for a short period of time, I can say the last thing you want to do is rush ahead with a trend or sense of progress because Hacker News prescribes it. In fact, it's always wiser to give it time to iron out. However, there are positions that lean closer to something you'd expect Ptacek to agree with, like a piece by Campos on the notion that AI criticism has become lazy. These stances tend to lean toward what you'd find reading The Network State, a book that overindexes on techno-solutionism as the end-all-be-all and a means of saving us from ourselves. Notably, Campos' piece would fit in around the third chapter of Balaji's book, on tripolarity in power, since it makes points in favor of surrendering education to technology even though the strongest proponents of "ethical AI in education" tend to be the biggest bullshitters. It does end with a semi-honest point that capitalism currently dictates the direction of this industry, but with no real call to action despite demanding more from the space of criticism, which is disappointing because it gives AI proponents more of an excuse to do nothing about most of the issues Campos outlined.

Leveraging Generative AI for the Public?

The thing that folks like to mention, especially in my left-leaning circles, when it comes to generative AI is China's introduction of smaller, cheaper and more efficient LLMs that can, at times, outperform the American-made ones. This seems to be a pattern with Eastern technology, from cars to computer manufacturing. The most notable are the models produced by DeepSeek. As mentioned, I've been testing some use cases with these solutions at home, most recently with ollama and aider, which let me flip between different downloaded models when working with them (a minimal sketch of that loop appears a bit further down). The output is moderately okay: if I give it a "solved" problem, it can get a particular distance (~40% to 60%) before I need to intervene and correct things. I struggle to replicate the level of performance that Harper's company produced with his journey into social "agentic" coding. Despite this tooling being described as not comparable to the output of an actual software engineer, folks are comfortable doing the software engineering equivalent of what they're doing with OpenAI's Sora (emphasis mine):

What’s also happening here is a massive outsourcing of labor. OpenAI has cleverly packaged what would otherwise be expensive training and evaluation work as a "fun social experience." Every video prompt, every video tweak, every video that gets shared or discarded, what goes viral, what doesn’t, is training their video generation model. That’s all free labor that would cost millions to replicate in a controlled environment with paid testers. They’re essentially getting millions of people to volunteer as unpaid quality assurance testers, prompt engineers, and data labelers. They have gamified reinforcement learning at scale.

The focus of that excerpt is what I see mirrored in public sector work: a want to "increase response times" (or efficiency, or whichever business-centric term you'd like to improve) without taking into consideration the human decisions (almost always policy) that cause the slowdowns in the first place. What's happening — as it tends to, and as was even noted by Thomas Ptacek on his company's blog and ignored in Campos' piece — is that folks who champion these technologies rarely stop to consider how other people can use their tools for malice. The Wright brothers didn't (couldn't?), and look how that turned out for the future of war and invasion. A particular United States Marine Corps colonel, however, was already operating from a position of violence on behalf of the state and its interests, and had no issue asking for even more efficiency in how the M45 MEUSOC semi-auto pistol could be used — especially in places like Iraq or by the Los Angeles Police Department.
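
For concreteness about the local experimentation I mentioned above, here's a minimal sketch of the kind of harness I use to compare locally downloaded models on the same prompt. It assumes ollama is serving on its default port and that the listed models have been pulled; the model tags and the prompt are illustrative, not recommendations.

```python
# A minimal harness for comparing local models served by ollama on the
# same prompt. Assumes `ollama serve` is running on the default port and
# the models below have been pulled (the tags here are illustrative).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODELS = ["deepseek-coder:6.7b", "qwen2.5-coder:7b"]  # swap in whatever you've pulled

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming generation request to the local ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # A "solved" problem, of the kind local models get 40-60% of the way through.
    prompt = "Write a Python function that parses an RFC 3339 timestamp."
    for model in MODELS:
        print(f"--- {model} ---")
        print(generate(model, prompt))
```

aider drives a similar loop interactively and can be pointed at the same local server through its model configuration, which is what makes flipping between downloaded models cheap enough to do habitually.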

In none of the pieces mentioned above was there any strong consideration of how generative artificial intelligence has increased the difficulty of folks finding work due to the (now speculative, but not improbable) case of lower-ranking software engineering positions being made redundant. In fact, in Annie's aforementioned post, they needed to rewrite history just a bit in order to justify a transformation in labor (emphasis mine):

The pendulum metaphor offers us wisdom here. Just as many of us have swung between engineering and management roles, we can embrace a similar fluidity with AI. Some periods we’ll dive deep into the code, experiencing that thrill of crafting elegant solutions. Other times we’ll step back to guide AI systems - not as overseers, but as master builders who understand every part of their craft. Like the Industrial Revolution’s workers who became experts at optimising the machines that transformed their craft, we can master these AI systems - making them instruments of our creativity, not replacements for it.

Ironically, the term "overseer" is more apt, since what an AI engineer is doing is "guiding" the outputs of a machine without any requirement to understand the depth of the craft — that's the whole premise of vibe coding. This rephrasing also helps to ignore what many workers of the Industrial Revolution were actually against: hyper-specialized machinery that directly threatened their ability to work and to negotiate the terms of that work. This is discussed at length in Brian Merchant's latest book, Blood in the Machine, in the brief chapter "The Machinery Question," which discusses how one's perspective on the impact of machines on work depended on one's positionality (and interest in placement) within class. In short, did one want to be a worker of merit or an entrepreneur of control? Software engineering, after the dot-com boom, has enjoyed a comfortable place in pay, especially in the United States, that has warped people's understanding of how that loyalty is bought (and how easily it can be retracted). Digging into the folks who helped craft the concept of modern overseers, or the professional managerial class, we can see how that also leaned on a system that mimicked Aristotelian philosophy on the need for human exploitation in pursuit of automation (warning for those uncomfortable with the linkage of technology and plantations due to their identity and nationality):

To understand the link between Babbage’s engines and his theories of labor control, we can first look to his view on automation itself. During Babbage’s time, the term “engine” was a synonym for “machine” and was applied to the swell of industrial machinery that was used to transform traditional labor practices. His engines take their place alongside other mechanical tools for labor automation, distinguished by their purposive automation of mental (rather than manual) labor. Babbage understood automation generally—including his engines—as dependent on the division of labor. He observed that “[t]he division of labour suggests the contrivance of tools and machinery to execute its processes,” reasoning that “[w]hen each process has been reduced to the use of some simple tool, the union of all these tools, actuated by one moving power, constitutes a machine.” Division and rationalization of labor—specification of each piece of a given job in order to render the work process (and the people doing it) observable, quantifiable, and controllable “from above”—was, for Babbage, the enabling condition for automation. Thus, in order to design engines to automate mental labor, Babbage first needed to borrow (or develop) systems of labor division and control.

You can't divorce these notions without actively ignoring history and the present-day impacts of technological innovation. Doing so is tremendously easy because it's not a requirement for downloading Microsoft's Visual Studio Code or installing Windsurf — the same way we see no lapse in judgement in facilitating genocide for profit in the form of state interest. The point here is that by choosing to narrow the scope of production and impact to a point of comfort for one's discourse, folks are doing the work of the alt-right in technology within left spaces. This has to be something folks acknowledge, lest we cede more and more developmental progress in their favor.

Countermeasures in genAI

I opened this piece mentioning a collectivist perspective. It's a novel one for me, frankly, because I once did see technology as a means of giving folks more power in a world where it's held and hoarded by a few. It took moving to California, closer to the American crux of technology worship, to come to terms with the fact that it is largely an extension of the means by which American capitalism operates. In fact, Ruha Benjamin's book, Race After Technology, makes many cases — new to many, old to some — about how the most immediate deployments of technology tend to have racial underpinnings that operate on behalf of a larger agenda, integrating what she calls The New Jim Code. Before we can begin to talk about democratizing AI, making it fit some definition of open, we have to be honest about who it's being made open for and what we're defining as accessible. Routinely, this is not for the folks who could benefit from having more control over their indirectly leased technologies but for the folks who can afford thousand-dollar machines and phones off-lease. To this day, technology is made and optimized from the perspective of a white man. We have small efforts toward changing this, but they're not just few and far between — they're intentionally underfunded and driven out of sight. Relying on the mimicry of capital to drive a new direction will result in its usual trend of burnout, or worse, for the founders and workers involved.

There are quite a few folks working on this from a perspective that recenters people over the outputs of the machine. One of note is AI as Normal Technology, a longer read that yoyos between a neoindustrial agenda pushing AI to be more deeply integrated into society and a path of development that isn't necessarily controlled by a few industry titans. They're more honest about the progression of AI than most advocates:

According to the normal technology view, such sudden economic impacts are implausible. In the previous sections, we discussed one reason: Sudden improvements in AI methods are certainly possible but do not directly translate to economic impacts, which require innovation (in the sense of application development) and diffusion.

Innovation and diffusion happen in a feedback loop. In safety-critical applications, this feedback loop is always slow, but even beyond safety, there are many reasons why it is likely to be slow. With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI as well.

Another argument for gradual economic impacts: Once we automate something, its cost of production, and its value, tend to drop drastically over time compared to the cost of human labor. As automation increases, humans will adapt, and will focus on tasks that are not yet automated, perhaps tasks that do not exist today (in Part II we describe what those might look like).

They highlight a forecast of what job closure and restructuring will eventually look like, given how generative artificial intelligence operates as a "labor maximizer", toward the end of Part II:

In addition to AI control, task specification is likely to become a bigger part of what human jobs entail (depending on how broadly we conceive of control, specification could be considered part of control). As anyone who has tried to outsource software or product development knows, unambiguously specifying what is desired turns out to be a surprisingly big part of the overall effort. Thus, human labor—specification and oversight—will operate at the boundary between AI systems performing different tasks. Eliminating some of these efficiency bottlenecks and having AI systems autonomously accomplish larger tasks “end-to-end” will be an ever-present temptation, but this will increase safety risks since it will decrease legibility and control. These risks will act as a natural check against ceding too much control.

It links to one paper that I've shared while working on an LLM project, to highlight my concern about the echo chamber of technology and government:

What’s most notable is that McDermott’s warning is from 1984, when, like today, the field of AI was awash with confident optimism about the near future of machine intelligence. McDermott was writing about a cyclical pattern in the field. New, apparent breakthroughs would lead AI practitioners to predict rapid progress, successful commercialization, and the near-term prospects of "true AI." Governments and companies would get caught up in the enthusiasm, and would shower the field with research and development funding. AI Spring would be in bloom. When progress stalled, the enthusiasm, funding, and jobs would dry up. AI Winter would arrive. Indeed, about five years after McDermott’s warning, a new AI winter set in.

Anil Dash wrote a post on his blog that runs counter to the above-mentioned claim that artificial intelligence criticism has become lazy: more on the point that a "moderate" position is nearly impossible to hold in most spaces. I disagree with this for a number of reasons made clear by the conferences, the product launches you'll see scrolling LinkedIn, and the capital raised in favor of promoting generative artificial intelligence. He's written with enthusiasm about the retrofitting of an API standard for models to communicate, framing it as being as groundbreaking as Web 2.0 itself — disrespectful to the actual gains of that space, since Web 2.0 was something done collectively (despite corporate capture) whereas the Model Context Protocol was an amplifying tool for the Amazon-backed Anthropic to enable what Doctorow describes as the flywheel effect of platform capitalism in his book, Chokepoint Capitalism. He's also written what I think is the clearest definition of the MIT-license equivalent of what good generative artificial intelligence model development could look like, but this would require what China's doing: some level of state intervention, or a wealthy benefactor to fund the basis of this research and work. That wouldn't happen in a capitalist society, especially the United States, without some sort of nationalist agenda to ramp up domestic talent.

Wait, so can there be a leftist position on AI?

I actually don't think so — at least, not in a completely puritanical way. As I've mentioned, I've worked on providing generative AI solutions to government at my job, and I experiment with its efficacy largely to keep the hype from clouding my perspective, at least on an individual level. The individual perspective also tends to be the limiting scope from which most of the folks mentioned above approach this. There's been little mention of how we can reshape policy to handle this transition. Relying on anyone from executives down to middle management to take a firmer stance yoyos between being beholden to investors and leaning into that Aristotelian stance mentioned earlier. So how does it move from that to a collectivist, people-centric position?

From a Labor Organizing Perspective

It is disappointing that AI as Normal Technology danced around labor and softly ignored the impact of said productivity gains on the sociopolitical evolution of the landscape, as well as which regions of the world have had to operate as the battery and the labor. This tends to result in the inherently utopian perspective of trusting industry leaders (or developers) to do The Right Thing. That doesn't tend to work out in favor of the people who need it the most: folks who don't have a fleet of lawyers at their disposal or, like me, folks who live in a state whose legislature, down to the local level, is against any sort of progressive stance. So that returns us to what we can do together as workers. I would love to see a sectoral bargaining unit across engineers, designers, lower management, product managers, researchers — the whole plethora of folks — so we can stand shoulder to shoulder like the folks who keep your smartphone's network running, the power that fuels your home and hobbies, and the construction of the data centers where you can run your instance of Headscale to get back to your homelab from wherever you are in the worker-built world. This would push back on what Ptacek initially claimed about our inability to act like the dockworkers, but it requires political education and a commitment to folks you don't know as well. That's why events like Circuit Breakers are important: so folks can bond, learn what steps we need to take to get there, and learn some meaningful tech labor history.

I don't expect much to shift here, especially since the soft decline of people-centric community events has given way to corporate cosplay of the same. By cosplay, I mean the developer relations community spearheading, with corporate funding, moves to "reboot" community spaces that went dormant during the (still ongoing) COVID-19 pandemic. Events like WaffleJS have been usurped by Google Developer communities and the like. And with the advent of generative AI, sidecar events are all about what folks are spending money on to make things they could have built themselves with 30 more minutes of work — or a bit more curiosity.

Instead, more work and effort needs to be spent on countering the systems that rely on the inputs of generative AI. This enters a level of "black-hat" work, since it would also pollute public datasets that folks are using; but unfortunately, until the larger actors that fund companies like https://brightdata.com/ (or even Google's own search-proxying infrastructure) are reined in, this is necessary. More effort should go into integrating things like Glaze and Nightshade into the tools folks use on a regular basis, along with a means of submitting content to extend the efficacy of said tools (a toy sketch of the underlying idea follows the excerpt below). Social media networks could let folks opt in to such protections, as they're a hot target for non-consensual scraping. It's weird; these projects technically fall under generative AI, since they also modify images, but because they're adversarial to further modification, you'll rarely find any advocates pushing in favor of them. That highlights how the advent of such production isn't necessarily about making the act of "generating art" more accessible but mirrors the plantation-like behavior mentioned before (though coded with race — as technology inherently is):

The specter of the plantation that hangs over computation and industrial labor regimes also speaks to the need to revisit the terms of "free" industrial labor, and to recognize the contested process through which this particular category of "freedom" was created and guaranteed. To do so, we must directly confront the unmarked presence of Black unfreedom that haunts "free" labor and reweave links that have been strategically severed between race, labor, and computational technologies.

Put differently, the cost of producing and training on this work is non-zero, and moving with the veneer that it is costless helps justify further extraction of people's work for the sake of "scratching a visual itch".
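
To make the adversarial idea concrete, here is a toy, FGSM-style sketch of the general technique tools like these build on: perturbing pixels just enough to raise a recognizer's loss on its own prediction. This is explicitly not Glaze's or Nightshade's actual algorithm; both are far more sophisticated about keeping changes imperceptible and durable. It assumes torch and torchvision are installed, and the choice of ResNet-18, the epsilon value, and the file names are all illustrative.

```python
# Toy illustration of adversarial perturbation (FGSM), NOT the actual
# Glaze/Nightshade algorithms: nudge an image's pixels in the direction
# that increases a classifier's loss on its own prediction, so downstream
# models "see" the image differently while humans barely notice.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Any pretrained vision model works as the "recognizer" here; ResNet-18
# is just a small, convenient stand-in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def cloak(image: Image.Image, epsilon: float = 4 / 255) -> Image.Image:
    """One FGSM step: move each pixel by +/- epsilon along the gradient sign."""
    x = TF.to_tensor(image.convert("RGB").resize((224, 224))).unsqueeze(0)
    x.requires_grad_(True)
    logits = model(x)
    # Raise the model's loss on its own current best guess for the image.
    target = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    perturbed = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return TF.to_pil_image(perturbed.squeeze(0))

if __name__ == "__main__":
    cloak(Image.open("artwork.png")).save("artwork-cloaked.png")
```

Real cloaking tools optimize in a perceptual feature space over many iterations so the change survives resizing and compression; the point of the sketch is just that the perturbation is computed from a model's own gradients, which is why these projects technically sit under the same umbrella as the generative tooling they resist.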


If you've read this all the way through, I appreciate any feedback and corrections. As I said at the start, my politics lead my stances, and that means taking a critical lens to the industry, its impact and the players within it. If I had to propose a "critical" reading list on AI that's balanced between its development and its denigration, the following would be a start:

I hope this'll help advocates understand the contentions and history behind the push against this work. I also hope it gives anti-use proponents a sense of the scope of the space, to avoid repeating things that have either been debunked or made irrelevant. I don't think criticism or advocacy has gotten lazy in its delivery, but I do think that we need to consider more — not just the economic impact but the sociopolitical, cultural and societal impacts of this technology. We missed this opportunity, to a degree, with cellphones and the Internet, so let's try now.
