Large language models, the computer programs often referred to as artificial intelligence, are everywhere these days. There's a lot of hype, a lot of doomerism, and a lot of naysaying.
But despite what all the commentators are saying, the current AI technology is neither a magical god-being nor a tremendous scam. It's just a really useful technology that is going to be here for the long term, regardless of any of our individual opinions of it. That makes it worth thinking about in practical terms: what does it mean for us, you and me? How do LLMs actually work, and how do we live with them?
Joining me to talk about this is someone who has been asking those kinds of questions since long before ChatGPT entered the international conversation. Venkatesh Rao is a writer, independent researcher, and consultant best known for his influential blog Ribbonfarm and his incisive takes on technology, culture, and organizational behavior. He now writes at his own publication, Contraptions.
In this episode, we talk about Venkat's distinct approach to AI, seeing it as an emergent, messy, and deeply human technology shaped by what he calls mediocrity. That might sound like a knock, but it's actually something much more: a compliment both to humans and to the human-created technologies that we're exploring today.
I hope you'll enjoy.
The video of this episode is available, and the transcript is below. Because of its length, some podcast apps and email programs may truncate it; access the episode page to get the full text.
Grok’s ‘Mecha Hitler’ meltdown and MAGA’s rage about the Epstein files show the consequences of broken epistemologies
Social media moderation debates are more about epistemology than technology
An ancient Greek philosophical tradition has become extremely relevant in the social media age
The strange nexus of Christian fundamentalism and techno-salvationism
How faulty facial recognition software led to a man’s false arrest
The political history of Bitcoin is not what you may think
Why Elon Musk and other technology investors have become so politically extreme
00:00 — Introduction
09:29 — Arthur C. Clarke's magic and current technology
15:00 — AI's practical applications
18:05 — The concept of mediocrity
23:06 — Evolution and mediocrity
27:44 — Supply chains and resilience
31:01 — The importance of reserves and openness
35:25 — Copyright and historical context
43:15 — Technological evolution and commodification
52:24 — The philosophical implications of AI as mirror
57:09 — Embodiment and somatic reasoning in AI
The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.
MATTHEW SHEFFIELD: I'm really looking forward to this discussion, because I think that you are one of the far too few people who are taking a more measured approach to AI. It seems like 99% of the people commenting about it tend to be relentlessly hyping it or saying that it's just a bunch of nonsense.
And neither one of those approaches seems correct, I think. And you've written as much, quite a bit.
VENKATESH RAO: Yeah, as with any big technology, I think if you start fundamentally with a stance of curiosity, trying to figure out what exactly you're even looking at, and make that your first order of business, you'll go down interesting routes, whether you end up being critical or positive about it, whether you're paying more attention to the upside or the downside.
If you don't start with curiosity, usually you end up in one sort of derp or the other: either very predictable optimistic derp, or very predictable critical derp. So I think curiosity is the key. And in the discourse we are hearing in public, you don't see much curiosity in evidence, because all the actually curious people, who I think are in fact the majority, are too busy actually having fun playing with the technology to engage in the meta-commentary around it. And of course the engineers and scientists working directly on the production side of the technology are typically far too busy to join in the public discourse, except occasionally, and when they do, they're often incomprehensible to regular people. So the net effect ends up being what you're talking about, right?

You have a lot of unimaginative, incurious discourse on both the positive and negative sides.
SHEFFIELD: You do. And I actually see a strong parallel in this discourse with vaccines, especially mRNA. Here was this technology that had actually been around for a long time. And that's also true with language models; you had ELIZA coming out in the sixties.
So as a technology, these are not new things per se, though obviously today's models are much different, it goes without saying. And the same thing is true with mRNA: it had been proven in the lab, in some sense, for quite a long time. But the people who had developed it were so far removed from the public discourse that a lot of people had a natural suspicion of it, as something unfamiliar.
So I can't necessarily fault that; it's a failure of discourse on all sides in many ways.
RAO: Yeah, I would agree with that. And it's the degree of the reaction, rather than the direction of it, that I primarily use to sort my responses. Any new technology, whether it's extremely minor or very profound (and in this case I do believe it's a very profound new technology), is ultimately still made of atoms, based on the laws of [00:06:00] physics. It has its properties. Some of them are unexpected, some of them are expected. Some things you can model and anticipate; other things you just have to, like, fuck around and find out what it does. And sometimes you will, you know, blow yourself up.
Other times you will get miraculous effects. And you brought up vaccines. Yeah, I know you've spoken with Renée DiResta, who's been at the forefront of the vaccine debates for a while. And vaccines are an interesting case. Renée was the one who told me that with the original vaccines way back in the day, the exact same discourse repeated itself. And I think the key thing to recognize is that there's never going to be anything like a holy grail technology that only has positive effects and no negative effects, and there's never actually going to be a technology that only has negative effects and no positive effects either. On that second category, I like to think of bombs, right?
Nuclear power is generally understood as a dangerous but generally positive technology. But if you ask people, do you think nuclear bombs are a good thing? Most would say they're uniformly bad. Yet if you look at certain ideas for terraforming other planets, they're based on exploding nuclear bombs, causing positive climate change and making those planets habitable for humans.
Or if you want to blow apart an asteroid that's coming at Earth, maybe a nuclear bomb is the right tool. So everything has both sides. And I think where it throws people who are not used to working directly with technology, to getting a sense of its hands-on behavior as a real thing, is that they tend to inflate it to mythical proportions of either the light or the dark variety, and insist on taking it as a mythical thing. It's [00:08:00] either a monster or an angel; it cannot be anything in between. And hey, we do this with other human beings as well. Your friends and the people around you that you know well, you tend to give them a lot of benefit of the doubt.
And they inhabit a narrow range of human fallibility: there are good people and bad people, but no outright cartoon villains and no outright absolute saints, right? But when you encounter a very distant culture, often you will do that kind of extreme mythologizing. So I think part of that is what's going on. And in general, I'm like, yeah, critics need their room to have their reactions. One of the ways in which I differ from a lot of other people in this discourse is that I'm actually happy to ignore the extremists on both sides. I don't think they matter in the long run at all. They'll be forgotten in a generation, even if they're extremely noisy now. It's actually safe to ignore them. You don't have to counter them; you can wait for the noise to settle. You don't have to treat this as a debate that you must win. That said, the extremes can have powerful effects, right?
If they get control of, say, the regulatory apparatus, how AI is regulated at the government or international level, then you should start paying attention, because they can have serious impact. But if all they're doing is shitposting and whining on social media and writing inflamed editorials, they're safe to ignore.
SHEFFIELD: Yeah.
SHEFFIELD: And there's another interesting parallel in that Arthur C. Clarke is famous for saying that any sufficiently advanced technology is indistinguishable from magic, and people often apply that to fanciful sci-fi things that wouldn't exist for thousands of years, if ever.
But it actually applies to technologies that exist in the current day. And you really do see that not just with vaccines, but with television, with, [00:10:00] as you said, nuclear power, just a variety of things. The internet was supposedly going to turn everybody into a satanist, and so was rock music.

Rock music was going to turn everybody into a satanist and turn everyone gay. These were all things that were very confidently proclaimed by various people who didn't know how these technologies worked, or really what they could be, or the limitations that they had.
RAO: Yeah. And that's partly why I feel comfortable completely ignoring them. But I do enjoy analyzing the responses of people who are consequential. Yes, I know you have a lot of interest in unpacking the critical discourse. I think for me, a lot more value is to be found in analyzing the discourse that's neither critical nor super positive, but just puzzled, that responds to the mystery of the thing, right? Because there are things that are Arthur C. Clarke-type mysteries, but only to lay people: ask the experts who build the technology, and they understand what's going on, even if the lay people don't.
And a generation or two later, even the lay people catch on.
AI is interesting in that even the people who know the most about the technology, who are working in the depths of it, struggle to understand what's going on and how to make up mental models of it. And I've been thinking about this aspect in terms of a wonderful book I read a couple of years ago by Benjamín Labatut.
It's called When We Cease to Understand the World. It won a couple of major prizes. He's a Chilean author, and it's a semi-fictional, semi-factual collection of biographical stories of famous physicists who were the first to encounter profoundly new ideas, thoughts that were crazy enough to drive people insane.
So the idea of a black hole, things like that, right? The book is [00:12:00] fascinating, and a recurring theme is that these pioneers who are the first to encounter these mysterious realities often don't know how to even wrap their minds around them. Many of them literally go insane or have other psychotic reactions. So that was Labatut's territory, but then I came up with a three-phase response theory for this kind of profound new technology. The first phase is the Labatut phase, when the first wave of people encounters the technology and has an unprocessed, first-order response to it. Then you have what I call a Lovecraftian phase, after H.P. Lovecraft, the science fiction author: you're starting to wrap your mind around what it is, what its shape is, how it behaves, but it's still fundamentally horrifying to you. And at that point you come up with basically Lovecraftian mental models of what it is. And we've already speedrun to that phase with AI. Literally, we are using Lovecraftian models: we are calling AIs shoggoths. For those who are not familiar, the shoggoth is a species of ancient alien that Lovecraft imagined as part of his world. But it's an interesting kind of ancient alien. It's basically protoplasmic matter that's just below the threshold of being sentient or conscious. It was created by another alien race as a kind of slave organic technology, but then it gets just smart enough to get into a war with the regular aliens. And it's not surprising at all that the shoggoth turned out to be a very interesting mental model for the Lovecraftian phase of AI. And then the third phase, I call it Ballardian, after J.G. Ballard, another luminary of science fiction. In the Ballardian phase, it's not that you've fundamentally sorted out the mystery or solved Arthur C. Clarke's conundrum of "it looks like magic and you can't unpack it." But at some level you've [00:14:00] normalized your relationship with it, right? So we get in planes all the time and fly at 40,000 feet in this amazing technology we don't understand.
I'm an aerospace engineer by training, though I haven't worked in aerospace for a long time. But one of the interesting things is that we still don't understand fluid dynamics well enough to have a clear, non-mysterious explanation for how lift works in wings. We have the mathematics and we get it, but at some level we don't understand how flight works. It's still mysterious, magical technology. But it's not a horror, right? Every time you get into a plane, you don't think about Cthulhu and shoggoths and other things eating you up.
You don't have the paranoid, psychotic reactions that the early physicists had to fundamental physics discoveries. So flying in airplanes, despite remaining kind of magical, has reached its Ballardian, normalized phase. And I think we're starting to see that with AI too.
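For context on "we have the mathematics": the standard engineering treatment of lift is compact enough to state. The equation below predicts lift forces well, with the coefficient measured or computed numerically; what it does not supply is the intuitive "why" Rao is pointing at.

```latex
% Standard lift equation used in aerospace engineering:
%   L   = lift force
%   \rho = air density, v = airspeed, S = wing area
%   C_L = lift coefficient (measured in wind tunnels or computed via CFD)
L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_{L}
```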
RAO: I'm fascinated by the number of astoundingly banal but still very powerful things people are using AI for. A couple of examples from my own life: one of my hobbies is tinkering with electronics. I'm not very good at it, but you can buy these cheap bags of assorted parts from Chinese stores, and they'll come with chips and parts and stuff, sometimes with a little chart showing the part numbers. You can just take a photograph of that, upload it to ChatGPT, and say: what is this? Explain these components to me, and then suggest some experimental toy circuits I can build with this to learn how these components work. So that's a kind of banal use I'm making of it. It's not profound; it doesn't have a spiritual or epistemological dimension, but it's normalized. The magic has been normalized. And another one from my home life: my wife has discovered that it's a really great mentor for helping [00:16:00] her make her own skincare products.
So she likes making face creams and things like that, and she's using it as a consultant chemist and formulator. And again, it's a lot of really banal conversations about which ingredient will make the face cream more oily or less oily. But you're seeing this all over. Yes, the critics are still having their, I don't know, pearl-clutching reactions.
The crazy theological boosters are still talking about acceleration or doom or whatever, but the everyday people are doing surprisingly banal and normal things with it. Now, it's important not to overstate this: just because 99% of the things people are doing with AI are banal doesn't mean that Lovecraftian horrors can't come out of it too, right? Swarms of drones being programmed to attack military installations: we are already seeing that in wars around the world. So yeah, AI does have profoundly powerful dark-side technologies as well as upside uses.
On the other side, the number of proteins that had plausible structures went from around 3% to nearly a hundred percent when DeepMind did its big protein model, right? So yeah, these profoundly powerful things happen as well as banal things. But in terms of the overall shape of how AI is entering society, the person who's written most cogently about this, I think, is a professor at Princeton named Arvind Narayanan. He wrote this wonderful paper called "AI as Normal Technology." And it presents a worldview that argues that AI is not a fundamentally different kind of technology. You don't have to invent new religions and talk about weird multiple-universe thought experiments to think about it.
You can think about it the same way we talk about nuclear technology, vaccines, and many other big technologies. It is a normal technology, and I think, yes, that's the frame with which to engage with AI.
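Rao's parts-chart example above is easy to reproduce programmatically. Here is a minimal sketch using the OpenAI Python client; the model name, file name, and prompt wording are illustrative assumptions, not details from the conversation.

```python
# Sketch of the "photograph a parts chart, ask for an explanation" workflow
# Rao describes. Model name, file name, and prompt are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("parts_chart.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Explain these electronic components and suggest "
                     "simple toy circuits I could build to learn them."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```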
SHEFFIELD: Yeah, to be aware [00:18:00] of the possibilities, but also the limitations. And really that's what it's about.
SHEFFIELD: And I guess related to that is the concept that you have of mediocrity and that people tend to—mediocrity has a bad rap. But at the same time, everybody loves it for the things that they do, the things that they don't want to have to worry about.
I think that's probably what you're talking about with the banal applications of AI. What I mean is, this is mediocrity available to everyone, in as many things as you can imagine, as long as you understand the limitations. So it's not good at current events; it's not going to help you with that. It's not going to help you with understanding the nature of reality, contrary to some of these people who get wrapped up in the ChatGPT-as-God stuff.
RAO: It may not be good at that now, but it's evolving in a way where it can help you with all those questions. It's much better at current events. Even in the two years I've been using it, the lookup and web browsing capabilities have really gotten good. Same thing with metaphysical discussion: the first time I tried to have one on the ontology of consciousness or whatever, two or three years ago, it was crappy. It was not even as good as a bad sophomore smoking pot. Now it can hold up its end of the conversation really well and challenge what is my relatively sophisticated view on that topic.
Anything it can't do, I think it's important to qualify with "yet." A lot of people are rushing to judgment that it fundamentally cannot do X because they've come up with their own pet theories. You have one, I have one, a dozen other people I know have these kinds of metaphysical treatments of what AI is that lead them to conclusions about what it can do.

SHEFFIELD: Or what it isn't, I would say.

RAO: Right, like [00:20:00] pseudo-theorems of "fundamentally, AI cannot do X, because I have this abstract model of it that says it cannot." This is like all those people who said heavier-than-air flight is impossible. Your model was wrong; it turned out not that heavier-than-air flight was not possible. So I would say anything you think AI cannot do, qualify it with "yet," right? And anything it can do, it'll probably do better. But it may not get to a superintelligent or super-capable version. This is why I'm a huge skeptic of generalized claims of superintelligence, or even general intelligence.
If you talk about computers being better than us in some narrow way, that's been true for 70 years. They were already better than us at arithmetic 70 years ago. And at other things, like image recognition and translation, 15 years ago they were better than us in some ways. But at other things, yeah, they may reach a level of mediocrity that's below us; sometimes they'll be slightly better. And this is because intelligence is fundamentally not a quote-unquote thing that has generality to it. We've fooled ourselves into thinking about intelligence as a generality thing because we hallucinated this idea of a g factor out of the statistical results of IQ testing, and it's a whole house of cards that allows us to think of intelligence as a general thing. But to connect to your point about mediocrity, this is there at the very origins of AI. If you look back to Turing's original formulation of the Turing test, he made a half-cynical version of the formulation, which is: hey, I'm not aiming to create a really superior intelligence.
I just want to create a mediocre intelligence, something like the president of AT&T. And this was basically him sneering: this happened in the AT&T Bell Labs cafeteria, and he was saying it loudly because the president was just walking in and he wanted to troll the guy. But there's something important there. [00:22:00] I think AI is a profoundly high-potential and powerful technology because it's mediocre. It does things in these fumbling, human-like ways that suggest broader potential. Whereas think about what it means when a technology does one thing really well, right? Airplanes fly hundreds of times faster than humans can run.
So they do one thing well compared to humans. But it's because of that that we're not scared of airplanes; we just think of them as a specialized technology. We know exactly how they work, and we aren't afraid that suddenly airplanes will start doing better philosophy than us, because airplanes are so good at being faster than us.
That very excellence is evidence that they're not the kind of thing that can compete with us.
SHEFFIELD: As a threat.
RAO: But it's the mediocrity that matters, and that's why this is a profound philosophical theme that I push on. Partly I do it to troll people, because we live in an age of excellence and hustling and things like that, and it's fun to be the bad guy pushing a meme everybody hates. But I also have foundational beliefs about why mediocrity is a fundamental property of the universe.
RAO: Take, for example, Darwinian evolution. A lot of people who haven't actually studied the nitty-gritty mechanics of evolution have this weirdly dumb idea that evolution is an optimizing process, that it somehow gets better and better at quote-unquote fitness and then creates the optimized version of everything. No, it doesn't. It's actually really good at not getting trapped in local optima; it gets bumped out. What evolution is really good at is maintaining enough reserves of evolutionary potential that it can adapt to completely new circumstances. There's a whole book about this view of evolution, I think it's called Good Enough, by an evolutionary biologist, which explores various examples of how evolution does not in fact optimize everything [00:24:00] to the 99% performance range. A simple example I like to use: the reason we can hear, the reason we have ears, is of course the three little bones of the middle ear, those tiny ear bones that allow us to become hearing agents.
They were actually the result of evolution working on the surplus material in our jawbones. Our jawbones were not optimized. They were, in a sense, fat; they were not lean. There was too much bone for the job; our jaws were too strong for the kind of food we had to eat. And this is true not just of us but of all mammals. So when the evolutionary environment was right, some of that surplus material evolved into ear bones, and we got this extra ability to hear out of the surplus material of unoptimized jaws. This is how all evolutionary processes work.
There is slop, there is fat, there is noise in the system, and that bumps it out of local optima. As for global processes, there is no global optimum to search for, because it's a constantly changing adaptive fitness landscape. So evolution is constantly probing, trying to chase a moving target. It never quite lands.
It's always chasing the moving target of how the world is changing and how it can keep up in a good-enough way. So that's mediocrity in evolution. And you see this in human cultural evolution too. My favorite example is a wonderful paper, I think the author is Chambliss, called "The Mundanity of Excellence." He studied competitive swimmers across the stack of leagues of swim meets, and he discovered something very interesting: the finest swimmers, who get all the way up to Olympic grade, typically don't actually max out their potential at any given lower level of competition. What they do is, once they get good enough to consistently start winning at, say, the city or state or high school level, they don't try to set world records at that level. Maybe at the 70% [00:26:00] level, they start actually making changes to their technique. They start experimenting with new ways of swimming. They find new coaches who break them out of their rut, and they level up. So they go from, say, the 80% tier of one level to the 50%, median, level of the next tier, and then they start improving there. They never go to the end of the S-curve at any one level; they jump to the next S-curve, right? This is what Chambliss means by the mundanity of excellence. It looks like excellence when you look at the end product, an Olympic-grade swimmer who seems like a completely out-of-this-world alien, but they didn't get there by chasing excellence at each level. They got there by understanding how to stay in the zone of mediocrity at each level, and then simply leveling up, so that what was trending toward excellence at one level simply becomes mediocre at the next level. And you can't continue to improve by doing the same things you used to do. So this is my natural history of mediocrity across AI, biological evolution, and human cultural evolution. And this is why I find AI so exciting.
It's not one thing that you can do Lean Six Sigma on, apply Toyota lean-manufacturing approaches to, and polish to an absolute state of perfection. There are technologies like that, and they're great. They go into a local minimum and serve us well there; they become completely perfected and recede into the background, where we can forget about them.
They just serve us in the background. But there are technologies that are open-ended in this mediocre, evolutionary way, and they can be our core evolutionary technologies for the rest of our existence. Those are powerful technologies indeed. That's why I'm so excited about AI.
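Rao's claim that slop is what keeps an evolutionary search from getting trapped is easy to demonstrate. The toy sketch below is our construction, not Rao's: on a two-peak fitness function, a greedy hill-climber stalls on the nearer, lower peak, while the same climber that occasionally accepts worse moves tends to diffuse across the valley and find the higher one.

```python
# Toy illustration: a greedy optimizer gets trapped at a local optimum,
# while the same search with random "slop" tends to escape it.
import math
import random

def fitness(x: float) -> float:
    # Two peaks: a local one near x=1 and a higher, global one near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def climb(temperature: float, steps: int = 20_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    x = best = 0.0
    for _ in range(steps):
        candidate = x + rng.uniform(-0.2, 0.2)
        delta = fitness(candidate) - fitness(x)
        # With temperature > 0, occasionally accept a worse candidate
        # (Metropolis-style acceptance); that's the "slop."
        if delta > 0 or (temperature > 0 and
                         rng.random() < math.exp(delta / temperature)):
            x = candidate
        if fitness(x) > fitness(best):
            best = x
    return best

print(f"greedy (no slop): best x = {climb(temperature=0.0):.2f}")  # stalls near 1.0
print(f"noisy  (slop):    best x = {climb(temperature=0.3):.2f}")  # tends toward 4.0
```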
SHEFFIELD: Yeah.
SHEFFIELD: And it is a fundamentally different approach to technological development, as you're saying. We saw during the pandemic that these perfected technologies of supply chains had extracted all possible [00:28:00] value and minimized all possible waste.
But what that did is make them fragile, so that they broke when they had to deal with any sort of variation. And that's another parallel with evolution: the fact that appendixes exist.
It shows that evolution does not try to minimize all waste and maximize efficiency. But that's also why it works: waste in some sense is good, and what people call waste is actually capacity that is necessary to handle future disruption.
RAO: Yeah, I think the mistake made by optimizers of any sort is the belief that any reserve capacity or surplus you have must be used toward goals you already have. So if you want to run faster, every ounce of reserves of any sort must be used up in going faster and faster. But maybe going faster is not actually the adaptive thing to do next week. Maybe what you'll want to do is lift the heaviest object, which means you have to start doing something else with your reserves. And the supply chain example is interesting, actually, because I think more recent research on what happened during COVID is adding a twist to the tale.
So it turned out that the supply chains weren't actually fragile in the way people thought; they were actually capable of meeting the demand and keeping up. What actually broke them was extremely poor regulatory responses. One version, a crude version, of the story is: governments around the world gave too much by way of cash stimuli to citizens to tide them over the period. And because there was also inflation starting to get underway, and cash is worth less during an inflation, people were hurrying to buy far more than they [00:30:00] actually needed. So it wasn't that the supply chains were not fat enough to do the job of meeting demand to weather the pandemic; they didn't have enough fat to also deal with, above and beyond that, stimulus mechanisms and poorly conceived economic management regimes. And even within this, there are tweaks. I used to do a bunch of consulting work for Amazon at one point, and Amazon actually survived the pandemic surprisingly well, because it had basically built up its own shipping logistics fleet, with its own ships and its own agreements with different ports. So when the global trade system based on national treaties and movements of goods and services started to break down, because governments were mismanaging what was happening, Amazon's own system was much more robust, and they managed to continue serving their customers a lot better.
RAO: So it's not a single story, but I think the overall lesson to take away from COVID is that no matter how well you plan, there will be contingencies that are not in the thinking that went into the design of a system, that are not within living memory.
So maybe you have to dig back a hundred or a thousand years to find appropriate things to learn from. And the only way you can actually deal with them well is to have unallocated fat in the system. There's a lot of good research that supports this basic conclusion about the only way to survive long term.
There's a book called The Living Company that came out of research from BP in the eighties. The headline conclusion was that the companies that last the longest simply have two properties. One, they have enough reserves and surpluses to weather a lot of uncertainty. And two, they are open to ideas; they're not closed off, not hidebound in their thinking. That's really [00:32:00] it. You have to have reserves, and you have to be open to external inputs that can give you new ways to behave. If you close yourself off, or you draw your reserves down too far, you will die.
That's basically it.
SHEFFIELD: Yeah, it is. And there are so many companies that are examples of that. Just take mobile phone technology: how many times have the top players just completely ceased to function in the market? It's happened so many times now.
Just as one example. But the other thing, in the AI context, is that people are still trying to figure out: how can I use this thing? And it's not going to be the same answer for everybody.
And some answers that people have, or that they want to have in a given moment, might not be functional for what they need at that moment, but might be extremely useful for somebody else. And I think a great example is art. The capabilities for art generation were just horrible for such a long time.
And people would mock it, things like that. But if you try it now, it's very good; a lot of it is very good. And there are legitimate concerns, because visual representations are in some ways more constrained: there are not as many of them as there are linguistic permutations.
And so artists have some very valid concerns in this regard, compared to linguistic recombination. Because we have so many words, the number of possible permutations to make a sentence, we're talking in the quadrillions here, right? So nobody can really say, per se, that AI is going to rip off [00:34:00] their writing, although obviously we have instances of that.
But art is a different thing. Nonetheless, the capabilities got there and are only continuing to improve, with video and things like that. But then other people are getting their brains messed up by using AI improperly and not understanding that this is not a spiritual advisor; you should not take religious advice from a chatbot.
And the media is trying to hype that now, I think, in a lot of ways. But as we talked about before this, you had said that the product you get from AI is related to what you put in and what you expect.
It's almost like the Yoda scenario with Luke in The Empire Strikes Back: what you find in AI is only what you take with you, in a lot of senses.
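Sheffield's "quadrillions" is, if anything, an understatement. A back-of-envelope count, with illustrative numbers assumed, shows how quickly the space of word sequences outruns any corpus of human writing:

```python
# Rough count of possible word sequences. The vocabulary size and
# sentence length are illustrative assumptions, not measured values.
vocabulary = 10_000       # a modest working vocabulary
sentence_length = 15      # words in a short sentence

sequences = vocabulary ** sentence_length
print(f"{sequences:.2e} possible 15-word sequences")  # 1.00e+60
# Even if only one in a trillion of these is grammatical, ~1e48 sentences
# remain, dwarfing "quadrillions" (1e15) and any training corpus.
```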
RAO: Yeah, so I'll probably come across as an extremist on this particular topic with respect to attitudes that are prevalent today, in living generations. But in historical time, I will probably come across as a moderate. So with that qualifier, let me explain the position I'm outlining here.
RAO: I'm an extremist in thinking copyright is bullshit. So what do I mean by that? In language, it's easy to see. If you go back far enough in history and look at, say, pre-printing-press literature or folk traditions, there was no notion of one person laying claim to, for example, the Arthurian legends, to take a random example I'm reading at the moment. The Arthurian legends are a good example: they evolved as a folk tradition across France and England over several centuries. They were at some point [00:36:00] compiled in France in various clusters. There was Geoffrey of Monmouth, who had a history of the kings of England, and so forth. And at some point, around the 1480s, Thomas Malory came up with Le Morte d'Arthur, which became the definitive edition.
But the reason it got locked down as a definitive edition, and he became known as quote-unquote the author of the King Arthur tales, is that the printing press was invented. And the printing press, when it was invented, introduced a certain kind of freezing. There's this book by Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe, where she talks about this notion called fixity. The press took a lot of fluid traditions that had huge numbers of people chaotically contributing to them and introduced a fixedness. At the same time, it created this archetype of the author as a figure in the public imagination who had a certain relationship with a fixed work. And that relationship is what eventually got legal teeth as what we now understand as copyright. Same thing with the visual arts, right? If you look at very traditional painting techniques before the Italian Renaissance in Europe, and much later in other parts of the world, you'll find that authorship is a very loose and fluid concept.
If you look at ancient temples in Asia, you'll see lots of carvings and paintings, and nobody knows who the hell did them. They were parts of traditions where everybody collectively owned a commons of creative production knowledge. They were used to decorate public works, and copyright was not necessary to mediate human relationships with their collective output.
So I think one of the things we are seeing right now is that the printing press in some sense created a few centuries of a very anomalous relationship between human creative labor and the products of that labor, one that was very individually circumscribed by a particular [00:38:00] legal regime, right? It didn't exist before about 1450, and I suspect it'll not exist again after about 2100, because we are now getting back to a mode where our relationship to our artistic production is, at some level, a fractal, collective, mutualist thing. Yes, there are individual threads to it, but there are also group threads, national threads. Let's take an example: the big blowup over Studio Ghibli. Now, Miyazaki is very well known, and deservedly so; he's done legendary movies, and for what they are, they're exceptionally good movies. But it would be a bad mistake to pretend that Studio Ghibli, the studio he runs, is solely the author of the works of Studio Ghibli. If you go beyond Miyazaki and say, let's count the hundred or so artists who have worked on Studio Ghibli movies for the last 30 years, even that's not a big enough circle. He drew a lot of inspiration from Disney and American animators earlier in the century. He drew a lot of inspiration from earlier traditions in Japanese visual arts. So you start counting up and rolling up all these collective legacies and inheritances of human creative labor, and you find there is something deeply self-absorbed and narcissistic and individualistically small-minded about that relationship to the artistic tradition, which I have a big problem with. So I think we're going to go back to the pre-print era of more collective ownership of a commons of artistic production. That's one half of what I want to say, but I don't want this to be mistaken for a position on the ethical responsibilities of AI companies, right? I do disagree with the hypocrisy of individual artists, and with the performative gestures of putting little notices on their articles or artwork saying, hey, no AI was used in this. You used the equivalent of [00:40:00] thousands of years of human creative production as an artistic heritage, which is the equivalent of an LLM in pre-computer eras.
So you did use an LLM; the LLM just happened to be human history itself. That's one way to think of it. So yes, I don't let artists off the hook for being bad-faith and disingenuous about the arguments they make. But this is not the same thing as excusing OpenAI or Anthropic or any of them.
They have an ethical responsibility too, but they need to work out for themselves who that responsibility is to, and how to actually deliver on it, right? If they think they're drawing on a lot of common tradition from the internet, or from the long history of humanity, they should give back to that tradition, right?
They should release open-source weights. They should leave the commons of human artistic heritage richer than they found it; they should not leave it extracted and devastated. So anything that's going into the LLMs, yes, they have as much right to it as any individual artist. But what comes out of LLMs, I think, should create a new commons.
And it's starting to happen. The private companies are not exactly doing great work here, but look at the people pushing hardest on open-weights LLMs, for example. It's ironic that China is the leader on this, with DeepSeek and others going really hard at it. But beyond that, we need even better: there are a couple of efforts to do basically public-commons AI. Kevin Kelly has this notion of public intelligence. And I just came across another effort to build a truly, fully open source stack around AI; I'll share it later if I can find it. But yeah, there are a bunch of efforts happening like this that are about building AI in a distributed, open-access, public way that leaves the commons much richer than they found it.
And to some extent there's a humanitarian concern here, because traditional artisans and craftspeople, who [00:42:00] sit around in villages around the world practicing centuries-old crafts, deserve to be somehow taken care of.
And technology messes with what they're doing. Modern artists deserve the same humane considerations, but we should not take their ethical posturing at face value any more than we should take the ethical posturing of the AI companies at face value. We basically need to renegotiate our collective relationship with our knowledge and art heritage in a completely new way.
What has worked for the last 500 years won't continue to work.
SHEFFIELD: Yeah. And I would say the same thing also applies to software patents. Because when you look at the technologies that go into pretty much every model, they're based on open source technologies in many ways. And I suspect that if somebody were somehow to do an audit of these technologies, there would be a lot more under the hood that is not being disclosed, perhaps.
That is my suspicion.
RAO: And there's even a pragmatic, selfish reason to promote this commons, open source, public view of things.
RAO: So whenever we think of the evolution of technologies, people often note that the end stage of any technology is quote-unquote commoditization, at which point either there's no alpha left in it, or it even becomes a nationalized utility run by the government or something. There's something not quite true about that, right? Trains became nationalized and became a commons commodity about a hundred years ago. But then new technologies came up, maglev, new kinds of train technology, and the frontier opens up again. Now the question is: what do you do with technologies that go through a cycle from frontier pioneering innovation to a temporarily commoditized stage when the frontier shuts down?
So maybe for 50 or 60 years, nothing new happens in that [00:44:00] sector, but there are still subterranean forces that can erupt at any moment and restart it, right? You have to be open to that potential reopening. And I think the best way to steward that latent potential, which might reopen 50 or a hundred years down the line, is in fact as a public commons, not as a nationalized government entity or as a completely extractive private sector.
You have to have a commons approach to stewarding late-stage technologies, because that's how you nurture their long-term potentialities. And I think we are just starting to develop the societal disciplines needed to do that. Open source software is one of the first cases where we've learned how to do it well.
Take word processors: there have been word processors for 50 years. But what we are not doing is saying, hey, this is old enough, let's give it to IBM to maintain as a legacy technology, and IBM's department of word processing can do it.
We're not saying that, nor are we saying, hey, this is basically a utility technology, let's give it to the government to run in a department of word processing technology. Both would be absolutely terrible ideas. Just because word processing as a class of application has matured doesn't mean it's going to stay the same 50 years from now.
AI is actually opening up that frontier again. What it means to work in an environment like Google Docs or Microsoft Word is being radically rethought now, with AI being plugged in, right? So obviously there are going to be new kinds of word processing in the next five or six years. That means we need to learn how to steward these things really well.
And commons, open source: these are all going to be societal disciplines for us now.
SHEFFIELD: Yeah. And the other thing about this perspective is getting technologists themselves to not be paranoid about a matured [00:46:00] technology. We see Peter Thiel, who I think is notorious in this regard, claiming that, oh, we don't have any innovation anymore, innovation's dead, and it's all the communists' fault or whatever.
And it's like, no. Because, as you said, word processing, spreadsheets, these are things we largely figured out; they're mature technologies now, in the same way that we don't need a new astrolabe technology. And we're not lamenting that we don't have any new astrolabes; we're fine with what we have.
And the real innovation is in either extending existing mature technologies or inventing new things that do the same function. And I think that of late, especially, some investors and tech-libertarian types haven't understood this point: that maturation is not a bad thing. It's a good thing.
RAO: I would characterize their views slightly differently, but this is fine for our conversation here. Yeah, I think Peter Thiel is an overrated idiot on these topics, honestly. His famous line about Twitter, "we wanted flying cars, instead we got 140 characters," it's sophomoric. He was once described to me as having his thinking shot through with a lot of Ayn Rand.
And he has apparently read some René Girard. But I think he's overrated. He's placed a couple of good bets as an investor, and he's really smart in an idiot-savant kind of way in certain kinds of behaviors. But I wouldn't take his views on this sort of thing seriously. So what's a sophisticated view of the maturation, and I would say the reawakening, of technologies? It's that nothing is ever done. Even astrolabes are not done. It [00:48:00] could be that the next interesting technology we invent suddenly does something very weird to astrolabes. In fact, they have not been done: people have been updating astrolabes with lasers and other smaller refinements.
To take a technology from a similar era, sails, for example: sailing technology reached a peak of perfection with, I would say, the 1850s tea clippers, just before steamships, in America. Phenomenal sailing ships. But they wouldn't hold a candle to the sailing ships we see in the America's Cup-type races today, right?
Because these new sailing ships use fiberglass; they use complicated CFD software to optimize the sails. If you've ever seen one of these yacht races, it is amazing. These things fly, far faster than any sailing ship of 200 years ago could. So it's not true that the technology is stagnant or that only incremental things are happening. And it's not even a marginal sideshow: because of climate change and energy concerns, people are actually now starting to build cargo ships that use wind power in powerful new ways. There are a few new designs; there are literal sails, carbon-fiber based, that are used on container ships.
And then there are other things. There's a form of wind propulsion that uses rotating cylinders, a type of wind power that was never used at all in the earlier sailing era. So yeah, you can't write off anything.
SHEFFIELD: Yeah.
RAO: Nothing is a finished story until our species is dead and Earth is burned to a cinder.
It isn't over until it's over.
SHEFFIELD: And, sorry, just on your point about sailing: the technology is also continuing to be extended even further, with spacecraft using the solar wind.
RAO: Exactly.
SHEFFIELD: That's the ultimate application of sailing innovation. And if people hadn't kept thinking about how to refine the way it works with wind, we wouldn't have had that.
RAO: And astrolabes too; this has [00:50:00] been standard on spacecraft since the beginning. It's one of the ways they do attitude positioning. They have lots of methods to do it, but one of them is literally star trackers: little tiny telescopes that orient on particular stars, which they use to get a sense of their orientation and correct it, because they need multiple redundant systems to position themselves.
So yeah, astrolabes are alive and well.
SHEFFIELD: Yeah, exactly. The other thing that's circling back in a lot of ways is that AI is opening up the question of what consciousness is, what intelligence is, in a way that hasn't really been thought about for a long time.
And I don't think it gets enough discussion nowadays, or maybe I'm reading the wrong stuff, but there's Marvin Minsky's Society of Mind. That actually appears to be true of both humans and of LLMs. So LLMs function through, and the analogy is not a very good one, the idea of a neuron.
But nonetheless, there's the idea that a token has multiple relationships across many different spaces. So basically an attention-head algorithm is functioning in something like the way a neuron in a human or animal does. And as for our consciousness, there is no fixed self.
It doesn't exist. And I think that's in some ways scary to people who want to imagine that there's something magical inside of us. But look at other animals like dolphins: they have language, they have abstract thinking, they have names for themselves with their signature whistles.
We're just not nearly as special as we think we are. And I think that might ultimately be what motivates a lot of people who are obsessed [00:52:00] with things like qualia or stuff like that. And it's fine, understanding that we're not that special.
That's, I think that mediocrity there, like that's the ultimate mediocrity perhaps.
RAO: Yeah, we have a very mediocre flavor of consciousness, not a special flavor. But yeah, I think this is one of the threads of discourse most worth watching.
RAO: It gets to the point I think we were talking about in our prep call, on AI as a mirror, right? A lot of people have pointed out that with every major technology that's ever come along, people tend to see their own minds reflected in it.
But I think AI is indeed special, because it's the first technology that was literally inspired by a brain, not our own brain, but cats' brains, because the first neurons studied in the work that eventually became neural networks were cat neurons. So yes, these mirrors are special. If you look at, say, the era of Descartes, you'd find clockwork analogies for how the mind works; he had an idea that consciousness lives in the pineal gland, things like that. Clockwork is good for understanding certain aspects, so it's an okay mirror, but it's not a high-quality mirror.
You can understand some aspects of brain functioning that way. But AIs are exceptionally good mirrors. And yes, Minsky is, I think, one important thread of understanding what both AIs and brains are, and I think we are getting closer and closer to that model. The latest mixture-of-experts models, which have now become the standard architecture, already have a multiple-agent structure. It's still a somewhat hierarchical architecture, which I think is not quite close to the way the human brain works, but it's getting there. So I would say today's AIs, in the mixture-of-experts architectural paradigm, are somewhere between a Cartesian-theater approach, where the prompts come in and then get routed out to the mixture of experts and so forth, and a future [00:54:00] state where we perhaps won't need that.
It already has a multiple agent structure. It's still a little bit of a hierarchical, architecture that I think is not quite close to the way the human brain works, but it's getting there. So I would say today's AI's in the mixture of experts architectural paradigm. There's somewhere between a Cartesian theater approach where the prompts come in and then they get out to the mixture of experts and so forth. And future [00:54:00] state where we perhaps won't need that. Like when we have embodied ais and robots that have their own sensors and actuators, that loop is closed in a richer way. We'll drop even this slight fiction of a Cartesian theater that we have going on in current AI models. and it's getting there, like on the models front, a lot of people are getting inspired by, cortical columns, which are these structures within the brain that have would say Minsky, society of mind resonant, architecture.
RAO: So that's one direction to watch. Another interesting direction, and this is a little bit of a cultish line of research, but it still is interesting, is active inference: the work of Karl Friston and the people who study things like the free energy principle.
These are people building models of the brain not just as thinking and cognitive machinery, but as thermodynamic systems described in terms like entropy, free energy, flows of energy, and signals of information. There's lots of really interesting work happening along those lines. And I think exciting stuff is going to happen once we start putting LLMs and multimodal models of all sorts into robot bodies, adding reinforcement learning loops, and operationalizing some of these newer models of the free energy principle and active inference. What'll happen then, I think, is that we'll lose what's left of this Cartesian theater metaphor that's still lingering with us, and we'll get to AI embodiments that can mirror human thought much more richly.
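For reference, the variational free energy at the heart of Friston's active-inference program has a standard form. Roughly, minimizing it drives the agent's beliefs toward the true posterior while bounding the surprise of its observations:

```latex
% Variational free energy F for beliefs q(s) over hidden states s,
% given observations o and a generative model p(o, s):
F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\left[\,q(s)\,\|\,p(s \mid o)\,\right] - \ln p(o)
% Minimizing F pulls q(s) toward the posterior p(s|o) and bounds the
% "surprise" -ln p(o) from above.
```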
RAO: I think at some point they'll start diverging, because silicon and neurons are fundamentally different types of hardware. They've already started diverging with transformers; it's not quite the same way neurons work, but they will [00:56:00] be equivalently expressive. And by that I mean that even if they turn into completely alien intelligences that are nothing like us on the inside and think very differently, they will still work as the most expressive mirrors we have, where, when we interact with them, we will in fact be able to see ourselves in them. And yeah, to go back to your earlier point, you can have conversations about philosophy and spirituality, if you're so inclined, and have them be as good or better or more interesting than with other humans.
SHEFFIELD: Yeah.
RAO: You don't have to rank all these on the same scale. Talking to a great philosopher about your religious views might have one kind of value, but talking to the best AI model 20 years from now about your views on religion might have a completely different kind of value.
SHEFFIELD: Yeah.
RAO: And sitting around on your couch with your dog or cat, communing with another species that way, is a fundamentally different spiritual experience than talking to another human with whom you resonate a lot. I would say they're equally valuable. When I sit with my cat and we are watching TV together, it's a very different but equally valuable spiritual experience as talking to somebody who's sophisticated about those topics.
SHEFFIELD: Yeah.
SHEFFIELD: And yeah, ultimately, I think where we're headed with all this stuff is that within cognitive psychology there is a broad consensus that there are two types of reasoning. There is our somatic reasoning, our bodily based preservation instincts, things like that.
And then there's our abstract reasoning. And AI, because it hasn't been embodied yet, doesn't have that somatic reasoning. But that ultimately is the foundation of all meaning, because David Hume was right when he was talking about this hundreds of years ago.
Our reason is, and forever will be, the slave of the passions; this must be the case, and it should be. And so that's ultimately what meaning is: that for me as a physical being in limited spacetime, this is where I see things from and this is what matters to me. And so whatever the substrate is, whether biological or mechanical, that doesn't matter as much as whether that meaning can be created.
RAO: Yeah, and I think we are already getting there. Embodiment is farther along than people realize. Already people are sitting in self-driving cabs, and after five minutes the novelty wears off; that's what happened with me when I sat in a Waymo. But this is a beast with a lidar and a lot of sensors and loops, and it has a certain organic, embodied presence that is very different from interacting with ChatGPT.
It's a very different type of intelligence. And yeah, I think the philosophical track you're gesturing at is in fact the right one. Hume, I think, came a little too early; he's a useful thinker on these matters. But if you fast-forward a little, I think Gilbert Ryle is the philosopher to think about quite a bit these days, and to some extent Heidegger. I'm not a huge fan of Heidegger; there are lots of problems with his thinking. But some of his ideas about being-in-the-world, and how embodiment and what you're calling the somatic aspect of consciousness interplay, are really interesting. There's a line of Heideggerian thought I find interesting that comes out of his otherwise crappy philosophy of technology: he has this notion of presence-at-hand and readiness-to-hand, of how a tool integrates into your body. And a bunch of philosophers, Simondon and lately Han and others, have carried on the Heideggerian tradition, and they're starting to say very interesting things about how being and embodiment work in the modern world, though their views are colored by the generally dismal nature of [01:00:00] Heideggerian thought. So if you can pull that out and
SHEFFIELD: Yeah, get past the dismal bit.

RAO: Yeah. It's a good way to think of it.
And Gilbert Ryle, I think, is underrated. I have to go back and read more of how he came at these problems. But yeah, there's a lot of philosophical work to be done, and I think more and more exciting raw material will come along for the philosophically minded in the next 10 years. And the same thing I say to people who are trying too hard to optimize their AI experiences right now, I would say to philosophers as well. A lot of the AI hustlers, as I like to call them, very eagerly make up really complicated prompts and recipes and things like that. I'm lazy; I just use AI right out of the box. And my response to those people is: you're doing all these complicated things, and it'll be obsolete in three months, because the models themselves will become sophisticated enough to do it.
Initially, chain-of-thought prompting was something you had to do manually. Now the model just does it, right? So it's very valuable to be lazy these days. As a hands-on technology user, just be lazy, because anything you try to overdo, chances are the technology will do it better in six months.
And I think the same is true of philosophy. If you're very energetically trying to form conclusions about consciousness or qualia or something, I would recommend just being a lot more lazy, because six months from now you'll probably see three or four very intriguing new things happening in AI that will actually lead you to better conclusions.
So I think my current stance is: fuck around and find out. Experiment in a playful manner with the actual technology, but as much as possible, leave the questions open, without an urgent need to answer them. It's more important right now to collect the good questions than to find the good answers, because so much interesting data is coming our way that we'll keep finding better and better answers as the years go by.
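A concrete instance of the obsolescence Rao describes: the manual chain-of-thought incantation of 2022-era prompting versus simply asking a present-day reasoning model. The model names below are illustrative assumptions; the point is the disappearing incantation, not the specific models.

```python
# Illustration of prompt engineering going obsolete: what once required a
# manual chain-of-thought incantation is now default model behavior.
from openai import OpenAI

client = OpenAI()
question = "A bat and ball cost $1.10; the bat costs $1 more. Ball price?"

# Circa 2022: you had to ask for step-by-step reasoning explicitly.
manual = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": question + " Let's think step by step."}],
)

# Now: reasoning-capable models plan internally without the incantation.
automatic = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": question}],
)
print(automatic.choices[0].message.content)
```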
SHEFFIELD: Yeah.
RAO: Yeah.
SHEFFIELD: Yeah. [01:02:00] Ultimately, what matters is what you can do with it, not what slot you can put it in. I think that's what it comes down to. All right, we could probably keep going for a lot longer, but I don't want to do that to everybody. So for people who want to keep up with your writings and such, what's your recommendation for them?
RAO: I write a Substack newsletter called Contraptions, and I also run a program called the Summer of Protocols for the Ethereum Foundation. A lot of my work is in those two places. So look for Contraptions on Substack, or just search for the Summer of Protocols on Google.
You'll find a lot of what I'm up to these days. But beyond that, basic information about me is at kote.com; the basic 411 stuff is there.
SHEFFIELD: Okay, cool. All right. Thanks for being here.
RAO: Thanks for having me.
SHEFFIELD: All right, so that is the program for today. I appreciate everybody joining us for the conversation, and you can always get more if you go to Theory of Change show.
We have the video, audio, and transcript of all the episodes. And my thanks to everybody who is a paid subscribing member. Thanks very much for your support. We have options on both Patreon and Substack. If you are interested in becoming a paid subscriber, that is very helpful. Thank you very much. And if you can't afford to do that right now, I understand these are tough times for a lot of people.
But you can help out the show by sharing it with your friends and your family; that would be much appreciated as well. And if you are watching on YouTube, please do click the like and subscribe buttons so you can get notified whenever we post a new episode. All right, so that'll do it for this one.
Thanks a lot for joining me, and I'll see you next time. [01:04:00]