I have been forced recently to cover many statements by US AI Czar David Sacks.
Here I will do so again, for the third time in a month. I would much prefer to avoid this. In general, when people go on a binge of repeatedly making such inaccurate and inflammatory statements, in such a combative way, I ignore them.
Alas, under the circumstances of his attacks on Anthropic, I felt an obligation to engage once more. The All-In Podcast did indeed go almost all-in (they left at least one chip behind) to go after anyone worried about AI killing everyone or otherwise opposing the administration’s AI strategies, in ways that are often Obvious Nonsense.
To their credit, they also repeatedly agreed AI existential risk is real, which also makes this an opportunity to extend an olive branch. And some of the disagreements clearly stem from real confusions and disagreements, especially around them not feeling the AGI or superintelligence and thinking all of this really is about jobs and also market share.
If anyone involved wants to look for ways to work together, or simply wants to become less confused, I’m here. If not, I hope to be elsewhere.
I first covered many of his claims in Fighting Obvious Nonsense About AI Diffusion. Then I did my best to do a fully balanced look at the UAE-KSA chips deal, in America Makes AI Chip Diffusion Deal with UAE and KSA. As I said then, depending on details of the deal and other things we do not publicly know, it is possible that from the perspective of someone whose focus in AI is great power competition, this deal advanced American interests. The fact that many of Sacks’s arguments in favor of the deal were Obvious Nonsense, and many seemed to be in clearly bad faith, had to be addressed but did not mean the deal itself had to be an error.
This third post became necessary because of recent additional statements by Sacks on the All-In Podcast. Mostly they are not anything he has not said before, and are things he is likely to say many times again in the future, and they are largely once again Obvious Nonsense, so why cover them? Doesn’t Sacks rant his hallucinations about the supposed ‘AI Existential Risk Industrial Complex’ all the time?
Yes. Yes, he does. Mostly he falsely rants, and he repeats himself, and I ignore it.
What was different this time was the context.
The Trump Administration is attempting to pass what they call the ‘Big Beautiful Bill.’
Primarily this bill is a federal budget, almost none of which has to do with AI.
It also contains a provision that would impose a 10-year moratorium on state and local civil enforcement of almost any laws related to AI.
Many people, including myself and Anthropic CEO Dario Amodei, are not afraid to say that this is a bonkers crazy thing to do, and that perhaps we might want to take some modest actions on AI prior to it transforming the world rather than after.
Dario Amodei (CEO Anthropic): You can’t just step in front of the train and stop it. The only move that’s going to work is steering the train - steer it 10 degrees in a different direction from where it was going. That can be done. That’s possible, but we have to do it now.
Putting this provision in the BBB is also almost certainly a violation of the Byrd rule, but Congress chose to put it in anyway, likely as a form of ‘reconnaissance in force.’
It is not entirely clear that the administration even wants this moratorium in this form. Maybe yes, maybe no. But they very much do care about the BBB.
Thus, someone leaked to Semafor, and we got this article with the title ‘Anthropic emerges as an adversary to Trump’s big bill,’ claiming that Anthropic is lobbying against the BBB due to the AI provision, and this and other Anthropic actions are making Trump world very angry.
The other main trigger, Semafor reports, was Anthropic's hiring of two Biden AI staffers, Elizabeth Kelly and Tarun Chhabra, and Biden AI advisor Ben Buchanan, although Semafor notes that Anthropic also employs Republican-aligned policy staff, like Benjamin Merkel and Mary Croghan. Buchanan, the architect of the Biden diffusion rules, has (as one would expect) personally opposed the UAE-KSA deal and other reversals of Biden administration rules.
Bizarrely, the Trump administration also expressed annoyance at Anthropic CEO Dario Amodei warning about imminent loss of up to half of white collar jobs. I think that projection was too aggressive, but I am confident he believes it.
Semafor bizarrely frames these lobbying tactics as potentially savvy business moves?
Reed Albergotti: Opposing the bill preempting state AI laws may not be necessary anyway, because it faces high hurdles in both Congress and the courts.
In other words, Anthropic’s federal lobbying probably won’t make much of a difference. Influencing the White House on its executive orders would have been the best shot.
In the long run, though, maybe it’s a smart strategy. AI researchers may see Anthropic as more principled and it could help with recruiting. The Trump administration won’t be around forever and Anthropic may be better positioned when the next president takes office.
Yeah, look, no, obviously not. If you agree with Reed (and I do) that Anthropic can't have a substantial impact on the BBB proceedings, then this was clearly a misstep given the reaction. Why would anyone think 'antagonize the Trump administration' was good business for Anthropic? To help a bit with recruiting because they would look slightly 'more principled,' at the risk of facing a hostile White House?
Anthropic and the White House being enemies would help only OpenAI and China.
Anthropic’s lobbying of course is partly motivated by what they believe is good for America and humanity, and partly by what is good for Anthropic.
Anthropic has, up until recently, seemingly been pursuing a very deliberate insider strategy. They were careful not to antagonize anyone. They continue to downplay public statements about AI existential and catastrophic risks. They have offered only very careful and measured support for any AI regulations. Dario has very much publicly gotten behind and emphasized the 'need to beat China' framework. Not only does Anthropic not call for AI to 'slow down' or 'pause,' they call upon American AI to accelerate. On SB 1047, Anthropic called for and got major softening of the bill and then still refused to endorse it.
This has been extremely frustrating for those who are worried about AI killing everyone, many of whom think Anthropic should speak up far louder and make the case for what is actually necessary. They see Anthropic as having largely sold out on this and often other fronts. Because such an approach is very obviously good for Anthropic’s narrow business interests.
What was said on the All-In Podcast recently, and is being reiterated even more than usual on Sacks's Twitter, is a frankly rather unhinged attack against anyone and everyone Sacks dislikes in the AI space, in an attempt to associate all of it together into a supposed grand diabolical and conspiratorial 'AI Existential Risk Industrial Complex' that, quite frankly, does not exist.
What is different this time is primarily the targeting of Anthropic.
Presumably the message is, loud and clear: Back the hell off. Or else.
This post has five primary objectives.
Actually look concretely at the arguments being made in case they have a point.
Have a reference point for this event and for this general class of claims and arguments, explaining that they simply are not a description of reality and illustrating the spirit in which they are being offered to us, such that I can refer others back to this post, and link back to it in the future.
Extend an olive branch and offer of help to Sacks and those at the All-In Podcast.
Ensure that Anthropic understands the messages being sent here.
Provide a response to the podcast’s discussion on jobs in their Part 2.
For various reasons, I am, shall we say, writing this with the maximum amount of charity and politeness that I can bring myself to muster.
You should proceed to the rest of this post if and only if this post is relevant to you.
I used the YouTube transcript. This was four podcasts in one.
A rather misinformed and unhinged all-out attack on, and attempt to conflate through associations and confusions and vibes, some combination of: Anthropic; diffusion controls on advanced AI chips; anyone supporting diffusion controls; anyone opposing the UAE deal, especially if they are a China hawk; more generally anyone who has a different opinion on how best to beat China; anyone worried about AI job losses; anyone worried about AI existential risk (while admitting, to their credit, that AI is indeed an existential risk several times); those who cause AIs to create black George Washingtons; several distinct classes of people referred to as 'doomers'; EA; the Biden Administration; anyone previously employed by the Biden Administration, at least in AI; OpenPhil; Dustin Moskovitz; LinkedIn co-founder Reid Hoffman; woke agendas; a full-on dystopian government with absolute power; and a supposed plot to allocate all compute to a few chosen companies that was this close to taking over the world if Trump had lost.
This was then extended to Barack Obama via Twitter.
As presented this was presumably in large part a warning to Anthropic, that their recent activities have pissed people off more than they might realize, in ways I presume Anthropic did not intend.
A much better discussion about AI job losses and economic growth, in which new startups and new jobs and cheap goods will save us all and everything will be great and we'll all work fewer hours and be wealthier. I largely disagree.
It also makes clear that yes, by existential they do (often) mean the effect on jobs and they do not in any way feel or expect superintelligence or even AGI. Or at minimum, they often speak and think in ways that assume this.
A discussion of the ‘big beautiful bill’ also known as the budget, without reference to the attempted 10-year moratorium on any local or state enforcement of any civil law related to AI. Mostly here I just note key claims and attitudes. I thought a lot of the talk was confused but it’s not relevant to our interests here.
A discussion of other matters outside our scope. I won’t comment.
If those involved believe what they are saying in part one and what David Sacks often says on Twitter on related topics, then they are deeply, deeply misinformed and confused about many things. That would mean this is a great opportunity for us all to talk, learn and work together. We actually agree on quite a lot, and that ‘we’ extends also to many of the others they are attacking here.
I would be happy to talk to any combination of the All-In hosts, in public, in private, or on the podcast, to help clear all this up along with anything else they are curious about. We all benefit from that. I would love to do all this cooperatively. However differently we go about it, we all want all the good things, and there are some signs there is underlying appreciation here for the problems ahead.
However it ended up in the podcast - again, this could all be a big misunderstanding - there was a lot of Obvious Nonsense here, including a lot of zombie lies, clearly weaponized. They say quite a lot of things that are not, and frame things in ways that serve to instill implications that are not true, and equate things that should not be equated, and so on. I can’t pretend otherwise.
There’s also a profound failure to ‘feel the AGI’ and definitely a failure to feel the ASI (artificial superintelligence), or even to feel that others might truly feel it, which seems to be driving a lot of the disagreement.
There’s a conflation, that I believe is largely genuine, of any and all skepticism of technology under the umbrella term ‘Doomer.’ Someone worries about job loss? Doomer. Someone worries about existential risk (by which perhaps you mean the effect on jobs?)? Doomer. Someone worries about AI ethics? Doomer. Someone worries about climate change? Doesn’t come up, but also doomer, presumably.
But guys, seriously, if you actually believe all this, call me, let’s clear this up. I don’t know how you got this confused but we can fix it, even if we continue to disagree about important things too.
If you don’t believe it, of course, then stop saying it. And whether or not you intend to stop, you can call me anyway, let’s talk off the record and see if there’s anything to be done about all this.
The transcript mostly doesn’t make clear who is saying what, but also there don’t seem to be any real disagreements between the participants, so I’m going to use ‘they’ throughout.
I put a few of these notes into logical order rather than order in the transcript where it made more sense, but mostly this is chronological. I considered moving a few jobs-related things into the jobs section but decided not to do this.
As per my podcast standard, I will organize this as a series of bullet points. Anything in the main bullet point is my description of what was importantly said. Anything in the secondary sections is me responding to what was said.
They start off acknowledging employment concerns are real; they explicitly say people are concerned about ASI, and yes, they do mean the effect on jobs.
Then they go hard after ‘doomers,’ starting with Dario Amodei’s aggressive claims about white collar job losses, accusing him of hype.
Pot? Crypto-kettle?
I do actually think that particular claim was too aggressive, but if Dario is saying that it is because he believes it (and has confusions about diffusion, probably).
Later they say ‘Anthropic’s warnings coincide with key moments in their fundraising journey’ right after Anthropic recently closed their Series E and now is finally warning us about AI risks.
They are repeating the frankly zombie lie that Anthropic and OpenAI talk about AI existential risk or job loss as hype for fundraising, that it’s a ‘smart business strategy.’ That it is a ‘nefarious strategy.’ This is Obvious Nonsense. It is in obvious bad faith. OpenAI and Anthropic have in public been mostly actively downplaying existential risk concerns for a while now, in ways I know them not to believe. Stop it.
Then they claim broader AI risk concerns expressed at the first AI safety summit ‘have been discredited,’ while agreeing that the risks are real and simply haven’t arrived yet. Then they go on about an ‘agenda’ you should be ‘concerned about.’
They essentially go full Jevons Paradox on labor: the more we automate (without loss of generality) coding, the better the returns, so you’ll actually end up using more of it. They state this like it is fact, even in the context of multipliers like 20x productivity.
This claim seems obviously too strong; whether it holds at all depends on an elasticity condition, sketched below. I won’t reiterate my position on jobs.
These venture capitalists think that venture capitalists will always just create a lot more jobs than we lose even if e.g. all the truck drivers are out of work because profits, while investing in a bunch of one-person tech companies and cryptos.
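Since the Jevons-on-labor claim keeps coming up, here is a minimal sketch of the condition it quietly depends on, using constant-elasticity demand. The framing and numbers are mine, not the podcast's:

```latex
% Let productivity rise by a factor a (they float a = 20), so the effective
% price of coding output falls from p to p/a. With constant-elasticity
% demand Q(p) = k p^{-\varepsilon}, total coder labor demanded is
L \;=\; \frac{Q(p/a)}{a}
  \;=\; \frac{k\,(p/a)^{-\varepsilon}}{a}
  \;=\; k\,p^{-\varepsilon}\,a^{\varepsilon - 1}.
% L rises with a only when \varepsilon > 1. At a = 20, demand for coding
% output must expand exactly 20-fold merely to hold headcount constant,
% and more than 20-fold for headcount to grow.
```

So ‘we’ll actually end up using more coders’ is a contingent empirical claim about elasticity, not a law, and it gets harder to satisfy as the multiplier grows.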
‘Fear is a way of getting people into power and they’re going to create a new kind of control.’ I… I mean… given who is doing this podcast do I even have to say it?
They claim Effective Altruism ‘astroturfs.’
This is complete lying Obvious Nonsense, and rather rich coming from venture capitalists who engage in exactly this in defense of their books, with disingenuous corporate lobbying efforts from the likes of a16z and Meta massively outspending all worried people combined, lying their asses off outright on the regular, and also being in control in the White House.
Every survey says that Americans are indeed worried about AI (although it is low salience) and AI is unpopular.
They then outright accuse OpenPhil, EA in general, Anthropic and so on of being in a grand conspiracy seeking ‘global AI governance,’ then conflate this with basic compute governance, then conflate this with the overall Biden AI agenda and DEI.
Which again is Obvious Nonsense, at best such efforts are indifferent to DEI.
I assure everyone Anthropic does not care about a woke agenda or about DEI.
My experience with EA reflects this same attitude in almost all cases.
Then they claim this ‘led to woke AI like the black George Washington.’
I refer to what happened with that as The Gemini Incident.
The causal claim here is Obvious Nonsense. Google was being stupid and woke all on its own for well documented reasons, and you can be mad at Google’s employees about this if you want.
They make it sound as sinister as possible that Anthropic hired several ex-Biden AI policy people.
I get why this is a bad look from the All-In Podcast perspective.
However, what they are clearly implying here is not true, and Anthropic has hired people from both sides of the aisle as per Semafor, and is almost certainly simply snapping up talent that was available.
They accuse ‘EA’ or OpenPhil or even Anthropic of advocating ‘for a pause.’
This is unequivocally false for OP, for Anthropic and for the vast majority of EA efforts. Again, lies or deep deep confusion, Obvious Nonsense.
Anthropic CEO Dario Amodei has put out extensive essays about the need to beat China and all that. He is actively trying to build transformational AI.
A ‘pause’ would damage or destroy Anthropic and he thinks a pause would be obviously unwise right now. Which I agree with.
I am very confident the people making these claims know the claims are false.
They say ‘x-risk is not the only risk we have to beat China.’
And I agree! We all agree! Great that we can agree these are two important goals. Can we please stop with the claims that we don’t agree with this?
Dario also agrees very explicitly, out loud, in public, so much so it makes a lot of worried people and likely many of his employees highly uneasy and he’s accused of selling out.
David Sacks in particular has accused anyone who opposes his approach to ‘beating China’ of not caring about beating China. He either needs to understand that a lot of other people genuinely worried about China strongly disagree about the right way to beat China and think keeping compute out of the wrong hands is important here, or he needs to stop lying about this.
Someone estimates 30% chance China ‘wins the AI race’ but thinks existential risk is lower than 30%.
I disagree on both percentages, but yes, that is a position one might reasonably take. But then we can and must work on both, and also, while both of these outcomes are very bad, one is much, much worse than the other, and I hope we agree on which is which.
They say Claude kicks ass, great product.
I definitely agree with that.
The pull quote comes around (19:00) where they accuse everyone involved of being ‘funded by hardcore leftists’ and planning on some ‘Orwellian future where AI is controlled by the government’ that they ‘use to control all of us’ and using this to spread their ‘woke’ or ‘left-wing’ values.
Seriously no, stop.
I go into varying degrees of detail about this throughout this and other posts, but please, seriously, no, this is simply false on all counts.
It is true that there are other people, including people who were in the Biden administration, who on the margin will prioritize doing things that promote ‘left-wing’ values and ‘woke’ agendas. Those are different people.
They even claim that before Trump was elected they were on a path to ‘global compute governance’ restricted to 2-3 companies that then forced the AIs to be woke.
This is again all such complete Obvious Nonsense.
I believe this story originated with Marc Andreessen.
At best it is a huge willful misunderstanding of something that was said by someone in the Biden Administration.
It’s insane that they are still claiming this and harping on it, it makes it so hard to treat anything they say as if it maps to reality.
At this point I seriously can’t even with painting people advocating for ‘maybe we should figure out what is the best thing to do with our money and do that’ and ‘we should prevent China from getting access to our compute’ and ‘if we are going to make digital minds that are potentially smarter than us that will transform the world that might not be a safe thing to do and is going to require some regulations at some point’ as ‘we should dictate all the actions of everyone on Earth in some Orwellian government conspiracy for Woke World Domination these people would totally pull off if it wasn’t for Trump’ and seriously just stop.
They ask ‘should you fear government regulation or should you fear autocomplete.’
It is 2025. Are you still calling this ‘autocomplete’? You cannot be serious.
We agree this thing is going to be pivotal to the future and that it presents existential risk. What the hell, guys. You are making a mockery of yourselves.
I cannot emphasize enough that if you people could just please be normal on these fronts where we all want the same things then the people worried about AI killing everyone would mostly be happy to work together, and would largely be willing to overlook essentially everything else we disagree about.
I honestly don’t even know why these people think they need to be spending their time, effort and emotional energy on these kinds of attacks right now. They must really think that they have some sort of mysterious super powerful enemy here and it’s a mirage.
These are the same people pushing for their ‘big beautiful bill’ that includes a full pre-emption of any state or local regulations on AI (in a provision that presumably won’t survive the Byrd rule, but they’re trying anyway), with the intended federal action to fill that void being actually nothing.
Then they’re getting angry when people react as if that proposal is extreme and insane, and treat those opposed to it as being in an enemy camp.
They do some reiteration of their defenses of the UAE-KSA chips deal.
I’ve already said my piece on this extensively. Again, reasonable people can disagree on what is the best strategic approach, and reasonable people would recognize this.
David Sacks in particular continues to repeat a wide variety of highly unhinged claims about Effective Altruism. Here he includes Barack Obama in this grand conspiracy, then links to several even worse posts that are in transparently obvious bad faith.
David Sacks (2025, saying Obvious Nonsense): Republicans should understand that when Obama retweets hyperbolic and unproven claims about AI job loss, it’s not an accident, it’s part of an influence operation. The goal: to further “Global AI Governance,” a massive power grab by the bureaucratic state and globalist institutions.
The organizers: “Effective Altruist” billionaires with a long history of funding left-wing causes and Trump hatred. Of course, it’s fine to be concerned about a technology as transformational as AI, but if you repeat their claims uncritically, you may be falling for an astroturfed campaign by the “AI Existential Risk Industrial Complex.”
Claims about job loss (what I call They Took Our Jobs) are a mundane problem, calling for mundane solutions, and have nothing whatsoever to do with existential risk or ‘effective altruism,’ what are you even talking about. Is this because the article quotes Dario Amodei’s claims about job losses, therefore it is part of some grand ‘existential risk industrial complex’?
Seriously, do you understand how fully unhinged you sound to anyone with any knowledge of the situation?
David Sacks does not even disagree that we will face large scale job loss from AI, only about the speed and net impact. This same All-In Podcast talks about the possibility of large job losses in Part 2. Everyone who talks about this on the podcast seems to agree that massive job losses via AI automation are indeed coming, except they say This Is Good, Actually because technology will always also create more jobs to replace them. The disagreement here is highly reasonable and is mainly talking price.
Indeed, they talk about a ‘tough job market for new grads’ and warn that if you don’t embrace the AI tools, you’ll be left behind and won’t find work. That’s basically the same claim as Kevin Roose is making.
What did Barack Obama do and say? He retweeted a New York Times article by Kevin Roose that talks about job losses and illustrates some signs of it, including reporting the newsworthy statement from Dario Amodei, and then Obama made this statement:
Barack Obama: Now’s the time for public discussions about how to maximize the benefits and limit the harms of this powerful new technology.
Do you disagree with Obama’s statement, Sacks? Do you think it insufficiently expresses the need to provide miniature American flags for others and be twirling, always twirling towards freedom? Obama’s statement is essentially content-free.
How do these people continue to claim that this all-powerful ‘Effective Altruism’ was somehow the astroturfing lobbyist group and they are the rogue resistance, when the AI industry has more lobbyists in Washington and Brussels than the fossil fuel industry and the tobacco industry combined? When almost all of that industry lobbying, including from OpenAI, Google, Meta and a16z, is exactly what you would expect, opposition to regulations and attempts to get their bag of subsidies.
What is most frustrating is that David Sacks very clearly understands that AGI presents an existential risk. AI existential risk is even explicitly affirmed multiple times during this podcast!
He has been very clear on this in the past, as in, for example:
David Sacks (2024, saying helpful things): AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.
…
I’m all in favor of accelerating technological progress, but there is something unsettling about the way OpenAI explicitly declares its mission to be the creation of AGI.
Despite this, Sacks seems to have decided that reiterating these bizarre conspiracy theories and unhinged attacks is a good strategy for whatever his goals might be.
Here is another recent absurdity that got forcibly put in front of me via Tyler Cowen:
David Sacks (June 2025, saying untrue things): Nobody was caught more off guard by the DeepSeek moment than the AI Doomers.
They had been claiming:
— that the U.S. was years ahead in AI;
— that PRC leadership didn’t care much about AI;
— that China would prioritize stability over disruption; and
— that if the U.S. slowed down AI development, China would slow down too.
All of this turned out to be profoundly wrong. Now, ironically, many of the Doomers — who prior to DeepSeek had tried to ban American models now currently in use — are trying to rebrand as “China Hawks.” If they had their way, the U.S. would have already lost the AI race!
David Sacks has to know exactly what he is doing here. This is in obvious bad faith. At best, this is the tactic of ‘take a large group of people, and treat the entire group as saying anything that its most extreme member once said, and state it in the most negative way possible.’
To state the obvious, going point by point, how false all of this is:
The USA remains ahead in AI, but yes China has closed this gap somewhat, as one would broadly expect, at least in terms of fast following. The impact of the DeepSeek moment was largely that various people, including Sacks, totally blew what happened out of proportion. Some of that was obvious at the time, some only became clear in retrospect. But the rhetoric is full on ‘missile gap.’ Also, this is like saying ‘you claimed Alice was two miles ahead of Bob, but then Bob caught up to Alice, so you were lying.’ That is not how anything works.
The PRC leadership was, as far as I can tell, highly surprised by DeepSeek. They were indeed far more caught off guard than the ‘AI Doomers,’ many of whom had already been following DeepSeek and had noticed v3 and expected this. The PRC then noticed, and yes they now care about AI more, but for a long time they very much did not appreciate what was going on, what are you even talking about.
China seems to have favored stability over disruption far more than America has in this case, they are absolutely caring about stability in the ways China cares about, and this is not what a China that was actually AGI-pilled would look like. China is happy to ‘disrupt’ in places where what they are disrupting is us. Sure.
This is a complete non sequitur. This claims that ‘we’ said [X] → [Y], where [X] is ‘America slows down’ and [Y] is ‘China slows down.’ [X] did not happen! At all! So how can you possibly say that [X]→[Y] turned out to be profoundly wrong? You have absolutely no idea. I also note that we almost always didn’t even make this claim, that X→Y; we said it would be good if both X and Y were true, and we should try to make that happen. For example, I did not say ‘If we slow down, China slows down.’ I said things of the form ‘it would be good to open a dialogue about whether, if we slowed down, China would also slow down, because we haven’t even tried that yet.’
The reference to ‘attempts to ban models currently in use’ as if this applies broadly to the group in question, rather than to a very small number of people who were widely criticized at the time, including repeatedly by myself very very explicitly, for overreach because of this exact request.
The repetition of the false claim that there is an attempted ‘rebrand as China Hawks’ which I have discussed previously, and then the claim that these are the same people who tried to ban current models, which they aren’t.
I sincerely wish that David Sacks would stop. I do not expect him to stop. Given that I do not expect him to stop, I sincerely wish that I can go back to avoiding responding when he continues.
The discussion of the future of jobs and employment in Part 2 was much better.
There seemed to be a problem with scale throughout Part 2.
This all seems to take place in a tech and startup bubble where everyone can be founding a new startup or deeply steeping themselves in AI tools to get one of those cool new AI jobs.
This is great advice for podcast listeners in terms of career development, but it simply doesn’t scale the way they want it to, nor does it then broaden out as fast or far in terms of jobs as they pitch it as doing.
There’s ‘what can a bright young listener to this podcast who is into tech and startups and is situationally aware do’ and ‘what is going to happen to a typical person.’ You cannot, in fact, successfully tell most people to ‘learn to code’ by adding in the word vibe.
They assert ‘technology always means more jobs,’ and see concerns about job loss as largely looking at union jobs or those of particular groups like truck drivers that Biden cares about or coal miners that Trump cares about.
I think the worries are mostly far more general. I find it interesting they focus primarily on the non-LLM job loss from self-driving rather than the wider things coming.
I see union jobs as likely far more protected, especially government protected unions, as unions have leverage to prevent diffusion, until they are disrupted by non-union rivals, and similar for jobs protected by license regimes.
They point out that we will all be richer and the benefits will come quickly, not only the job losses.
True, although it will likely be cold comfort to many during the transition, the gains won’t flow through in ease of making ends meet the way one might hope unless we make that happen.
They emphasize that costs of goods will fall.
I think this is largely very right and yes people are underestimating this, but goods we can make without regulatory barriers are not where people are struggling and are a remarkably low percentage of costs.
In the past, getting cheaper food and clothing was a huge deal because that was 50%+ of expenses and it shrunk dramatically, which is great.
But now food is about 10% and clothing is trivial, the prices can’t go that much lower, and labor income might be falling quite a lot if there’s enough competition for jobs.
If the price of food is cut in half, that is great, and I do agree it would be good to automate food prep (and truck driving and so on) when we can, but this actually doesn’t save all that much money (quick arithmetic below).
I think a lot of people’s focus on the price of food is essentially generational, historical and evolutionary memory of different times when food costs were central to survival.
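For concreteness, here is the back-of-envelope version of the point above, using the rough budget shares already cited in the text (illustrative numbers, holding quantities fixed):

```latex
% If a category with budget share s has its price cut by a fraction d,
% total expenses fall by roughly s * d.
\underbrace{0.10}_{\text{food share today}} \times \underbrace{0.50}_{\text{price cut}} = 0.05
\qquad\text{versus}\qquad
\underbrace{0.50}_{\text{share historically}} \times 0.50 = 0.25.
```

A 5% drop in the cost of living is nice; a 25% drop changed lives. Same price cut, very different stakes.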
They correctly ask the right question, what allows for the same lifestyle.
In the past, the main constraint on lifestyle was ability to purchase goods, so cutting goods costs via increased productivity means you need to work less to match lifestyle.
But now it is mostly services, and the goods with restricted supply, and also we are ratcheting up what counts as the baseline lifestyle and what is the required basket of goods.
The key question about lifestyle isn’t quality of goods. It’s about quality of life, it’s about ability to raise a family, as I will soon discuss in ‘Is Life Getting Harder?’
Their model seems to boil down to something not that different from ‘startups are magic’ or ‘lump of income and labor fallacy?’ As in, if you have a bunch of wealth and investment then of course that will create tons of jobs through new startups and investment.
But in a rapidly automating world, especially one in which the best startups will often be disruptors via automation, we’re talking about the need for tens of millions of new jobs over the course of a few years, and then those jobs start getting automated too, and AI keeps improving as this happens. If you think there really are this many ‘shadow jobs’ waiting for us I want a much more concrete model of how that can be true.
Note that if you think we don’t need more gears here, then think about why you think that is true here and where else that might apply.
Reminder: My expectation is that for a while unemployment won’t change that much, although there will be some extra unemployment due to transitional effects, until we exhaust the ‘shadow jobs’ that previously weren’t worth hiring people for, but then this will run out - there is a lot of ruin in the job market but not forever.
Prediction that we will ‘take our profits’ in 30 hour work weeks, speculation about 10% GDP growth if we have 10%-20% white collar job loss (one time?!). None of this seems coherent, other than a general ‘we will all be rich and trends of richness continue’ intuition.
Note the lack of ambition here. If only 20% of current white collar jobs or tasks get automated over a medium term, then that isn’t that big, as the rough arithmetic below suggests. There’s no reason to think that causes persistent 10% growth.
I do think there is a good chance of persistent 10%+ growth but if so it will involve far more transformational changes.
I also don’t see why we should expect people to ‘take our profits’ in shorter work weeks unless we use government to essentially force this.
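To gesture at the scale involved, here is the rough arithmetic referenced above. The shares are my illustrative assumptions, not anything from the podcast:

```latex
% Assume labor share of GDP ~ 0.6 and white-collar share of labor
% income ~ 0.6. Automating 20% of white-collar work with output held
% fixed frees at most
0.6 \times 0.6 \times 0.2 \;\approx\; 0.07
\quad\text{(a one-time level gain of roughly 7\% of GDP).}
% Spread over, say, five years, that is ~1.4 points of extra annual
% growth, and then it is done.
```

That is a nice boost, not persistent 10% growth; to get the latter you need the far more transformational changes.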
‘People say jobs are going to go away but I am on the ground and I see more startups than ever and they’re making a million dollars per employee.’
The statement is true, and I buy that the startup world is going great, but in terms of responding to the threat of massive job losses? These people seem to be in a bubble. Do they even hear themselves? Can they imagine a Democratic politician talking like that in this context?
Do they understand the relative scales of these employment opportunities and economic impacts? ‘The ground’ should not mean the startup world in San Francisco.
They talk about how it is hard to automate all of a customer service job because some parts are hard for AI.
This is a distinct lack of thinking ahead.
In general it does not seem like this discussion is baking in future AI progress, and also still leaves room for most such jobs to go away anyway.
They say yes if we have 20% job loss government will have to step in but it is a ‘total power grab’ to demand the government ‘act now’ about potential future unemployment.
What is this, some word salad specter of Andrew Yang? How does this relate to anything that anyone is seriously asking for?
The thing about unemployment is that you can indeed respond after it happens. I strongly agree that we should wait and see before doing anything major about this, but also I don’t see serious calls to do otherwise.
Based on various statements where they seem to conflate the two:
I think that by existential risk they might literally mean the effect on jobs? No, seriously, literally, they think it means the effect on jobs? Or they are at least confused here? I can’t make sense of this discussion any other way. Not in a bad faith way, just it seems like they’re legitimately deeply confused about this.
They say diffusion rules wouldn’t solve existential risk but they’re open to suggestions?
I mean, no, they won’t do that on their own. The primary goal of diffusion rules is to hold back China, so we can both win the race and give ourselves enough freedom of action (and inaction) to have a chance to find a solution to existential risk. Why is this so confusing?
And what is this doing in the middle of a discussion about job loss and economic growth rates?
More talk about ‘glorified autocomplete.’
You can stop any time, guys.
(36:52) ‘tough job market for new grads in the established organizations and so what should new grads do they should probably, steep themselves in the tools and go to younger companies or start a company i think that's the only solution for them.’
This is great advice, but I don’t think they understand how grim that is. The vast majority of people are not going to be able to do a startup. I wish this were possible, and it’s good advice for their audience, sure, but it is innumerate to suggest it for the population as a whole.
So the only thing, as they say, that young people can do in this type of future is deeply steep themselves in these AI tools to outcompete those that don’t. But obviously only a small portion of such people can go that route at once; this works exactly because everyone else mostly won’t do it. The vast majority of grads will be screwed on an epic level.
This is the same as the whole ‘learn to code’ message that, shall we say, did not win the votes of the coal miners. Yes, any individual sufficiently capable person could learn to code, but not everyone can, and there were never that many slots. Similarly, for a long time ‘learn to play poker and grind it out’ has been a very viable path for anyone who has the discipline, but very obviously that is not a solution at scale because it would stop working (also it doesn’t produce anything).
Again speculation that ‘the people who benefit the most’ are new coders willing to embrace the tech.
I mean, tell that to the current SWE market, this is not at all obvious, but yes, in an AI-is-super-productive world the handful of people who most embrace this opportunity will do well. They’re right that the people who embrace the tools will beat the people who push back, okay, sure.
I will never get the Python love they also express here, or the hate for OOP. I really wish we weren’t so foolish as to build the AI future on Python, but here we are.
(40:57) Again the conflation where blaming a layoff on AI is a ‘doomer story.’
This is, once again, a distinct and very different concern. Both are real.
So they’re confirming that by ‘doomer’ they often simply mean someone who by existential risk does mean the effect on jobs.
That’s a mostly different group of people, and that’s not how the term is typically used, and it’s clear that they’re either being fooled by the conflation or using it strategically or both.
Pick a lane, I’m fine with either, but this trying to equate both camps to use each to attack the other? No.
They insist that when layoffs happen so far they’re not due to AI.
Okay, I mean, the companies do often say otherwise, and you agree AI is making us all a lot more productive. But maybe they’re all lying, and everyone only cuts management now? Except they then also say management jobs aren’t being eliminated due to AI yet.
Alternatively they are also telling the ‘the layoffs are due to AI because the people who won’t embrace AI now need to be fired and this is good, actually’ story, which is also plausible but you can’t have it both ways.
This all sounds like throwing everything at the wall that sounds like ‘AI is good’ and seeing what sticks.
This is perhaps related to throwing everything that sounds like ‘AI is bad’ into a pot and claiming all of it is the same people in a grand conspiracy?
As I understand them: The AI race is an infinite race with no finish line but it is still a race to see who is stronger and maybe USA wins maybe China wins maybe it’s a tie maybe ‘open source wins’ and nuclear deterrence led to peace and was good actually but this is better because it’s a system of productivity not destruction and everyone will have to compete vigorously but we have to watch out for something like 5G where Huawei ‘weren’t worried about diffusion’ they wanted to get their tech out, the race is about market share and whose technology people are using, and the pace of improvement is ‘holy shit.’
I covered a (more coherent but logically identical) version of this when I previously covered Sacks. This misses what matters, the ‘AI race’ is not about market share, and this reflects, like the rest of this podcast, a profound failure to ‘feel the AGI’ and certainly to ‘feel the ASI.’
It seems worth a few notes while I am here. I will divide the ‘BBB’ into two things.
The attempted 10-year moratorium on enforcement of any AI anything on the local or state level whatsoever. This is, in my humble opinion and also that of Anthropic’s CEO, deeply stupid, bonkers crazy, a massive overreach, an ‘of course you know this means war’ combined with ‘no one could have predicted a break in the levees’ level move. Also an obvious violation of the Byrd rule when placed within the budget, although sadly not in practice a violation of the 10th Amendment.
Everything else in the bill, which is what they discuss here. The most important note is that they only talk about the rest of the BBB without the moratorium.
I am not an expert on Congressional budget procedure or different types of appropriations but it seemed like no one here was one either, and the resulting discussion seemed like it would benefit from someone who understands how any of this works.
They are very keen to blame anything and everything they can on Biden, the rest on Congress, and nothing on Trump.
They seem very excited by making the DOGE cuts permanent for reasons that are not explained.
I notice that there is a prediction that this administration will balance the Federal budget. Are we taking wagers on that? There’s a lot of talk of the need to get the deficit down, and they blame the bill not doing this on Congress, essentially.
It seems this expectation is based on creating lots of economic growth, largely via AI. Very large gains from AI does seem to me to be the only sane way we might balance the budget any time soon. I agree that there should be lots of emphasis on GDP growth. They are very confident, it seems, that lower taxes will pay for themselves and spur lots of growth, and they think the CBO is dumb and simplistic.
There’s a concrete prediction for a very hot Q2 GDP print, 3%-4%. I hope it happens. It seems they generally think the economy will do better than predicted, largely due to AI but also I think due to Trump Is Magic Economy Catnip?
They talk about the need for more energy production, and some details are discussed on timing and sizing. I agree, and would be doing vastly more to move projects forward, but from what I have seen the BBB does not seem to be net positive on this front. They are right to emphasize energy, but from what I can tell it is not cashing out in terms of much action to create new energy production.
I don’t have anything to say about Part 4, especially given it is out of my scope here.
I hope that Anthropic understands the reaction that they seem to be causing, and chooses wisely how to navigate given this. Given how often Sacks makes similar claims and how much we all have learned to tune those claims out most of the time, it would be easy to miss that something important has changed there.
I presume that David Sacks will continue to double down on this rhetoric, as will many others who have chosen to go down similar rhetorical paths. I expect them to continue employing these Obvious Nonsense vibe-based strategies and accusations of grand conspiracies indefinitely, without regard to whether they map onto reality.
I expect it to be part of a deliberate strategy to brand anyone opposing them, in the style of a certain kind of politics, as long as such styles are ascendant. Notice when someone makes or amplifies such claims. Update on that person accordingly.
I would love to be wrong about that. I do see signs that, underneath it all, something better might indeed be possible. But assuming I’m not wrong, it is what it is.
My realistic aspiration is to not have to keep having that conversation this way, and in particular to not have to parse claims from such arguments as if they were attempting to be words that have meaning, that are truthful, or that map onto physical reality. It is not fun for anyone, and there are so many other important things to do.
If they want to have a different kind of conversation, I would welcome that.