Questions about AI 2025

While I spend 99% of my time thinking about hardware, synthetic fuels, and the solar industrial revolution, the progress in AI has not gone unnoticed. I’m writing this post not to share any particular insights but instead to record the questions I think are interesting and how I’m thinking about them as of today. 

What will the impact of AGI/ASI be on economic growth? 

Dwarkesh Patel, Eli Dourado, Noah Smith, and Tyler Cowen, among others, have recently discussed potential impacts of AGI ranging from not much (AGI will be slowed down by the same things as everything else) to 50% GDP growth (armies of humanoid robots systematically turning the crust into Capital). 

A model I’ve long been interested in is the Corporation as a stand-in for AGI. We need some non-human autonomous legal and economic entity. A corporation is just that. The Fortune 500 are already non-human super-intelligences. They operate 24/7/365 according to inscrutable internal logic, routinely execute feats of production unthinkable for any human or other biological organism, often outlive humans, can exist in multiple places at once, and so on. 

To take this analogy further, you could even imagine spinning up a few million headless Nevada LLCs, assigning each to some agentic AGI running in the cloud somewhere, and turning them loose. Years ago I registered feralrobotics.com to explore the idea of mass producing ambient solar-powered quadcopters with basic sensors and an Internet connection. But as Paul Graham says, the robots live in a data center for efficiency. 

There is one other interesting angle to this question when it comes to speculating about economic impact. Let’s imagine a corporation with a bunch of internal AI functionality that is able to perform at a higher level than fully human corporations and, as a result, compound growth at a higher rate. As an outside observer, how would this differ from a handful of existing extreme outlier companies that can already do things other companies have proven unable to do? 

Take for example SpaceX. Over the last 15 years, dozens of competing launch companies have been founded, often by SpaceX veterans who have already learned the hard lessons, often with significantly more money and a friendlier regulatory environment than SpaceX, and they’ve pretty much all failed. SpaceX is, culturally, often a pretty chaotic place to work, and yet they’ve landed the Falcon 9 booster over 400 times. 

I’m not saying Elon is ASI (though he’s obviously SI, and many peer CEOs attribute his success to this as well as persistence and pain tolerance) but if he were, what difference would it make? Elon’s biographer Isaacson has speculated about succession planning at SpaceX, but maybe that’s what xAI is training Grok to do?

If Grok can simulate Elon, and the rest of the F500 uses Grok to run their organizations, and as a result they achieve SpaceX levels of productivity and innovation, I can’t imagine it wouldn’t at least double growth. But while Tesla and SpaceX have succeeded thus far, it has taken 20+ years. Coordinating large numbers of people has a steep cost in efficiency. 

Can someone please write a book that covers the organizational aspects of the Elon Industrial Complex?

To what extent do existing organizational outliers model what ASI can achieve in our economy?

Will ASI be able to help us formalize an Elo score for hardcore technical management?

What are the asymptotic properties of human and machine intelligence as a function of additional compute time?

Humans seem to be much more efficient in training, implying that whatever humans do that is like backpropagation is at least one complexity class faster than pure backprop. That is, O(N log N) vs O(N^2), or maybe even better. 
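
To get a feel for how much one complexity class buys, here is a toy comparison. The O(N log N) vs O(N^2) framing is only the illustrative guess above, not a measured property of either brains or backprop, and the values of N are arbitrary.

```python
import math

# Toy comparison only: how much cheaper is an O(N log N) learner than an
# O(N^2) learner at a few (arbitrary) training-set sizes N?
for n in (10**6, 10**9, 10**12):
    n_log_n = n * math.log2(n)
    n_squared = n ** 2
    print(f"N = {n:.0e}: N^2 is ~{n_squared / n_log_n:,.0f}x more work than N log N")
```

Even at these scales the gap spans roughly four to ten orders of magnitude, which is the kind of headroom the hypothesis would imply.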

But when it comes to inference, humans have different modes of thought over different time scales. Most of the time, we make decisions intuitively and almost instantly, with rationalizations arriving a second later. With collaboration, due consideration, or formal reasoning, we can sometimes reach better decisions. With a pen and paper, we can execute problem-solving algorithms in physics or math or poetry, extending the capabilities of our natural hardware to solve tougher problems. And over a long enough time scale, we can generate blog posts and books, both of which can embody compressed intelligence and a much higher signal-to-noise ratio than an average conversation. 

Similarly, LLMs that have exhausted the training set can still achieve better performance by running so-called Chain of Thought algorithms. Still an area of active research, these enable incrementally better results, albeit at the cost of significantly more compute time. Currently, it’s not clear that results continue to improve beyond a fairly basic level, with issues around context and coherence undermining performance.
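
As a concrete, deliberately generic sketch of that trade-off: the same question asked directly and with a chain-of-thought style prompt. The `ask()` function below is a placeholder, not any particular vendor’s API; the only point is that the reasoning variant buys whatever accuracy it gains by generating, and paying for, many more tokens.

```python
# Hypothetical sketch: "ask" stands in for whatever LLM API you use.
def ask(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

question = "A brick is thrown straight up at 20 m/s. How high does it rise?"

direct_prompt = question + "\nAnswer with a single number in meters."

cot_prompt = (
    question
    + "\nThink step by step: write the relevant kinematics, solve"
      " symbolically, check units, then substitute numbers and answer."
)

# Inference cost scales roughly with tokens generated, so the chain-of-thought
# answer may or may not be better, but it is reliably more expensive.
```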

The question, therefore, is something like “What is the asymptotic performance of human and artificial cognition as a function of flops, time, cycles, watts, or some other extensive measure?” Note that I’m less interested in an absolute comparison of human and AI intelligence than in how each scales with effort.
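
One way to write the question down, in made-up notation rather than anything standard: let P_h(C) and P_m(C) be human and machine performance on some fixed family of problems as a function of effort C (seconds, flops, watts, tokens).

```latex
% Made-up notation for the question above, not a claim about actual curves.
% P_h(C), P_m(C): human and machine performance as a function of effort C.
%
% The interesting comparison is not P_m(C_0) vs P_h(C_0) at a fixed budget
% C_0, but the shape of the curves as C grows: do they saturate at different
% ceilings, and how fast do they get there? For example, under a power-law
% approach to a ceiling,
\[
    P(C) \;\approx\; P_\infty - a\,C^{-b},
\]
% the question becomes whether the ceiling P_\infty and the exponent b differ
% between humans with pen and paper and LLMs running chain of thought.
```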

My working hypothesis is that human cognition improves markedly once pen is put to paper, and in some cases can continue to improve with extended writing (but note many prominent failures). In contrast, the leading LLMs seem to achieve an incremental improvement with CoT and then flatline. For example, for the sorts of questions I’m obsessed with (physics first principles stuff) the LLMs give bad answers in general. With CoT, they take a lot longer to give an answer that is bad in a more obscure way, but the answer is usually not much closer to being correct. Sometimes when it is, it seems that it might have arrived there by exhaustion rather than the machine equivalent of what we would call insight or inspiration. 

I wanted more insight into this question, so I asked GPT o3 Deep Research, but it mostly agreed with me.

Wow, AI is so bad at physics. What will it take to fix it?

I have a project to convert my IPhO notes into a beautiful and short textbook on the basics of first principles physics. 

Most of what we know about physics can be boiled down into about 50 pages of notes. This compression property of the hard sciences seems to leave the LLMs at a profound disadvantage, as their training requires the consumption of reams of material. Yet the actual step-by-step process of physics problem solving is not that hard. I learned it in high school, at a time when I couldn’t have written a 500-word essay worthy of ChatGPT if my life depended on it. 

And yet, the AIs still really suck at physics. What’s it going to take?

Will the economic bottleneck be managers or foot soldiers?

Whichever it turns out to be, which model is better: the Anthropic Claude model of producing competent software and computer engineers who need skilled human managers, or the hypothetical Grok model of producing clones of Elon who can extend his reach and grasp into other parts of the economy? Who will commoditize whom? Which side of the API will humans end up on? 

In the AI as cognitive prosthesis model, what percentage of the gains will accrue to the ends of the bell curve versus the middle? 

It seems highly likely to me that AI tools will function as cognitive prostheses, improving productivity, life outcomes, and so on for people anywhere on the spectrum of human capability. But, like the previous question, it’s not clear where the benefits will accrue most, either in an absolute sense or relative to basic human needs, particularly rivalrous ones.

For example, a world where AI cleanly doubles GDP per capita is a good, if boring, outcome.

A world where 99% of the productivity gains accrue to the 1% most productive people is hardly unlikely – it could look like the previous scenario, except with additional exceptional wealth creation at the very top. Everyone is much better off. Our scientific and technical progress accelerates beyond all previous limits. 

Consumption is likely to increase along with productivity growth, creating better lives for billions. But if consumption is proportional to wealth and the good is rivalrous, we could see exceptional productivity in tiny corners of the economy bid up prices for everyone. We’ve already seen this happen in housing, where the good fortune of better health created a politically powerful inverted demography that also emergently crushed the production of enough new housing to keep up with that same growth. As a result, San Francisco, among hundreds of other cities, has become too unaffordable to function as a real city with a range of different professions, not just elite-tier software developers.

If AI 100xes our GDP and 100xes housing prices, or gold prices, or food prices, we could end up in a bizarre situation where everyone is far richer than they were before, and yet some set of necessities are still unaffordably expensive. 
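
A toy version of that arithmetic, with invented numbers, just to make the failure mode concrete:

```python
# Invented numbers for illustration: if incomes and the price of a rivalrous
# necessity both scale by the same factor, being "richer" buys no more of it.
income = 100_000   # $/yr today (illustrative)
rent = 60_000      # $/yr today for a scarce unit (illustrative)
scale = 100        # the hypothetical 100x from AI

print(f"today:      rent takes {rent / income:.0%} of income")
print(f"after 100x: rent takes {(rent * scale) / (income * scale):.0%} of income")
```

Nominal wealth is 100x higher in the second line, but because the good is rivalrous and supply-constrained, the share of income it consumes is unchanged, and it gets worse if prices are bid up faster than incomes.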

No kind of AI-funded UBI can solve this problem. Only technological and regulatory innovation that ensures everything people and AIs want can be made in greater abundance, and therefore more cheaply, over time, can. We should start now by legalizing housing construction, obviously! 

In the limit, this could be important. It seems likely that AI economic output per watt of power consumed will far surpass even the most productive humans, and that AI output per acre of solar photovoltaic land will surpass farming. What, then, will humans eat?
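
For a rough sense of why that land competition bites, here is a back-of-envelope comparison of raw energy yield per acre. The PV and corn figures are ballpark assumptions, not careful estimates, and the gap in economic value per acre would be larger still.

```python
# Back-of-envelope with ballpark assumed figures, not careful estimates:
# raw energy per acre per year from solar PV vs. from a corn field.

# Solar: assume ~0.2 MW of panels per acre at ~25% capacity factor.
pv_avg_kw = 200 * 0.25
pv_kwh_per_year = pv_avg_kw * 8760          # ~440,000 kWh/acre-yr

# Corn: assume ~15 million food calories (kcal) per acre per year.
corn_kwh_per_year = 15e6 * 1.163 / 1000     # ~17,000 kWh/acre-yr of food energy

print(f"PV:    ~{pv_kwh_per_year:,.0f} kWh/acre-yr")
print(f"Corn:  ~{corn_kwh_per_year:,.0f} kWh/acre-yr")
print(f"Ratio: ~{pv_kwh_per_year / corn_kwh_per_year:.0f}x")
```

Even with these crude numbers the energy gap is on the order of 25x before counting any difference in the economic value of a kWh of compute versus a kWh of corn.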
