Dystopian AI Predictions Numb Us. AI Expert Sharat Chikkerur’s Insights Will Help You Switch from Freeze or Flight to Creative Fight.
In times of optimization, it is so refreshing when serendipity serves you a treat. That is what happened to me last May when I interviewed Sharat Chikkerur, ex-Google DeepMind engineer, MIT PhD in Computer Science, Carnegie Mellon University MBA, father of an 11-year-old girl, and Ric Calvillo’s co-founder at Jamb.ai.
I was in Cambridge for my 20th Reunion at the Harvard Kennedy School (HKS), where I had graduated with a master’s degree in international development and public administration. And also where, 21 years ago, I had not been allowed to take a course called “Human Nature” for credit, jointly offered by the Law School and the FAS Department of Psychology and taught by Steven Pinker and Roberto Mangabeira Unger. The reason? A committee deemed it “not relevant for a career in public policy.” I stubbornly audited it and went on to spend the following 21 years learning about neuroscience, physics, evolutionary biology, digital technologies, etc., much more than about economics or policy.
Wonderfully, though, during a panel at the Reunion, an HKS professor explained that a course called “Being Human” was soon to be launched at the school. Twenty years later, there is a clearer understanding that the depths of our existence, choices, businesses, and policies are best illuminated by diverse disciplines.
Now that digital technologies have become so tangibly pervasive in our daily lives, and that phrases about AI replacing or surpassing humans are an everyday occurrence, I have been going back to my learnings and reflecting on what we are going through. The following conversation is an output of this process.
As I said, it was a serendipitous encounter. I was planning to interview Ric, whom I had met back when I was a grad student. But Ric had a busy morning and sent me a message that read “My partner and CTO Sharat will meet you first. I’ll join when I get there” and then another message that added “Very knowledgeable”.
Very knowledgeable indeed he was. Knowledgeable while also profoundly humanistic. Sharat spoke as the phenomenal AI insider he is, generously sharing practical, frank, and inspiring knowledge as well as his opinions, doubts, and fears.
Here goes a slightly edited version of our chat, so you can enjoy Sharat’s insights, laugh at some silly questions I asked and thoughts I had, and follow the many topics we touched on, which came together beautifully.
SHARAT & ANA’S CONVERSATION
On Changes in Human Activity in the Age of AI
Ana: Can we start with the big changes in human activity in the age of AI?
Sharat: I am curious about what your interest in AI is, but the way I see it is as follows. In the industrial revolution the changes happened in physical labor, which got automated. This applied to people working with their hands, doing repetitive grunt work, which got replaced first by the steam engine and then by the electric motor; the key principle was replacing human labor. Things that would take a lot of time manually could be done quickly with machines. That principle has stayed; nothing has really changed there, only the way you power things and the degree to which they become automatable.
What is happening with AI is that we are getting to the point where we can automate mental labor. So if your work involves reading documents and typing ten fields into a form based on them, or reconciling multiple documents; or procurement, where you get a document, you get prospectuses, you have to submit an invoice; or translation … all of this mental labor is getting automated. And this includes software engineering, law … all of these are at risk of getting not replaced but changed, like what happened with other areas of work. There used to be a person whose job was to press the buttons in an elevator. Architectural offices used to have rows of draftsmen whose job was to create blueprints … well, they are gone: now there’s one guy with AutoCAD on a machine. Take accounting: there used to be an army of people with green eyeshades and lamps. They are gone. I think software engineering is heading that way, too: instead of rows of desks, there are going to be a few people in an office like ours (a small space in a coworking building with two desks and four stations).

I told Ric one of the reasons I left Google to join him was that I knew this field was changing; it is now possible to build companies with three or four people, and the average revenue per employee is really going up: Google generates around a million dollars per employee, whereas newer software agent companies generate three million per employee.
Ana: How different are those employees?
Sharat: I do not think they are different; I think the tooling is different. It is like the difference between a person making chairs by hand or with power tools: you do not replace people, you just make them more efficient.
On the Enthusiastic Panic Around AI
Ana: Let me ask you something, because I was just at my 20th Year Reunion at HKS. I went to AI sessions and everybody was asking questions, but in general the answers were not that great. I think that is because most people were not specialists… Also, I felt like there was a state of “enthusiastic panic”.
Sharat: Yes, yes.
Ana: You are developing Strong AGI, right?
Sharat: No, not us, we are using it.
Ana: Right, you are going to use it. So you are using something that is in the works.
Sharat: Yes.
Ana: I have seen many forecasts on when that mark will be hit (the generation of strong AGI).
Sharat: Yes, I think some boundaries have been crossed; the Turing Test, for example, has been passed.
Ana: That is gone.
Sharat: Now it is a question of risk. Let me illustrate with coding. Coding is a very good example of an area in which AI is immediately useful, and that is why there is so much fuss and funding around it. OpenAI is trying to buy Windsurf, a two-year-old company (an AI-powered code acceleration platform), for three billion dollars. And the reason is: you can de-risk the AI. You use AI to generate code, you test the code, you compile the code, you put specifications around the code. You use AI to do the groundwork, the boilerplate stuff, and then you de-risk it.
Ana: You intervene.
Sharat: In areas like customer support, AI is replacing humans to an extent but not completely eliminating them, because there is risk. You do not want to be without control, without guardrails. For example, Klarna is a Swedish company that famously replaced all of its customer support with AI, but then brought customer support people back in because the AI went off the rails, such as by not treating high-value customers differently.
Ana: Yes. And there are scarier stories, like the chatbot at the National Eating Disorders Association in the US, which was taken down after it produced diet and weight loss advice; unfortunately, only after the nonprofit had already shut down its human-staffed helpline.
Sharat: So there is a lot of risk and, like I said, AI is at a point where, if a task requires mental labor but is predictable, like customer onboarding, where you ask for the same information and only the content differs from one case to the next, well, those things I would say are ready for replacement; that kind of mental labor can be automated. But if it requires more…
The way I think about these models is as follows: smaller models are like a person with about a minute to read, while longer reasoning models are like a person who has 15 minutes to think about things. The way to consider them is: larger models have more mental capacity, so to speak, or they spend more time thinking. Eventually, there is no upper limit. Today Google or OpenAI models can think for a few minutes, which equals a person thinking for about an hour.
There is an institute called Metr (https://metr.org/) which has a benchmark for the task horizon at which AI can replace humans today; they measure what horizon of tasks can be automated. For example, if I can do a task in 5 minutes, that threshold has already been broken: AI can do it. Today the horizon is 40 minutes or an hour; you can use AI to replace things that take people one hour to do, but most of what we do has a longer horizon, most projects are months or years long… but it is getting there. I think the trajectory is that in two or three years AI should be able to do a month-long project, like research.
On Great Uses of AI
Ana: Another question. Let us say there is a tool that can be used for many different things. When you think about the future, with respect to your clients or society in general, what do you think are the best possible uses of AI? The ones that make you think: Ahhh, this is going to enhance our life experience so much! And what is the most catastrophic use that you can imagine? But a likely one, not an unlikely one, and not Russian, Putin-backed hacking; more something along the lines of “if we use AI for that, in the long run it will be a disaster”. I am asking because you are precisely designing AI applications.
Sharat: For example, we (Ric and Sharat’s startup, Jamb.ai) are developing a construction solution. These are people who are not technology-savvy, who hate to use apps, so it is a good use of AI, because it can generate an experience that is as frictionless as possible, like the way you talk to a person. The UX (user experience) makes a lot of difference. And I am actually not even talking about the UX: the company’s interface itself is different: you talk to it and you can get output in any format you like. For example, with most software, your input and output are really fixed, right? Let us say you have to fill in a form, and you get your stuff back in writing. Why does that have to be the case? You should be able to talk to it and say “I am doing something”; then it becomes more natural. That is what people are doing with the Meta glasses or Siri. Interaction becomes more natural.
I think the other side of things that is not here yet, but on the way, is output becoming more natural so you do not need to consult things on a screen. We are brainstorming things like: let us say a contractor wants an update on a project. Why does it have to be a pdf report? You can have a podcast on what is happening on the project, or a radio station. The output format can become flexible by taking the same information and rendering it any way you like. You can ask the AI to write a comic, for example. That is empowering.
Ana: Are you using VR at all?
Sharat: Not yet, but it is on the roadmap. I think the end goal is we want construction people to wear glasses, look around, and we do the recording. You know, when people are working with their hands we do not want them to have to update status on a phone. Today there is a site supervisor whose job on the site is to observe what is happening and update the project status. Or if an architect or a general contractor sends something, it is to tell people the current status of things.
Ana: So this streamlines a lot of things.
Sharat: Yes. The other thing is the language barrier. That is another problem with construction in Boston.
Ana: In Argentina the language barrier is also an issue: many construction workers, for example, speak mainly Guarani. Around the globe, workers in this field also tend to be migrants.
Sharat: Yes. In Boston, the top two non-English languages are Spanish and Portuguese. There is a large Brazilian population. I love Brazilian bakeries (I suggested Sharat ask for brigadeiros, which he had not heard of and which I love. A very relevant suggestion in the context of our conversation, I guess…).
Sharat: So the language barrier is a problem. That is another aspect in which AI becomes useful, because you can consume content not only in the format but also in the language of your choice. Real-time translation is possible today. That is another enabling thing. Knowledge is not going to be a barrier any more.
On Catastrophic Uses of AI
Ana: Most catastrophic uses of AI. That is a broad question.
Sharat: That is speculation.
Ana: Yes! It is speculation. Futurology! I mean, I do not like dystopian stuff; I think we ought to be optimistic and remember that we have always found solutions.
Sharat: There is an issue with dual use. On the plus side, knowledge is no longer a barrier. You can find the information, translate it into different languages, consume it as audio, etc. Great. But that also makes dual use easier. The risk is that I can do things I could not do before: if I want to create an improvised explosive device, for example, now it is easier. Before, I had to be an expert, but now I can just point an AI to it and ask: “Am I doing this correctly?” And it will guide me on what components to use, on alternatives, etc. Dual use becomes easier not just in electronics but also, for example, in biology. There are a lot of models for synthesizing molecules, proteins, and materials. I think that is a risk.
There is also societal risk. You can take farming as an example. Less than a hundred years ago it would take two people to farm an acre; now one person can have a farm that is 100 acres. The ratio of output to input has changed: the yield of 100 acres can now be produced by one person and a few machines. What is happening is that people who were farmers before will be out of a job. You can take coding as another example. Microsoft laid off approximately 3% of its global workforce (around 6,000 employees). Salesforce is not hiring anyone in 2025. I think there will be fewer software engineers; the world does not need as many.
On AI and Education
Ana: What about education? What do you think is going to be different? Do you have children?
Sharat: Yes, I do. A 10-year-old.
Ana: Ok, so, for them, what do you imagine a good education -for whatever comes- is like? Not what the jobs of the future will be, we have no clue about that.
Sharat: I am not too worried about jobs. I would say the best use case of AI is more personalized education. A main reason why people pay for a private school is smaller class sizes. Better schools are those in which teachers pay more attention to students. That is why, for example, people choose Montessori.
Ana: I work with Montessori education in early childhood because it is ultra personalized (among other things, but that is a hallmark).
Sharat: In districts where there is Montessori there is a lottery because it is so popular. I guess it is because you can adapt to each child, children can learn at their own pace, and you can provide the level of attention each child needs.
Ana: It helps me to hear you! I am part of a team hoping to replicate a Montessori-based operations model for day care centers that we designed and tested, and I am thinking, now that AI is here, we could incorporate it. We have to figure out how AI can help us scale something up that is already personalized.
Sharat: It does not need to be active.
Ana: I do not want it to interface with the child.
Sharat: Actually you could. As long as the content is fixed, you define a curriculum, you put guardrails around it, you can. You can think of AI as a teacher with infinite patience. You can ask questions in a hundred different ways. A teacher cannot be answering every student’s questions in every form. You can keep the content fixed; you can say: restrict it to second-grade math. Every kid understands things differently. My point is that AI does not need to be active; all it needs to do is respond to the kids in the way that they want. Like my daughter: she hates math. She does one problem, and she cannot generalize to the next problem, so she asks again. I do not have a lot of patience. AI does. Some children learn quickly, they can generalize faster. All could move at their own pace, faster or slower (A paragraph for another entire conversation).
With respect to personalization, what I think scares people about AI is it learning about their kids, acquiring data. I do not think there is reason to worry, even with classic AI. You can just use the infinite patience of AI.
On AI & Regulation (attempt 1, but we ended up talking about Montessori and interfaces)
Ana: And what about regulation? In the world of applications of AI with respect to social media and algorithms interacting and nudging us…
Sharat: Ana, what do you actually do in Argentina? Are you in public policy?
Ana: I have worked in public policy in early childhood development, and now I am part of a team facing the possibility of scaling up a Montessori-based operations model for day care centers that we created and tested. I really think the proper use of technology could play a role. Yet, in the case of my children, I often see that the use of technology in their education lowers teachers’ engagement.
Sharat: I think the format matters. Some kids like abstract stuff, others like stories…
Ana: That is one thing Montessori education is great at.
Sharat: Right. Some people like sensory stuff.
Ana: That is why you are going into new interfaces, right?
Sharat: Yes. Some are visual learners, some are auditory learners, others are tactile … that is where AI can help. Let us say you define a standard syllabus. The point of the syllabus is content: what needs to be learned at what stages, or in what progression. What is happening is that teachers today also take it as a guide for format, so they teach the way the syllabus is written, and they teach for the test. The idea is to disentangle content from format. This could mean that if you want to hear a story about a principle, the AI can create a story; or if someone wants to have a back-and-forth debate about some topic, since some people like the Socratic method -ask a question and find out what you would do- well, the AI can do that too. The thing with AI is you do not need to create hundreds of syllabi for these different formats; you only need to create the content once: these are the principles, these are the things that need to be taught, and you can let AI do the formatting in any language of choice. I think that is the power: to separate content from format. AI is not profiling you, it is not learning about you, it is just answering questions in the presentation that you like, which for example can be visual: the AI can generate images, the AI can generate videos … think infinite remixes of the content.
On AI & Regulation (attempt 2, I got an answer, a very interesting one)
Ana: Going back to regulation, in the field in which you are working, construction. In practice, how do you deal with regulation? Does regulation help you think about solutions?
Sharat: For us, there are a bunch of compliance things. You have to file with the town. All the permits you have to file depend on the building and the materials you use, which may require special filing. All of that is the laborious part, and every county has its own forms and processes for how to file things. You can think of these as formats, but the content is the same; it is the format that varies, right? So let us say the general contractor indicates they are using three-bar steels, etc. The idea is for us to be able to generate the pdf to file with a town based on the content, in whichever format we acquired it.
Ana: So you take it as a given (that format changes and you make content fit different formats).
Sharat: Workers have to do it as both manual and mental labor, because they are buying supplies and talking to the subcontractors. They are talking about the same things but the presentation is different. For general contractors and workers it is operational, so I need to talk to them about operational stuff: this has to be done today, you need to go from this rung to this rung to this rung. To the town it is not operational but archival, so they need to know how many laborers are involved, where this is going to be done, what the square footage is, etc. But the underlying data is the same; the town just cares about it in one format, and the laborers care about it in a different format. So that is what we have been doing: disentangling the inputs, the indexing, from the outputs. We dump everything into the index. Let us say information comes via a phone call, through in-person meetings, through emails -architects love emails- subcontractors just call you, home-owners either call you or WhatsApp you; on top of that, all the on-site stuff happens face-to-face.

The problem today is that all of this is not reconciled. People on the construction site only know about the things that happened face-to-face, but they do not know about the architects’ emails, so the site supervisor has all of these (channels) open: she talks to the customers, she talks to the architects, etc… Her main job is to reconcile all of this in her head.
Ana: Sounds like a valuable person.
Sharat: Yes, but they cannot do more than a couple of projects at a time.
Ana: With AI it would be a different story.
Sharat: Twenty projects at a time.
Ana: Quick question. Can we wrap up in say 5, 10 minutes? Whatever you can.
Sharat: 10 is fine, I have a phone call with my daughter’s school at 10:20.
On the Dark Side of AI in the Manipulation of Human Nature (and on the regulation of that)
Ana: Besides the future of work, the main worries I hear are about social media and algorithms messing with our human nature and our children. How much of that is an issue, in your opinion, since you are an expert in this field?
Sharat: Ric and I were in digital advertising for 10 years, working on optimization. And it is scary…
Ana: It is about people making choices in a different way…
Sharat: Correct. I would say the industry calls it “personalization” but it is more like targeting, it is profiling. We ran an ad company, not Meta, called Nanigans. Even Nanigans, which was smaller than Meta, with around 100 people, could get an enormous, scary amount of information about people depending on devices and permissions. If you put AI to this task it can build a better profile of people and you can target more, so that is the scary side of things. Think of cults: the reason why you have charismatic leaders who empathize with everyone… well, the thing is, if I talk to you the way you want to hear things, you like me, and I can do the same thing with another person, and another … that is why a really psychopathic scheme can have a large following: content can be adapted to the person you talk to, and you can impress everyone.
Ana: So what is an antidote? What is an antidote thinking about the future? Because all of this is coming, it is here. Is it regulation?
Sharat: I think regulation can help. It can stop personalization.
Ana: Has it been done to some degree?
Sharat: Think about an airline. You get different grades of travel: Economy, Economy Plus, etc. There is a granularity to the personalization. The extreme of personalization is if it knows you are an aisle-seat person, if it knows you want to be three rows behind the entrance. That is an extreme form of personalization. So, in the ad space, if you personalize ads to that extreme, there is going to be one perfect ad for you that gets shown. But it does not work that way. Companies like Google, if they used really powerful AI, would really know what you want and could show exactly the thing that you want, and that would be it. But this is not what happens. Meta will never do it. Google will never do it. Because then nobody would be competing for that ad space, and that would not make money for them. Their incentive as a company is for multiple people to bid on your attention so that they get the most revenue. Google and Meta deliberately do not personalize as much, so they can show five ads that are eligible and companies bid for those five ads.
Ana: So the market will save humanity, is that what you are saying? As long as there is competition we are a little better off?
Sharat: No, no. What I am saying is that regulation is necessary because of the incentives companies have. TikTok is an example: maximizing engagement is not the same as serving the most informative content. So, going back to the airline example, companies can personalize, but regulation can restrict the granularity.
Ana: So you think there is a role for regulation in the cases where it works not for the client but against the client, or the user.
Sharat: The company is always trying to maximize something.
Ana: Engagement, and revenue for the company.
Sharat: Exactly. I think you have to be transparent about what it (the company) is optimizing. Companies do not say it. It is always sold as “people get more relevant content”, but that is not the case. If Google and Meta were really interested in giving you the most relevant content they would just show you one ad: the most relevant one. The reason they show five ads is because they go to auction and bidders compete for the space. For example, even with Google today, let us say I am searching for Toyota cars: there is no reason why I should see an ad for Honda, but Honda bids for the Toyota keyword and Google grants it because it gets more money. It is about being transparent about what the company is optimizing and putting guardrails around it. It is about the granularity at which you optimize.

For instance, before, Verizon and the cable companies would sell information: they would give advertisers your address and what you were watching. Now, with regulation, they cannot localize you within less than a neighborhood. They can say “someone in this neighborhood is watching X show”, but they can no longer say “this household is watching X show”. The granularity came through regulation, not because companies wanted it. Another example: Google has been trying to launch a replacement for cookies for five years and counting; for this replacement they wanted to narrow interests down to groups of fewer than 50 people. Yet, by law, any attribute shared by fewer than 50 people cannot be shared; it is not allowed. That is a choice of granularity; you could do the same thing at 100 people. Google is abandoning the cookie replacement project because nobody is going to allow it. There is definitely a use for regulation. Companies maximize shareholder value; they do not maximize societal benefit.
On AI & the Arts
Ana: One last question. Do you like visual arts? Performing arts? What is your favorite form of art?
Sharat: Oh, I love theatre. I am a huge theatre fan. Performing arts.
Ana: To you, what interesting role can AI have in the arts? Well, theatre is person-to-person…
Sharat: Yes, theatre is person to person, it is visceral. I guess … have you seen the movie “I, Robot”?
Ana: No… I saw “Her” many years ago (I still do not understand the relevance of this answer I gave Sharat).
Sharat: I think the point I want to make is that people have thought computers are good at automation and not creativity, and that creativity was very human. But now AI can create music, it can create art, infinite art. Again, if you take the analogy of coding, you can create movies on demand, but then you have no control over what it produces. It could go down a very dark path.
Ana: The guardrails are hard to set there.
Sharat: One way to de-risk it is to look at AI as an idea generator. A “what if?” that can produce infinite variations. What if the camera angle was this, or that? Visualize the director’s viewpoint … in many different ways, with many lighting conditions. Then you compile it and you say “I want to do this!” and you go and produce it. The same with the story line: what happens if he acts in this way, or she … you could access a million variants of Little Women.
Ana: Whenever I talk to people that are really into technology and understand technology, they view it as a tool with which humans can thrive, like you do. Yet, often when I talk to people who are into tech, but as users, I encounter expressions like: “Soon there will be books written by AIs!” … as if that was exciting. What is your view on that? Is that what happens to you? Sometimes I feel people want to disappear!
Sharat: As a software engineer I see AI as a power tool, like a carpenter using a circular saw. A way to think about it is: the power of AI is representation, or reformatting, or retelling. Like if you had a Jane Austen novel in comic-book form, you could retell that story, tell what happened with some of the characters around it.
Ana: Or like what if it is in another country, or another planet?
Sharat: Right. So I think the point is how you frame it, similar to the syllabus thing: keep the content, change the presentation. AI is a tool to reimagine, retell.
Sharat’s Recommendations for a Night at the Theatre
Ana: I like it. So you have five minutes before your daughter’s school call. One last thing, what is your favorite play ever? If you could go to one theatre, watch one play? I am a pretty big theatre fan too (again, what is the use of this information, Ana??!!! Poor Sharat).
Sharat: I have seen Phantom of the Opera like twenty times.
Ana: I am fine with that! So you like musicals. And a romantic one.
Sharat: I like musicals. My second favorite is Book of Mormon.
Ana: I laughed to death.
Sharat: I listen to it in the car. Well, those are entertaining. You know … the one that is a conversation between C.S. Lewis and Freud about God? Two people talking. (It is called Freud’s Last Session; there is a 2023 movie version with Anthony Hopkins and Matthew Goode).
Ana: I read a lot of C.S. Lewis growing up.
Sharat: C.S. Lewis is arguing before Freud about the meaning of God. I really liked that.
Ana: Thanks, Sharat (and thank you Ric as well).
So here you go. Hope you have found as much food for thought, for action, and for joy as I did. Stay tuned, as there may be a sequel.
