AI and machine learning are now part of our everyday lives, but where are they headed and who is in charge? Our panel of scientists, writers and policymakers answers audience questions about the hopes and fears surrounding AI, how it can be controlled, and whether it creates new problems – or just amplifies the ones we already have.
Listen and subscribe on podcast platforms
Panellists
- Adrian Weller – Director of Research in Machine Learning, University of Cambridge and Head of Safe and Ethical AI, The Alan Turing Institute
- Jeanette Winterson – Writer and author of 12 Bytes: How Artificial Intelligence Will Change the Way We Live and Love
- Steph Wright – Head, Scottish AI Alliance
- Neil Lawrence – Professor of Machine Learning, University of Cambridge and author of The Atomic Human: Understanding Ourselves in the Age of AI
About A Question of Science
A Question of Science is a groundbreaking series of panel discussions where experts from different fields respond to your questions, sharing the scientific perspective on society’s most pressing questions.
A Question of Science is a BBC Studios production for the Francis Crick Institute.
Read this episode as a transcript
Brian Cox: Hello, I'm Professor Brian Cox. Welcome to "A Question of Science," recorded at the Francis Crick Institute in London. This is a podcast where a panel of experts tackle your questions on the biggest scientific challenges facing society today. So can we cure cancer? Can science help us live longer? Should we fear AI? These will be the discussions and debates that are framed by your questions.
And today we are asking - how can we harness artificial intelligence for good? In 1950, Alan Turing asked whether it would ever be possible to construct a machine whose responses would be indistinguishable from those of a human being. Now, it's not clear whether that's happened or not. Maybe that's something we'll discuss this evening, or even whether it's the right question.
But what is clear is that 75 years later, AI is already very much a part of our lives in ways that we might not even be aware of. And in the future, AI systems may create designer drugs to cure disease, unleash new forms of creativity, and take all the boring bits out of work.
However, there are already concerns about fake news, deepfakes and job losses, not to mention existential threats. What happens if we are no longer the smartest things on the planet? So how can we ensure that AI is a force for good and not the author of our destruction?
I'm joined by four, it says here in the script, "real and not computer-generated experts". Although it is true that we then had a side discussion about how we're all living in a universe that can be described in some sense as a quantum computer, and therefore we're all information and therefore everything's computer generated anyway. But we'll put that aside. We're joined by a panel of real experts who can help us get to the answers, and they are...
Adrian Weller: Hi, I am Adrian Weller. I'm a director of research in machine learning at the University of Cambridge and head of safe and ethical AI at the Alan Turing Institute.
Jeanette Winterson: Hello, I'm Jeanette Winterson and I belong in the arts and humanities, but I got so fed up of boring books about AI that I wrote one myself.
Steph Wright: Hello, I'm Steph Wright. I'm head of the Scottish AI Alliance. We're the delivery body for Scotland's national AI strategy. But I like to say I approach AI from a societal lens and am an advocate for trustworthy, ethical, and inclusive AI.
Neil Lawrence: And hi, I'm Neil Lawrence. I'm a professor of machine learning at the University of Cambridge and I wrote a boring book about AI called "The Atomic Human."
Brian: And this is our panel. Now, we had an idea, which might be a silly idea, who knows, this question, can we harness AI for good? We might start by asking an AI and so I don't know what the AI's gonna say. So I don't know what your favourite AI is, I thought I'd ask Perplexity 'cause that one kind of works usually. Is that okay with everybody? So I will ask it now and see what it says. Can we harness AI for good?
Perplexity: Absolutely. AI can be a powerful force for good if we use it thoughtfully. It can help tackle big challenges like climate change, improve healthcare through better diagnostics and boost education by personalising learning. The key is making sure it's developed and applied ethically and responsibly.
Brian: Oh, we're done then. Thank you for joining us this evening. I mean, I should probably ask if any of our human panel would like to respond. Was that a reasonable summary? I didn't know what that was gonna say, but that's the answer from Perplexity.
Jeanette: Well, when we hear that AI is supposed to align with our values, I always think what values? I mean, look at the state of the world, ethically, responsibly. Do you really think AI could do a worse job than humans are doing right now? Hands up.
Steph: Well, and also, good for whom? Who gets to define what's good? And right now, it is good for some people; it's good for people who are making a lot of money from it. So I think it's a subjective question. Can we harness AI for good? I think we can if we put people at the centre of it.
Brian: Often these questions are framed in terms of the future, I suppose. So AI, we think of somehow sentient machines. Maybe we'll get to talk about that. But it would be worth first, I think just outlining the areas of our lives now, which AI has a part in at least. So could you outline that for us maybe?
Neil: Yeah, and I guess it depends a bit on what we mean by AI. I think it's a very problematic term because intelligence itself is something we think of as unique to ourselves. But if we think about what it actually is, which is machines that recognise patterns in data and reconstruct them, and that's broadly all that functional AI is, then it's been integrated for a number of years. It's been selecting what ads to show you, what Google results to return. It's been supporting you in your navigation.
But I guess the thing that's really created a lot of the hype and the challenges we now face is we've got versions of that that are just large-scale pattern recognition systems that have consumed an enormous quantity of information that emulate conversation, and things that we formerly thought of as very, very human.
And I think the challenge for us is we're the first generation to see that, so we're amazed by it in the same way people were amazed by steam engines. But I think we need to move rapidly through the very poor quality conversation that we are having in the newspapers and from tech CEOs that is much more about claiming the space and making a lot of money to a conversation that is much more about, well, what does it mean for who we are and who we want to be? And I think the technology does enable a possibility for that.
Brian: Yeah, and that hits on an interesting point about the definition. Maybe that's where I could have started, actually. Maybe, Adrian, if I was to ask you to define what an AI system is, what's the difference between just a computer and what we might call an AI system?
Adrian: I think it's actually a bit tricky. Well, in a way it's tricky, in a way it's easy. It's easy 'cause anyone can come up with their own definition and it's hard really to argue with that. In a way, I'm not sure it's so interesting to get too focused on exactly what it means.
We know that we've got technologies which have tremendous opportunity, well, present us with tremendous opportunities to improve things like we've heard about in healthcare and various other areas. And the question is how are we gonna try to make sure that they don't cause us harm? That is how can we be sure that they're worthy of our trust? I think there are three really important areas that we need to work on to try to do that.
One of them is primarily technical work, to try to do things like ensure that these systems perform reliably and robustly, that they can be explainable, and that they reflect our values. That's one part. The second part, which really gets to some of the other points we've heard here, is: well, what are these values? How can we elicit values from across all of our populations? We really need to involve society very broadly to hear from people what values they want to have in these systems. And the third piece is a policy piece: how can we work with politicians, with lawyers, with economists and other folks to try to get good governance which is technically feasible and legally enforceable, to make sure we do embed those values in the systems?
Jeanette: You started with that 1950 paper of Turing's, and because he was such an odd, amazing, unusual man, in that paper the whole business of whether AI can be creative, whether it can ever break out of its bounds of just being an excellent slave, was a conversation with Ada Lovelace, who had said, "No, it will be able to do everything we tell it, but it will never be able to be original in any sense at all."
And a lot of that now seems to be stressing us out, doesn't it? Because we want to believe that only humans are original and that only we are creative. We're holding on anxiously to being the top of the tree. But I don't know why that is so important to everybody. I mean, suppose now after 300,000 years of Homo sapiens, we actually could evolve and instead of always trying to be top of the tree, we could share the space with something we are creating that could allow a different understanding of what it means to be intelligent.
Neil: I think you're getting right to the nub of the matter. And when we look at historical precedent, we look at the Copernican revolution, the shift from believing that the Earth was at the centre of the universe to discovering it wasn't, which seems to have been deeply traumatic for people at the time. But today I think, well, who cares, right? Brian can probably explain what the centre of the universe is but...
Brian: It might be infinite.
Neil: Might be infinite, we don't know.
Jeanette: That's a lovely thought.
Neil: But in some sense that turns out to be not important. But it was something that was fundamental to the way people thought about themselves and the world. And this is a Copernican revolution about what it means to make decisions. So the big difference between us and machines is that they are processing, or absorbing, information 300 million times faster than us.
So if we look at my rate of communication now, about 2,000 bits of information per minute, and compare that to the machines, which can do 600 billion bits a minute, that's the difference between walking pace and the speed of light. That's the big difference. Nothing to do with AI. That's already happened, right? And that's already doing damage to the world.
Now, if you wanna go the other way and ask, "Well, what things go slower than us in their communication rates by the same order of magnitude?", and you look about four billion times slower, you get the propagation of evolutionary information across generations. So what's going on is this is a system that processes information much faster, and what's surrounding us is a system that processes information much, much, much slower, but it's still complex and interesting. It just doesn't operate on our timeframes, which is why we struggle to understand it, as with climate. And it's also why the machine will struggle to understand who we are by making decisions on its own and needs to be, as you've all said, guided by people.
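[A rough back-of-the-envelope check of the figures Neil quotes above. The bit rates are the ones given in the discussion; the walking speed and speed of light used below are standard reference values added here only to test the analogy, not figures from the conversation.]

```python
# Sketch: check the communication-rate comparison quoted in the discussion.
# Bit rates come from the transcript; walking speed and the speed of light
# are standard values assumed here for illustration.

human_bits_per_min = 2_000        # quoted human communication rate
machine_bits_per_min = 600e9      # quoted machine communication rate

ratio = machine_bits_per_min / human_bits_per_min
print(f"Machine vs human communication: {ratio:.0e}x")  # ~3e8, i.e. ~300 million

walking_speed_m_s = 1.4           # typical walking pace
speed_of_light_m_s = 3.0e8
print(f"Light vs walking pace: {speed_of_light_m_s / walking_speed_m_s:.0e}x")  # ~2e8, same order

# Going the other way by the quoted factor (~4 billion times slower than human
# communication) gives the rough rate associated with evolutionary information
# passing across generations.
evolution_bits_per_min = human_bits_per_min / 4e9
print(f"Evolutionary information: ~{evolution_bits_per_min:.1e} bits per minute")
```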
Brian: So Steph, could I ask you, just at the end of this introduction, you're head of the Scottish AI Alliance; what are you head of, is the question. So how would you, if I was to ask you to define what we mean by AI-
Steph: AI means a whole lot of things really and I think that's the problem. As Adrian was saying, there is no one universally accepted definition of AI. It actually is just a catchall term for a whole series of technologies, many of which have been around for a really long time.
But the problem right now is that in all the discourse around AI, especially among the general public, when people say AI, they mean ChatGPT, and that is so not conducive to proper discussion or debate about how these technologies can actively impact our lives and change our lives for the better, because the case for ChatGPT and large language models, which is what it is, actually doing anything good is debatable at the moment. But that's where everyone's focus is, because ChatGPT took what was previously quite an invisible concept to people outwith the AI and tech world and made it visible.
And to a certain degree I think that's really good, because then people are a bit more cognisant of what these technologies could possibly do. But ChatGPT isn't all there is. There's a whole lot of fantastic AI technology that's existed for a very long time and is already integrated into our lives without causing harm, creating deepfakes, or spreading misinformation or disinformation. But we have to remember that when we talk about AI, it's not one thing.
Brian: There's a distinction, isn't there, between AGI and AI. So AI systems do specific things. AGI, artificial general intelligence, is probably the thing that scares people: this idea that something will be smarter than us, far smarter than us, and be a sentient thing in some sense. Could you speak to that?
Neil: Yeah, and I guess people use the term to mean different things. Originally it developed because people were working on AI solutions that were solving problems, and there was a sub-community that didn't care about that and wanted to work on general intelligence. Now, let's just be very clear, general intelligence is a term that comes to us from eugenicists. It's Spearman's term from the early 1900s, as if we can rank ourselves by intelligence. And I think the quest for intelligence as a problem to solve is deeply problematic. I think we have to sort of step back.
These fears are valid and it's great that the public are engaged, but the point I'm often making is that to the extent that there's gonna be a dystopia, we're in it; we're in the dystopia already. Digital systems are undermining people's lives. You just have to watch the Horizon scandal dramatisation to see a character shouting at her computer as it incorrectly reconciles the accounts she's held responsible for, while her livelihood is going under, her position in the community is going under and she has no comeback. And when she goes to lawyers and accountants and politicians, they can do nothing about it.
And that's because over the last 30 or 40 years, we've shifted from a world where we're all able to engage with the fundamental information infrastructure, pen and paper or book and print to a world where access to that information infrastructure is restricted to very few people working in a very small number of companies, and that's dystopia.
Brian: Okay, well, we'll get into these issues. So we have the first audience question from Gurneet Dhanoa.
Gurneet Dhanoa: Hi, my name's Gurneet Dhanoa and I'd like to know in which field of science and also which field outside of science do the panellists think AI will have the greatest impact?
Brian: So maybe, Adrian, would you like to?
Adrian: Maybe I'll just give one area that I think is really exciting within science that maybe is close to Brian's heart too. I'm not sure, but, of course, we're all worried about energy. Can we make sustainable energy? A great hope is can we develop controllable fusion reactors to give us energy? And if we could do that, that would be very valuable. In order to do that, you need to be able to model and contain very volatile nuclear plasma in very tough conditions. And the typical way we've been trying to do that has been to solve lots of complicated mathematical equations on supercomputers.
It turns out that there's hope that we may be able to do that sufficiently more efficiently in order to get the accuracy we need by using machine learning methods. And if we can do that and crack nuclear fusion over the next 10, 20, 30 years, that would be a fantastic thing for humanity.
Brian: Jeanette, this question is specifically about science, but you could equally well ask it about the arts, couldn't you?
Jeanette: Yes. I mean, I suppose in science I'd be looking towards the human face of it, which would be personalised medicine, which is what we saw begin to develop through the pandemic. And that could be a very exciting way forward for us. I mean, already you can use a 3D printer to manufacture some body parts, and we're looking forward to that because then you wouldn't have to wait for a kidney; maybe you can just get one which will not be rejected by your body. There are wonderful things happening in the biomedical sciences that would not have been possible without AI's intervention.
But for me, I'm always pushed towards the bigger question, perhaps because I'm in the arts or the humanities, about who are we and who do we want to be and what do we want the future to look like? Because at present, all AI is a tool, isn't it? And humans are tool-using animals; this is what we've always done. And now our fear that we're gonna make something that's bigger, better, stronger than us is really a fear of the God that secular people say they have rejected.
And if we as humans could recognise that this is actually within our grasp, our control, not outside of it, this is still us, this is still everything to play for. For us, that could change the way forward. So I'm looking bigger, not specific.
Brian: Neil, we've talked about AI and a lot of data analysis, for example, as you were saying. One of the questions I often get asked is, what about theoretical physics, for example, where you might argue there's, I was gonna say an element of creativity, which is the wrong phrase because there's creativity in all of science.
But those areas, like Einstein's theory of general relativity, for example: is there any sense in which we could imagine a world where AI, AGI systems can contribute in those areas that we often attribute to humans, that we think of as uniquely human?
Neil: Yeah, I think as Jeanette highlighted, in some sense it's a tool. Thomas Kuhn's sociology of science suggests that we go in paradigm shifts, and he talks about the paradigm being held in books, textbooks, across the 19th century. But what's interesting today, when you raise that point, is that scientific paradigms are not held in textbooks; they're held in computers, in computer simulations, many of which no individual can understand.
I don't know if you remember the challenges during the COVID pandemic around the large simulation that people questioned, the Neil Ferguson one, and then the code was released. And that's really problematic because no one actually even knows what's in their own code. So we're getting to a situation where we don't understand our own science. And to the question, I think this also raises questions about what we mean by science, because to a large extent, the universe understands how it works on its own. It doesn't need us to work it out, right? So you realise that fundamentally, science is subjective; science is about sharing in the wonder of these things and trying to distil what's going on into ways that we can share. And I think as a tool for that, this could be absolutely tremendous.
So Jeanette's talked a little bit about how we can use this tool through the humanities for understanding ourselves better. And I think we can also use the tool in science for sharing the deep intuition, better explanations that share this more widely. AGI is eugenic nonsense, but the possibility for transformation across fields, so that science and the humanities come closer together because we realise the subjectivity of everything and that it's just a spectrum, is incredibly exciting. Now, that's not gonna happen next week, but over 50 years, that's entirely possible. And it gets to the heart of some of the issues we've raised that have been the domain of one area or another, but actually I think should be the domain of many areas.
Brian: Well, we have a question from the audience about the medical applications and implications of AI.
Zoe Simpson: Hello, I'm Zoe Simpson. Medication and treatments used in healthcare are rigorously tested and researched. Do you believe that AI is sufficiently researched and regulated and tested to be actively used in healthcare, considering both the impact on mental health and medical conditions?
Steph: Where AI can make a big difference is in medicine, in healthcare. There's been great work using AI in radiology for years, well before everyone was talking about AI. In Scotland, we have this fantastic clinical trial using AI to help detect breast cancer. It's also been used for diagnosis of skin cancer, and in diagnosis of stroke, where time is of the essence: the success of stroke treatment depends on being able to spot what kind of stroke it is as soon as possible. And AI has been doing that.
Obviously healthcare is a really high-stakes scenario. It's a really high-stakes, high-risk, literally life-or-death application of a technology. So people are quite rightly doing it responsibly and taking their time to make sure that it is good. These systems are being very rigorously tested and regulated.
But then cast your mind back to the whole Babylon Health scandal, that chatbot that was supposed to replace your doctor. And it misdiagnosing heart attacks in women because its datasets only included what the symptoms are for a man, and stuff like that.
So there are obviously problems around it. But in healthcare it is really difficult, because a lot of the problems in healthcare aren't because we don't have the right technology yet. A lot of the problems we have in healthcare, and similarly in education, are due to chronic underfunding, bad management and poor policy. AI is not gonna solve that. Technology is not gonna solve that.
Brian: Unless you put the AI in charge.
Steph: Because, you could say, it might just automate the inefficiencies. It might just make a bad system perform badly, better.
Jeanette: So we keep getting back to this point that humans are still in charge. We've gotta fix these things. We can't just keep blaming the machines or systems, and it's a wonderful chance.
Adrian: Just to give one example of a situation where I think we're okay, and one where I think we're not, just to help flesh it out. If we use AI systems to develop new molecules and develop new drugs, which is certainly happening, it's an exciting area of work, I think, for the most part, we're in pretty good shape for using our existing setups for testing new drugs to see if they're okay to give to people. We can use our existing protocols, and that's largely okay. On the other hand, we've already heard about how radiology is a very exciting area. We may be able to have systems which can assist humans to scan for all kinds of conditions and do that more effectively and more efficiently for lots more people. If we do bring those in, and I think there's a lot of pressure to bring those in, there are at least two things to keep in mind, just to highlight the complications involved.
One is that a lot of the people who are gonna be using those systems may not really understand them. So I would advocate for a sort of system a bit like the way we deal with our cars, where cars need to pass an MOT; in the same way, the system will have to pass some kinds of tests. But also, we need to get a driving licence to show that we know how to use it. And I think we'll need something like that to help people be sure that they understand the limitations of the system and they're using it correctly.
In addition, one of the challenges is: imagine you have this system and you've put some governance in place and it's working. Just like with all of the applications we have on our phones, which update every few days, very likely these radiology systems will also be updating every few days, in ways which are generally good. But we're gonna need to figure out ways to allow that to happen so that whatever we decide needs to be in place, so that people can understand the systems and use them correctly, stays working when updates happen, and to have some efficient way to handle the regulation and governance around that, which is quite tricky. So there's a lot of detailed work to do.
Neil: Can I push back a bit on the notion that we've got to teach people? I think we have to build better products.
Steph: Well, yes, absolutely.
Neil: And I think what we're hearing at the moment is tech CEOs saying no one can understand this technology fully. Well, no one actually understands the full details of how a modern automobile works, but you understand it to the extent that you use it, the point that Adrian was making. But I think the real problem we have is that a lot of AI scientists have never deployed anything in practice in their lives; they need a driving licence before they talk about society, because their understanding of the complexity of a nurse's day or a teacher's day is minimal.
And the thing I would like to see, which is easy to do and we're already working on it (medicine is harder, but we're trying to work on it with teachers and local council workers), is: what's a technology that actually helps these people in their jobs rather than hinders them? Because public dialogues are telling us that nurses are spending 30% of their time doing data entry, and that is not a good use of their time when they want to spend time with patients. And that's a fault of the product. That's not a fault of the nurses.
Brian: We've got to question four. But to see--
Neil: We've got all night, right?
Brian: To see what ground we've covered, the script says, "So far we've been mainly talking about the impact of AI in science," which is clearly not the case.
Neil: Science broadly defined.
Steph: We've covered the life, the universe and everything.
Brian: So the segue does not work. However, we do have a question about the impact in the arts.
Antoinette Sim: Hi, my name's Antoinette Sim, and as an actor, AI is proving to be a nightmare, with people scraping the net for data and content to use in drama and voiceovers. It's also a problem for musicians, who fear they could be replaced by computers. How can we ensure AI doesn't damage or stifle our creative abilities?
Brian: Jeanette, this goes to what you've been talking about to some extent.
Jeanette: Sounds like it, doesn't it? Yeah, it's back to us. I wanna get away from the fear again around AI and how could this help people work creatively? Again, I always see this as a partnership between what we're developing and us humans. I really don't wanna see it as an either or, an us and them. You get into these horrible binaries of who wins and who loses. I don't think it's gonna help us go forward.
And look, there's a nice story about Picasso. When the camera was really coming into its own in the early 20th century, all the mediocre portrait painters were really depressed 'cause they thought, "Nobody's gonna ask me to paint their horse-faced wife anymore. What shall I do for a living if you just take a photo instead?" And Picasso was thrilled 'cause he said, "Great, this will free up painting from the duty of representation. Now I can do what I like." And so he took the technology, obviously many of his women lovers were excellent photographers and he learned from that, but he saw this as freeing. It didn't even occur to him to think that it wasn't.
And I think for all of us, I know I would love to write something with a large language model. I'd have such a lot of fun with it because I think you bring everything that you are to it and then use it as well. Now, of course, there's problems. This government has just been utterly timid and caved in to big tech about who owns what in terms of copyright, all the things that we have made, whoever we are and how this can be used and scraped just to train their AI. That's not okay. They pay their lawyers and their lobbyists, they can pay us.
So there are simple things like that that governments could do if they weren't timid and ignorant. But they are timid and ignorant and they like hanging about with these tech bros and thinking this is the future, but the future is our future or it's no future at all.
Brian: But very specifically, it's an old problem we've faced many times with the development of technology, isn't it? I mean, the specific part of the question was about, for example, voiceovers. So it is clear that people will be displaced from certain jobs, which is a question we've faced many times before.
Neil: And I think that's particularly a problem for the creatives, particularly a problem with the existing ecosystem around what it means for creatives to support themselves, because there's clearly a transition. I mean, we could talk about a future where there's a lot more physical performance, which I think would be great, where it's less about these mass systems. But that doesn't help people who've got a career founded on these areas at the moment, while the ground's shifting underneath them.
And I think it's particularly distressing when the government seems so insensitive to those concerns as it responds to the Data Bill, particularly when those concerns have been flagged multiple times from multiple avenues. But I do think Jeanette is generally right. We know that humans will build on this. And like the response to photography, we'll see some really extraordinary artistic responses, which will just make us wonder about things. So I think the future long term is very bright, but I think we have to be really cognisant.
And the creatives are taking the brunt of this because don't be mistaken, this is gonna happen across a lot of fields and we need a far more sensitive regulatory response to it that is listening to the people before the decisions are being made.
Brian: In relation to the question of people's jobs being displaced by these technologies, there are two questions actually, one from Jai and one from Sue, which are essentially the same question: what skills should children focus on developing in this new world? So I suppose it's a question about new jobs, the areas where humans will flourish, and the areas where jobs may vanish.
Steph: The jobs question is always very, very loaded. And one thing I always say is that we have to remember AI is not coming for your job. People making decisions to deploy AI to replace your job, they are the ones coming for your job. The technology isn't, it really isn't. And we have to remember ultimately it's about people.
And just going back to the point Neil made earlier, especially in the context of skills: the onus seems to be pushed onto society to be AI-ready when really, like you said, are the products society-ready? The pressure should be on the companies to build better products, not on us to tailor how we think and how we work to accommodate products that may be poorly designed.
So the skills that we should really focus on are what people say all the time: critical thinking skills, problem-solving skills, being able to question. We don't wanna bring up generations of people who just accept that something is right because a computer says so.
Neil: Well, when I was at a similar career stage to my children now, in the 1990s, I was promised by someone that things can only get better.
Adrian: Who said that?
Neil: And that just turned out to not be true. None of us could have imagined how our careers panned out. It's just unbelievable. Imagine going back to my parents and saying, "Your son's going off to do mechanical engineering, what advice do you give him, 'cause these things are gonna happen?"
No, just do what's sensible to do within the context. Now, things won't change that quickly, and a load of functions in society will still be there. Just encourage, as Steph says, people to be critically thinking, thoughtful, education in the sensible sense, and allow them to shift. I think the worry I have is that the people who are most affected by these technologies are not the educated, they're not the professional class, despite lawyers quaking in their boots about computers being able to write legalese now. It's the people on the margins of society. The people we were actually talking to when you wrote "Things Can Only Get Better", the miners in Sheffield who then became call centre workers.
Adrian: I would again slightly disagree. I think that the educated folks are at risk and those lawyers' jobs are at risk. And that's something potentially we should talk about. But going back to one of your earlier points, Neil, that Steph also contributed to: I don't think it's only about ensuring that these are good products. The big tech companies are very well motivated to give us products which we like and want to use.
The problem is that that isn't necessarily gonna result in the best outcomes for those individuals or for society. The big picture we started with was that a lot of the information which we consume is selected or filtered by big tech companies. And again, it's trying to maximise our attention, but that may not be in our own or society's best interests. And so how can we as a society come together and figure out rules which are enforceable to help that situation?
Brian: Well, we have a related question actually, certainly on the challenges posed by the deployment of these technologies.
Quiyue Ru: Hi, good evening. I'm Quiyue Ru, and my question is: what's the most significant risk that AI poses to society in the next decade? And how can we mitigate it through regulation or new technological development? Thank you.
Neil: Let's take the extreme risk, which I think Steph has rightly called problematic, but which Adrian has also mentioned, of some sort of AI superintelligence taking over. The really interesting thing about that risk is you can decompose it into two things: power asymmetries, because this thing must have power (how did it get this power that it can control these things?), and automated decision making. So all they're describing is a combination of automated decision making and power asymmetries. Now, these are well understood things. We have a lot of regulation around them. And one of the annoying things about the debate is that, rather than recognising that, people don't say, "Oh, funnily enough, the only route by which that could happen would be some global mega-corporation, first of all with humans taking over things, then taking all their decision making and handing it off to an AI; that would lead to this world."
Well, we should be against global mega-corporations with humans in charge, for the reasons Jeanette says. So yes, at one level these are human problems, but we know a lot about society, and power asymmetries can be bad because that power gets misused, and we want balances against that. And we also know that a machine that takes decisions about humans, and this is the key point, can never be seen as humanly intelligent, 'cause it's not socially vested. It doesn't share our vulnerabilities, so it shouldn't make consequential decisions about us.
Steph: One of the biggest risks of technologies like AI is increasing the wealth divide, et cetera. We are all here, very privileged, having these conversations. There are vast swathes of the world that don't have the privilege of being part of these decisions, of how we make these choices, because there are very small groups of people making these choices for everyone else. And that is a huge problem, because we could have an even more divided world than we currently do.
Adrian: But it's not so easy. There's a lot to figure out. How are we gonna get there? And it's really not simple at all.
Jeanette: Well, we could stop going to war every five minutes.
Adrian: So you're gonna pass a law that says you can never have war?
Jeanette: No, I just wish the world wasn't in the hands of a few unstable men.
Adrian: Okay, so, okay. I wanna make a point specifically on Neil's point. Neil was saying it's very dangerous to give these systems power and decision-making capabilities. If we rewind 10 or 15 years, I was at meetings, Neil was probably at them too, where people were saying, "Oh gosh, we might have these general intelligence systems in 30, 40, 50, 100 years' time and we need to start thinking about the dangers of them. And if we ever get close to having general systems, of course, we need to keep those systems separated from the internet and not able to act in the real world."
And now today we've got these systems and everyone's rushing as fast as they possibly can to make them agentic, to make them act in the real world, to make decisions. And it's dangerous and we need to think carefully about what to do about that.
Neil: And there's a commercial imperative behind it and--
Adrian: Commercial pressure.
Neil: The other thing, the challenge of what Steph was mentioning is that fundamentally, if you wanna make money in this space, you don't make it by solving real problems because real problems are hard.
Steph: Yes.
Neil: You make it by inventing new problems and telling people they have them. And that's great: you get TikTok; people didn't know they needed to watch cats while walking down the street, but apparently we did. But that's how you make money, 'cause of course it's easier to invent new problems and solve the problems you've invented than it is to solve problems of social care, education, healthcare, security, because these are the wicked problems.
Now, the problem within society at the moment is that the money doesn't move in that direction. So one of the things we're doing in Cambridge is what we call an attention reinvestment cycle. We're saying that if it's about money, the money's never gonna go to the right place. But if it's about time, and about us working with people to support them in doing their jobs more efficiently, and persuading them that they're the advocates, they're the community people, who go out and share how to do that, then that system scales and doesn't need all this money. But when you think about those sorts of things, these are just ideas from the past that people used to solve problems, ideas that have been undermined as communities have fallen apart. And I think we need to get back to some of those things.
Brian: There's a question I'd like to take from the audience, which is moving on to the capabilities of AI. So it's a rather more philosophical question in some respects.
Jane Harley: Hi, thank you, I'm Jane Harley. Research has shown that AI has much in common with humans: it suffers from illusions, they make stuff up, and they are biased by the data that they've learned from, which comes from the internet, going back to the 1960s. But they don't share a moral compass with us. How do we navigate these challenges safely?
Brian: So...
Steph: I think this is a problem that we've touched on very lightly about anthropomorphizing these technologies, saying that it has illusions, it makes mistakes, it has a moral compass. It doesn't have any of those things.
Jeanette: You know what? I'm never so worried about us interacting, say, with AI systems. There's all this stuff; I mean, Zuckerberg's just a nightmare. So he wants us all to have false friends because he hasn't got any friends. But think about having a little AI friend at home, when you come in.
I mean, what's it like most of you when you go home at the end of the day and you speak to your other half? What kind of a conversation do you really have? Like, "How was your day?" "Yeah." "What are we gonna have for dinner?" "I don't know." "Did you walk the dog?" "No."
And I'm happy to come through the door and have my AI system say, "Hey Jeanette, how was it today? I've been thinking about you." Would it really be so bad? And look, you all fell in love with your teddy bear, so you know what it's like to anthropomorphize something that isn't real, isn't human, but it did make a bit of a difference to you. I don't think it matters if there isn't a human face or a human intelligence as we understand it behind any of that. It just matters if we feel better.
Brian: Picking up on what you said there, 'cause there is an old and, I think, deep question behind Jane's question, which is this: people tend to think of AGI certainly as something that is operating like a human being. It's a universal Turing machine, as are we presumably, although we can have a discussion about that if you'd like. But specifically the question is: will it be appropriate at some point in the future, let's say, to say this thing is an intelligence, a being, a thing in the sense that we think of each other now?
Adrian: I think we can't say for sure. I think we can say that now they operate quite differently. They do some things clearly much better than humans could ever possibly do. There's no way a human could cram all the knowledge about the entire internet into one of our brains. So they're clearly doing many things better than--
Brian: To Neil's point, the speed of computation far outstrips--
Neil: Speed of computation--
Adrian: Of communication.
Neil: Speed of communication is the key. Computation, we actually might be ahead of--
Adrian: Maybe, but that'll change soon. It's communication, but also just the amount of processing power, the amount of knowledge they can have; they certainly will be different. Now, will they ever be conscious? I don't think we really know. We're not really sure what that means.
But to Jeanette's point, I think there's a great opportunity, particularly for some people who maybe are elderly, who maybe don't have many real human contacts; they might be able to get great solace from connecting with an artificial agent. But I would say it's a big experiment that we're gonna be running, and we just don't know what consequences it's gonna have. And it's pretty worrying because it's controlled by a few big tech companies who are very motivated to make money and manipulate us, and those are concerns we need to try and put some things in place to prevent.
Jeanette: So we're back to the human angle again: it's not the tech, it's how we're gonna deploy it.
Adrian: Well, we care about humans--
Steph: But there are already real-world consequences. There are cases of people being told to kill themselves, and things like that. These are mentally vulnerable people seeking solace in a technology that is not equipped to provide it. And that is a problem. But the whole anthropomorphizing of technology is nothing new.
Brian: Is it really anthropomorphizing though? Because Jane's question was really saying that, well, these things suffer from illusions now, they make stuff up now and you're saying, "Well, no, they don't now." But there is a question of whether it would be appropriate to speak in those terms in the future with more capable systems.
Neil: I think at the heart of the question is there's an interesting facet that if you try and tune down the extent to which these systems do that, they appear less human. And actually this raises really interesting new questions about human communication because in the past, as scientists, we are often trying to communicate uncertainty. And my colleague David Spiegelhalter does it very well.
But actually people don't understand error bars or p-values or probabilities unless they're educated. But they do understand a conversation with a colleague or a friend about a certain set of circumstances and can share uncertainty very well. And when we're looking at this interface here, everything I think everyone has said is correct, you finally have an entity that is participating in that.
And that will have very dramatic effects on us, some of them very positive and some very negative, just like with social media. Social media enabled people to advertise and sell across the world, and has triggered young girls to commit suicide. Now, I think the question is about how we're dealing with that as a society. And as we keep making the point again and again and again, it's not a good situation when there's a few unstable men dealing with this.
Brian: If I could go around the panel, maybe I can give you one minute each to summarise how you see this technology, and I suppose more specifically, there was a question on the iPad earlier about how we can have more discussions in this vein, which are rather deep discussions about these technologies and then feed them into policy and feed them into ethics developments and so on around these systems. So maybe I'll give you one minute to summarise where you think we are with the risks, challenges and opportunities.
Neil: So I'll start by referencing Baroness Onora O'Neill, whose Reith Lectures in 2002 captured the challenge with AI, and they weren't about AI. You can't trust processes, you can only trust people, because people are socially vested. She called it intelligent accountability. So AI is just a sort of massive manifestation of that, processes writ large. And everyone's been right about people: we muddle along, we're not perfect, but what we have is a set of ecosystems and institutions around us that try to support us and make us better.
And I think we need a more sophisticated point of view that understands that, yeah, we're never gonna be perfect, with AI or without AI, but we do have a number of traditions that are functioning. They're not the things that people talk about. We like to talk about the problems, and that's good, let's fix the problems, but be very careful about washing away the good things as you try to fix those problems.
Steph: Don't forget, it's ultimately about people. People develop these technologies, people make decisions to deploy them, people make decisions to adopt them. So I like to shift the chat from the technology itself to people: people's roles in this, how it impacts people. And a big call for critical AI literacy.
Jeanette: I guess artificial intelligence, it annoys me as a term, and I don't agree with Max Tegmark that it should be autonomous. I'd love it, I would love it if it were to develop into an alternative intelligence because I think we're in dire need of some alternatives down here. And artificial, what? In a world of AstroTurf and ultra-processed food and boy bands...
Steph: That was a great leap.
Jeanette: It's not a good word anymore. In the 1950s, when John McCarthy coined it, yeah, it was a good word to distinguish it from natural intelligence, which we're not seeing very much of in our world right now anyway. So yeah, I'll pin my hopes on this non-biological entity, what I'll call alternative intelligence.
Brian: Adrian.
Adrian: Thanks to everyone for the great questions. I would say there are great opportunities and there are very significant risks. If we want to have trustworthiness, we really need to put a lot of thought into the details of what we want and need. That's gonna involve collecting the values everybody wants to see in these systems, and then making sure that we have enforceable governance mechanisms. And we need to do that soon, and everyone should get involved. Everyone's voice is really important here.
Brian: Thank you to this wonderful panel, Steph Wright, Professor Jeanette Winterson, Professor Neil Lawrence and Professor Adrian Weller.