Google’s controversial new AI Mode has falsely named an innocent Sydney Morning Herald graphic designer as the man who confessed to abducting and murdering three-year-old Cheryl Grimmer more than 50 years ago, in an egregious error that underscores the unreliability and danger of artificial intelligence as the technology reshapes how the internet works.
The designer had been working on a Herald story about a NSW MP’s use of parliamentary privilege to identify a man – dubbed “Mercury” – who confessed to the girl’s kidnapping and murder in 1971. Mercury cannot be named outside parliament due to NSW laws banning the identification of accused people who were juveniles at the time of the crime.
After the story was posted online on Thursday, a member of the public used Google’s AI Mode – a new feature that uses artificial intelligence to interpret and answer a question – to find out the suspect’s identity. The user entered the search terms: “Cheryl Grimmer Mercury name.” AI chatbots are programmed to come up with an answer, even if it is wrong; erroneous answers are known as hallucinations.
A search for the name of the man who confessed to Grimmer’s death (left), and a second search on that name (right). The name of the Herald employee has been redacted. Credit: Google
Unable to find a reported name for “Mercury”, AI Mode appears to have latched onto the designer’s name instead, given he was credited for an illustration and worked on redacting sections of a confession transcript which the Herald published as part of the story.
In this case, AI’s answer was not only wrong, but also highly defamatory, deeply distressing for the designer, and a potential violation of the Children (Criminal Proceedings) Act. “The individual referred to by the pseudonym ‘Mercury’ in the case of missing toddler Cheryl Grimmer is [the designer],” the AI answer said. “He was publicly identified by Legalise Cannabis MP Jeremy Buckingham under parliamentary privilege.” The Herald has opted not to repeat the designer’s name.
AI Mode’s answer went on to say that Grimmer was abducted from Fairy Meadow beach in Wollongong in January 1970, and that “[the designer] was given the pseudonym Mercury because he was a minor [age 17] when he first confessed to the abduction and murder in 1971”. The answer cited Wikipedia, the ABC and six other sites as its sources (none of which named “Mercury” or the designer).
The man known as Mercury is 15 years older than the designer and was last known to be living in Victoria.
Google’s AI Overviews feature, introduced in Australia last year, provides AI-generated summaries, and its new AI Mode allows more complex questions. The technology is radically reshaping how internet users find information, as they increasingly rely on the summary rather than click on the original source. News publishers point to a sharp decline in web traffic, as sites that once facilitated information searches now provide answers.
But the information is often inaccurate. As admitted by OpenAI – one of Google’s main competitors – last month, the systems “sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty”. Professor Toby Walsh from the University of NSW’s AI Institute said AI systems were not trying to find “what’s true, they’re trying to say what’s probable. They are not grounded in truth in any way at all.”
When AI confused the Herald’s designer with an accused murderer, “it’s not really understanding the sentences, it’s not understanding the names, it’s understanding that [the designer’s] name was in close proximity to this accusation,” said Walsh.
Cheryl Grimmer disappeared in 1970. Her body has never been found.
The editor of the Herald, Bevan Shields, said the false accusation was a stark example of the risks associated with AI’s interaction with news and current affairs.
“Media companies have been warning about the danger of this sort of debacle for some time, but the tech companies have buried their heads in the sand,” he said. “Now, we have a highly disturbing example of what happens when AI goes horribly wrong. Today it was a Herald staff member caught in the AI crosshairs, but tomorrow it could easily be any member of the public.”
“Google AI wants to pitch itself as a trusted source of information – but it just isn’t. The Herald does have very high levels of reader trust and this massive error by Google shows why newsrooms with actual journalists are more important than ever.”
Google did not answer specific questions about the incident, such as how it should be held accountable for such mistakes, or whether it would reconsider AI summaries. It provided a statement, saying it promptly investigated and removed the error. “When mistakes arise – like if our systems misinterpret web content – we use these instances to rigorously improve our systems, and we take action under our policies as appropriate,” it said.
As AI accelerates, its errors are becoming more serious. Earlier this month, major consultancy firm Deloitte agreed to hand back some of the money it was paid to produce a report for a government department after admitting the report contained false information generated by AI.
Last year, a German journalist was falsely accused by AI of escaping a psychiatric institution, being convicted of child abuse and preying on widowers – all crimes that he had covered as a court reporter. A Google AI Overview said that two Herald and The Age reporters had spent $22.8 million on six houses, after confusing them with the Melbourne couple they were writing about.
Michael Legg, a University of NSW law professor who has been looking at the impact of AI, said that while these so-called hallucinations can be removed once they occur, there is a debate about whether they can ever be prevented. “Some tech people have said you can’t solve it because it’s intrinsic to the way AI works – it’s about probabilities, it’s about being creative in what it generates,” he said.
Humans make errors, too. Traditional media, including this masthead, is known to have accidentally identified innocent people as criminals in stories or photo captions, or published information in contempt of court, or defamed people with false information. But there are mechanisms – fines, litigation, even criminal charges – to hold it accountable.
“We have to hold the platforms to higher standards, they have to own this problem themselves, they have to take responsibility for the content,” Walsh said. “In the past they said, ‘we’re just a platform, we’re not responsible for the content’. Now they’re actually making the content – they have to be held accountable.”
The man dubbed Mercury confessed to murdering Grimmer in 1971, a year after she disappeared from a beach near Wollongong. His confession sat in a box for decades until NSW Police charged him in 2017 with the murder. Two years later, a court ruled the confession was inadmissible as evidence and the murder trial collapsed.