Judge scolds Tim Burke's lawyer after AI produces error-filled court motion


TAMPA — The federal case of Tampa media figure Tim Burke — a dispute that mixes topics like Fox News, the American media and complex questions of free speech — took another all-too-modern turn this week, courtesy of artificial intelligence.

One of Burke’s lawyers relied on AI tools, including ChatGPT, to help research and write the defense’s latest motion to dismiss some of the charges against him. The result was a legal memo full of errors, nonexistent quotes and misstatements of law.

The problems didn’t go unnoticed by the judge overseeing the case.

Two days after the document was filed, U.S. District Judge Kathryn Kimball Mizelle ordered it to be stricken from the case record.

“Burke’s motion contains significant misrepresentations and misquotations of supposedly pertinent case law and history,” Mizelle wrote. The document cites cases “as saying things that they do not say — and for propositions of fact and law that they do not support.”

Mizelle allowed Burke’s lawyers to file a new motion with appropriate and accurate case citations and quotes. She also directed them to make a separate filing explaining why and how the errors occurred.

In their response, Burke’s legal team blamed an “ill-advised reliance on AI,” along with time constraints and geographic challenges with one attorney in Tampa and the other in Maryland.

Burke’s lead attorney, Mark Rasch, is a former federal prosecutor with expertise in cybersecurity and computer crime. Rasch drafted the motion, according to the defense’s response. Burke’s other attorney, Michael Maddux, was busy with an unrelated trial and did not review the motion before it was filed.

Rasch “assumes sole and exclusive responsibility for these errors and Mr. Burke bears no responsibility for these inaccuracies,” the response states.

The judge took no punitive action against the lawyers. But in a Tuesday afternoon court hearing, she gave them a stern warning against making future mistakes.

“I expect research to be done by human beings and cite checks to be done by human beings,” Mizelle said.

In her written order, the judge identified at least nine examples of nonexistent quotes and misstatements of case law.

One, which she highlighted as the most egregious, was a citation to an opinion in a 2001 case from the 11th Circuit Court of Appeals, known as United States v. Ruiz.

The defense’s motion to dismiss included a quote, ostensibly from that case, stating: “(A) statute that criminalizes otherwise innocent conduct must clearly delineate the line between permissible and prohibited conduct and cannot constitutionally require the defendant to prove facts that exculpate him.”

That quote does not appear anywhere in that ruling. Nor does the Ruiz case support that proposition.

Mizelle also listed at least seven quotes incorrectly attributed to cases that she said might otherwise support Burke’s arguments. She noted other “miscellaneous problems,” citing one example of a real quote attributed to the wrong court.

In his response, Rasch wrote that he conducted “substantial legal research and writing” to prepare the motion. He used the AI features of Westlaw, an online legal research service, along with Google Scholar, an academic search engine, and “deep research” with the “Pro” version of ChatGPT.

“A combination of these tools produced the final product,” the response states.

Given the size, complexity and scope of the document, Rasch wrote that he should have asked for additional time to file it. He apologized to the court, Maddux and Burke.

In court Tuesday, Mizelle said she did not believe the errors were due to a lack of zealous legal advocacy from Burke’s team. She said she believes Burke is receiving good representation from both of his attorneys.

Burke, 46, is a nationally recognized media consultant who has done work for major companies like HBO and ESPN. He is well known for his ability to find and promote obscure online content.

He is charged with 14 federal crimes related to his acquisition and distribution of videos he found online, including some that depicted unaired Fox News footage. Federal prosecutors accused him of intruding into private computer systems to obtain the videos.

His attorneys have asserted that Burke found the videos by accessing them with credentials that were available on a public website. They’ve argued that he is a journalist who brought to light material in the public interest. They say the case against him infringes on his First Amendment rights.

The case is set for a jury trial in September.

As in many other professions, lawyers and courts have in recent years incorporated artificial intelligence tools into their work. In a survey conducted last year by Thomson Reuters, 63% of lawyer respondents reported that they have used AI for work, with 12% saying they use it regularly.

Burke’s is far from the first case to see AI-generated phantom quotes and misstatements of law.

Earlier this year, the Florida-based personal injury giant Morgan & Morgan sent an email to its attorneys warning that AI could place false information in court documents. The warning came after a brief in a case the firm handled was found to have included citations to eight cases that did not exist. A chatbot used to write the brief generated dates and case numbers that were likewise fictional.

The lawyer responsible for that error apologized in court, acknowledging a misplaced reliance on AI. A judge barred the attorney from further work on the case and fined the firm $1,000.

Earlier this month, a California judge ordered a plaintiff’s law firm to pay $31,000 after it filed briefs that the judge discovered contained numerous false and inaccurate citations generated by AI, the MIT Technology Review reported.

Maura Grossman, a Buffalo, New York, attorney and computer science professor at the University of Waterloo in Ontario, Canada, has been outspoken about the problems AI causes for courts. In an email to the Tampa Bay Times, Grossman wrote that she doesn’t believe the technology itself is a problem, but overreliance on it is.

Most attorneys, she noted, would never file a brief that had been drafted by a clerk or junior associate without thoroughly checking it for accuracy. The same rigor should apply when using AI.

“I am a bit surprised by the persistence of the error given how much negative publicity it has gotten in the legal industry,” Grossman wrote. “You can understand it a bit more with self-represented litigants who don’t have access to the same case law databases that lawyers do, but lawyers have no real excuse.”

Grossman said lawyers are getting “sucked in by all the hype and the fluency and authority” of the technology without considering its limitations. She believes the solution is more education.

“It may be that the (return on investment) is less if you have to check every last word, but that is where we are right now,” she wrote. “Failure to do so is very risky.”
