AI Use
The fabricated citations originated from a ChatGPT query submitted by an unlicensed law clerk at Petitioner's law firm. Neither Counsel reviewed the petition’s contents before filing. The firm had no AI use policy in place at the time, though it implemented one after the order to show cause was issued.
Hallucination Details
Chief among the hallucinations was Royer v. Nelson, which Respondents demonstrated existed only in ChatGPT’s output and in no official database. Other cited cases were also inapposite or unverifiable. Petitioner’s counsel admitted fault and stated they were unaware AI had been used during drafting.
Ruling/Sanction
The court issued three targeted sanctions:
- Attorney fees: Respondents’ counsel are to submit an itemized bill; Counsel must pay within 10 days of receipt
- Client refund: Petitioner’s counsel must refund all fees paid by Mr. Garner in relation to the defective petition
- Charitable payment: Counsel must donate $1,000 to “and Justice for all” within 14 days and file proof of payment with the court
Key Judicial Reasoning
The panel (per curiam) emphasized that the conduct, while not malicious, still diverted judicial resources and imposed unnecessary burdens on the opposing party. Unlike in Mata or Hayes, the attorneys here quickly admitted the issue and cooperated, which the court acknowledged. Nonetheless, the submission of fabricated law—especially under counsel's signature—breaches core duties of candor and verification, warranting formal sanctions. The court warned that Utah’s judiciary cannot be expected to verify every citation and must be able to trust lawyers to do so.
" While in our discretion we will not impose sanctions on petitioner, who is proceeding pro se, we warn petitioner that continuing to cite nonexistent caselaw could result in the imposition of sanctions in the future. "
AI Use
First Counsel, who had not previously used AI for legal work, used an unspecified AI tool to assist with drafting a response. He failed to verify the citation before submission. Second Counsel, as local counsel, filed the response without checking the content or accuracy, even though he signed the document.
Second Counsel then said that he had initiated "procedural safeguards to prevent this error from happening again by ensuring he, and local counsel, undertake a comprehensive review of all citations and arguments filed with this and every court prior to submission to ensure their provenance can be traced to professional non-AI sources."
Hallucination Details
The hallucinated case was cited as controlling Delaware authority on privilege assignments. When challenged by Plaintiff, Defendants initially filed a bare withdrawal without explanation. Only upon court order did they disclose the AI origin and acknowledge the error. Mr. Lord personally apologized to the court and opposing counsel.
Ruling/Sanction
Judge William Matthewman imposed a multi-part sanction:
- Attorneys’ fees and costs incurred by Plaintiff in rebutting the hallucinated citation—jointly payable by Mr. Lord and Mr. Bello
- Required CLE on AI ethics within 30 days, with proof of completion due by June 20, 2025
- Monetary fines: $1,000 (First Counsel) and $500 (Second Counsel), payable to the Court registry
The Court emphasized that the submission of hallucinated citations—particularly when filed and signed by two attorneys—constitutes reckless disregard for procedural and ethical obligations. Though no bad faith was found, the conduct was sanctionable under Rule 11, § 1927, the Court’s inherent authority, and local professional responsibility rules.
Key Judicial Reasoning
The Court distinguished this case from more egregious incidents (O’Brien v. Flick, Thomas v. Pangburn) because the attorneys admitted their error and did not lie or attempt to cover it up. However, the delay in correction and failure to check the citation in the first place were serious enough to warrant monetary penalties and educational obligations.
AI Use
The plaintiff, proceeding pro se, cited “Darling v. Linde, Inc., No. 21-cv-01258, 2023 WL 2320117 (D. Or. Feb. 28, 2023)” in briefing. The court stated it could not locate the case in any major legal database or via internet search and noted this could trigger Rule 11 sanctions if not based on a reasonable inquiry. The ruling cited Saxena v. Martinez-Hernandez as a cautionary example involving AI hallucinations, suggesting the court suspected similar conduct here.
AI Use
Counsel filed a motion to dismiss appeal that cited “Greenspan v. Greenspan, 121 Hawai‘i 60, 71, 214 P.3d 557, 568 (App. 2009).” The court found that:
- No Hawai‘i case titled Greenspan v. Greenspan exists
- The citations to “121 Hawai‘i 60” and “214 P.3d 568” were in fact to other real cases (Estate of Roxas v. Marcos and Colorado Court of Appeals cases), suggesting a garbled AI-generated fabrication
- Counsel admitted delegating the brief to a per diem attorney and failing to verify the citation before filing
Ruling/Sanction
- $100 sanction imposed on counsel personally
- Payment to be made to the Supreme Court Clerk of Hawai‘i within seven days
- DiPasquale ordered to file a declaration attesting to payment
The amount reflects counsel’s candor and corrective measures, but the court noted that federal courts have imposed higher sanctions in similar cases.
Key Judicial Reasoning
The court cited Mata v. Avianca and Wadsworth v. Walmart, holding that “a fake opinion is not ‘existing law’” and using one violates HRCP Rule 11(b)(2). The court stressed that the signing attorney is responsible for verifying filings, regardless of delegation or AI use.
AI Use
In a filing related to a third-party notice, the defendant cited a judgment that did not exist. The judge clarified that this was not simply a mistaken citation or party confusion, but rather a reference to an entirely fictional judgment. The court explicitly stated: “It is not clear how such an error occurs, except through the use of artificial intelligence.”
Ruling/Sanction
The court permitted the defendant to proceed with the third-party notice but ordered partial costs (₪1,200) to be paid to the plaintiff due to procedural irregularities. The judge demanded a formal explanation of how the fictitious citation was introduced, in order to prevent recurrence.
Key Judicial Reasoning
While the procedural error did not warrant barring the defendant’s claim against a third party, the court emphasized that referencing a fictional legal source is a serious issue requiring scrutiny. The opinion signals a growing judicial intolerance for unverified AI-assisted legal drafting in Israeli courts.
AI Use
A paralegal used public search tools and unspecified “AI-based research assistants” to generate legal citations. The resulting hallucinated cases were passed to Ms. Stillman, who filed them without verification. Four out of eight cited cases were found to be fictitious:
- London v. Polish Slavic Fed. Credit Union, No. 19-CV-6645
- Rosario v. 2022 E. Tremont Hous. Dev. Fund Corp., No. 21-CV-9010
- Paniagua v. El Gallo No. 3 Corp., No. 22-CV-7073
- Luna v. Gon Way Constr., Inc., No. 20-CV-893
Ruling/Sanction
The court imposed a $1,000 sanction against Counsel and her firm. Counsel was ordered to serve the sanction order on her client and file proof of service. The court declined harsher penalties, crediting her swift admission, apology, and internal reforms.
Key Judicial Reasoning
The court found subjective bad faith due to the complete absence of verification. It cited a range of other AI-related sanction decisions, underscoring that even outsourcing to a “diligent and trusted” paralegal is not a defense when due diligence is absent.
AI Use
Bandla denied using AI, claiming instead to have relied on Google searches to locate “supportive” case law. He admitted that he did not verify any of the citations and never checked them against official sources. The court found this unacceptable, particularly from someone formerly admitted as a solicitor.
Hallucination Details
Bandla’s submissions cited at least 27 cases that the Solicitors Regulation Authority (SRA) could not locate. He presented summaries of and quotations from these cases in formal submissions; when pressed in court, he admitted he had never read the judgments, let alone verified their existence.
Ruling/Sanction
The High Court refused the application for an extension of time, finding Bandla’s explanations inconsistent and unreliable. The court independently struck out the appeal on grounds of abuse of process due to the submission of fake authority. It imposed indemnity costs of £24,727.20. The judge emphasized that even after being alerted to the fictitious nature of the cases, Bandla neither withdrew nor corrected them.
Key Judicial Reasoning
The court found Bandla’s conduct deeply troubling, noting his previous experience as a solicitor and his professed commitment to legal standards. It held that the deliberate or grossly negligent inclusion of fake case law—especially in an attempt to challenge a disciplinary disbarment—was an abuse requiring strong institutional response.
AI Use
The court found that several of the cases cited by the plaintiff in her briefing opposing Officer Hill’s qualified immunity defense did not exist. Although Newbern suggested the citations may have been innocent mistakes, she did not challenge the finding of fabrication. No AI tool was admitted or named, but the structure and specificity of the invented cases strongly suggest generative AI use.
Hallucination Details
The fabricated authorities were not background references, but “key authorities” cited to establish that Hill’s alleged conduct violated clearly established law. The court observed that the fake cases initially appeared to be unusually on-point compared to the rest of plaintiff’s citations, which raised suspicion. Upon scrutiny, it confirmed they did not exist.
Ruling/Sanction
The court dismissed the federal claims against Officer Hill as a partial sanction for plaintiff’s fabrication of legal authority and failure to meet the burden under qualified immunity. However, it declined to dismiss the entire case, citing the interest of the minor child involved and the relevance of potential state law claims. It permitted discovery to proceed on those claims to determine whether Officer Hill acted with malice or engaged in other conduct falling outside the scope of Mississippi Tort Claims Act immunity.
Key Judicial Reasoning
The court found that plaintiff’s citation of fictitious cases undermined her effort to meet the demanding “clearly established” standard. It rejected her claim that the fabrication was an innocent mistake and viewed it in light of her broader litigation conduct, which included excessive filings and disregard for procedural limits. Still, recognizing the stakes, the court preserved state law discovery as a potential pathway to factual resolution.
AI Use
The petition’s pages were marked “Criado com MobiOffice” (“Created with MobiOffice”). The STF verified that MobiOffice includes a built-in AI writing assistant. Combined with the inclusion of fictitious citations, this led the Court to conclude that AI had been used and its output never reviewed. The judge characterized this as reckless conduct.
Hallucination Details
- Claimed violations of STF precedents including RE 464.867/SP and RE 226.855/RS, which were either inapplicable or misrepresented
- Claimed Súmula Vinculante 6 said something entirely false (its actual content concerns military service remuneration)
- Cited judgments ARE 1.218.084 AgR and RE 328.111/DF as relevant when they were either misquoted or irrelevant
- Court concluded these references were invented or misrepresented, “false statements intended to mislead”
Ruling/Sanction
- Rejected the complaint as manifestly inadmissible
- Found that counsel likely used AI and submitted the petition without review
- Ordered notification to both the national and Bahia sections of the OAB
- Declared the petitioner litigated in bad faith under Article 80, V, of the Brazilian Civil Procedure Code
- Imposed a procedural penalty of double the initial court costs
- Ordered referral to dívida ativa (federal collections) if not paid
Key Judicial Reasoning
The STF emphasized that false statements regarding binding case law and the misuse of AI-generated content—especially in constitutional matters—endanger judicial integrity. While the AI tool was not expressly named as the hallucination source, its involvement was evident and its uncritical use characterized as grossly negligent.
AI Use
Counsel explained that the hallucinated citations were included in a draft intended for personal legal research and learning, which was mistakenly filed with the court. This constituted an implicit admission that generative AI tools were involved.
Ruling/Sanction
While Judge Itay Katz did not impose personal costs, he referred the matter to the Legal Department of the Judicial Authority to determine whether further steps—including referral to the Israel Bar Association Ethics Committee—should be taken. The court emphasized this was done as a gesture of leniency with the hope that such behavior will not recur.
Key Judicial Reasoning
The court referred to several other recent Israeli cases to underscore the growing recognition of AI hallucination risk in legal practice. It reiterated the requirement for attorneys to meticulously verify any citation before submission and warned that future similar instances may not receive such lenient treatment.
AI Use
Neusom told the grievance committee that he “may have used artificial intelligence” in preparing filings, and that any hallucinated cases were not deliberately fabricated but may have come from AI tools. The filings in question included a notice of removal and a motion for summary judgment. The judge later noted a pattern of citations inconsistent with established case law and unsupported by known databases.
Hallucination Details
Citations included cases that either did not exist or were grossly mischaracterized. Notably:
- Southern Specialties, Inc. v. Pulido Produce, Inc. – no such case found in Westlaw, Lexis, or PACER
- Trilogy Communications v. Times Fiber – cited in support of breach of contract when it was a patent matter involving no such principles
Neusom failed to produce the full texts of the cited cases when requested and instead filed a 721-page exhibit in violation of court orders.
Ruling/Sanction
The court adopted the grievance committee’s recommendation and imposed a one-year suspension. Neusom is prohibited from accepting new federal cases in the Middle District of Florida during the suspension and must:
- Notify existing clients and the court of his suspension
- File a compliance affidavit within 30 days
- Complete appropriate CLE and counseling programs
- Remain in good standing with the Florida Bar
- Apply for reinstatement only after certifying compliance
Key Judicial Reasoning
The court found that Neusom violated Rules 4-1.3, 4-3.3(a)(3), 4-3.4(c), and 4-8.4(c) of the Florida Rules of Professional Conduct. His failure to verify AI-generated content, compounded by noncompliance with orders and false statements to opposing counsel, demonstrated a pattern of recklessness and dishonesty. The court emphasized that federal proceedings require a high standard of diligence and that invoking AI cannot excuse failure to meet professional obligations.
AI Use
In opposing the return of a seized mobile phone, the prosecution cited a non-existent statutory provision allegedly defining what qualifies as an “institutional computer.” The judge identified the law as fictional and attributed its creation to generative AI, noting that it does not appear in any legal database or government source. The court referred to this as a product “created by artificial intelligence.”
Hallucination Details
The prosecution cited a statute regarding institutional computer definitions which, upon investigation, did not exist in Israeli law. The judge conducted internet and database searches to confirm its nonexistence and criticized the error, remarking: “If I thought I had seen everything in 30 years on the bench, I was mistaken.”
Ruling/Sanction
The judge declined to sanction the prosecution but strongly rebuked the conduct, calling it embarrassing and improper.
Key Judicial Reasoning
The judge stressed that citing phantom laws undermines public confidence and judicial efficiency. Even absent malice, reliance on fictitious AI-generated legal references is unacceptable. The judgment did not penalize the prosecution but underscored the need for due diligence and warned of reputational damage.
AI Use
GAO requested clarification after identifying case citation irregularities. The protester confirmed that their representative was not a licensed attorney and had relied on a combination of public tools, AI-based platforms, and secondary summaries, which produced fabricated or misattributed citations.
Hallucination Details
Examples included:
- GAO B-numbers with no corresponding decision
- CPD citations that did not match the referenced holding
- Alleged direct quotations not found in any GAO decision
The fabrications mirrored patterns typical of AI hallucinations.
Ruling/Sanction
Although the protest was dismissed on academic grounds, GAO addressed the citation misconduct. It did not impose sanctions in this case but warned that future submission of non-existent authority could lead to formal disciplinary action—including dismissal, cost orders, and bar referrals (in the case of attorneys).
Hallucination Details
The applicant’s factum included citations to:
- Alam v. Shah, which linked to an unrelated case (Gatoto v. 5GC Inc.)
- DaCosta v. DaCosta, which returned a 404 error
- Johnson v. Lanka, cited for a proposition directly contradicted by the decision
- Meschino Estate v. Meschino, which actually linked to a wrongful dismissal case (Antonacci v. Great Atlantic & Pacific Co.)
The judge noted these citations bore “hallmarks of an AI response” and described the conduct as possibly involving “hallucinations” from generative AI. The court ordered counsel to appear to explain whether she knowingly relied on AI and failed to verify the content. No clarification or correction was received from counsel after the hearing.
Ruling/Sanction
The motion proceeded, but Justice Myers ordered Ms. Lee to appear and show cause why she should not be held in contempt. He cited Zhang v. Chen, 2024 BCSC 285, for the principle that submission of fake cases is an abuse of process, and reserved costs pending the outcome of the contempt process.
Key Judicial Reasoning
The court emphasized the fundamental duty of litigation lawyers not to mislead the court. It affirmed the need for human verification of AI-generated content and stated that submitting fabricated citations may amount to contempt in the face of the court.
AI Use
Counsel used CoCounsel, Westlaw’s AI tools, and Google Gemini to generate a legal outline for a discovery-related supplemental brief. The outline contained hallucinated citations and quotations, which were incorporated into the filed brief by colleagues at both Ellis George and K&L Gates. No one verified the content before filing. After the Special Master flagged two issues, counsel refiled a revised brief—but it still included six AI-generated hallucinations and did not disclose AI use until ordered to respond.
Hallucination Details
At least two cited cases did not exist at all, including Booth v. Allstate Ins. Co., 198 Cal.App.3d 1357 (1989), to which a fabricated quotation was attributed. Quotes attributed to National Steel Products Co. v. Superior Court, 164 Cal.App.3d 476 (1985) were misquoted or fabricated, and several additional misquotes and garbled citations appeared across the three submitted versions of the brief. The revised versions attempted to silently “fix” these errors without disclosing their origin in AI output.
Ruling/Sanction
The Special Master (Judge Wilner) struck all versions of Plaintiff’s supplemental brief, denied the requested discovery relief, and imposed:
- $26,100 in fees to reimburse Defendant for Special Master costs
- $5,000 in additional attorney’s fees to Defendant
- Total monetary sanction: $31,100, payable jointly and severally by Ellis George LLP and K&L Gates LLP
- No sanctions against individual attorneys due to candid admissions and remedial action, but strong warning issued
Key Judicial Reasoning
The submission and re-submission of AI-generated material without verification, especially after warning signs were raised, was deemed reckless and improper. The court emphasized that undisclosed AI use that results in fabricated law undermines judicial integrity. While individual attorneys were spared, the firms were sanctioned for systemic failure in verification and supervision. The Special Master underscored that the materials nearly made it into a judicial order, calling that prospect “scary” and demanding “strong deterrence.”
AI Use
The court observed that “some of the cases that plaintiff cites… do not exist,” and noted it had “tried, in vain,” to find them. While no explicit AI use is admitted by the plaintiff, the pattern and specificity of the fabricated citations are characteristic of LLM-generated hallucinations.
Ruling/Sanction
The court dismissed all five causes of action—including negligence, tortious interference, aiding and abetting fraud, declaratory judgment, and breach of implied covenant of good faith and fair dealing—as either untimely or duplicative/deficient on the merits. It declined to impose sanctions but explicitly invoked Dowlah v. Professional Staff Congress, 227 AD3d 609 (1st Dept. 2024), and Will of Samuel, 82 Misc 3d 616 (Sur. Ct. 2024), to warn plaintiff that any future citation of fictitious cases would result in sanctions.
Key Judicial Reasoning
Justice Jamieson noted that while the court is “sensitive to plaintiff's pro se status,” that does not excuse disregard of procedural rules or the submission of fictitious citations. The court emphasized that its prior decision in related litigation in 2022 undermined plaintiff’s tolling claims, and that Executive Order extensions during the COVID-19 pandemic did not rescue otherwise-expired claims. The hallucinated citations failed to salvage plaintiff’s fraud and tolling theories, and their use was treated as an aggravating—though not yet sanctionable—factor.
The court held that: "The use of fictitious quotes or cases in filings may subject a party, including a pro se party, to sanctions pursuant to Federal Rule of Civil Procedure 11, as 'pro se litigants are subject to Rule 11 just as attorneys are.'"
"For that principal [sic] Qamar cites a case, Gunn v. McKinney, 259 F.3d 824, 829 (7th Cir. 2001), which neither defense counsel nor the Court has been able to locate. The Court reminds Qamar that Federal Rule of Civil Procedure 11 applies to pro se litigants, and sanctions may result from such conduct, especially if the citation to Gunn was not merely a typographical or citation error but instead referred to a non-existent case. By presenting a pleading, written motion, or other paper to the Court, an unrepresented party acknowledges they will be held responsible for its contents. See Fed. R. Civ. P. 11(b)."
AI Use
Counsel denied using AI directly and attributed the hallucinations to “Google and Google Scholar” searches by a junior research assistant. However, the court found the citation pattern highly characteristic of generative AI hallucinations, including plausible-sounding but non-existent authority names and improper formatting. Counsel acknowledged a lack of adequate supervision and admitted that the cited authorities were never verified nor included in the bundle.
Hallucination Details
Seven cited authorities were found to be fictitious or mischaracterized, including:
- BWIA v. Ramnarine (TT Industrial Court, 2005)
- National Petroleum Marketing Co. v. Brewster (TT 2007)
- Horner v. KMW [2000] IRLR 814
- Jones v. Manchester Corporation [1952] 2 QB 852 (used for unrelated point)
- London School of Economics v. Dr Don [2016] EAT
- Ishmael v. NIPDEC (TT, 2014)
- BWIA v. Hollis (TT, 2001)
These were used to support the implied obligation to repay employer-sponsored training, the core issue of the case. None were available in legal databases or official archives, and no hard copies were ever submitted.
Ruling/Sanction
While the court awarded judgment for the Claimant on the breach of contract claim, it found the citation misconduct egregious and referred the matter to the Disciplinary Committee of the Law Association. The Court noted that hallucinated citations undermine judicial integrity and must be proactively prevented.
Key Judicial Reasoning
Justice Westmin James emphasized that lawyers must not submit unverifiable or fictitious authority, whether generated by AI or not. He underscored that the legal system depends on the accuracy of submissions, and that even unintentional use of hallucinated material violates the duty of candour and may constitute professional misconduct.
AI Use
The court stated that “Moales may have used artificial intelligence in drafting his submissions,” citing widespread concerns over AI hallucination. It noted that several citations in his complaint and show-cause response were plainly incorrect or irrelevant. While Moales did not admit AI use, the court cited Strong v. Rushmore Loan Mgmt. Servs., 2025 WL 100904 (D. Neb.) and Mata v. Avianca to contextualize its concern.
Hallucination Details
Moales cited Ernst & Ernst v. Hochfelder, 425 U.S. 185 (1976), and S.E.C. v. W.J. Howey Co., 328 U.S. 293 (1946), as supporting the existence of a federal common law fiduciary duty, an inaccurate legal proposition. The court characterized such misuses as “the norm rather than the exception” in Moales’s submissions. It stopped short of identifying all misused authorities but made clear that the inaccuracies were pervasive.
Ruling/Sanction
The complaint was dismissed for lack of subject matter jurisdiction under Rule 12(h)(3). Moales was permitted to file an amended complaint by May 28, 2025, but was warned that future filings must be factually and legally accurate. The court declined to reach the venue issue or impose immediate sanctions but warned Moales that misrepresentation of law may violate Rule 11.
Key Judicial Reasoning
The court found no basis for federal question jurisdiction and rejected Moales’s reliance on the Declaratory Judgment Act, constructive trust theories, and a nonexistent “federal common law of securities.” It also held that Moales failed to plausibly allege the amount in controversy necessary for diversity jurisdiction. The inaccurate legal citations further undermined the credibility of his filings. The court’s concluding section served as both an educational warning and a procedural admonition regarding AI use.
AI Use
The plaintiff’s attorney denied deliberate use of generative AI, claiming the wrong file numbers were inserted by mistake. The court rejected this explanation, finding the hallucinated decisions did not exist in any legal archive and could not plausibly arise from mere misnumbering. The court accepted the defendant’s assertion that the fabricated citations originated from generative AI.
Hallucination Details
Out of five rulings cited in the petition, three were not found in any legal database. Two additional cases were filed after the hearing, but neither matched the original citations or contained the propositions advanced in the pleading. The court found the overall drafting pattern aligned with generative AI hallucination phenomena.
Ruling/Sanction
Judge Merav Eliyahu dismissed the petition and imposed personal costs of ₪1,500 against Plaintiff’s counsel (payable to the state) and ₪3,500 (payable to the opposing party). She cited Supreme Court precedent and ethical commentary emphasizing the risks of hallucinated legal drafting. She emphasized that lawyers must not rely blindly on AI tools and must always verify the authenticity of legal authorities cited in pleadings.
Key Judicial Reasoning
The judge found that legal pleadings are the “foundational documents of judicial proceedings” and must be “accurate, reliable, and competently drafted.” Submitting fictitious judgments constitutes not only a procedural abuse but an ethical breach. Even absent bad faith, failure to verify AI-generated legal content breaches a lawyer’s core obligations.
AI Use
Counsel used ChatOn to rewrite a reply brief with case law, under time pressure and without verifying the outputs. The five cases the tool supplied did not exist; the citations were entirely fictional. Counsel later admitted this in a sworn declaration and at the hearing, describing her actions as a lapse caused by workload and inexperience with AI.
Hallucination Details
Fabricated cases included:
- Klein v. E.I. Du Pont de Nemours & Co., 406 F.2d 1004 (cited case does not exist)
- Gordon v. N.Y. Cent. R.R. Co., 202 F. Supp. 2d 290
- Mitchell v. JCG Industries, 2010 WL 11627832
- Hollander v. Sweeney, 2005 WL 19904045
- Davis v. S. Farm Bureau Cas. Ins. Co., 2019 WL 3452601
None of these cases matched any legal source. Counsel filed them as part of a sworn statement under penalty of perjury.
Ruling/Sanction
The court imposed a $1,000 sanction payable to the Clerk and ordered counsel to serve the order on her client and file proof of service. The court acknowledged her sincere remorse and remedial CLE activity but emphasized the seriousness of submitting hallucinated cases under oath. Sanctions were tailored for deterrence, not punishment.
Key Judicial Reasoning
Quoting Park v. Kim and Mata v. Avianca, the court held that submitting legal claims based on nonexistent authorities without checking them constitutes subjective bad faith. Signing a sworn filing without knowledge of its truth is independently sanctionable. Time pressure is not a defense. Lawyers cannot outsource core duties to generative AI and disclaim responsibility for the results.
AI Use
Plaintiff submitted a motion to disqualify opposing counsel that cited multiple non-existent cases. She offered no clarification about how the citations were obtained or whether she had attempted to verify them. The court noted this failure and declined to excuse the misconduct, though it stopped short of attributing it directly to AI tools.
Hallucination Details
The court reviewed Plaintiff’s motion and found that some of the cited cases did not exist. Despite being ordered to show cause, Plaintiff responded only with general statements about her good faith and complaints about perceived procedural unfairness, without addressing the origin or verification of the fake cases.
Ruling/Sanction
The court dismissed the case for lack of subject matter jurisdiction and independently dismissed it as a sanction for bad-faith litigation under Rule 11. It found Plaintiff’s conduct—submitting fictitious legal authorities and refusing to take responsibility for them—warranted dismissal, even if monetary sanctions were not appropriate. The court cited Mata v. Avianca, Morgan v. Community Against Violence, and O’Brien v. Flick as relevant precedents affirming the sanctionability of hallucinated case law.
Key Judicial Reasoning
Judge Hall held that Plaintiff’s conduct went beyond excusable error. Her submission of fabricated cases, refusal to explain their origin, and attempts to shift blame to perceived procedural grievances demonstrated bad faith. The court concluded that dismissal—though duplicative of the jurisdictional ground—was warranted as a standalone sanction to deter future abuse by similarly situated litigants.
AI Use
Counsel admitted at a hearing that he used a generative AI tool to draft Defendants’ opposition to a motion in limine, without performing a manual cite-check. This came to light only after repeated questioning by the court.
Hallucination Details
The Opposition included nearly thirty major defects:
- Nonexistent cases (e.g., fabricated citations like Perkins v. Fed. Fruit & Produce Co., 945 F.3d 1242)
- Real cases misattributed to incorrect jurisdictions (e.g., Eastern District of Kentucky decisions wrongly cited as District of Colorado cases)
- Paraphrased material falsely quoted as direct text
- Fabricated or materially inaccurate statements of law allegedly drawn from cited authorities
Ruling/Sanction
The court issued a show cause order requiring defendants’ counsel to explain why sanctions and disciplinary referrals should not be imposed. The court is considering monetary sanctions, personal discipline for the counsel involved, and mandated disclosure of misconduct to defendant Michael Lindell.
Key Judicial Reasoning
The judge stressed that generative AI is no substitute for professional competence. Rule 11 requires a reasonable inquiry, and lawyers remain personally responsible for the contents of their filings. The court treated Kachouroff’s admissions and excuses with marked skepticism, indicating that misconduct was not merely negligent but bordered on reckless or deliberate disregard for ethical duties.
Although no immediate sanctions were imposed, Magistrate Judge Ho explicitly warned Plaintiff that future misconduct of this nature may violate Rule 11 and lead to consequences.
AI Use
Mr. Ferris admitted at the April 8, 2025 hearing that he used ChatGPT to generate the legal content of his filings and even the statement he read aloud in court. The filings included at least seven entirely fictitious case citations. The court noted the imbalance: it takes a click to generate AI content but substantial time and labor for courts and opposing counsel to uncover the fabrications.
Hallucination Details
The hallucinated cases included federal circuit and district court decisions, complete with plausible citations and jurisdictional diversity, crafted to lend credibility to Plaintiff’s intellectual property and employment-related claims. These false authorities were submitted both in the complaint and in opposition to Amazon’s motion to dismiss.
Ruling/Sanction
The court found a Rule 11 violation and, while initially inclined to dismiss the case outright, chose instead to impose a compensatory monetary sanction. Amazon is entitled to submit a detailed affidavit of costs directly attributable to rebutting the false citations. The final monetary amount will be set in a subsequent order.
Key Judicial Reasoning
Judge Michael P. Mills condemned the misuse of generative AI as a serious threat to judicial integrity. Quoting Kafka (“The lie made into the rule of the world”), the court lamented the rise of “a post-truth world” and framed Ferris as an “avatar” of that dynamic. Nevertheless, it opted for the least severe sanction consistent with deterrence and fairness: compensatory costs under Rule 11.
AI Use
Counsel filed opposition briefs citing two nonexistent cases. The court suspected generative AI use based on "hallucination" patterns, but Counsel neither admitted AI use nor explained the citations satisfactorily. His failure to comply with the court's standing AI order aggravated the sanctions.
Hallucination Details
Two fake cases were cited; their citation numbers and Westlaw references pointed to irrelevant or unrelated cases. No affidavit or real case documents were produced when ordered.
Ruling/Sanction
Counsel's appearance was struck with prejudice. The Court ordered notification to the State Bar of Pennsylvania and the Eastern District Bar, and required Counsel to inform his client, Bevins, of the sanctions and of the need for new counsel if re-filing.
Key Judicial Reasoning
The judge emphasized that citing nonexistent cases—even inadvertently—is a violation of Rule 11(b)(2), constituting at least negligence. Compliance with the Court’s AI Standing Order was mandatory. Self-certification obligations under Federal and Local Rules remain fully in force despite technological assistance.
The court held that: "It is likely that Appellant employed argument generated by an artificial intelligence (AI) program which contained the fictitious case citation and cautions Appellant that many harms flow from the use of non-existent case citations and fake legal authority generated by AI programs, including but not limited to the waste of judicial resources and time and waste of resources and time of the opposing party. Were courts to unknowingly rely upon fictitious citations, citizens and future litigants might question the validity of court decisions and the reputation of judges. If, alternatively, Appellant's use of a fictitious case was not the result of using an AI program, but was instead a conscious act of the Appellant, Appellant's action could be deemed a fraud on the Court. Appellant is hereby expressly warned that submission of fictitious case authorities may subject Appellant to sanctions under the S.C. Frivolous Proceedings Act, S.C. Code Ann. § 15-36-10(Supp. 2024)."
AI Use
The petitioner submitted a motion to compel discovery that contained several fabricated or misleading citations. The court explicitly stated that the motion bore hallmarks of generative AI use and referenced ChatGPT’s known risk of “hallucinations.” Although the petitioner did not admit AI use, the court found the origin clear and required future filings to include a GenAI usage certification.
Hallucination Details
Examples included:
- Terramar Retail Centers, LLC v. Marion #2-Seaport Trust – cited for discovery principles it did not contain
- Deutsch v. ZST Digital Networks, Inc. – quoted for a sentence not found in the opinion
- Production Resources Group, LLC v. NCT Group, Inc. – attributed with a quote that appears nowhere in the case or legal databases
The court verified via Westlaw that searches for some of the quoted phrases returned only the petitioner’s own motion.
Ruling/Sanction
Motion to compel denied with prejudice. No immediate monetary sanction imposed, but petitioner was warned that further submission of fabricated authority may result in sanctions including monetary penalties or dismissal. Future filings must include a certification regarding the use of generative AI.
Key Judicial Reasoning
The Vice Chancellor emphasized that GenAI can benefit courts and litigants, but careless use that results in fictitious legal authorities wastes resources and harms judicial integrity. Misleading the court, even unintentionally, constitutes sanctionable conduct under Delaware standards, and will not be tolerated if repeated.
AI Use
The judgment states that the only explanation for the fabricated cases, other than deliberate invention, was the use of artificial intelligence.
Hallucination Details
The following five nonexistent cases were cited:
- R (El Gendi) v Camden [2020] EWHC 2435 (Admin)
- R (Ibrahim) v Waltham Forest [2019] EWHC 1873
- R (H) v Ealing [2021] EWHC 939 (Admin)
- R (KN) v Barnet [2020] EWHC 1066 (Admin)
- R (Balogun) v Lambeth [2020] EWCA Civ. 1442
Ruling/Sanction
The court imposed wasted costs orders against both barrister and solicitor, reduced the claimant’s recoverable costs, and ordered the judgment to be provided to the Bar Standards Board (BSB) and the SRA.
Key Judicial Reasoning
The judge found that five cited cases did not exist and were not discoverable on Westlaw. The statement of facts and grounds was deemed misleading, and the conduct improper and unreasonable. The Court noted that this conduct met the Denton test for serious and significant breach and ordered sanctions accordingly.
AI Use
Counsel hired a freelance attorney through LAWCLERK to prepare a filing. He made minimal edits and admitted not verifying any of the case law before signing. The filing included multiple fabricated cases and misquoted others. The court concluded these were AI hallucinations, likely produced by ChatGPT or similar.
Hallucination Details
Examples of non-existent cases cited include:
Moncada v. Ruiz, Vega-Mendoza v. Homeland Security, Morales v. ICE Field Office Director, Meza v. United States Attorney General, Hernandez v. Sessions, and Ramirez v. DHS.
All were either entirely fictitious or misquoted real decisions.
Ruling/Sanction
The Court sanctioned Counsel by:
- Ordering a $1,500 fine
- Requiring a 1-hour CLE on AI/legal ethics
- Ordering him to self-report to the New Mexico and Texas bars
- Ordering him to report the freelance lawyer to the New York bar
- Requiring notification of LAWCLERK
- Requiring proof of compliance by May 15, 2025
Key Judicial Reasoning
The court emphasized that counsel’s failure to verify cited cases, coupled with blind reliance on subcontracted work, constituted a violation of Rule 11(b)(2). The court analogized to other AI-sanctions cases. While the fine was modest, the court imposed significant procedural obligations to ensure deterrence.
AI Use
The plaintiff did not admit to using AI, but the court inferred likely use due to the submission of fabricated citations matching the structure and behavior typical of generative AI hallucinations. The decision referenced public concerns about AI misuse and cited specific examples of federal cases where similar misconduct occurred.
Hallucination Details
Plaintiff cited:
- Tucker v. United States, 24 Cl. Ct. 536 (1991) – does not exist
- Fargo v. United States, 184 F.3d 1096 (Fed. Cir. 1999) – fabricated citation pointing to an unrelated Ninth Circuit case
- Bristol Bay Native Corporation v. United States, 87 Fed. Cl. 122 (2009) – fictional
- Quantum Construction, Inc. v. United States, 54 Fed. Cl. 432 (2002) – nonexistent
- Hunt Building Co., LLC v. United States, 61 Fed. Cl. 243 (2004) – real case misused; contains no mention of unjust enrichment
Ruling/Sanction
The court granted the government’s motion to dismiss for lack of subject matter jurisdiction under Rule 12(b)(1). Although the court found a clear Rule 11 violation, it opted not to sanction the plaintiff, citing the evolving context of AI use and the absence of bad faith. A formal warning was issued, with notice that future hallucinated filings may trigger sanctions.
Key Judicial Reasoning
Judge Roumel noted that plaintiff’s attempt to rely on fictional case law was a misuse of judicial resources and a disservice to her own advocacy. The court cited multiple precedents addressing hallucinated citations and AI misuse, stating clearly that while leeway is granted to pro se litigants, the line is crossed when filings rely on fictitious law.
AI Use
Although AI was not named and Plaintiff denied intentional fabrication, the court considered the citation (Adamov, 779 F.3d 851, 860 (8th Cir. 2015)) to be plainly fictitious. It noted the possibility that Plaintiff used generative AI tools, given the fabricated citation's plausible-sounding structure and mismatch with existing precedent.
Hallucination Details
Plaintiff submitted fabricated legal authorities in at least two filings, despite being explicitly warned by the court after the first incident. The false case cited in her sur-reply could not be located in any legal database. When asked to produce it, she responded that she had likely “garbled” the citation but provided no plausible alternative or correction.
Ruling/Sanction
The court declined to dismiss the action as a sanction, citing the limitations pro se litigants face in accessing reliable legal research tools. However, it granted the defendant’s motion to strike Plaintiff’s two unauthorized sur-replies and formally warned her that further violations of Rule 11 would lead to sanctions, including monetary penalties, filing restrictions, or dismissal.
Key Judicial Reasoning
Judge Patrick Wyrick stated that fabricated citations waste opposing counsel’s and the court’s time and damage the legal system’s credibility. While recognizing the plaintiff’s pro se status, he emphasized that Rule 11 applies to all litigants equally. Given the warning already issued and the failure to provide any convincing justification, the court found Plaintiff’s continued behavior troubling, but stopped short of imposing punitive sanctions.
AI Use
The applicant cited Crime and Misconduct Commission v Chapman [2007] QCA 283 in support of a key submission. The Tribunal was unable to locate such a case. It queried ChatGPT, which returned a detailed but entirely fictitious account of a case that does not exist. The Tribunal attached Queensland’s AI usage guidelines to its reasons and emphasized that the responsibility for accuracy lies with the party submitting the material.
Ruling/Sanction
The fabricated case was disregarded. The Tribunal granted a stay but issued a strong warning: litigants are responsible for understanding the limitations of AI tools and must verify all submitted material. The inclusion of fictitious material wastes time, diminishes credibility, and undermines the process.
Key Judicial Reasoning
Citing non-existent authorities "weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources."
AI Use
Kruglyak acknowledged he had used free generative AI tools to conduct legal research and included fabricated case citations and misrepresented holdings in his filings. He claimed ignorance of AI hallucination risk at the time of filing but stated he had since ceased such reliance and sought more reliable legal sources.
Hallucination Details
The plaintiff cited non-existent decisions and falsely attributed holdings to real ones. He did not initially disclose the use of AI but conceded it in response to the court’s show cause order. The brief at issue combined wholly fabricated cases with distorted summaries of actual ones.
Ruling/Sanction
Magistrate Judge Sargent concluded that Kruglyak had not acted in bad faith, credited his prompt admission and explanation, and noted his subsequent remedial efforts. No monetary sanctions were imposed, but the court emphasized its authority to impose such penalties if future violations occur. Plaintiff was allowed to amend his filings with accurate citations.
Key Judicial Reasoning
The court stressed that while generative AI platforms may assist litigants, they are unreliable legal authorities prone to hallucinations. Rule 11 requires a reasonable inquiry before filing, and ignorance of AI limitations does not excuse defective legal submissions. However, leniency was warranted here due to Kruglyak’s candor and corrective action. The court invoked prior AI-related rulings (Mata v. Avianca, Cohen, Iovino) to situate the incident within an emerging judicial trend.
AI Use
While not formally admitted, Plaintiff’s opposition brief referred to “legal generative AI program CoCounsel,” and the court noted that the structure and citation pattern were consistent with AI-generated output. Capital One was unable to verify several case citations, prompting the court to scrutinize the submission.
Hallucination Details
At least one case was fully fabricated, and another was a real case misattributed to the wrong jurisdiction and reporter. The court emphasized that it could not determine whether the mis-citations were the result of confusion, poor research, or hallucinated AI output—but the burden rested with the party filing them.
Ruling/Sanction
The court dismissed the complaint with prejudice, noting Plaintiff had already filed and withdrawn a prior version and had had full opportunity to amend. Though it did not impose monetary sanctions, it issued a “strong warning” and directed Plaintiff to notify other courts where he had similar pending cases if any filings included erroneous AI-generated citations.
Key Judicial Reasoning
The fabricated citations severely undermined Plaintiff’s credibility and legal argument. The court deemed further amendment futile given the procedural history and the nature of the defects.
AI Use
The respondent retailer's defense cited Italian Supreme Court judgments that did not exist, claiming support for their arguments regarding lack of subjective bad faith. During subsequent hearings, it was admitted that these fake citations were generated by ChatGPT during internal research by an assistant, and the lead lawyer had failed to independently verify them.
Hallucination Details
The defense cited fabricated cassation rulings allegedly supporting a subjective good-faith defense. No such rulings could be found in official databases, and the court confirmed their nonexistence. The hallucinated decisions related to defenses against liability for the sale of counterfeit goods.
Ruling/Sanction
The court declined to impose a financial sanction under Article 96 Italian Code of Civil Procedure but issued a formal rebuke. It refused the defending party's requests for costs and treated the fabricated citations as weakening the credibility of the defense. Court emphasized that using unverifiable AI outputs to support legal arguments is a procedural violation undermining the adversarial system.
Key Judicial Reasoning
The Tribunal held that reliance on hallucinated case law undermines judicial process integrity and cannot be excused by ignorance or delegation to assistants. While no malice was found, gross negligence in verifying legal claims was established. Judicial reliance on trustworthy authorities is non-negotiable. The court noted that AI hallucinations are an increasingly recognized threat, drawing implicit parallel to international cases like Mata v. Avianca.
AI Use
Justice Nolan suspected that Reddan's submissions, especially references to "subornation to perjury" and Constitutional Article 40 rights, were AI-generated, exhibiting typical hallucination patterns (pseudo-legal concepts, inappropriate cut-and-paste fragments). Reddan did not admit using AI but relied on internet-sourced legal arguments that closely resembled LLM-style outputs.
Hallucination Details
- Inappropriate invocation of "subornation to perjury," a term foreign to Irish law
- Constitutional and criminal law citations (Article 40, the Non-Fatal Offences Against the Person Act) irrelevant to the judicial review context
- Assertions framed in hyperbolic, sensationalist terms without factual or legal basis
- General incoherence of the pleadings, consistent with AI-generated pseudo-legal text
Ruling/Sanction
The High Court refused leave to apply for judicial review on all nine grounds. While no formal financial sanction was imposed, Justice Nolan issued a sharp rebuke, highlighting the improper use of AI and warning against making scurrilous, unverified allegations in legal pleadings. The Court stressed that misuse of AI-generated material could itself amount to an abuse of the judicial process.
Key Judicial Reasoning
The Court held that AI tools do not excuse litigants from ensuring precision, coherence, and factual basis in pleadings. It emphasized that judicial review demands rigorous pleading standards, and the insertion of AI-fabricated concepts or incoherent arguments amounts to a violation of procedural rules. The ruling underlined the broader systemic risks posed by AI misuse in legal filings.
AI Use
Nguyen did not confirm which AI tool was used but acknowledged that AI “may have contributed.” The court inferred the use of generative AI from the pattern of hallucinated citations and accepted Nguyen’s candid acknowledgment of error, though this did not excuse the Rule 11 violation.
Hallucination Details
Fictitious citations included:
- Kraft v. Brown & Williamson Tobacco Corp., 668 F. Supp. 2d 806 (E.D. Ark. 2009)
- Young v. Johnson & Johnson, 983 F. Supp. 2d 747 (E.D. Ark. 2013)
- Carpenter v. Auto-West Inc., 553 S.W.3d 480 (Ark. 2018)
- Miller v. Hall, 360 S.W.2d 704 (Ark. 1962)
None of these cases existed in Westlaw or Lexis, and the quotes attributed to them were fabricated.
Ruling/Sanction
The court imposed a $1,000 monetary sanction on Counsel for citing non-existent case law in violation of Rule 11(b). It found her conduct unjustified, despite her apology and explanation that AI may have been involved. The court emphasized that citing fake legal authorities is an abuse of the adversary system and warrants sanctions.
Key Judicial Reasoning
Rule 11(b) requires that legal contentions be warranted by existing law or a non-frivolous extension thereof. A fabricated opinion is not “existing law,” and citation to such a source constitutes a sanctionable abuse of the judicial process, regardless of whether AI was involved directly or via delegated drafting. The court adopted the reasoning of Mata v. Avianca and reaffirmed that presenting fake case law to support legal arguments undermines the integrity of federal proceedings.
AI Use
The judgment refers repeatedly to use of “AI-based websites” and “artificial intelligence hallucinations,” and quotes prior decisions warning against reliance on AI without verification. Although no specific tool was named, the Court inferred use based on the stylistic pattern and total absence of real citations. Petitioner provided no clarification and ultimately sought to withdraw the petition once challenged.
Hallucination Details
The legal authorities cited in the petition included:
- Case names and citations that do not exist in Israeli legal databases or official court archives
- Quotations and doctrinal references attributed to rulings that were entirely fictitious
- Systematic internal inconsistencies and citation errors typical of AI-generated legal writing
The Court made efforts to locate the decisions independently but failed, and the petitioner never supplied the sources after being ordered to do so.
Ruling/Sanction
The Court dismissed the petition in limine (on threshold grounds), citing “lack of clean hands” and “deficient legal infrastructure.” It imposed a ₪7,000 costs order against the petitioner and referred to the growing body of jurisprudence on AI hallucinations. The Court explicitly warned that future petitions tainted by similar conduct would face harsher responses, including possible professional discipline.
Key Judicial Reasoning
Justice Noam Sohlberg, writing for the panel, observed that citing fictitious legal authorities—whether by AI or not—is as egregious as factual misrepresentation. He reiterated that the duty of candor includes legal citations, not just facts. Drawing parallels to similar incidents, he warned of an emerging trend and called for heightened professional vigilance. The Court explicitly rejected the petitioner’s belated attempt to withdraw the petition without consequences, stating that the judicial system will not become a playground for algorithmic invention.
AI Use
Counsel admitted using ChatGPT to draft two motions (Motion to Withdraw and Motion for Leave to Appeal), without verifying the cases or researching the AI tool’s reliability.
Hallucination Details
Two fake cases:
- McNally v. Eyeglass World, LLC, 897 F. Supp. 2d 1067 (D. Nev. 2012) — nonexistent
- Behm v. Lockheed Martin Corp., 460 F.3d 860 (7th Cir. 2006) — nonexistent
Misused cases:
- Degen v. United States, cited for irrelevant proposition
- Dow Chemical Canada Inc. v. HRD Corp., cited despite later vacatur
- Eavenson, Auchmuty & Greenwald v. Holtzman, cited despite being overruled by Third Circuit precedent
Ruling/Sanction
The Court sanctioned Counsel $2,500 payable to the court and ordered him to complete at least one hour of CLE on AI and legal ethics. The opinion emphasized that deterrence applied both specifically to Counsel and generally to the profession.
Key Judicial Reasoning
Rule 11(b)(2) mandates reasonable inquiry into all legal contentions. No AI tool displaces the attorney’s personal duty. Ignorance of AI’s unreliability is not a defense. The Court cited Mata v. Avianca and Gauthier v. Goodyear to emphasize that sanctions for AI hallucinations are now a well-established judicial response.
AI Use
Counsel from Morgan & Morgan used the firm's internal AI platform (MX2.law, reportedly using ChatGPT) to add case law support to draft motions in limine in a product liability case concerning a hoverboard fire. This was reportedly his first time using AI for this purpose.
Hallucination Details
Eight out of nine case citations in the filed motions were non-existent or led to differently named cases. Another cited case number was real but belonged to a different case with a different judge. The legal standard description was also deemed "peculiar".
Ruling/Sanction
After defense counsel raised issues, the Judge issued an order to show cause. The plaintiffs' attorneys admitted the error, withdrew the motions, apologized, paid opposing counsel's fees related to the motions, and reported implementing new internal firm policies and training on AI use. Judge Rankin found Rule 11 violations. Sanctions imposed were: $3,000 fine on the drafter and revocation of his pro hac vice admission; $1,000 fine each on the signing attorneys for failing their duty of reasonable inquiry before signing.
Key Judicial Reasoning
The court acknowledged the attorneys' remedial steps and honesty but emphasized the non-delegable duty under Rule 11 to make a reasonable inquiry into the law before signing any filing. The court stressed that while AI can be a tool, attorneys remain responsible for verifying its output. The judge noted this was the "latest reminder to not blindly rely on AI platforms' citations".
AI Use
The petitioner’s counsel used an AI-based platform to draft the legal petition.
Hallucination Details
The petition cited 36 fabricated or misquoted Israeli Supreme Court rulings. Five references were entirely fictional, 14 had mismatched case details, and 24 included invented quotes. Upon judicial inquiry, counsel admitted reliance on an unnamed website recommended by colleagues, without verifying the information's authenticity. The Court concluded that the errors were likely the product of generative AI.
Ruling/Sanction
The High Court of Justice dismissed the petition on the merits, finding no grounds for intervention in the Sharia courts’ decisions. Despite the misconduct, no personal sanctions or fines were imposed on counsel, citing it as the first such incident to reach the High Court and adopting a lenient stance “far beyond the letter of the law.” However, the judgment was explicitly referred to the Court Administrator for system-wide attention.
Key Judicial Reasoning
The Court issued a stern warning about the ethical duties of lawyers using AI tools, underscoring that professional obligations of diligence, verification, and truthfulness remain intact regardless of technological convenience. The Court suggested that in future cases, personal sanctions on attorneys might be appropriate to protect judicial integrity.
AI Use
Counsel admitted at a show cause hearing that he used generative AI tools to draft multiple briefs and did not verify the citations provided by the AI, mistakenly trusting their apparent credibility without checking.
Hallucination Details
Three distinct fake cases across filings. Each was cited in a separate brief, with no attempt at Shepardizing or KeyCiting.
Ruling/Sanction
The Court recommended a $15,000 sanction ($5,000 per violation), with the matter referred to the Chief Judge for potential additional professional discipline. Counsel was also ordered to notify Hoosiervac LLC’s CEO of the misconduct and file a certification of compliance.
Key Judicial Reasoning
The judge stressed that reliance on AI outputs without verification is a violation of Rule 11. Good faith ignorance about AI hallucination capabilities is irrelevant. The decision emphasized that generative AI can assist research but cannot replace professional obligations. The judge invoked multiple authorities on sanctions for failure to verify case law and analogized using AI improperly to wielding dangerous tools without caution.
AI Use
The appellant’s counsel admitted to having used ChatGPT, claiming the submission of false case law was the result of “unintentional use.” The fabricated citations appeared in an appeal against a reintegration of possession order issued in favor of the appellant’s stepmother and his father’s heirs.
Hallucination Details
The brief contained numerous non-existent judicial precedents and references to legal doctrine that were either incorrect or entirely fictional. The court described them as “fabricated” and considered them serious enough to potentially mislead the court.
Ruling/Sanction
Although the 6th Civil Chamber temporarily suspended the reintegration order, it imposed a fine of 10% of the value of the claim for bad-faith litigation and ordered that a copy of the appeal be forwarded to the Santa Catarina section of the Brazilian Bar Association (OAB/SC) for further investigation.
Key Judicial Reasoning
The court emphasized that the legal profession is a public calling entailing duties of truthfulness and diligence. It cautioned that AI must be used “with caution and restraint,” as reliance on hallucinated material violates the duty to faithfully represent facts and law. The chamber unanimously supported the sanction.
AI Use
The plaintiff, Saxena, submitted citations that were entirely fabricated. When challenged, he denied AI use and insisted the cases existed, offering no evidence. The court concluded that either he fabricated the citations himself or he relied on AI and failed to verify them.
Hallucination Details
- Spokane v. Douglass conflated unrelated decisions and misused citations drawn from other cases
- Hummel v. State could not be found in any Nevada or national database; citation matched an unrelated Oregon case
The court found no plausible explanation for these citations other than AI generation or outright fabrication.
Ruling/Sanction
The court dismissed the case with prejudice for repeated failure to comply with Rule 8 and for the submission of fictitious citations. Though no separate sanctions motion was granted, the court's ruling incorporated the AI misuse into its reasoning and concluded that Saxena could not be trusted to proceed further in good faith.
Key Judicial Reasoning
The court reasoned that “courts do not make allowances for a plaintiff who cites to fake, nonexistent, misleading authorities.” Saxena’s refusal to acknowledge the fabrication compounded the issue. In a subsequent order, the court held that being pro se and disabled "is no excuse for submitting non-existent authority to the court in support of a brief".
AI Use
Counsel used ChatGPT to generate a summary of cases for a submission, which included fictitious Federal Court decisions and invented quotes from a Tribunal ruling. He inserted this output into the brief without verifying the sources. Counsel later admitted this under affidavit, citing time pressure, health issues, and unfamiliarity with AI's risks. He noted that guidance from the NSW Supreme Court was only published after the filing.
Hallucination Details
The 25 October 2024 submission cited at least 16 completely fabricated decisions (e.g. Murray v Luton [2001] FCA 1245, Bavinton v MIMA [2017] FCA 712) and included supposed excerpts from the AAT’s ruling that did not appear in the actual decision. The Court and Minister’s counsel were unable to verify any of the cited cases or quotes.
Ruling/Sanction
Judge Skaros ordered referral to the OLSC under the Legal Profession Uniform Law (NSW) 2014, noting breaches of rules 19.1 and 22.5 of the Australian Solicitors’ Conduct Rules. The Court accepted Counsel’s apology and health-related mitigation but found that the conduct fell short of professional standards and posed systemic risks given increasing AI use in legal practice.
Key Judicial Reasoning
While acknowledging that Counsel corrected the record and showed contrition, the Court found that the damage—including wasted judicial resources and delay to proceedings—had already occurred. The ex parte email submitting corrected materials, without notifying opposing counsel, further compounded the breach. Given the public interest in safeguarding the integrity of litigation amidst growing AI integration, referral to the OLSC was deemed necessary, even without naming Counsel in the judgment.
AI Use
The court noted “problems with several citations leading to different or non-existent cases and a quotation that did not appear in any cases cited” in defendants’ reply papers. While the court did not identify AI explicitly, it flagged the issue and indicated that repeated infractions could lead to sanctions.
Ruling/Sanction
No immediate sanction. The court granted plaintiff’s motion in part, striking thirteen of eighteen affirmative defenses. It emphasized that if citation issues persist, sanctions will follow.
Key Judicial Reasoning
The court acknowledged that while the faulty citations did not alter the motion’s resolution, they undermine credibility and judicial efficiency. It reserved stronger consequences for any future recurrence.
AI Use
Counsel admitted using a “new legal research medium,” which appears to have been a generative AI system or platform capable of generating fictitious case law. Counsel did not deny using AI but claimed the system may have been corrupted or unreliable. The amended filing removed the false authorities.
Hallucination Details
The court did not identify the specific fake cases but confirmed that “citations to non-existent cases” were included in Defendants’ original brief. Counsel’s subsequent filing corrected the record but did not explain how the citations passed into the brief in the first place.
Ruling/Sanction
Judge William Griesbach denied the motion for summary judgment on the merits, but addressed the citation misconduct separately. He cited Rule 11 and Park v. Kim (91 F.4th 610, 615 (2d Cir. 2024)) to underline the duty to verify. No formal sanctions were imposed, but counsel was explicitly warned that further use of non-existent authorities would not be tolerated.
Key Judicial Reasoning
The court emphasized that even if the submission of false citations was not malicious, it was still a serious breach of Rule 11 obligations. Legal contentions must be “warranted by existing law,” and attorneys are expected to read and confirm cited cases. The failure to do so, even if caused by AI use, is unacceptable. The court accepted counsel’s corrective effort but insisted that future violations would be sanctionable.
Key Judicial Reasoning
Magistrate Judge Sheri Pym found the motion legally deficient on multiple grounds. In addition, she emphasized that counsel must not rely on fake or unverified authority. She cited Mata, Park, Gauthier, and others as cautionary examples of courts imposing sanctions for AI-generated hallucinations. The court reaffirmed that the use of AI does not lessen the duty to verify the existence and relevance of cited law.
AI Use
Defense counsel Andrew Francisco submitted filings quoting and relying on a fabricated case (United States v. Harris, 761 F. Supp. 409 (D.D.C. 1991)) and a nonexistent quotation. Although Francisco claimed he had not used AI, the court found the fabrication bore the hallmarks of an AI hallucination and rejected his explanations as implausible.
Hallucination Details
Francisco cited and quoted from a wholly fictitious United States v. Harris case, which neither existed at the cited location nor contained the quoted material. When confronted, Francisco tried to shift the source to United States v. Broussard, but that case did not contain the quoted text either. Searches in Westlaw and Lexis confirmed the quotation existed nowhere.
Ruling/Sanction
The Court formally sanctioned Francisco for degrading the integrity of the court and violating professional responsibility rules. Although monetary sanctions were not immediately imposed, the misconduct was recorded and would be taken into account in future disciplinary proceedings if warranted.
Key Judicial Reasoning
The court emphasized that submitting fake legal authorities undermines judicial credibility, wastes opposing parties' resources, and abuses the adversarial system. Persistent refusal to candidly admit errors aggravated the misconduct. The Court explicitly cited Mata v. Avianca and other AI hallucination cases as precedent for sanctioning such behavior, finding Francisco’s case especially egregious due to repeated bad faith evasions after being given opportunities to correct the record.
AI Use
The court stated it was “highly suspicious” that plaintiffs used generative AI to draft their complaint and briefs due to the presence of nonexistent cases and mismatched citations. Though no explicit admission of AI use was made, the pattern of errors mirrored known AI hallucination behavior.
Hallucination Details
The plaintiffs cited several case names with incorrect or fabricated reporter citations that did not support the legal propositions offered. For example, United States v. Bortnovsky was cited as 820 F.3d 572 (2d Cir. 2016), a citation that does not exist.
Ruling/Sanction
The court granted defendants’ motion to dismiss. It declined to impose filing restrictions at this stage but explicitly warned the Strongs that repeated abusive or AI-generated filings would lead to sanctions, including monetary penalties and restrictions on future filings.
Key Judicial Reasoning
The plaintiffs’ misuse of legal citations undermined their credibility, and the court warned that even pro se status does not excuse submission of false or hallucinatory legal material. The opinion cited multiple federal cases warning against use of fictitious citations and referenced Rule 11’s applicability to unrepresented litigants.
AI Use
Although O’Brien denied deliberate fabrication and described the inclusion of fake citations as a “minor clerical error” or “mix-up,” the court rejected this explanation. The opinion notes that the citations had no plausible source in other filings and that the brief exhibited structural traits of AI-generated text. The court explicitly concluded that O’Brien “generated his Reply with the assistance of a generative artificial intelligence program.”
Ruling/Sanction
The court dismissed the case with prejudice on dual grounds:
- The claims should have been raised as compulsory counterclaims in prior pending litigation and were thus procedurally barred under Rule 13(a)
- O’Brien submitted fake legal citations, failed to acknowledge the issue candidly, violated local rules, and engaged in a pattern of procedural misconduct in this and other related litigation
While monetary sanctions were not imposed, the court granted the motion to strike and ordered dismissal with prejudice as both a substantive and a disciplinary remedy.
Key Judicial Reasoning
Judge Melissa Damian found that the fabricated citations and O’Brien’s refusal to admit or correct them constituted bad faith. She referenced multiple prior instances where O’Brien had been warned or sanctioned for similar behavior, and emphasized that while pro se litigants may receive procedural leniency, they are not exempt from ethical or legal standards. Dismissal with prejudice was chosen as a proportionate sanction under the court’s inherent powers.
AI Use
Professor Jeff Hancock, a Stanford University expert on AI and misinformation, used GPT-4o to assist in drafting an expert declaration submitted by the Minnesota Attorney General's office in defense of a state law regulating AI deepfakes in elections.
Hallucination Details
The declaration contained citations to three non-existent academic articles, apparently generated when the AI misinterpreted Hancock's notes to himself (e.g., "[cite]") as prompts to insert references. Opposing counsel identified the fake citations.
Ruling/Sanction
Professor Hancock admitted the errors resulted from unchecked AI use, explaining it deviated from his usual practice of verifying citations for academic papers, and affirmed the substance of his opinions remained valid. Judge Laura M. Provinzino found the explanation plausible but ruled the errors "shattered his credibility". The court excluded the expert declaration as unreliable, emphasizing that signing a declaration under penalty of perjury requires diligence and that false statements, innocent or not, are unacceptable.
Key Judicial Reasoning
The court found it "particularly troubling" that the expert exercised less care with a court filing than with academic work. While not faulting the use of AI itself, the court stressed the need for independent judgment and verification, stating the incident was a reminder that Rule 11's "inquiry reasonable under the circumstances" might now require attorneys to ask witnesses about their AI use and verification steps. The irony of an AI misinformation expert falling victim to AI hallucinations in a case about AI dangers was noted.
AI Use
The judgment does not explicitly confirm that generative AI was used, but the judge strongly suspected ChatGPT or a similar tool was the source. The judge even ran the relevant prompts through ChatGPT and confirmed that the tool responded with fabricated support for the same fake cases used in the submission. Counsel blamed overwork and delegation to a candidate attorney (Ms. Farouk), who denied AI use but gave vague and evasive answers.
Hallucination Details
Fabricated or misattributed cases included:
- Pieterse v. The Public Protector (no such case exists at cited location)
- Burgers v. The Executive Committee..., Dube v. Schleich, City of Cape Town v. Aon SA, Makro Properties v. Raal, Standard Bank v. Lethole — none found in SAFLII or major reporters
- Citations were often invented or misattributed to irrelevant decisions (e.g., a Competition Tribunal merger approval cited as support for service rules)
The supplementary notice of appeal included misleading summaries with no accurate paragraph citations, and no proper authority was ever provided for key procedural points.
Ruling/Sanction
- Application for leave to appeal dismissed in full
- Legal representatives ordered to pay costs of the 22 and 25 September 2024 appearances de bonis propriis
- Judgment referred to the Legal Practice Council
- The judge emphasized that the conduct went beyond the leniency shown in Parker v. Forsyth, as it involved unverified submissions in a signed court filing, followed by doubling down during oral argument.
Key Judicial Reasoning
Justice Bezuidenhout issued a lengthy and stern warning on the professional obligation to verify authorities. She held that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional,” and emphasized that even ignorance of AI’s flaws does not excuse unethical conduct. The judgment discusses comparative standards, ethical obligations, and recent literature in detail.
AI Use
Alim Al-Hamim, appearing pro se (self-represented), used a generative AI tool to prepare his opening brief appealing the dismissal of his claims against his landlords. He had also submitted a document with fabricated citations in the lower court.
Hallucination Details
The appellate brief contained eight fictitious case citations alongside legitimate ones. The court could not locate the cases and issued an order to show cause.
Ruling/Sanction
Al-Hamim admitted relying on AI, confirmed the citations were hallucinations, stated he failed to inspect the brief, apologized, and accepted responsibility. The court affirmed the dismissal of his claims on the merits. While finding his submission violated Colorado Appellate Rules (C.A.R. 28(a)(7)(B)), the court exercised its discretion and declined to impose sanctions.
Key Judicial Reasoning
Factors against sanctions included Al-Hamim's pro se status, his contrition, lack of prior appellate violations, the absence of published Colorado precedent on sanctions for this issue, and the fact that opposing counsel did not raise the issue or request sanctions. However, the court issued a clear and strong warning to "the bar, and self-represented litigants" that future filings containing AI-generated hallucinations "may result in sanctions". The court emphasized the need for diligence, regardless of representation status.
AI Use
Counsel admitted the fictitious citations originated from an “online legal database commonly used by lawyers.” Though the platform is unnamed, the court ruled out the standard legal database Nevo and concluded the “source of the hallucination is unclear.” Counsel apologized and claimed no intent to mislead.
Hallucination Details
The motion cited ten fabricated decisions—each with full party names, court locations, file numbers, and dates—purportedly showing that indirect child support debts owed to the National Insurance Institute could be discharged in bankruptcy. The court could not find a single one in any judicial database and ordered counsel to produce them. When he failed, he admitted they were inauthentic. The only real cited case (Skok) did not support the petitioner’s position.
Ruling/Sanction
The court dismissed the petition after finding that: (i) the cited decisions were fabricated; (ii) the only valid case did not support the argument; and (iii) under Israel’s Bankruptcy Ordinance, child support debts are not dischargeable by default. Despite the state’s failure to respond, the judge ruled sua sponte and imposed ₪1,000 in costs for procedural abuse.
Key Judicial Reasoning
Judge Saharai held that even if the hallucinated cases were cited inadvertently, their submission constituted a grave failure to meet professional obligations. He emphasized that a court cannot function when presented with legal fictions dressed up as precedent. The decision cited the attorney’s duty under section 54 of the Bar Law (1961) and ethics rules 2 and 34.
AI Use
Dr. Wright, representing himself, submitted numerous case citations in support of an application for remote attendance at an upcoming contempt hearing. COPA demonstrated that most of the authorities cited did not contain the quoted language—or were entirely unrelated. The judge agreed, noting these were likely "AI hallucinations by ChatGPT."
AI Use
The court did not specify how the hallucinated material was generated but described the bulk of appellant’s cited cases as “phantom case law.”
Hallucination Details
The court identified that the “Augmented Appendix Sections” attached to each brief consisted of numerous nonexistent Florida cases. Some real cases were cited, but quotes attributed to them were fabricated.
Ruling/Sanction
The court dismissed both consolidated appeals as a sanction, barred further pro se filings in the underlying probate actions absent review and signature by a Florida-barred attorney, and directed the clerk to reject noncompliant future filings.
Key Judicial Reasoning
The Court held that Gutierrez’s submission of fictitious legal authorities and failure to respond to the show cause order constituted an abuse of process. It emphasized that pro se litigants are bound by the same rules as attorneys and referenced prior sanctions cases involving AI hallucinations.
AI Use
Plaintiff’s proposed second amended complaint included multiple fictitious legal authorities, phrased in language suggesting generative AI use (e.g., “Here are some relevant legal precedents...”). The court stated it “bears some of the hallmarks of an AI response” and noted that the citations appeared to have been “invented by artificial intelligence (‘AI’).”
Hallucination Details
The court could not locate the following cited cases:
- Ford v. District of Columbia, 70 F.3d 231 (D.C. Cir. 1995)
- Davis v. District of Columbia, 817 A.2d 1234 (D.C. 2003)
- Ward v. District of Columbia, 818 A.2d 27 (D.C. 2003)
- Reese v. District of Columbia, 37 A.3d 232 (D.C. 2012)
These were used to allege a pattern of constitutional violations by the District but were found to be fabricated.
Ruling/Sanction
The court denied Plaintiff’s motion to file a second amended complaint and dismissed the federal claims with prejudice. No formal Rule 11 sanctions were imposed, but the court emphasized the importance of verifying legal citations, citing Mata v. Avianca as precedent for how courts have responded to similar AI-related misuse.
Key Judicial Reasoning
The Court noted that while AI may be a helpful tool for pro se litigants, its use does not relieve them of the obligation to verify that every citation is real. The submission of fictitious legal authorities, even if inadvertent, is improper and may warrant sanctions. Here, the repeated failure to plead a viable claim after multiple amendments led to dismissal with prejudice.
AI Use
Monk admitted using the Claude AI tool to draft a summary judgment opposition without adequately verifying the case citations or quotations. He later claimed to have attempted post-hoc verification through Lexis AI but did not correct the errors until after a judicial show cause order.
Hallucination Details
The opposition cited two completely nonexistent cases and attributed fabricated quotations to real cases, including Morales v. SimuFlite, White v. FCI USA, and Burton v. Freescale. Several “quotes” did not appear anywhere in the cited opinions.
Ruling/Sanction
The court imposed a $2,000 fine, ordered Monk to complete at least one hour of CLE on generative AI in legal practice, and mandated formal disclosure of the sanctions order to his client. It also permitted amendment of the defective filing but warned of the severity of the misconduct.
Key Judicial Reasoning
The court emphasized that attorneys remain personally responsible for the verification of all filings under Rule 11, regardless of technology used. Use of AI does not dilute the duty of candor. Continued silence and failure to rectify errors after opposing counsel flagged them exacerbated the misconduct.
AI Use
Plaintiff’s counsel admitted using generative AI to draft a motion to remand without independently verifying the legal citations or the factual accuracy of quoted complaint allegations.
Hallucination Details
The motion cited a fabricated case (the ruling does not give the specific case name) and included fabricated quotations from the complaint, implying factual allegations that did not exist.
Ruling/Sanction
The Court imposed a $2,500 sanction payable by December 30, 2024. Counsel was also required to notify the California State Bar of the sanction and file proof of notification and payment. The Court recognized mitigating factors (health issues, post-hoc corrective measures) but stressed the seriousness of the violations.
Key Judicial Reasoning
Rule 11 requires attorneys to conduct a reasonable inquiry into both facts and law. Use of AI does not diminish this duty. Subjective good faith is irrelevant: violations occur even without intent to deceive. AI-generated filings must be reviewed with the same rigor as traditional submissions.
AI Use
In a trust accounting proceeding, the objectant's damages expert testified that he used Microsoft Copilot (described as an AI chatbot) to cross-check his damages calculations presented in a supplemental report.
Hallucination Details
The issue was not fabricated citations but the reliability and verifiability of the AI's calculation process. The expert could not recall the specific prompts used, nor could he explain Copilot's underlying sources or methodology. He claimed that using AI tools was generally accepted in his field but offered no proof.
Ruling/Sanction
The court had already found the expert's analysis unreliable on other grounds, but specifically addressed the AI use. The court attempted to replicate the expert's results using Copilot itself, obtaining different outputs and eliciting warnings from Copilot about the need for expert verification before court use. The court held, potentially as an issue of first impression in that court, that counsel has an affirmative duty to disclose the use of AI in generating evidence prior to its introduction, due to AI's rapid evolution and reliability issues. AI-generated evidence would be subject to a Frye hearing (standard for admissibility of scientific evidence in NY). The expert's AI-assisted calculations were deemed inadmissible.
Key Judicial Reasoning
The court emphasized the "garbage in, garbage out" principle, stressing the need for users to understand AI inputs and processes. It stated that the mere fact AI is used does not make its output admissible; reliability must be established. The lack of transparency regarding the AI's process was a key factor in finding the evidence unreliable.
The court held: "Giving Claimant the benefit of the doubt, we suspect such citations were generated by artificial intelligence rather than the result of a deliberate attempt to mislead the Court.
We strongly caution that “[c]iting nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court[;] [i]t does not matter if [generative A.I.] told you so.” Kruse v. Karlen, 692 S.W.3d 43, 52 (Mo. App. E.D. 2024) (quoting Maura R. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary? 107 Judicature 68, 75 (2023)). In Kruse v. Karlen, the appellant's brief contained numerous citations to fabricated, non-existent cases. Id. at 48-51. This Court dismissed the appeal and ordered the appellant to pay $10,000 in damages to the opposing party for filing a frivolous appeal. Id. at 54.
We will not dismiss Claimant's appeal and sanction her as we did the appellant in Kruse v. Karlen because this is a straightforward unemployment compensation case between a pro se litigant and an agency of the State of Missouri, wherein the State did not have to pay outside counsel to respond to the appeal. However, litigants who use generative AI to draft their briefs should not rely on our continued magnanimity."
AI Use
The Court noted that the false citations could stem from AI, disorganized database use, or invention. Counsel claimed a database error but provided no evidence. The Court found the origin irrelevant: verification duty lies with the submitting lawyer.
Hallucination Details
Nineteen separate fabricated citations to fictional Constitutional Court judgments, with fake quotations attributed to those nonexistent decisions, were cited to bolster claims of constitutional relevance in an amparo petition.
Ruling/Sanction
The Constitutional Court unanimously found that the inclusion of nineteen fabricated citations breached the respect owed to the Court and its judges under Article 553.1 of the Spanish Organic Law of the Judiciary. It issued a formal warning (apercibimiento) rather than a fine, given the absence of prior offenses, and referred the matter to the Barcelona Bar for possible disciplinary proceedings.
Key Judicial Reasoning
The Court stressed that even absent express insults, fabricating authority gravely disrespects the judiciary’s function. Irrespective of whether AI was used or a database error occurred, the professional duty of diligent verification was breached. The Court noted that fake citations disrupt the court’s work both procedurally and institutionally.
AI Use
Counsel admitted the list of authorities and accompanying summaries were generated by an AI research module embedded in his legal practice software. He stated he did not verify the content before submitting it. The judge found that neither Counsel nor any other legal practitioner at his firm had checked the validity of the generated output.
Hallucination Details
The list and summaries carried seemingly valid medium-neutral citations, but on scrutiny none of the cases existed. The court confirmed they were entirely fabricated, the product of unverified AI-generated output.
Ruling/Sanction
The court accepted Counsel’s unconditional apology, noted remedial steps (including voluntary payment of costs to the opposing party), and acknowledged his cooperation and candour. However, it nonetheless referred the matter to the Office of the Victorian Legal Services Board and Commissioner under s 30 of the Legal Profession Uniform Law Application Act 2014 (Vic) for independent assessment. The referral was explicitly framed as non-punitive and in the public interest.
Key Judicial Reasoning
Judge Humphreys emphasized that the responsible use of AI is a matter of growing legal importance. He cited Mata v. Avianca and noted that hallucinated citations undermine judicial efficiency and professional credibility. The court referred to newly issued guidelines from the Supreme Court and County Court of Victoria on the responsible use of AI in litigation, particularly the duty not to mislead and to verify content. The judge concluded that while Counsel did not intend to deceive, his failure to verify the AI-generated list constituted a breach of professional standards warranting referral to regulators.
The court observed: "In its brief, the Credit Union points out that the cases cited by Mehlhorn do not exist and speculates that Mehlhorn used an artificial intelligence program to draft her brief-in-chief. In her reply brief, Mehlhorn does not respond to this assertion. Instead, she cites eight new cases, none of which were referenced in her brief-in-chief. It appears, however, that four of those cases are also fictitious. At a minimum, this court cannot locate those cases using the citations provided.
We strongly admonish Mehlhorn for her violations of the Rules of Appellate procedure, and particularly for her citations to what appear to be fictitious cases. Although Mehlhorn is self-represented, pro se appellants “are bound by the same rules that apply to attorneys on appeal.” See Waushara County v. Graf, 166 Wis. 2d 442, 452, 480 N.W.2d 16 (1992). We could summarily dismiss this appeal as a sanction for Mehlhorn’s multiple and egregious rule violations. See WIS. STAT. RULE 809.83(2). Nevertheless, we choose to address the merits of Mehlhorn’s arguments as best as we are able, given the deficiencies in her briefing"
AI Use
The claimant sought to rely on a conversation with ChatGPT to show that the respondent’s claims about the difficulty of retrieving archived data were false.
Ruling/Sanction
No formal sanction was imposed, but the judgment made clear that ChatGPT outputs are not acceptable as evidence.
Key Judicial Reasoning
The Tribunal held that "a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable".
AI Use
Plaintiff, opposing motions to dismiss, filed a brief containing three fake federal case citations. Defendants raised the issue in their reply, suggesting use of ChatGPT or a similar tool. Plaintiff did not deny the accusation.
Hallucination Details
Three nonexistent cases were cited: each case name and docket number was fictitious, and none of the real cases located at those citations involved remotely related issues.
Ruling/Sanction
The court issued a formal warning to Plaintiff: any future filings containing fabricated citations or quotations will result in sanctions, including striking of filings, monetary penalties, or dismissal. No sanction imposed for this first occurrence, acknowledging pro se status and likely ignorance of AI risks.
Key Judicial Reasoning
Reliance on nonexistent precedent, even by pro se litigants, is an abuse of the adversarial system. The court cited Mata v. Avianca and Park v. Kim as establishing the principle that hallucinated case citations undermine judicial integrity and waste opposing parties’ and courts' resources. Plaintiff was formally warned, not excused.
AI Use
The court does not confirm AI use but references a legal article about the dangers of ChatGPT and states: “We cannot tell from Byrd’s brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations.”
Ruling/Sanction
The court affirmed the trial court’s judgment, found no preserved or adequately briefed grounds for appeal, and declined to address the vague or unsupported references. No explicit sanction or costs were imposed for the apparent AI-related deficiencies.
AI Use
The court inferred the use of AI from the pattern of errors (fake cases and fabricated quotes) and opposing counsel’s explicit accusation ("ChatGPT run amok"). Plaintiff's counsel did not deny it or clarify origins, leaving the inference unchallenged.
Hallucination Details
Two nonexistent cases cited, and fabricated quotations attributed to real cases:
- Graves v. Lioi (4th Cir.) – quotation absent from opinion
- Bostock v. Clayton County (U.S. Supreme Court) – quotation absent from opinion
- Misreporting of the Menocal case citation to imply relevance
Ruling/Sanction
The court issued a show cause order demanding an explanation of why sanctions and/or bar disciplinary referrals should not be imposed. The silent failure to contest the fabrication allegations worsened the finding.
Following show cause proceedings, the court declined to sanction counsel.
Key Judicial Reasoning
The judge emphasized that AI use does not lessen the lawyer’s duty to ensure accurate filings. Fabricated cases and misquotes are serious Rule 11 violations. Attorneys are responsible for vetting everything submitted to the court, regardless of source. Silence when fabrication is exposed constitutes further misconduct.
AI Use
The plaintiff, proceeding pro se, submitted filings citing multiple nonexistent cases. The court noted patterns typical of ChatGPT hallucinations, referencing studies and prior cases involving AI errors, though the plaintiff did not admit using AI.
Hallucination Details
Several fake citations identified, including invented federal cases and misquoted Supreme Court opinions. Defendants flagged these to the court, and the court independently confirmed they were fictitious.
Ruling/Sanction
No sanctions imposed at this stage, citing special solicitude for pro se litigants. However, the court issued a formal warning: further false citations would lead to sanctions without additional leniency.
Key Judicial Reasoning
The court emphasized that even pro se parties must comply with procedural and substantive law, including truthfulness in court filings. Cited Mata v. Avianca and Park v. Kim as established examples where AI-generated hallucinations resulted in sanctions for attorneys, underscoring the seriousness of the misconduct.
AI Use
The appellant relied on ChatGPT to generate a list of ten "economically comparable" vehicles for purposes of arguing a lower trade-in value to reduce bpm (car registration tax). The Court noted this explicitly and criticized the mechanical reliance on AI outputs without human verification or contextual adjustment.
Hallucination Details
ChatGPT produced a list of luxury and exotic cars supposedly comparable to a Ferrari 812 Superfast. The Court found that mere AI-generated association of vehicles based on "economic context and competition position" is insufficient under EU law principles requiring real-world comparability from the perspective of an average consumer.
Ruling/Sanction
The Court rejected the appellant’s valuation arguments wholesale. It stressed that serious, human-verified reference vehicle comparisons were mandatory and that ChatGPT lists could not establish the legally required comparability standard under Dutch and EU law (Art. 110 TFEU). No monetary sanction was imposed, but the appellant’s entire case collapsed on evidentiary grounds.
Key Judicial Reasoning
The Court reasoned that a list generated by an AI program like ChatGPT, without rigorous control or verification, is inadmissible for evidentiary purposes. AI outputs lack the nuanced judgment necessary to assess "similar vehicles" under Art. 110 TFEU and Dutch bpm tax rules. It underscored that the test is based on the perceptions of a human average consumer, not algorithmic proximity.
AI Use
The appellants’ lawyer submitted an opening brief riddled with hallucinated cases and mischaracterizations. The court did not directly investigate the technological origin but cited the systematic errors as consistent with known AI-generated hallucination patterns.
Hallucination Details
Two cited cases were completely nonexistent. Additionally, a dozen cited decisions were badly misrepresented, e.g., Hydrick v. Hunter and Wall v. County of Orange were cited for parent–child removal claims when they had nothing to do with such issues.
Ruling/Sanction
The Ninth Circuit struck the appellants' opening brief under Circuit Rule 28–1 and dismissed the appeal. The panel emphasized that fabricated citations and grotesque misrepresentations violate Rule 28(a)(8)(A) requirements for arguments with coherent citation support.
Key Judicial Reasoning
Fabricated and misrepresented authorities defeat the appellate function. Counsel failed to provide even minimally reliable legal arguments. Attempts to explain at oral argument were evasive and inadequate, reinforcing the sanction. Dismissal was portrayed as necessary to preserve the integrity of appellate review and judicial economy.
AI Use
Michael Cohen, former lawyer to Donald Trump but then disbarred, used Google Bard to find case law supporting his motion for early termination of supervised release. He stated he believed Bard was a "super-charged search engine" and was unaware it could generate fictitious cases.
Hallucination Details
Cohen provided three non-existent case citations generated by Bard to his attorney, David M. Schwartz (not the same Schwartz as in Mata), who included them in a court filing. There was a misunderstanding between Cohen and his attorneys regarding who was responsible for verifying the citations. The fake citations were discovered by Cohen's other counsel, Danya Perry, who disclosed the issue to the court. One fake citation involved a chronological impossibility.
Ruling/Sanction
Judge Jesse Furman identified the citations as fake and issued an order to show cause regarding sanctions against the attorney. However, Judge Furman ultimately declined to impose sanctions on Cohen himself, noting his non-lawyer status, his stated (though surprising) ignorance of generative AI risks, and the expectation that his licensed attorney should have verified the citations. The judge nonetheless described the incident as "embarrassing" for Cohen and denied his underlying motion on the merits.
Key Judicial Reasoning
The court highlighted the importance of verifying AI-generated content and the responsibility of licensed attorneys to ensure the accuracy of filings, even when research suggestions come from clients. The incident further underscored the unreliability of generative AI for legal research if used without verification.
In a footnote, the court held: "The Court notes that citing non-existent case law might potentially warrant sanctions under Federal Rules of Civil Procedure 11(b) and 11(c). See Fed. R. Civ. P. 11(b)–(c). Because the plaintiff is pro se and the Court is dismissing this suit, it has determined that a fuller investigation and consideration of potential sanctions is not warranted at this point in time."
AI Use
The appellant's authorized representative submitted arguments based on ChatGPT outputs attempting to challenge the tax valuation of real property. The representative failed to specify what exact queries were made to ChatGPT, rendering the outputs unverifiable and untrustworthy.
Hallucination Details
No explicit fabricated case law was cited. Instead, the appellant relied on generalized, unverifiable statements produced by ChatGPT to contest the capitalization factor and COVID-19 valuation discounts applied by the tax authorities.
Ruling/Sanction
The Court refused to attribute any evidentiary value to the ChatGPT-based arguments. It found that without disclosure of the input prompts and verification of AI outputs, the content was legally inadmissible as probative material. However, no sanctions were imposed, likely due to the novelty of the misuse and the lack of bad faith.
Key Judicial Reasoning
The Court emphasized that judicial proceedings demand verifiable, fact-based arguments. AI outputs that lack transparency (particularly about the underlying prompt and methodology) cannot serve as a substitute for evidence. The judgment explicitly notes that reliance on ChatGPT statements without verifiability "does not affect" the Court’s reasoning or the tax authority's burden of proof.
AI Use
Vancouver lawyer Chong Ke used ChatGPT to assist in preparing a Notice of Application in a family law case concerning parental travel with children.
Hallucination Details
The application included references to two fictitious cases generated by ChatGPT. Opposing counsel identified the non-existent cases.
Ruling/Sanction
Ms. Ke informed the court she was unaware ChatGPT could be unreliable, had not verified the cases, and apologized. Justice D.M. Masuhara reprimanded the lawyer but rejected the opposing side's request for a special costs order against her. The Law Society of British Columbia confirmed it was investigating Ms. Ke's conduct.
Key Judicial Reasoning
Justice Masuhara stated clearly that "generative AI is still no substitute for the professional expertise that the justice system requires of lawyers" and emphasized that competence in selecting and using technology tools, including AI, is critical for maintaining the integrity of the justice system. The case served as Canada's first high-profile example of the issue, prompting warnings about the need for diligence.
AI Use
Appellant admitted in his Reply Brief that he hired an online consultant (purportedly an attorney) to prepare his appellate filings cheaply. This consultant used generative AI, leading to the inclusion of numerous fictitious citations. Karlen denied intent to mislead but acknowledged ultimate responsibility for the submission.
Hallucination Details
Out of twenty-four total case citations in Karlen’s appellate brief:
- Only two were genuine (and misused).
- Twenty-two were completely fictitious.
- Multiple Missouri statutes and procedural rules were cited incorrectly or completely misrepresented.
Ruling/Sanction
The Court dismissed the appeal for pervasive violations of appellate rules and awarded $10,000 in damages to the Respondent for the costs of defending against the frivolous appeal. The Court stressed that submission of fabricated legal authority is an abuse of the judicial system, regardless of pro se status.
Key Judicial Reasoning
The Court invoked Mata v. Avianca to reinforce that citing fake opinions is an abuse of the adversarial system. The opinion emphasized that self-represented parties are fully bound by Rule 55.03 (certification of factual and legal contentions) and the Missouri Rules of Appellate Procedure. The decision warned that the Court will not tolerate fraudulent or AI-hallucinated filings, even from non-lawyers.
AI Use
In a wrongful death case, plaintiff's counsel filed four memoranda opposing motions to dismiss. The drafting was done by junior staff (an associate and two recent law school graduates not yet admitted to the bar) who used an unidentified AI system to locate supporting authorities. The supervising attorney signed the filings after reviewing them for style and grammar, but admittedly did not check the accuracy of the citations and was unaware AI had been used.
Hallucination Details
Judge Brian A. Davis noticed that certain citations "seemed amiss" and, after investigation, could not locate three cases cited in the memoranda. These were fictitious federal and state case citations.
Ruling/Sanction
After being questioned, the supervising attorney promptly investigated, admitted the citations were fake and AI-generated, expressed sincere contrition, and explained his lack of familiarity with AI risks. Despite accepting the attorney's candor and lack of intent to mislead, Judge Davis imposed a $2,000 monetary sanction on the supervising counsel, payable to the court.
Key Judicial Reasoning
The court found that sanctions were warranted because counsel failed to take "basic, necessary precautions" (i.e., verifying citations) before filing. While the sanction was deemed "mild" due to the attorney's candor and unfamiliarity with AI (distinguishing it from Mata's bad faith finding), the court issued a strong warning that a defense based on ignorance "will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known". The case underscores the supervisory responsibilities of senior attorneys.
AI Use
Counsel admitted using ChatGPT to find supporting case law after failing to locate precedent manually. She cited a fictitious case (Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014)) in the reply brief, never verifying its existence.
Hallucination Details
Only one hallucinated case appeared in the reply brief. When asked to produce it, Counsel admitted the case did not exist, blaming reliance on ChatGPT.
Ruling/Sanction
The Court referred Counsel to the Second Circuit’s Grievance Panel for further investigation and possible discipline. Counsel was also ordered to furnish a copy of the decision (translated if necessary) to her client and to file certification of compliance.
Key Judicial Reasoning
The Court emphasized that attorneys must personally verify the existence and accuracy of all authorities cited. Rule 11 requires a reasonable inquiry, and no technological novelty excuses failing to meet that standard. The Second Circuit cited Mata v. Avianca approvingly, confirming that citing fake cases amounts to abusing the adversarial system.
AI Use
Osborne’s attorney, under time pressure, submitted reply papers heavily relying on a website or tool that used generative AI. The submission included fabricated judicial authorities presented without independent verification. No admission by the lawyer was recorded, but the court independently verified the error.
Hallucination Details
Of the six cases cited in the October 11, 2023 reply, five were found to be either fictional or materially erroneous. A basic Lexis search would have revealed the fabrications instantly. The court drew explicit comparisons to the Mata v. Avianca fiasco.
Ruling/Sanction
The court struck the offending reply papers from the record and ordered the attorney to appear for a sanctions hearing under New York’s Rule 130-1.1. Potential sanctions include financial penalties or other disciplinary measures.
Key Judicial Reasoning
The court emphasized that while the use of AI tools is not forbidden per se, attorneys must personally verify all outputs. The violation was deemed "frivolous conduct" because the lawyer falsely certified the validity of the filing. The judge stressed the dangers to the judicial system from fictional citations: wasting time, misleading parties, degrading trust in courts, and harming the profession’s reputation.
AI Use
Felicity Harber, a self-represented taxpayer appealing an HMRC penalty, submitted a document citing nine purported First-Tier Tribunal decisions supporting her position regarding "reasonable excuse". She stated the cases were provided by "a friend in a solicitor's office" and acknowledged they might have been generated by AI. ChatGPT was mentioned as a likely source.
Hallucination Details
The nine cited FTT decisions (names, dates, summaries provided) were found to be non-existent after checks by the Tribunal and HMRC. While plausible, the fake summaries contained anomalies like American spellings and repeated phrases. Some cited cases resembled real ones, but those real cases actually went against the appellant.
Ruling/Sanction
The Tribunal factually determined the cited cases were AI-generated hallucinations. It accepted Mrs. Harber was unaware they were fake and did not know how to verify them. Her appeal failed on its merits, unrelated to the AI issue. No sanctions were imposed on the litigant.
Key Judicial Reasoning
The Tribunal emphasized that submitting invented judgments was not harmless, citing the waste of public resources (time and money for the Tribunal and HMRC). It explicitly endorsed the concerns raised in the US Mata decision regarding the various harms flowing from fake opinions. While lenient towards the self-represented litigant, the ruling implicitly warned that lawyers would likely face stricter consequences. This was the first reported UK decision finding AI-generated fake cases cited by a litigant.
AI Use
Attorney Zachariah C. Crabill, relatively new to civil practice, used ChatGPT to research case law for a motion to set aside judgment, a task he was unfamiliar with and felt pressured to complete quickly.
Hallucination Details
Crabill included incorrect or fictitious case citations provided by ChatGPT in the motion without reading or verifying them. He realized the errors ("garbage" cases, per his texts) before the hearing but did not alert the court or withdraw the motion.
Ruling/Sanction
When questioned by the judge about inaccuracies at the hearing, Crabill falsely blamed a legal intern. He later filed an affidavit admitting his use of ChatGPT and his dishonesty, stating he "panicked" and sought to avoid embarrassment. He stipulated to violating professional duties of competence, diligence, and candor/truthfulness to the court. He received a 366-day suspension, with all but 90 days stayed upon successful completion of a two-year probationary period. This was noted as the first Colorado disciplinary action involving AI misuse.
Key Judicial Reasoning
The disciplinary ruling focused on the combination of negligence (failure to verify, violating competence and diligence) and intentional misconduct (lying to the court, violating candor). While mitigating factors (personal challenges, lack of prior discipline) were noted in the stipulated agreement, the dishonesty significantly aggravated the offense.
AI Use
Defendants alleged that portions of Plaintiff’s response to a motion to dismiss were AI-generated.
Hallucination Details
No specific fabricated cases or fake quotations were identified. The concern was broader: incoherent and procedurally improper pleadings, compounded by apparent AI usage, which raised ethical red flags.
Ruling/Sanction
Rather than imposing sanctions, the court granted the pro se plaintiff leave to amend the complaint. Plaintiff was warned to comply with procedural rules and to submit a coherent, consolidated amended complaint, or face dismissal.
Key Judicial Reasoning
The judge stressed that AI use does not absolve pro se litigants of procedural compliance. Litigants must ensure pleadings are coherent, concise, and legally grounded, regardless of technological tools used. Courts cannot act as de facto advocates or reconstruct fragmented pleadings.
AI Use
Plaintiff, acting without counsel, submitted briefing that included multiple fabricated or erroneous judicial citations, likely generated by an AI tool used for research or drafting. While the tool itself is not named, the nature and clustering of errors mirror known AI output patterns.
Hallucination Details
Cited cases included wholly nonexistent opinions (e.g., "Las Cruces Sun-News v. City of Las Cruces") and real case names with incorrect volume/reporting details (e.g., misattributed circuits or invented page numbers). The citations lacked verifiable authority and were flagged by the court as spurious upon review.
Ruling/Sanction
The court dismissed several claims on substantive grounds but issued a sharp warning about the misuse of AI-generated legal citations. While no immediate sanctions were imposed, the judge explicitly referenced Mata v. Avianca and observed that this was only the second federal case to address AI hallucinations in pleadings. The plaintiff was cautioned that any recurrence would result in Rule 11 sanctions, including dismissal with prejudice.
Key Judicial Reasoning
The opinion stressed that access to courts is not a license to submit fictitious legal materials. Rule 11(b) requires factual and legal support for all filings, and even pro se litigants must adhere to this baseline. The court emphasized judicial efficiency, fairness to the opposing party, and the reputational harm caused by false citations. The misuse of AI was implicitly treated as a form of recklessness or bad faith, not excused by technological ignorance.
AI Use
Jerry Thomas filed pro se pleadings citing at least ten fabricated cases. The citations appeared plausible but did not correspond to any real authorities. Despite opportunities to explain, Thomas gave vague excuses about "self-research" and "assumed reliability" without clarifying his sources, suggesting reliance on AI-generated content.
Hallucination Details
- Ten fake case citations systematically inserted across filings
- Fabricated authorities mimicked proper citation format but were unverifiable in any recognized database
- The pattern mirrored known AI hallucination behaviors: fabricated authorities presented with apparent legitimacy
Ruling/Sanction
The Court dismissed the action with prejudice as a Rule 11 sanction. It emphasized that fake citations delay litigation, waste judicial resources, and erode public confidence. The Court explicitly invoked Mata v. Avianca for the broader dangers of AI hallucinations in litigation and found Thomas acted in bad faith by failing to properly explain the origin of the fabrications.
Key Judicial Reasoning
Citing fabricated cases (even if resulting from AI use or negligence) is sanctionable because it constitutes an improper purpose under Rule 11. Sanctions were deemed necessary to deter further abuses, with dismissal considered more appropriate than monetary penalties given the circumstances.
AI Use
Lancaster, filing objections to a magistrate judge’s Report and Recommendation, cited several fabricated case authorities. The Court noted the possibility of reliance on a generative AI tool and explicitly warned Lancaster about future misconduct.
Hallucination Details
Fabricated or mutant citations, including:
- Bazzi v. Sentinel Ins. Co., 961 F.3d 734 (6th Cir. 2020) — mutant citation blending two unrelated real cases
- Maldonado v. Ford Motor Co., 720 F.3d 760 (5th Cir. 2013) — nonexistent
- Malliaras & Poulos, P.C. v. City of Center Line, 788 F.3d 876 (6th Cir. 2015) — nonexistent
The Court highlighted that the majority of the cases cited in Lancaster’s objections were fake.
Ruling/Sanction
No immediate sanction was imposed, given Lancaster’s pro se status and the absence of prior warnings. However, the Court issued a pointed warning that citing "made-up law" could lead to significant sanctions, either in that Court or in any other court to which the case might be remanded.
Key Judicial Reasoning
The Court emphasized that unverified, fabricated legal citations undermine the judicial process and waste both judicial and litigant resources. Even without clear evidence of malicious intent, negligence in checking citations is sanctionable. Rule 11 duties apply fully to pro se litigants.
AI Use
The Court noted that the appellant's argument section appeared to have been drafted by AI, based on telltale errors: nonexistent cases, jump-cites into the wrong jurisdictions, and illogical structure. The Court cited a recent Texas CLE on AI usage to explain how it recognized the pattern.
Hallucination Details
Three fake cases were cited. The brief also contained no citations to the record and lacked clear argumentation on the issues presented.
Ruling/Sanction
The Court declined to issue a show cause order or to refer counsel to the State Bar of Texas, despite noting similarities to Mata v. Avianca. However, it affirmed the trial court’s denial of habeas relief due to inadequate briefing, and explicitly warned about the dangers of using AI-generated content in legal submissions without human verification.
Key Judicial Reasoning
The Court held that even if AI contributed to the preparation of filings, attorneys must ensure accuracy, logical structure, and compliance with citation rules. Failure to meet these standards precludes appellate review under Tex. R. App. P. 38.1(i). Courts are not obligated to "make an appellant’s arguments for him," especially where the defects in a brief are gross.
AI Use
The plaintiff's attorneys used ChatGPT to generate case law supporting the proposition that a body corporate can be sued for defamation. They forwarded eight cases—none of which exist—to opposing counsel during a post-hearing exchange and were unable to produce them later. Counsel admitted in open court that ChatGPT had been the source.
Hallucination Details
Fictitious cases included:
- Body Corporate of the Brampton Court v Weenen [2012] ZAGPJHC 133
- Body Corporate of Bela Vista v C & C Group Properties CC [2009] ZAGPPHC 54
- Dolphin Whisper Trading 21 (Pty) Ltd v The Body Corporate of La Mer [2015] ZAKZPHC 23
- Bingham v City View Shopping Centre Body Corporate [2013] ZAGPJHC 77
- Body Corporate of Pinewood Park v Behrens [2013] ZASCA 89
- Body Corporate of Empire Gardens v Sithole [2017] ZAGPJHC 23
- Body Corporate of the Island Club v Cosy Creations CC [2016] ZAWCHC 182
- Body Corporate of Fisherman’s Cove v Van Rooyen [2013] ZAGPHC 43
The court verified that the citations, parties, and contents were entirely fictitious.
Ruling/Sanction
The plaintiff’s entire claim was dismissed on legal grounds unrelated to the hallucinations (a body corporate cannot be sued for defamation under South African law).
Punitive costs were imposed on the attorney-and-client scale for the period between March 28 and May 22, 2023, during which the plaintiff’s legal team insisted such authorities existed. The court awarded the defendants 60% of standard costs for the remainder of the proceedings. No personal sanction or bar referral was issued, owing to counsel’s candor and the court’s confidence that the error stemmed from “overzealous and careless” use of ChatGPT rather than an intent to mislead.
Key Judicial Reasoning
The court stressed that AI tools like ChatGPT cannot be trusted for legal citation without human verification. Submitting hallucinated cases—even indirectly—misleads opposing counsel, wastes court time, and undermines trust in the legal process. The incident was used to underscore that “good old-fashioned independent reading” remains essential in legal practice.
AI Use
Counsel from Levidow, Levidow & Oberman used ChatGPT for legal research to oppose a motion to dismiss a personal injury claim against Avianca airlines, citing difficulty accessing relevant federal precedent through their limited research subscription.
Hallucination Details
The attorneys' submission included at least six entirely nonexistent judicial decisions, complete with fabricated quotes and internal citations. Examples cited by the court include Varghese v. China Southern Airlines Co., Ltd., Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Inc., Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines, Inc. When challenged by opposing counsel and the court, the attorneys initially stood by the fake cases and even submitted purported copies of the opinions, which were themselves generated by ChatGPT and contained further bogus citations.
Ruling/Sanction
Judge P. Kevin Castel imposed a $5,000 monetary sanction jointly and severally on the two attorneys and their law firm. He also required them to send letters informing their client and each judge whose name was falsely used on the fabricated opinions about the situation.
Key Judicial Reasoning
Judge Castel found the attorneys acted in bad faith, emphasizing their "acts of conscious avoidance and false and misleading statements to the Court" after the issue was raised. The sanctions were imposed not merely for the initial error but for the failure in their gatekeeping roles and their decision to "double down" rather than promptly correcting the record. The opinion detailed the extensive harms caused by submitting fake opinions. This case is widely considered a landmark decision and is frequently cited in subsequent discussions and guidance.
AI Use
Mr. Scott, opposing a motion to dismiss, filed a brief containing multiple fabricated case citations with plausible formatting but nonexistent underlying cases. The court recognized the pattern as typical of AI hallucinations. Scott did not admit to using AI, but the inference was clear.
Hallucination Details
Several of the case names, reporter citations, and quotations were fake; no matches could be found in any legal database. The quotations attributed to these cases were invented, and the citations, though superficially valid in format, were unverifiable.
Ruling/Sanction
- Complaint dismissed in full
- Sanctions imposed: Scott was ordered to pay the defendant’s reasonable attorney’s fees, costs, and expenses associated with the motion to dismiss and the motion for sanctions
- The Court required an affidavit from Fannie Mae detailing the fees, after which Scott could contest their reasonableness but not the sanction itself
Key Judicial Reasoning
The Court emphasized that using AI tools does not relieve any litigant of their duty to verify legal authorities. Citing or quoting nonexistent cases is a violation of Maine Rule of Civil Procedure 11. Even pro se litigants cannot "blindly rely" on AI outputs and are expected to exercise reasonable diligence. The judgment was framed explicitly to deter future abuse of AI-generated filings.