A growing number of lawyers in the United States are coming under judicial scrutiny for submitting court filings containing fictitious legal citations generated by artificial intelligence (AI), highlighting the risks of unverified AI use in legal practice.
The latest case involves two attorneys facing possible sanctions after a federal judge in Wyoming found fabricated case citations in a filing in a lawsuit against retail giant Walmart. One of the lawyers admitted to using an AI tool that produced the false references, describing the error as inadvertent.
Following the development, Morgan & Morgan, a prominent personal injury law firm with more than 1,000 attorneys, issued an internal warning against unverified AI use in legal documents. The firm, however, declined to comment on the matter. Walmart also refrained from making any public statements.
Growing AI Use in Law
The Wyoming case is among at least seven incidents in recent years in which US courts have questioned or disciplined lawyers for including fictitious AI-generated case law in their filings, according to a Reuters report.
A 2023 survey by Thomson Reuters found that 63 per cent of lawyers surveyed had used AI in their work, with 12 per cent relying on it regularly. Law firms are increasingly integrating AI into their research and drafting processes, either by contracting external AI providers or by developing proprietary tools.
Despite its efficiency, generative AI is prone to fabricating information, a failure known in AI terminology as "hallucination". The technology composes responses by predicting statistically likely sequences of text rather than by verifying facts, raising concerns about its reliability in legal settings.
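To illustrate the mechanism in the simplest possible terms, the toy Python sketch below (not any real legal AI product; the vocabulary and probabilities are invented for illustration) builds a "citation" one word at a time purely by sampling from a probability table. Note that no step anywhere checks whether the result refers to a real case.

```python
import random

# Toy next-word model: maps a context word to candidate next words with
# probabilities. Real large language models learn billions of such
# statistical associations from text; crucially, nothing below consults
# a case-law database or verifies facts. (All names and probabilities
# here are invented for illustration.)
NEXT_WORD = {
    "<start>": [("Smith", 0.5), ("Jones", 0.5)],
    "Smith": [("v.", 1.0)],
    "Jones": [("v.", 1.0)],
    "v.": [("Acme", 0.6), ("United", 0.4)],
    "Acme": [("(1987)", 0.7), ("(2003)", 0.3)],
    "United": [("(1987)", 0.5), ("(2003)", 0.5)],
}

def sample_citation() -> str:
    """Generate a plausible-looking 'citation' word by word.

    Each word is chosen by statistical likelihood alone, so the output
    can read like a genuine case while citing nothing at all -- the
    mechanism behind AI 'hallucinations'.
    """
    word, out = "<start>", []
    while word in NEXT_WORD:
        candidates, weights = zip(*NEXT_WORD[word])
        word = random.choices(candidates, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    print(sample_citation())  # e.g. "Smith v. Acme (1987)": fluent, but unverified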
Judicial Crackdown
Federal judges have begun taking action against AI-related legal errors. In one of the earliest cases, a Manhattan court in June 2023 fined two New York lawyers $5,000 for citing non-existent cases in a personal injury lawsuit against an airline.
Other notable cases include:
- In a lawsuit involving Michael Cohen, former lawyer to Donald Trump, a New York judge considered imposing sanctions after Cohen mistakenly provided his attorney with AI-generated citations, though neither was penalised. The judge, however, called the episode "embarrassing."
- In November 2023, a Texas federal judge fined a lawyer $2,000 and ordered them to complete a course on AI in law after they cited fictitious cases in a wrongful termination lawsuit.
- Last month, a federal judge in Minnesota ruled that a misinformation expert had damaged his credibility after he admitted to citing AI-generated references in a case involving a deepfake parody of US Vice President Kamala Harris.
Legal ethics require attorneys to verify and stand by their court submissions. The American Bar Association has reinforced this stance, advising its 400,000 members that even unintentional AI-generated misstatements could lead to disciplinary action.
Andrew Perlman, dean of Suffolk University’s law school, warned that failing to verify AI-generated legal research amounts to professional incompetence. “When lawyers use ChatGPT or other AI tools to generate citations without verifying them, that’s incompetence, pure and simple,” he stated.
‘Lack of AI Literacy’
Experts argue that the issue lies in how AI is used, rather than in the technology itself. Harry Surden, a University of Colorado law professor specialising in AI and law, noted that lawyers have always made errors in filings, and AI is merely exposing a broader issue of inadequate verification.
“Lawyers need to invest time in understanding the strengths and weaknesses of AI tools,” Surden said, adding that recent incidents reflect a “lack of AI literacy” in the legal profession.
While AI continues to reshape legal work by streamlining research and drafting, its reliability remains a pressing concern. As courts tighten scrutiny, lawyers may need to rethink how they integrate AI into their practice or risk professional consequences.