Will Lawyers Ever Learn About AI in Court?
It’s like the movie Groundhog Day in legal tech land: yet another courtroom, another set of lawyers caught submitting briefs riddled with AI-generated, utterly fake case citations.
At this rate, we’re wondering if it’s time for every lawyer to enroll in a CLE called “How Not to Make Headlines as the Next AI Hallucination Victim.” These legal AI stories are coming out so frequently that it’s getting embarrassing. The latest report comes from Bob Ambrogi’s LawSites.
This time, the saga begins in the U.S. District Court for the Central District of California, home to some serious litigation firepower, where you’d hope things like “Do these cases actually exist?” are still on the lawyerly to-do list.
Attorneys from Ellis George LLP and K&L Gates LLP (yes, that K&L Gates, the international powerhouse) found themselves in hot water after submitting a brief to Special Master Michael Wilner that was packed with fake legal citations, courtesy of their favorite AI research tools: CoCounsel, Westlaw Precision, and Google Gemini.
How did this happen? Trent Copeland at Ellis George admitted he used these tools to whip up an outline, which he then circulated to colleagues at K&L Gates without flagging its AI origins or, crucially, checking the citations. His colleagues then built it straight into the final brief, skipping the basic duty of fact-checking.
When the Special Master challenged a couple of fishy-looking cases, the attorneys scrambled to file a “corrected” version. Spoiler: it still contained at least six bogus citations. In total, they admitted that one-third of their cited cases were either wrong or, quite literally, didn’t exist.
Special Master Wilner didn’t sugarcoat it. He said the lawyers’ conduct was “tantamount to bad faith,” lambasted the lack of disclosure about the AI-generated content, and found the failure to double-check the research “deeply troubling,” especially after the attorneys had already been put on notice the first time around.
Here’s the damage:
- All versions of their brief got trashed
- The discovery relief they wanted? Denied
- $31,100 in legal fees, payable to the opposing party
- Disclosure of this debacle to their client
Wilner summed it up: “That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”
Toronto Joins the Hallucination Parade
Meanwhile, north of the border, AI-induced legal headaches are going global. In Toronto’s Ko v. Li case, lawyer Jisuh Lee got the side-eye from Ontario Superior Court Justice Fred Myers when her factum cited two cases that simply didn’t exist.
When pressed, Lee claimed her office doesn’t routinely rely on AI but would “check with her clerk” (never a great sign). She couldn’t produce the cases or find real citations to back up her submissions.
The judge, after reviewing her work post-hearing, discovered yet more problems: another non-existent case and one case used to argue the exact opposite of what it actually held.
His response? “It should go without saying that it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities.” Ouch.
The judge ordered Lee to explain herself or face potential contempt proceedings.
When Will the Message Land?
Despite all the headlines, the sanctions, the blog posts, and the increasingly frustrated judges, lawyers are still falling into this trap. The legal community has been warned: AI is great for brainstorming, but it’s not a shortcut for actual research or (let’s say it louder for the people at the back) verifying your citations.
Until more lawyers learn that lesson, expect this headline to pop up again, and again, and again: “Another Lawyer Sanctioned for Fake AI-Generated Cases.”
Reader Comments
Honestly, how hard is it to double-check your sources? AI has been around long enough that you’d think lawyers, of all people, would adapt faster. Or are we just seeing the stubborn side of the profession refusing to evolve?
It’s not about being stubborn; it’s about being cautious. The legal field values precision above all, and AI still has a long way to go. Mistakes like these highlight the gap.
But wouldn’t you say it’s also about learning and adapting to new tools? At some point, AI will be more reliable, and starting the learning curve now makes sense.
Fascinating read! It’s clear the legal profession is on the cusp of a major technological shift. Embracing AI, with all its quirks, is crucial for innovation. Props to LawFuel Editors for highlighting this.
This is exactly why we can’t rush to incorporate every new gadget into our practice. The law is based on precedent and reliability, something AI can’t provide yet.
So what happens next for the lawyers who filed the fake citations? Do they get trained in AI, or is it more about punishment to set an example?
The Toronto case is a prime example of the teething problems at the intersection of AI and traditional professions. It’s a learning curve, but one that promises to redefine how we understand and practice law.
I’ve had ChatGPT-4o produce fake case citations twice.
No, I didn’t file anything with the court that referred to those citations. I checked first. The cases didn’t exist or were about something quite different.
The irritating (and most frightening) aspect is that, on the second occasion, I specifically instructed ChatGPT that hallucinations were unacceptable and unforgivable and it was to err on the side of caution and not mention a case unless it was absolutely certain that it existed. Made no difference! (Well, not enough difference. Maybe it produced only one fake reference instead of six.)
It’s beguiling: it produces answers that are credible and mostly accurate, but laced with hallucinations.
I saw a LinkedIn comment that said AI output should be treated as the output of a work-experience junior. Even that is not quite correct, because any workplace junior who fabricated cases would not get a second chance.
We have to be brave enough to resist the temptation to file without checking, even if we have to delete that part of the submissions or be late.