Lawyers worrying about AI giving bad advice to users may have been looking in the wrong direction. The real risk might be the collateral damage.
A new U.S. lawsuit against OpenAI seeks US$10 million, alleging the company’s chatbot effectively engaged in the unauthorised practice of law after a user relied on ChatGPT to dismantle her own legal case.
According to the complaint, the woman allegedly used the AI tool to:
- Fire her lawyer
- Reopen a settled disability claim
- File more than 40 AI-generated court documents
- Cite statutes and cases that apparently did not exist
The insurer that had to respond to those filings says it incurred substantial legal fees dealing with documents that had “no legitimate legal purpose”.
Until recently, most of the legal debate around AI hallucinations focused on the user being misled. Courts have already scolded lawyers for filing briefs containing fake AI-generated citations and for relying on the tools without checking their output.
But this case flips the lens. The claim is essentially that a third party suffered financial harm because someone relied on AI legal advice.
In other words, the alleged damage wasn’t done to the AI user. It was done to the opposing party, which had to respond to a stream of filings built on fabricated law.
If courts accept that theory, the consequences for legal AI could be significant.
It would mean AI tools do not merely create risk for the people using them. They may also create liability to anyone drawn into a dispute who incurs costs responding to AI-driven litigation noise.
For law firms already nervous about AI hallucinations and compliance, the case is another reminder that the legal exposure around generative AI may extend far beyond professional negligence or misleading outputs.
It may also involve a new and uncomfortable concept: AI-generated litigation externalities, costs inflicted on third parties that someone, eventually, has to pay for.