The Risks of ‘Hallucinated’ Legal AI in Court
A recent Australian Federal Court ruling serves as a reminder to legal professionals about the risks of unsupervised artificial intelligence use in legal document preparation.
Melbourne law firm Massar Briggs Law has been ordered to pay indemnity costs after a junior solicitor used AI-enabled software that generated fabricated citations in submissions for a native title determination. Justice Bernard Murphy found that the AI tools had “hallucinated” information that appeared legitimate but was entirely fictional.
When First Nations Legal and Research Services reviewed the firm’s submission to verify citations for anthropological and historical reports, they discovered that most of the cited documents did not exist, while others existed but were incorrectly cited.
The Court’s Findings
Justice Murphy determined that the situation resulted from “the tendency of generative AI to ‘fabricate’ or ‘hallucinate’ information that looks accurate and reliable but that is not based in fact.”
The junior solicitor had used Google Scholar to generate citations while working remotely, lacking access to the actual documents housed in the firm’s office. When the issue was identified, attempts to replicate the search results revealed that Google Scholar produced different results with each query.
Professional Implications
Justice Murphy specifically chose to publish this order due to its broader significance to the legal profession, recognizing it as part of a “growing problem regarding false citations in documents prepared using AI.”
Principal lawyer Jason Briggs acknowledged the firm’s failure to ensure proper supervision of remote work and adequate verification of the junior solicitor’s citations.
Regulatory Response
The Federal Court is developing comprehensive guidance on AI use in legal practice. Chief Justice Debra Mortimer announced in April that the court is considering guidelines or practice notes to address generative artificial intelligence use.
The court aims to “appropriately balance the interests of the administration of justice with the responsible use of emergent technologies in a way that fairly and efficiently contributes to the work of the Court.”
Key Considerations for Legal Practitioners
This case highlights several critical issues for the profession:
- Verification Requirements: AI-generated citations must be independently verified against actual sources (a simple automated cross-check is sketched after this list)
- Supervision Protocols: Enhanced oversight is essential, particularly for remote work arrangements
- AI Limitations: Recognition that AI tools can produce inconsistent and fabricated results
- Professional Responsibility: Maintaining accountability for document accuracy regardless of the tools used
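For firms wanting a technical backstop to the verification step above, one option is to cross-check every citation in a draft against a machine-readable catalogue of the documents the firm actually holds. The Python sketch below is illustrative only: the catalogue, the document titles, and the normalise() matching rule are assumptions made for this example, not a real court or library API, and a match here is no substitute for a lawyer reading the cited source.

```python
# Illustrative sketch: flag draft citations that do not match any document
# the firm holds on file. All titles and the matching rule are hypothetical.

def normalise(title: str) -> str:
    """Reduce a citation title to a comparable form (lowercase, single spaces)."""
    return " ".join(title.lower().split())

def check_citations(citations: list[str], catalogue: set[str]) -> dict[str, list[str]]:
    """Partition citations into catalogue matches and items needing manual review."""
    verified_index = {normalise(t) for t in catalogue}
    result: dict[str, list[str]] = {"verified": [], "unverified": []}
    for cite in citations:
        key = "verified" if normalise(cite) in verified_index else "unverified"
        result[key].append(cite)
    return result

if __name__ == "__main__":
    # Hypothetical catalogue: titles of reports the firm actually holds.
    held_documents = {
        "Anthropological Report on the Claim Area (1998)",
        "Historical Survey of the Region (2004)",
    }
    # Citations as they appear in a draft submission, AI-assisted or otherwise.
    draft_citations = [
        "Anthropological Report on the Claim Area (1998)",
        "Ethnographic Notes on the Claim Group (1987)",  # not on file: flagged
    ]
    report = check_citations(draft_citations, held_documents)
    for cite in report["unverified"]:
        print(f"NEEDS MANUAL VERIFICATION: {cite}")
```

A check like this can only surface citations that fail to match a trusted index; it cannot confirm that a matching document actually supports the proposition cited, which remains a human responsibility.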
Moving Forward
While Justice Murphy did not refer the matter to the Victorian Legal Services Board, treating it as a learning opportunity rather than professional misconduct, the case sets an important precedent for AI accountability in legal practice.
Any forthcoming Federal Court guidance should provide clearer direction on appropriate AI use while maintaining the integrity of legal proceedings. In the meantime, legal professionals must implement robust verification processes and supervision protocols to harness AI’s benefits while mitigating its risks.
The underlying native title determination for the Wamba Wemba claim group remains active, demonstrating that while technology failures create complications, they need not derail substantive legal proceedings when addressed appropriately.