Lawyer Warning: Beware The ChatGPT “Legal Research”

New York lawyer Steven A. Schwartz, who has practiced law for more than three decades, is facing potential sanctions after using the ChatGPT language model for legal research that yielded inaccurate information.

Schwartz had prepared a brief citing legal precedents meant to keep the case alive after attorneys for the airline Avianca moved to dismiss it.

The brief submitted by Mr. Schwartz attracted the attention of Avianca’s legal team, who alerted the judge that they could not locate several of the cases it cited.

Consequently, Judge P. Kevin Castel expressed concerns and requested explanations from Schwartz and his colleague, Peter Loduca, citing an “unprecedented circumstance.”

According to Judge Castel’s order, six of the cases referred to in the brief appeared to be fictitious judicial decisions, containing fabricated quotations and internal citations.

The judge’s order has drawn attention to the potential misuse and unreliability of artificial intelligence (AI) tools like ChatGPT in the legal profession, and to the risk of fabricated information being presented in court.

Responsibility and Admission

In an affidavit, Schwartz explained that Loduca’s name appeared on the documents because Schwartz himself was not admitted to practice in federal court, where the lawsuit was transferred after initially being filed in state court.

Schwartz took full responsibility for the legal work performed on the case and said that Loduca had been unaware that ChatGPT was used for the research.

In his own affidavit, Loduca stated that he had no reason to doubt the authenticity of the cases cited in the brief or Mr. Schwartz’s research methodology.

Regulation and AI

The incident has prompted discussion of how the growing use of AI in legal practice should be governed, at a time when law firms are already integrating artificial intelligence tools into research and other work.

Sam Altman, CEO of OpenAI, the organization behind ChatGPT, has urged the United States to regulate the rapid proliferation of AI technologies across various sectors, including the legal profession.

Attached to Schwartz’s affidavit are screenshots showing part of his conversation with ChatGPT.

In the exchange, Schwartz asked the model to confirm the legitimacy of one of the cases it had provided. ChatGPT asserted that the case was authentic, saying that upon “double-checking” it could be found in legal research databases, and claimed the remaining cases were genuine as well.

Both Schwartz and Loduca have been directed to explain, at a hearing scheduled for June 8, why they should not be sanctioned; the implications of their reliance on ChatGPT’s inaccurate legal research will be examined there.

This case of a lawyer relying on ChatGPT’s erroneous legal research has brought to light important issues surrounding the use of AI in the legal profession.

The incident underscores the need for regulation and diligence when incorporating AI tools into legal research, so that the information presented in court is accurate and reliable. The role of AI in legal research, and in legal work generally, will doubtless remain an ongoing issue.
