A New York attorney is facing a court hearing of his own after his firm used the AI tool ChatGPT for legal research.
A judge said the court was facing an unprecedented situation after a filing was found to reference legal cases that did not exist. The attorney who used the tool told the court he had not been aware that its content could be false.
ChatGPT generates original text on request, but comes with a warning that it can "generate inaccurate information." The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief citing several previous court cases as precedent in an attempt to show why the case should proceed. However, the airline's attorneys later told the judge that they could not find several of the cases cited in the brief.
In an order requiring the man's legal team to explain itself, Judge Castel wrote that six of the submitted cases appeared to be fake judicial decisions with fake quotes and fake internal citations. Over the course of several filings, it emerged that the research had not been conducted by the plaintiff's attorney, Peter LoDuca, but by a colleague at the same law firm, Steven A. Schwartz, who has been an attorney for more than three decades and used ChatGPT to look for similar past cases.
In a written statement, Mr. Schwartz clarified that Mr. LoDuca had not taken part in the research and had no knowledge of how it had been carried out. Mr. Schwartz said he regretted relying on the chatbot, which he said he had never used for legal research before, and added that he had been unaware its information could be false.
He has resolved never again to use AI to "supplement" his legal research "without verifying its authenticity with absolute certainty." Screenshots attached to the filing appear to show Mr. Schwartz's conversation with ChatGPT. One message reads "is Varghese a real case?", referring to Varghese v. China Southern Airlines Co Ltd, one of the cases that no one else could locate.
ChatGPT responded that it was, prompting "S" to ask, "What is your source?" After double checking, ChatGPT responded again that the case was authentic and could be found in legal reference databases such as Westlaw and LexisNexis. It said the other cases it had supplied to Mr. Schwartz were genuine as well.
Both attorneys, who work for the firm Levidow, Levidow & Oberman, will be required to explain why they should not be reprimanded at a hearing scheduled for June 8. ChatGPT has been used by millions of people since its initial release in November 2022.
It can answer questions in natural, human-like language, and it can mimic a range of writing styles. Its knowledge base is drawn from the internet as it existed in 2021. Concerns have been raised about the potential hazards of artificial intelligence (AI), including the spread of misinformation and bias.