In a recent case in the US, a lawyer was revealed to have used ChatGPT for legal research, resulting in fabricated information being submitted to the court. The incident once again illustrates the potential risks of AI in the legal field, including the spread of misinformation.
The case involved a man suing an airline over an alleged personal injury. The plaintiff’s legal team filed a brief citing several prior court cases in support of his claims, attempting to establish legal precedent. However, the airline’s lawyers discovered that some of the cited cases did not actually exist and alerted the presiding judge.
Judge Kevin Castel, who presided over the case, called the filing “an unprecedented situation”, expressed surprise, and requested an explanation from the legal team.
Steven Schwartz, a colleague of the lead attorney, admitted to using ChatGPT to search for similar legal precedents. In a written statement, Schwartz expressed deep regret, explaining that he had “never used artificial intelligence for legal research before and was unaware that its content could be false.”
Screenshots attached to the filing show a conversation between Schwartz and ChatGPT in which Schwartz asks whether the case Varghese v. China Southern Airlines Co Ltd is genuine.
ChatGPT confirmed its authenticity, stating that the case could be found in legal reference databases such as LexisNexis and Westlaw. Subsequent investigation, however, revealed that the case did not exist, casting further doubt on the other cases ChatGPT had provided.
In light of this event, both attorneys involved in the case, Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman, were summoned to a disciplinary hearing on June 8 to explain their actions.
This incident is likely to spark debate in the legal community about the appropriate use of AI tools in legal research and the need for comprehensive guidelines to prevent similar occurrences.