A lawyer used ChatGPT for the case! But did it work?

ChatGPT, developed by the artificial intelligence company OpenAI, has been used in almost every industry since its debut in November. It has served many purposes, from giving medical advice to assisting software engineers to streamlining work in the field of law. In the latest example, a lawyer used ChatGPT to prepare a case filing. So did using ChatGPT work?

Before going into detail, let’s briefly cover the background of the case. In 2019, a man named Roberto Mata sued the airline Avianca, claiming he was injured by a service trolley during a flight and that the injury resulted from the airline’s negligence. Steven Schwartz, a lawyer with nearly 30 years of experience, took on the role of Mata’s representative.

ChatGPT screwed up

But Judge Kevin Castel found that at least six of the cases Schwartz submitted as research in a brief were fabricated, complete with false quotes and false internal citations. The source of the fake cases was none other than ChatGPT.

“The Court is faced with an unprecedented situation,” Judge Castel wrote. Schwartz, for his part, said in an affidavit that he had never used ChatGPT as a legal research source before this lawsuit and was therefore “unaware of the possibility that its content could be false.” He accepted responsibility for not verifying the sources the chatbot offered.

Schwartz now faces a sanctions hearing on June 8. In his affidavit, Schwartz said he “greatly regrets using generative artificial intelligence to support the legal research being conducted here, and would never do so in the future without definitively confirming its authenticity.”

The most dangerous problem of generative AI

Generative AI tools can deliver incredible benefits. They can help a company work more efficiently, let a software developer do their job faster and better, and raise the quality of education. Likewise, legal AIs could help deliver better justice.

But for any of this to happen, the problem of artificial intelligence “hallucination” needs to be solved. These tools can produce completely fabricated information that is not grounded in any fact, and then confidently defend it: in this case, ChatGPT insisted that the citations it supplied were genuine. Development shows no sign of stopping, however. Just a short time ago, the Supreme Court announced that it is working on an artificial intelligence trained on precedent decisions.
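The practical safeguard implied by Schwartz’s affidavit is straightforward: never trust a model-supplied citation until it has been checked against a real case-law database. Below is a minimal Python sketch of what such a check could look like, querying the free CourtListener search API and flagging citations that return no results. The endpoint, parameters, and response fields are assumptions based on CourtListener’s public REST API and may differ in practice; the sample citation is one of the fabricated cases from this lawsuit.

import requests

# Minimal citation checker: flag citations that a real case-law database
# cannot find. The endpoint and response shape are assumptions based on
# CourtListener's public REST API and may differ.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_exists(citation: str) -> bool:
    """Return True if the search API reports at least one matching opinion."""
    resp = requests.get(SEARCH_URL, params={"q": citation, "type": "o"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# "Varghese v. China Southern Airlines" is one of the nonexistent cases
# ChatGPT invented for the Avianca brief.
citations = ["Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)"]

for cite in citations:
    status = "found" if citation_exists(cite) else "NOT FOUND - verify manually"
    print(f"{cite}: {status}")

A check like this would not catch every hallucination, since a real case can still be cited for the wrong proposition, but it would have flagged the nonexistent decisions before they reached the court.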
