Dangers of ChatGPT Highlighted by Lawyers’ Dilemma

You might think that at some point lawyers will find out what they categorically cannot do with artificial intelligence. But that day is not today.

The recent case of a New York federal judge sanctioning lawyers who submitted a legal brief, written with the artificial intelligence tool ChatGPT, that cited non-existent court cases raises important questions about the role of AI in the legal profession and the need for ethical guidelines.

The use of AI in the legal profession is not new, and many law firms are experimenting with AI-based tools to automate tasks such as document review, legal research, and contract analysis. At least one law-specific AI tool has even raised a significant round of venture capital to automate contract work using generative AI.

Aron Solomon is Esquire Digital’s Head of Strategy and Editor of Today’s Esquire.

The recent ChatGPT incident has drawn attention to the potential dangers of AI being used without proper supervision and training. Lawyers Steven Schwartz and Peter LoDuca used ChatGPT to prepare a filing in New York federal court. Their brief cited six non-existent court decisions. When neither the judge nor opposing counsel could locate these cases, the judge demanded that Schwartz and LoDuca produce the full texts of the decisions ChatGPT had cited.

The reason the cases could not be found was simple and truly mind-blowing: ChatGPT invented them.

At the hearing, Judge P. Kevin Castel expressed his intention to consider imposing sanctions on Schwartz and LoDuca for using ChatGPT. Schwartz attempted to explain why he did not further investigate the cases ChatGPT provided, stating that he was unaware of its ability to fabricate cases. The judge remained unconvinced, stressing that lawyers have a duty to verify the accuracy of the information they present in court.

The ChatGPT case highlights serious concerns about the ethical use of AI. The following points summarize the main lessons to be learned:

  • AI is error-prone: While AI-powered tools can be valuable, they are not infallible. Lawyers should be careful not to rely blindly on AI-generated information without verifying its accuracy.
  • Training is critical: Lawyers using AI tools need proper training to understand the limitations of such tools and the potential risks. They should also be well versed in the ethical considerations of AI use.
  • Oversight is essential: Law firms need effective oversight mechanisms to ensure the responsible and ethical use of AI. This includes monitoring the use of AI tools, providing training and support, and enforcing ethical compliance.
  • Transparency is key: Lawyers who use AI tools must be transparent about that use and must disclose it to clients and the court, including information about the limitations and potential risks of AI.

All of these points should be obvious, but they are not. The legal profession lacks the ethical principles and oversight mechanisms needed to ensure the responsible use of AI. Today we are forced to rely on the good, or merely passable, judgment of lawyers who are not technology experts and fail to understand the breadth and scope of what AI can and cannot do.

While in the long term proper training, oversight, and transparency are essential to ensure that AI is used in ways that maintain the integrity of the legal profession, for now, lawyers who do not fully understand how to use it responsibly should not use it at all.
