Lawyer Uses ChatGPT at Client's Peril

Timothy P. Flynn June 12, 2023

By now everyone has heard an anecdote or two about ChatGPT or similar generative AI programs from OpenAI and other companies. Within the legal context, lawyers are tempted to deploy ChatGPT to assist with tasks such as conducting legal research and drafting briefs.

A federal case from New York, Mata v. Avianca, Inc., should give lawyers pause before using a generative AI tool like ChatGPT to conduct legal research. In that case, a federal judge in the Southern District of New York ordered a lawyer to show cause as to why he should not be sanctioned by the court.

The situation came to a head when opposing counsel in the case called the court’s attention to case law cited by the Plaintiff’s lawyer that simply did not exist. In ordering the lawyer to show cause, the judge noted that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” Here is a link to the court’s show cause order.

In the court’s show cause order, several of the cases, with citations as submitted to the court, were exposed as non-existent, “bogus” cases; the citations contained in the lawyer’s brief actually pointed to other, unrelated published cases. As practicing lawyers, our collective jaw is dropping over this case for many reasons.

First, it is noteworthy that the lawyer who signed the offending brief was simply a professional “strawman,” or shill, for the lawyer who apparently drafted the brief with ChatGPT’s assistance. This is because the lawyer who signed the filed brief was admitted to practice before the Southern District of New York while the lawyer doing the work was not; the lawyer with the client needed to borrow his colleague’s name and signature.

Second, we here at Clarkston Legal find it incredible that the lawyer drafting the brief did not check established databases such as LEXIS, Westlaw, or even Google, for citation accuracy. Trusting a brand-new generative AI program to accurately cite relevant and responsive cases is a massive leap of faith.

In their first year of law school, students are thoroughly trained in how to use Boolean-style searches to conduct computer-assisted legal research. The goal of that training is to teach the future lawyer how best to find the most relevant and up-to-date cases and statutes. This has been part of basic law school pedagogy since the early 1980s.

These Boolean-style searches are early examples of computer-assisted legal technology, but they are retrieval tools, not generative AI. Generative AI, by contrast, is a type of artificial intelligence capable of generating new outputs (text or images) in response to user prompts. There are no known cases of LEXIS or Westlaw returning “bogus” cases, complete with faux citations, of the type generated by ChatGPT.
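To make that distinction concrete, here is a minimal sketch, in Python, of how a Boolean-style search behaves. The tiny collection of case headnotes is entirely hypothetical, and real services like LEXIS and Westlaw use far more sophisticated indexing, but the core property is the same: a retrieval system can only return documents that actually exist in its collection.

```python
# A minimal sketch of Boolean-style retrieval over a tiny, hypothetical
# collection of case headnotes. Unlike a generative model, a retrieval
# system can only return documents that already exist in its collection.

headnotes = {
    "Smith v. Jones (hypothetical)":
        "montreal convention carrier liability international flight",
    "Doe v. Acme Air (hypothetical)":
        "statute of limitations tolled by bankruptcy stay",
    "Roe v. Sky Cargo (hypothetical)":
        "negligence in baggage handling personal injury",
}

def boolean_search(query: str) -> list[str]:
    """Return the cases whose headnote contains every AND-ed term."""
    terms = [t.strip().lower() for t in query.split(" AND ")]
    return [case for case, text in headnotes.items()
            if all(term in text for term in terms)]

print(boolean_search("montreal AND liability"))  # matches Smith v. Jones
print(boolean_search("punitive AND damages"))    # no match -> []
```

The empty list is the important part: when a Boolean search finds nothing, it says so. A generative model asked the same question may instead compose a plausible-looking but fictitious citation.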

Third, we need to get back to lawyer basics in this case: lawyers are trained in how to read, argue, and meticulously cite the law, down to the very page on which the lawyer is relying. Most dislike the actual task, but brief writing continues to be the lawyer's stock-in-trade.

It seems obvious to us that due diligence in the legal research context requires lawyers to check every fact and case cited by chatbots or generative AI legal research programs. However, there are mixed messages out there, to be sure.

Here are the first few paragraphs from an article recently published in the Detroit Legal News around the time the New York City lawyers described above were being ordered to show cause by a federal judge over how they used ChatGPT in their court filings:

ChatGPT is an AI-powered chatbot that is specifically designed for the legal industry. It is based on the GPT-3.5 architecture and has been tested on a vast amount of legal data. As a result, it can provide lawyers with an unprecedented level of assistance. 

ChatGPT is designed to help lawyers with a wide range of tasks, from legal research and drafting documents to scheduling appointments and managing client relationships. It can understand natural language queries and provide intelligent responses, making it an incredibly valuable tool for lawyers. Selling points for the software include increased efficiency, improved accuracy, enhanced productivity, better client services and cost savings. 

Well, OK, but on the other hand, placing too much faith in ChatGPT's output can get a lawyer sanctioned. Lawyers must proceed with caution when using large language models such as GPT-3.5 and GPT-4; the lawyer must always perform basic due diligence, especially when it comes to legal research and reliance on published case law.
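What might that basic due diligence look like in practice? Below is a minimal sketch, in Python, assuming a hypothetical set of verified citations standing in for a trusted source; in real practice the lookup would be a LEXIS, Westlaw, or PACER query rather than a local set. Every citation in an AI-drafted passage gets flagged unless it can be verified.

```python
import re

# Hypothetical stand-in for a trusted citation source; in practice this
# check would be a LEXIS/Westlaw/PACER lookup, not a local set.
VERIFIED_CITATIONS = {
    "5 U.S. 137",  # Marbury v. Madison -- a real, verifiable citation
}

# Loose pattern for reporter citations such as "123 F.3d 456" or "5 U.S. 137".
CITE_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,4}\b")

def audit_draft(draft_text: str) -> list[str]:
    """Return every citation in the draft that could not be verified."""
    return [cite for cite in CITE_PATTERN.findall(draft_text)
            if cite not in VERIFIED_CITATIONS]

draft = ("Plaintiff relies on Marbury v. Madison, 5 U.S. 137, and on a "
         "purported decision reported at 999 F.3d 9999.")
for cite in audit_draft(draft):
    print(f"UNVERIFIED -- confirm before filing: {cite}")
# -> UNVERIFIED -- confirm before filing: 999 F.3d 9999
```

Nothing here is sophisticated, and that is the point: the discipline the court demanded in Mata is a lookup, not a leap of faith.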

In the case discussed above, the federal judge has taken the matter under advisement and will issue a decision soon. Unusually for a trial court, an amicus brief was filed by a company known as CereBel Legal Intelligence, which claimed its “livelihood is closely associated with the software attributed with error.” Here is a link to their amicus brief. With a stake in the industry, CereBel warned the court not to issue a decision that diminishes the benefits of using generative AI in the legal technology industry.

Federal courts can resort to Federal Rule of Civil Procedure 11 to sanction attorneys or parties to a suit who submit pleadings for an improper purpose or that contain frivolous arguments lacking any legal or factual basis. In the Mata v. Avianca, Inc. case, the federal court can, and perhaps should, sanction the offending lawyer while including cautionary and limiting language in the opinion. While lawyers will not be held to account for the raw content a generative AI program displays, they will be held accountable for the citations contained in their filed work product.

There are examples across the country of federal judges resisting, or outright proscribing, the use of AI-generated or AI-assisted document submissions in their courtrooms. Judge Brantley D. Starr of the Northern District of Texas recently required litigants in his courtroom to certify either that their filings were not compiled with the assistance of generative AI tools or that any AI-drafted language was checked for accuracy by a human being.

Similarly, a judge on the United States Court of International Trade issued an order in June requiring lawyers to file a disclosure notice for any document containing text drafted with the assistance of a generative AI program.

We here at Clarkston Legal do not want to add to a negative trend, but so far it is the epic AI and machine learning “fails” that are garnering all the media attention. Here is a link to a recent post from the Law Blogger detailing one such AI “fail.”

If you have a legal issue and would like an actual human lawyer to meet with you and assess your options, our law firm is at your service. Contact us to schedule a free consultation today.