An Update on Artificial Intelligence and the Legal Industry

To round off this week’s blog posts for the spooky season, I thought it would be fitting to report on the global war being waged against humanity by our robotic overlords. After all, our last blog post about the application of artificial intelligence in the legal industry was back on January 31, 2023, which you can read here.

Our opinion about the capability of artificial intelligence to subsume work traditionally handled by human lawyers was, to say the least, decidedly lukewarm. I don't doubt that technology has advanced at an exponential pace in the last few years, but I'm still not convinced that artificial intelligence will take over the legal industry à la Terminator 2: Judgment Day (or any other sector, for that matter) anytime soon.

While deployments of narrow ("specific") artificial intelligence have been wildly successful in the legal sector, artificial general intelligence has come nowhere near the same level of success and progress. Case in point: the use of ChatGPT to conduct legal research in Mata v. Avianca, Inc., F. Supp. 3d, 22-cv-1461 (PKC).

The plaintiff brought a personal injury lawsuit alleging a knee injury from being struck by a metal serving cart during a flight from El Salvador to New York, and Avianca, Inc. moved to dismiss on the basis that the statute of limitations had expired. In his Affirmation in Opposition, the respondent cited multiple judicial authorities, with supporting ratio, to argue that Title 11 of the U.S. Code tolled the limitation period or that New York law supplied the relevant statute of limitations. In its reply, Avianca, Inc. stated that it was unable to locate any of the cases or quotations referenced in the filed material, effectively implying that those cases did not exist.

Yikes.

How could this have happened? Well, as the United States District Court for the Southern District of New York explains at paragraph 39 of its decision following the show cause hearing:

[respondent’s lawyers’] first prompt stated, “argue that the statute of limitations is tolled by bankruptcy of defendant pursuant to montreal convention”. (Id. at 2.) ChatGPT responded with broad descriptions of the Montreal Convention, statutes of limitations and the federal bankruptcy stay, advised that “[t]he answer to this question depends on the laws of the country in which the lawsuit is filed” and then stated that the statute of limitations under the Montreal Convention is tolled by a bankruptcy filing. (Id. at 2-3.) ChatGPT did not cite case law to support these statements. [respondent’s lawyers] then entered various prompts that caused ChatGPT to generate descriptions of fake cases [emphasis added], including “provide case law in support that statute of limitations is tolled by bankruptcy of defendant under montreal convention”, “show me specific holdings in federal cases where the statute of limitations was tolled due to bankruptcy of the airline”, “show me more cases” and “give me some cases where te [sic] montreal convention allowed tolling of the statute of limitations due to bankruptcy”. (Id. at 2, 10, 11.) When directed to “provide case law”, “show me specific holdings”, “show me more cases” and “give me some cases”, the chatbot complied by making them up [emphasis added].

Imposing sanctions on respondent’s lawyers for subjective bad faith conduct, the District Court had this to say in obiter dicta:

“In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings [emphasis added]. [respondent’s lawyers] abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

Put simply, while the District Court recognised that artificial intelligence can play a role in the legal industry when appropriately and responsibly utilised, the tool has clear limitations. Lawyers ultimately remain responsible for their work product and cannot simply shift their duties onto artificial intelligence and call it a day.

My fellow humans, hold on to those tinfoil hats and Jim Bakker food buckets because it looks like we aren’t going anywhere just yet.

Humanity: 1, Skynet: 0

Aaron Chan
