
Thursday, June 08, 2023

ChatGPT? Seemed like a good idea at the time.

A New York attorney, apparently pressed for research time, decided to use ChatGPT to find precedents for a brief he needed to submit to the court on behalf of his client.

The client was suing the airline Avianca, claiming that he'd been injured in old-school fashion: his knee was clipped by a metal serving cart.

It happens.

A few weeks ago, my elbow got clipped - just a bit - by a beverage cart that was being pushed a tad too aggressively. Not enough to sue, mind you. Or even think about suing. But my elbow was just a teeny-tiny bit out there, and the flight attendant should have noticed. It happens.

Anyway: 
When Avianca [citing statute of limitations issues] asked a Manhattan federal judge to toss out the case, Mr. [Roberto] Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief. (Source: NY Times)
ChatGPT had, unfortunately, done what any old human junior lawyer or paralegal could have done if they were lazy, stupid, and unscrupulous. It made stuff up.

The lawyer who "wrote" the brief, Steven Schwartz, is no legal newbie. He's been in practice for 30 years. 
Mr. Schwartz said that he had never used ChatGPT for legal research before, and "therefore was unaware of the possibility that its content could be false."

Hmmm. How is it that he didn't go to the Google and read up on some of the, ahem, peculiarities and misinformation spewing from chatbots?

It's not that Steven Schwartz didn't do a bit of due diligence.

He had, he told Judge [Kevin] Castel, even asked the program to verify that the cases were real.
It had said yes.

When Mr. Schwartz pushed back on the initial "yes," ChatGPT backed up its lie with assurances - and more fake citations.

“What is your source,” he wrote, according to the filing.

“I apologize for the confusion earlier,” ChatGPT responded, offering a legal citation.

“Are the other cases you provided fake,” Mr. Schwartz asked.

ChatGPT responded, “No, the other cases I provided are real and can be found in reputable legal databases.”

Tsk, tsk, ChatGPT. Liar, liar, natural language processing on fire.  

The judge is not amused, by the way. There's a hearing set for today "to discuss potential sanctions."

Mr. Schwartz said he “greatly regrets” relying on ChatGPT “and will never do so in the future without absolute verification of its authenticity.”

The shoddy work of Mr. Schwartz and/or ChatGPT was found out because lawyers for Avianca did try to find the cases cited in the brief, presumably using old-school methods, like searching legal databases. Maybe they even resorted to pulling dusty volumes down from their library shelves and paging through. And they came up empty. Fake docket numbers. Fake dates. Fake plaintiffs. Fake judges. Even fake quotes. (In one instance, a fake quote cited an additional fake case.)

Welcome to the brave new world, Mr. Schwartz. (Wonder how much he billed his client for his discovery hours...)
