The Latest News

The latest LTL news, progress on LTL Projects, and other updates in the legal technology field.

NeLI 2023 Judicial Panel
NeLI 2023 Judicial Panel: Panelists: Hon. Angel Mitchell (U.S.D.C. D. KS), Hon. Andrew Peck (ret. U.S.D.C. S.D. NY; DLA Piper), Hon. Noelle Collins (U.S.D.C. E.D. MO), Hon. Young Kim (U.S.D.C. N.D. IL), Hon. Allison Goddard (U.S.D.C. S.D. CA), and Hon. Xavier Rodriguez (U.S.D.C. W.D. TX)

The legal system is typically slow to change, and in some ways it is meant to be. Many of the individuals who make up the system, most prominently Supreme Court justices and many other federal judges, are not subject to election and often hold lifetime appointments. Similarly, laws themselves tend to be resilient once established and difficult to overturn, especially given doctrines like stare decisis. So how does the legal system handle the rapid emergence of generative AI (“GenAI”) tools like ChatGPT? Some federal courts have responded by issuing standing orders qualifying or limiting the use of GenAI, prompted largely by instances of lawyer missteps, like the following two examples:

  • Mata v. Avianca—This widely publicized case has become the cautionary tale of using GenAI in court. Two New York attorneys, in their case against the Colombian airline Avianca, submitted a brief that contained fictitious cases generated by ChatGPT. Although the attorneys attempted to confirm with ChatGPT that the cases were real, they were not.

Although the judge noted there was nothing “inherently improper” about using GenAI tools, the ethical principles governing lawyers require that they “ensure the accuracy of their filings.” Ultimately, the lawyers and their firm were sanctioned with a $5,000 fine.

  • “Robot Lawyer”—Earlier this year, the CEO of the startup DoNotPay received threats of prosecution after he posted an offer seeking a lawyer to defend a speeding ticket in California state court using the AI-powered chatbot “Robot Lawyer.” The scheme, which would have involved Robot Lawyer coaching the defense through a headset, drew warnings that it could violate local court rules and ethics requirements.

How Some Federal Judges Are Approaching Generative AI in Their Courtrooms:

In May, Judge Brantley Starr (U.S.D.C. N.D. TX) issued a requirement that lawyers practicing before him must certify whether they used GenAI tools to craft their briefs and, if so, that the information generated was checked by a human or through traditional databases. Judge Starr’s motivation for the order is rooted in a concern that GenAI tools do not swear oaths to uphold the law and represent clients, as lawyers do. Furthermore, neither he nor his staff uses these tools, in order to avoid even the appearance that AI may be influencing or deciding cases.

At this year’s National eDiscovery Leadership Institute conference, hosted by the University of Missouri—Kansas City School of Law, GenAI was a prominent topic throughout many of the discussions. In one of the judicial panels, judges from around the nation provided insight from the bench as they addressed the latest eDiscovery developments and ethical issues surrounding the discovery process. Mata v. Avianca was one of the cases the judges addressed.  

Although the panelists do not currently have any standing orders regarding the use of GenAI, they all emphasized the importance of personal accountability when lawyers conduct legal research and the role of judges as arbiters of quality control. Judge Noelle C. Collins (U.S.D.C. E.D. MO) expressed concerns about using GenAI herself because it “has no moral center,” but she foresees it as a tool judges generally will incorporate more into practice. She also cited the relative difficulty of amending local court rules, which may delay some courts’ responses to the new technology.

Through standing orders, some courts have taken the approach of banning the use of AI—not just GenAI—outright, which Judge Xavier Rodriguez (U.S.D.C. W.D. TX) said is “just wrong.” He explained that artificial intelligence is incorporated, to some extent, into many aspects of life, whether in daily activities like shopping on Amazon or in research using legal databases like Lexis or Westlaw. Ultimately, Judge Rodriguez stressed the cumbersome implications of such orders if taken literally: lawyers would essentially have to notify judges every time they used an online database.

Judge Allison Goddard (U.S.D.C. S.D. CA) went so far as to say that standing orders for GenAI are unnecessary. She noted that the hallucination issue with tools like ChatGPT (as seen in the Mata case) is essentially one judges already face from lawyers who misinterpret cases or argue points of law not fully supported by prior decisions. Judge Goddard said it is easier to figure out when GenAI hallucinates than when lawyers do. Drawing an analogy to the first scientific calculators and the general fear that students would become over-reliant on the new tool, she said GenAI is similar: lawyers will not become reliant on it but will learn to use it as the practice of law evolves.

Finally, Judge Young B. Kim (U.S.D.C. N.D. IL) stressed the need to work with GenAI tools and noted their incorporation into legal writing courses at some law schools. His view is that the technology is already here and will only get better, and the lawyers of tomorrow will have experience using it, so courts should prepare as well. Judge Kim speculated that the lawyers in Mata v. Avianca may have faced state bar sanctions in addition to the seemingly light fine imposed by the court, raising awareness among would-be lawyers and current practitioners that misuse of GenAI can be costly.

The Upshot:

Although the legal system can be slow-moving in some respects, attorneys, judges, and bar associations need to keep pace with GenAI tools and their applications to law practice. Legal technology competency is critical for effective representation and is required by professional conduct rules. And though GenAI is just one area of legal tech, its impact is bound to be lasting.

National eDiscovery Leadership Institute (NeLI)

NeLI is one of the leading annual conferences for electronic discovery. It was formed in 2014 to provide top-notch eDiscovery educational opportunities and foster cooperation between the bench and the bar. For more information about NeLI and this year’s conference, click here.

Sources: Learn more about Mata v. Avianca, Robot Lawyer, and Judge Starr’s AI Pledge.