Dr. Maura Grossman speaking at NeLI 2023: The Artificial Lawyer: What You Need to Know About AI, Generative AI, and Bias
In a recent thought-provoking lecture at the National eDiscovery Leadership Institute hosted by UMKC, Dr. Maura Grossman, J.D., Ph.D., a distinguished expert in the field of artificial intelligence and the law, presented a comprehensive overview of AI's significance, potential, and the pressing challenges it poses. Her talk delved into the broad implications of AI, its evolution, and the hurdles that must be surmounted to harness its capabilities effectively.
Dr. Grossman conceptualizes artificial intelligence (“AI”) as a general-purpose tool that will someday be as ubiquitous as electricity or fire. She notes that, like many new technologies we eventually grow accustomed to, we will one day simply call AI “software.” Its impact will be defined by how it is used and by the regulatory guardrails placed around it. But AI is not just one thing: it broadly encompasses intelligent, or cognitive-like, tasks performed by computers, and is not limited to a single technology or function. AI is distinct from technologies like automation and robotics in that it primarily revolves around algorithms, machine learning, and natural language processing (NLP). Much of the AI we interact with today is what Dr. Grossman calls “Narrow” or “Weak AI,” which performs specific tasks, such as language generation, better than humans. The other kind she describes is “General” or “Strong AI,” which, in theory, could perform as proficiently as humans across a wide range of tasks but is likely still a long way off.
Machine learning, a foundational element of generative AI (“GenAI”) tools like ChatGPT, has existed since the early 1940s, but recent advancements, namely increased computing power and data availability coupled with declining memory and storage costs, have made these tools far more accessible to the public and the legal community. Yet this newfound power brings its own set of challenges. Dr. Grossman raised concerns about encryption and security in the not-so-distant era of nano and quantum computing, as the processing power of computers continues to grow seemingly exponentially.
GenAI also incorporates deep learning, a subset of machine learning that involves intricate neural networks with many layers, each of which processes data by building on the output of the layer before it. ChatGPT, for instance, has 96 network layers between input and output. This layered processing is often called a “black box” because even computer scientists have a difficult time explaining or pinpointing exactly how a given input produced a given output. Slightly less mysterious is how GenAI leverages NLP. Unlike traditional statistical models, which reduce language to mere binary calculations, NLP focuses on meaning, examining how often words co-occur in order to construct more nuanced models.
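The co-occurrence idea can be sketched in a few lines of Python. This is only an illustrative toy, not anything presented in the talk: the tiny corpus, the window size, and the `cooccurrence_counts` function are all assumptions made for the example.

```python
# A minimal sketch of word co-occurrence counting, the statistical
# signal NLP models build on: count how often each pair of words
# appears within a small window of one another.
from collections import defaultdict

def cooccurrence_counts(sentences, window=2):
    """Count co-occurrences of word pairs within a sliding window."""
    counts = defaultdict(int)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            lo = max(0, i - window)
            hi = min(len(words), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    counts[(w, words[j])] += 1
    return counts

# Illustrative two-sentence "corpus"
corpus = [
    "the court reviewed the contract",
    "the contract bound the parties",
]
counts = cooccurrence_counts(corpus)
# Related words ("the", "contract") co-occur more often than
# unrelated ones, hinting at how meaning emerges from counts.
print(counts[("the", "contract")])  # prints 3
```

Real systems replace raw counts with learned vector representations, but the underlying intuition is the same: words that keep each other's company tend to be related in meaning.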
GenAI tools draw on massive data sources, particularly the internet, to generate content in response to user prompts. Deploying machine learning and NLP, they excel at creative tasks and content synthesis. They may sometimes “hallucinate,” but Dr. Grossman suggests this occasional generation of unexpected content should be viewed as a feature rather than a bug.
The legal realm is one where AI has made significant inroads. The application of AI in the legal sector began with technology-assisted review (TAR) in eDiscovery in the mid-2000s. Since then, its reach has expanded to include due diligence tracking, legal research, contract analysis, litigation outcome forecasting, and more. GenAI is expected to enhance the delivery of legal services and expand access to justice for those who cannot afford legal representation, but is unlikely to replace a lawyer’s critical thinking, compassion, and empathy.
Highlighting one of the crucial challenges posed by AI, Dr. Grossman emphasized the inherent biases that can emerge from training AI models. Data, as the foundation of AI, often carries historical biases that get perpetuated in AI systems. These biases stem from how the data was collected, who collected it, and how the variables were defined and measured. One instance of the negative effects of bias in AI that Dr. Grossman referenced was COMPAS, a tool originally intended to guide legal professionals in making risk-assessment decisions but later employed in sentencing determinations, a use that revealed racial disparities inherent in the tool. This case also illustrates the danger of “function creep,” where AI tools designed for one purpose are put to use for another.
In the quest to detect and eliminate bias, Dr. Grossman discussed the need for AI tools to be accurate in order to be fair, the complexity of defining fairness, and the privacy implications of obtaining reliable and valid data, which is necessarily personal and sensitive in nature. A few steps in the right direction, according to Dr. Grossman, would be to diversify AI developers and stakeholders, insist on testing AI tools as they emerge, and demand transparency about how these tools are created.
Ultimately, AI’s journey towards “software” and widespread use in the legal field is a complex one requiring careful consideration and proactive measures to address the multifaceted challenges it presents.
National eDiscovery Leadership Institute (NeLI)
NeLI is one of the leading annual conferences for electronic discovery. It was formed in 2014 to provide top-notch eDiscovery educational opportunities and foster cooperation between the bench and the bar. For more information about NeLI and this year’s conference, click here.