The Latest News

The latest LTL news, progress on LTL Projects, and other updates in the legal technology field.

MIT's Task Force on Responsible Use of Generative AI for Law

With the rise in both the capability and the widespread use of generative AI models, a Task Force convened by the research arm of law.MIT.edu and chaired by Dazza Greenwood aims to align our evolving understanding of the technology with long-held ethical standards and responsibility guidelines in the legal context.

Outputs

In June 2023, the Task Force published the second iteration, Version 0.2, of its guiding principles on using generative AI in the practice of law. This version asks lawyers to adhere to seven principles:

  1. Duty of Confidentiality to the client in all usage of AI applications;
  2. Duty of Fiduciary Care to the client in all usage of AI applications;
  3. Duty of Client Notice and Consent* to the client in all usage of AI applications; 
  4. Duty of Competence in the usage and understanding of AI applications;
  5. Duty of Fiduciary Loyalty to the client in all usage of AI applications;
  6. Duty of Regulatory Compliance and respect for the rights of third parties, applicable to the usage of AI applications in your jurisdiction(s);
  7. Duty of Accountability and Supervision to maintain human oversight over all usage and outputs of AI applications.

Additionally, alongside Version 0.2 the Task Force has published examples, or “use cases,” that illustrate which scenarios or actions would be consistent or inconsistent with each principle. For instance, below are the examples for the first principle, lawyers’ “Duty of Confidentiality to the client in all usage of AI applications”:

Example: Inconsistent

You share confidential information of your client with a service provider through prompts in a manner that violates your duty because, for example, the terms and conditions of the service provider permit them to share the information with third parties or to use the prompts as part of training their models.

Example: Consistent

Ensure you don’t share confidential information in the first place, such as by adequately anonymizing the information in your prompts, or ensure that contractual and other safeguards are in place, including client consent.

What’s Coming

Currently in development and open to public comment is Version 0.3, in which the Task Force hopes to bring in a greater diversity of views from practicing lawyers, including practitioners outside the U.S. Another primary goal for Version 0.3 is to better understand how lawyers are using generative AI in their practices today and to use that information to anticipate where further use cases may develop.

How to Contribute

The Task Force maintains a website where the public can contribute thoughts, comments, and concerns regarding the responsible use of generative AI in law. Visit https://law.mit.edu/ai for more information.