Bracing For A Generative AI Revolution In Law

Law360 Pulse

Authored Article

By Paige Hunt

The power and peril of generative AI have been among the most talked-about topics in legal circles since ChatGPT's explosion into the public consciousness in late 2022.

The arrival of generative AI into our world is now a stark reality. Lawyers need to understand both the basics of the technology and its legal implications when considering its use.

Generative AI Within the Legal Industry

Many legal professionals have started to explore practical uses of generative AI in areas such as research, discovery and legal document development.

A Thomson Reuters study found that 82% of law firm attorneys believe that ChatGPT and generative AI could be applied to legal work.[1] However, trepidation remains, as the same study found that only 51% felt it should be applied to legal work.

This isn't shocking.

Similar feedback has been received in the medical field and other professions. We have long relied on AI's predictions, and a trust-but-verify approach has generally aided our comfort. For generative AI to be successful, the fundamental principle of human oversight cannot be overstated.

In the legal field, generative AI will produce draft work product, but it is up to attorneys to take that rough material and mold it into a final product that accomplishes their clients' specific objectives. The predicted benefits of generative AI will never arrive without trained human lawyers overseeing the process.

The Dual Effects of AI on Discovery

Generative AI's impacts on the discovery process are twofold.

On the one hand, these capabilities will enable entirely new methods of data extraction, linking, summarization and reasoning, extending existing predictive AI capabilities into a new realm. Not only will AI be able to analyze and identify characteristics of documents going through discovery, but it will now be able to generate information and insights about those documents, too.

On the other hand, the adoption of generative AI into the enterprise through tools like GitHub Copilot is going to add an entirely new layer of complexity and questions related to documents during the discovery process.[2] If Copilot is used to write an email, where does responsibility for the content of that email lie? Is there a need to differentiate documents created by AI from those created by a human? How will that be handled?

Currently, most available generative AI products are limited in scale and have yet to prove their value.

Initial uses of generative AI focused on conversational interfaces that augment search: How can we gain information from a corpus using natural language understanding? Exciting at first for their novelty, these approaches seem to be more AI in search of a problem than a problem uniquely fit for a generative AI solution.

As this wave of generative AI is just nearing its first birthday, it is still too early to have clarity about how either of these components will evolve, but based on precedent and the specific qualities of generative AI, there are some leading areas of promise.

Getting the Most Out of Your AI Strategy

The most favorable generative AI discovery approach involves looking beyond the standard workflows of today and reimagining how they could change.

What would it look like if you could connect existing predictive AI capabilities and new generative ones? How might the previous view of the Electronic Discovery Reference Model change?

The organizations that are best positioned to successfully develop and/or embrace generative AI in discovery are those already fluent in leveraging predictive AI in discovery now. There are many low-risk entry options into predictive AI for discovery.

For example, predictive AI can be used to help secure personally identifiable and personal health information. Organizations can use AI to automate the redaction of the identified information, and by doing so, can greatly reduce the laborious efforts to manually redact each instance of sensitive data.
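To make the idea concrete, the core pattern-match-and-replace step can be sketched in a few lines of Python. The patterns and placeholder labels below are invented for illustration; a production redaction engine would pair trained entity-recognition models with jurisdiction-specific rules rather than rely on simple regexes.

```python
import re

# Hypothetical patterns for a few common PII types; real systems would
# supplement regexes with trained entity-recognition models.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com, 555-867-5309, SSN 123-45-6789."))
# -> Reach Jane at [REDACTED EMAIL], [REDACTED PHONE], SSN [REDACTED SSN]
```

Every placeholder a script like this produces can then be queued for attorney verification, preserving the human oversight discussed above.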

More sophisticated use cases include privilege prediction. Oftentimes, static search term lists are overinclusive and still fail to identify all potentially privileged documents. An organization could begin to develop a privilege prediction model with its own data while still leveraging traditional search term lists for potentially privileged material.

The combination of the AI and the familiar search list provides an enhanced privilege identification process that is then tested and verified by attorneys. The return on investment organizations reap from this approach increases as the process is repeated for each new model.
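A minimal sketch of that combination might look like the following. The term list, toy training examples and review threshold are all hypothetical; a real privilege model would be trained on attorney-coded documents with far richer features, and its output would feed human review rather than replace it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative static term list of the kind many review teams already use.
PRIV_TERMS = ["attorney-client", "legal advice", "work product"]

# Toy attorney-labeled training examples (1 = potentially privileged).
docs = [
    "Per counsel's legal advice, do not forward this memo.",
    "Quarterly sales figures attached for the board meeting.",
    "Draft work product regarding pending litigation strategy.",
    "Lunch order for Friday's team offsite.",
]
labels = [1, 0, 1, 0]

# Train a simple text classifier on the labeled examples.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)

def flag_for_review(doc: str, threshold: float = 0.5) -> bool:
    """Route a document to attorney review if either signal fires."""
    term_hit = any(term in doc.lower() for term in PRIV_TERMS)
    model_score = model.predict_proba(vectorizer.transform([doc]))[0, 1]
    return term_hit or model_score >= threshold

print(flag_for_review("Counsel's legal advice on the merger is attached."))  # True
```

Because the term-list check and the model score are combined with a logical "or," the familiar search list acts as a safety net while the model matures, mirroring the low-risk adoption path described above.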

Current Legal Considerations and Guidance

The European Union is leading the way in terms of regulations with its Artificial Intelligence Act.

This law seeks extraterritorial reach, like the General Data Protection Regulation, and looks to enact an expansive, horizontal regulatory scheme across all industries.

As with the GDPR, U.S. companies may unwittingly find themselves subject to its reach and should monitor its passage. Canada is taking a similar, broad-based approach with Bill C-27.

Currently, there are no U.S. laws enacted that create specific rules for the use of generative AI tools. Specific legal guidance from federal regulatory agencies and courts in the U.S. is limited.

However, American lawmakers have indicated significant motivation to exercise greater oversight over the development and use of generative AI tools. Recently, Senate Majority Leader Chuck Schumer convened a forum on the future of AI regulation, which Elon Musk, Bill Gates and Mark Zuckerberg all attended.

Schumer indicated the goal is to "maximize the benefit and minimize the harm … and that will be our difficult job." Whether the oversight is a horizontal model providing guidance across industries like the EU's AI Act, or a vertical model where individual departments and agencies create guidance targeted to their industry groups, is still to be determined.

For now, agencies such as the U.S. Equal Employment Opportunity Commission, the U.S. Department of Justice and the Federal Trade Commission have issued statements asserting that generative AI is within their regulatory authority. These agencies further indicated that bias, discrimination, misleading consumer information and privacy violations present in automated systems will be held to existing legal standards.

Beyond these efforts, it is likely that the Federal Rules of Civil Procedure will need a refresh to account for generative AI. Discovery is going to include even more data types, algorithms, training sets and prompts. Expect that bar associations and other industry stakeholders will issue guidance on generative AI competencies for legal application.

Courts are just beginning to encounter the new questions raised by this technology. There are many different examples of guidance and opinions, and some judges have even asked attorneys to confirm they will not use generative AI to write legal briefs.[3] Other scholars and judges have suggested that litigants cite the tool they have used to create their work product when it is submitted to a court.

Before You Adopt, Think Cybersecurity

Along with efficiencies and innovation, generative AI presents new risks related to data security, privacy and reliability.

For example, it is critical to assume that any information included in ChatGPT prompts could become public. A company's intellectual property and reputation can be immediately compromised by undisciplined use of ChatGPT and similar tools.

With the rise in interest in and use of these tools, we'll also see a corresponding rise in cybersecurity risk.[4] Machines will inevitably learn that a business is interested in, for example, certain proprietary topics or competitive opportunities. Bad actors can access this information and use it to get better at impersonation. As such, a reasonable first policy step for any business is to sound the alert about the potential for AI-generated phishing attempts.

Organizations need to invest in developing a mature information governance program. Onboarding of new technology should be planned as part of the program, along with preservation and defensible disposal. Employee education and training are also paramount in mitigating the risks associated with generative AI.

The National Institute of Standards and Technology published an AI risk management framework in collaboration with the public and private sectors. The framework outlines the characteristics of trustworthy AI and offers guidance for addressing them.

The characteristics include "valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with their harmful biases managed."

A mature information governance program will allow companies implementing the next generation of AI tools to evaluate, test and continuously monitor the performance of their technologies across all of these risk vectors.

What's Next for Generative AI

The IT function has typically taken the lead in responding to the technology innovations of the day, but with generative AI it has become clear that attorneys need a major seat at the table in this new conversation. Organizations must prepare for the generative AI revolution we are sure to see in the coming years. Those who excel in our new reality will remain curious and nimble. Those who do nothing will become obsolete.


Republished with permission. The article "Bracing For A Generative AI Revolution In Law" was originally published by Law360 Pulse on November 13, 2023.