
A lawyer's wish list on GenAI


Christine Lam

The legal landscape of generative AI (GenAI) remains uncertain and complex. February 2, 2025, marked the first compliance deadline of the European Union Artificial Intelligence Act (EU AI Act), which bans certain prohibited, “unacceptable risk” AI practices. Now that the first deadline has passed, further waves of compliance requirements will go into effect in Europe over the next two to three years. At the same time, in the United States, well over 100 AI-related lawsuits are making their way through the courts, more than 40 of which involve intellectual property disputes.

Nevertheless, amid a rapidly changing legal landscape, enterprises must unlock the power of GenAI so that they can stay competitive. GenAI promises to radically transform many kinds of jobs, promoting fast innovation and bringing workers unparalleled productivity gains.

Undoubtedly, the legal teams in many enterprises are trying to keep up with these fast-changing technical and legal developments to advise their business clients. When I manage to catch my breath from time to time, I stare into space and daydream about (or try to conjure up) ways to make some of GenAI’s major legal issues magically disappear....

...so, from a lawyer's perspective, what would my GenAI wish list look like?

  • Music to my ears. To a lawyer, the training data provenance is critical when analyzing the risks associated with a GenAI model. Many disputes and lawsuits involve the alleged use of unlicensed or infringing data for AI training—a matter that’s likely to go to the U.S. Supreme Court. In the meantime, whenever I hear that models can be trained by using only public domain, licensed, and/or permissive open-source content, with no performance implications, it makes me somewhat gleeful! 
  • Transparency is key. Disclosure of training materials is usually a double-edged sword. It could invite a massive amount of litigation for the model providers and perhaps for users. At the same time, disclosure helps users understand the risks and comply with transparency requirements in the European Union and in other regions. The lack of certainty in the laws at this time may give open-source models, especially the ones with training data transparency, a boost. And the European Union has specifically exempted certain AI systems under free and open-source licenses from some of its regulations. 
  • Bring AI to the data. Many GenAI services are cloud native. There’s no question that there are many benefits to being in the cloud, taking advantage of its native AI platforms. But sometimes your mean lawyer tells you that the data just can’t be in the cloud because of regulatory, legal, and security reasons. A lawyer’s wish would be to bring AI to the data, wherever the data sits! 
  • Not all data is the same. The myriad regulations around the world keep me busy. For legal and security reasons, enterprises must protect sensitive data and comply with data governance models when using AI services. For example, you can’t use AI services with certain sensitive data or for certain prohibited applications. With application-based laws like the EU AI Act, it’s critical to understand what data is appropriate for a given application or purpose, yet how to comply is often ambiguous. If clean data governance when using GenAI were possible through technology, I can’t tell you how excited I’d be as a lawyer! 
  • Smaller is simpler? I know that many of us were thrilled about GPT-4, Claude, Gemini, and other models that boast hundreds of billions to a trillion parameters. But lawyers, unfortunately, have to think about the corresponding potential for infringement lawsuits. The U.S. Supreme Court is unlikely to provide clarity soon on whether the use of unlicensed copyrighted content for model training is “fair use” and therefore legal. I wonder whether small language models, or smaller models trained on customizable, licensed, or public data—without performance compromise—would free lawyers from infringement concerns. And if developers could get suggestions based on only the enterprise’s internal and private code base, that would be RAD—oh, I meant to say RAG (as in retrieval-augmented generation; a minimal sketch of the idea follows this list). 
  • Keep it safe. It’s hard not to lose sleep if GenAI models are being used without preprocessing and postprocessing safeguards, especially in light of so many GenAI-related intellectual property infringement lawsuits. No doubt it’s important to understand training data provenance. In addition, ideally, preprocessing would screen out all infringing or illegal prompts, and postprocessing would screen out infringing or noncompliant output (a rough sketch of this idea also follows this list). Luckily, technology appears to be moving in that direction!
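
To make the RAG idea above a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not any vendor’s implementation: the snippet list, function names, and keyword-overlap retrieval are hypothetical stand-ins for a real embedding index and vector store. It simply shows how suggestions can be grounded only in an enterprise’s own code base.

    # Minimal RAG-style sketch: retrieve snippets from a (hypothetical) internal
    # code base and ground the prompt in them, so suggestions draw only on
    # enterprise-owned content rather than on whatever the base model memorized.

    # A stand-in for an indexed internal repository; real systems would use
    # embeddings and a vector store instead of keyword overlap.
    INTERNAL_SNIPPETS = [
        {"path": "billing/tax.py", "text": "def apply_tax(amount, rate): return amount * (1 + rate)"},
        {"path": "billing/discount.py", "text": "def apply_discount(amount, pct): return amount * (1 - pct)"},
    ]

    def retrieve(query: str, k: int = 2) -> list[dict]:
        """Rank snippets by naive keyword overlap with the query."""
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(s["text"].lower().split())), s) for s in INTERNAL_SNIPPETS
        ]
        return [s for score, s in sorted(scored, key=lambda x: -x[0])[:k] if score > 0]

    def build_prompt(query: str) -> str:
        """Assemble a prompt that cites only the retrieved internal snippets."""
        context = "\n".join(f"# {s['path']}\n{s['text']}" for s in retrieve(query))
        return (
            "Answer using ONLY the internal code below.\n\n"
            f"{context}\n\nQuestion: {query}"
        )

    print(build_prompt("how do we apply tax to an amount?"))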
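
And to illustrate the preprocessing and postprocessing safeguards from the last item, here is another hedged sketch. The generate() stub and the deny-lists are placeholders I made up for illustration; a production system would rely on trained classifiers, license scanners, and policy engines rather than simple term matching.

    # Minimal guardrail sketch: wrap a GenAI call with a preprocessing check on
    # the prompt and a postprocessing check on the output. The model call is a
    # stub and the deny-lists are placeholders for far richer production filters.

    BLOCKED_PROMPT_TERMS = {"reproduce the lyrics", "full text of the novel"}
    BLOCKED_OUTPUT_MARKERS = {"all rights reserved", "proprietary and confidential"}

    def generate(prompt: str) -> str:
        """Stand-in for the actual GenAI model call."""
        return f"Draft response to: {prompt}"

    def preprocess(prompt: str) -> str:
        """Reject prompts that ask for likely-infringing or prohibited content."""
        lowered = prompt.lower()
        if any(term in lowered for term in BLOCKED_PROMPT_TERMS):
            raise ValueError("Prompt rejected by preprocessing policy")
        return prompt

    def postprocess(output: str) -> str:
        """Withhold output that carries markers of protected or noncompliant text."""
        lowered = output.lower()
        if any(marker in lowered for marker in BLOCKED_OUTPUT_MARKERS):
            raise ValueError("Output withheld by postprocessing policy")
        return output

    def safe_generate(prompt: str) -> str:
        return postprocess(generate(preprocess(prompt)))

    print(safe_generate("Summarize our internal style guide for error messages."))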

In the rapidly changing era of GenAI, the one thing we know for certain is that it’s here to stay. We have to walk a tightrope between using groundbreaking technology and keeping ourselves safe, both ethically and legally. While my wish list reflects today’s GenAI legal challenges, who knows what new legal solutions I’ll be wishing for next month? For the time being, at the very least, we can make one lawyer deeply happy by checking off her wish list!

Learn More

To learn about NetApp® executive thought leadership on AI and GenAI, including the demands of AI and key business drivers, visit our AI thought leadership page.  

**This blog was prepared for general information purposes and should not be regarded as legal advice. 

Christine Lam

Christine Lam is NetApp's VP and Chief IP & AI Counsel, overseeing product legal, AI, and intellectual property (IP) matters for the company. Christine advises on a wide range of matters during all phases of the product lifecycle, such as technology partnerships, IP, AI governance, open-source use, and cybersecurity. In particular, she specializes in navigating today’s complex legal issues in the rapidly evolving technology landscape, including GenAI. Christine holds a BS and an MS from MIT and a JD from the University of California, Berkeley.

