In February, the internet was abuzz with commentary about two significant decisions from United States District Courts concerning artificial intelligence (AI) tools in legal contexts. In one, *United States v. Heppner*, a criminal defendant's "chats" with a popular AI were treated as a waiver of confidentiality, making the conversations admissible for the prosecution. Coincidentally, *Warner v. Gilbarco, Inc.*, decided the same day, appeared to take the opposite stance for a *pro se* litigant. By March, a new controversy had emerged in *Nippon Life Insurance Company of America v. OpenAI Foundation*, in which a corporate plaintiff sued OpenAI, alleging that ChatGPT generated false legal pleadings that cost the plaintiff $300,000 to defend. This article examines these pivotal cases and their implications for attorney-client privilege, the work product doctrine, and the evolving boundaries of legal practice and developer liability.
Chats Between a Represented Party and ChatGPT Aren’t Protected
In *United States v. Heppner*, Judge Rakoff established a critical precedent regarding the confidentiality of client-AI interactions: a criminal defendant's conversations with a large language model such as ChatGPT are protected by neither the attorney-client privilege nor the work product doctrine. The rationale was straightforward. An AI is not a lawyer, and sharing case-related information with a public, self-training AI system is equivalent to disclosing it to a layperson. That voluntary disclosure waives any claim of privilege, leaving the communications open to lawful seizure and use by the prosecution, as happened with the electronic records obtained during the defendant's arrest. The court unequivocally treated these digital "chats" with an AI like conversations with a human third party, emphasizing the loss of confidentiality.
But What About When the Party is Representing Themselves?
In contrast to *Heppner*, *Warner v. Gilbarco, Inc.* offered a seemingly opposite, though nuanced, view of AI communications by *pro se* litigants. The court acknowledged that chats between a self-represented party and an AI tool could indeed constitute work product, reflecting the litigant's mental impressions developed in anticipation of litigation. The crucial distinction lay in waiver. Whereas *Heppner* treated disclosure to a public AI as a waiver of privilege, *Warner* asked whether the *pro se* litigant's storage of work product with an AI made it *reasonably likely* to fall into an adversary's hands. Concluding that the opponent had no direct access, the court treated the disclosure to the large language model as an administrative act that did not waive work product protection. This suggests a different standard may apply when a party acts as their own attorney.
Two Takeaways
These developments yield two key takeaways. First, counsel urgently need to educate clients, even sophisticated ones, about the inherent risks of discussing confidential case information with consumer-grade AI tools like ChatGPT or Claude. These platforms often train on user inputs, making them insecure and prone to waiving attorney-client privilege. Lawyers should provide a written AI warning at the outset of the engagement, stating explicitly that only secure, confidential, closed AI systems should be used and cautioning that information entered into public AIs may leak or become accessible to adversaries. Second, these cases raise the broader societal and legal challenge of regulating AI's capabilities and curbing its misuse. Disclaimers ring hollow when AI developers know their products can facilitate unlawful conduct, much as Tesla faced liability over its 'Full Self-Driving' feature despite its warnings. The underlying argument is that commercial entities marketing AI tools without sufficient safeguards against known misuses may incur liability, pushing the legal system to define new boundaries of responsibility in the burgeoning AI landscape.
Conclusion
Taken together, *Heppner*, *Warner*, and *Nippon* mark a pivotal moment, forcing courts to move beyond treating AI as a mere novelty and to confront its profound impact on established legal concepts such as privilege, work product, and the very definition of legal practice. The cases also underscore the growing friction between how AI developers promote their tools and how end users actually deploy them, often overlooking disclaimers about risk. As Judge Rakoff noted, the legal implications of AI are only beginning to be understood. This rapidly evolving landscape compels judges, legal practitioners, and technologists alike to collaborate on clear responsibilities and robust guardrails, recognizing that traditional disclaimers alone are proving insufficient to govern the complex challenges artificial intelligence poses in the legal sphere.