Generative artificial intelligence tools are now embedded in daily business operations—from drafting emails and summarizing documents to analyzing data and preparing internal reports—whether they are sanctioned or not. But as companies and employees adopt these tools, courts are beginning to address an important question:
Are AI prompts, uploads, and outputs safe from discovery?
Two recent federal decisions provide early guidance—and a cautionary lesson.
The Emerging Legal Framework
Courts are applying traditional privilege principles to AI use. The core questions remain the same:
- Attorney-client privilege hinges on a confidential communication between attorney and client that has not been waived by disclosure to third parties.
- Work product protection hinges on materials prepared by or for counsel in anticipation of litigation and not waived by disclosure to an adversary.
United States v. Heppner (S.D.N.Y. Feb. 17, 2026)
In Heppner, a criminal defendant generated documents using the publicly available AI platform Claude after learning he was under investigation. He later shared those AI-generated materials with his attorneys and claimed his “communications” with Claude were privileged.
The court rejected both attorney-client privilege and work product protection as to approximately 31 documents. It emphasized:
- The AI platform was not an attorney or an agent of an attorney.
- The defendant used the tool without counsel’s direction.
- The platform’s privacy policy permitted retention and potential disclosure of inputs and outputs.
- Any privileged information entered into the system was effectively shared with a third party, resulting in waiver.
The court concluded that communications with a public AI platform were not protected by privilege or work product doctrine under these circumstances.
Warner v. Gilbarco (E.D. Mich. Feb. 10, 2026)
In contrast, the court in Gilbarco rejected an overbroad discovery request seeking information about a pro se plaintiff’s AI use, characterizing the request as an improper attempt to probe litigation strategy.
While Gilbarco appears to provide some comfort that even public AI use does not automatically eliminate work product protection, its holding may not extend far beyond its specific factual and procedural context.
Defendants asked the pro se plaintiff to produce “all documents and information concerning her use of third-party AI tools in connection with this lawsuit.” Because the request was not limited to specifically identified conversations or documents known or suspected to have been uploaded to or generated by the plaintiff’s AI tools, the court treated it more like a fishing expedition. The court did not analyze the nature of the AI platforms involved or the plaintiff’s contractual terms with them.
Together, these decisions show that AI use does not automatically destroy privilege—but careless use can. These are early decisions, and courts’ treatment of AI tools and vendor relationships is likely to vary by jurisdiction, tool terms, and how the tool is used.
Why This Matters for Companies
Many organizations now permit employees to use generative AI tools for:
- Drafting internal communications
- Preparing reports
- Summarizing contracts
- Transcribing meetings or taking minutes
- Analyzing data
- Searching document repositories
Some organizations prohibit their employees from using AI.
Regardless, employees may use employer-provided or personal AI accounts (the latter a form of what is colloquially called “shadow IT”) to upload sensitive information or documents.
If employees input sensitive information—including legal advice, internal investigations, trade secrets, or anticipated litigation strategy—into public AI systems, those materials may be discoverable in litigation.
Importantly, not all AI systems operate the same way. Enterprise-grade tools with negotiated contractual protections differ significantly from publicly accessible consumer platforms.
Practical Steps for Legal Departments
In-house legal teams should consider proactive measures now.
1. Inventory AI Use Across the Organization
Understand what platforms employees are using—formally and informally. Shadow IT adoption of public AI tools is common.
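For organizations looking for a concrete starting point, a simple review of outbound web traffic can surface unsanctioned AI use. The sketch below is illustrative only; the log format, column names, and domain list are assumptions and should be adapted to your own logging environment and approved-tool inventory.

```python
# Illustrative sketch only: summarize traffic to generative-AI domains from a proxy log export.
# The CSV format, column names ("user", "destination_host"), and domain list are assumptions;
# substitute your organization's actual logging fields and its list of known AI platforms.
import csv
from collections import Counter

AI_DOMAINS = {  # hypothetical examples of public generative-AI services
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def summarize_ai_traffic(log_path: str) -> Counter:
    """Count requests per user to known AI domains in a CSV proxy log export."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("destination_host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[row.get("user") or "unknown"] += 1
    return hits

if __name__ == "__main__":
    for user, count in summarize_ai_traffic("proxy_log.csv").most_common(10):
        print(f"{user}: {count} requests to AI platforms")
```

Even a rough tally like this can help legal and IT teams see where written policies and enterprise licensing are most urgently needed.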
2. Distinguish Between Public and Enterprise Tools
Publicly accessible AI platforms often retain and process user data under broad terms of service. Enterprise tools may offer stronger confidentiality protections and guarantees regarding model training—but only if properly configured and governed.
3. Establish Clear Written AI Policies
Your policies should address:
- Whether employees may input confidential or proprietary information
- Whether legal advice or investigation materials may be uploaded
- Approval processes for AI tool adoption
- Required use of company-approved platforms
- Prompt and output retention and deletion expectations
4. Train Employees on Privilege Risk
Many employees assume AI is “just software,” like Outlook or Google Drive. Courts, however, may treat a public AI platform more like a gossipy neighbor than a private workspace if the platform’s data retention and confidentiality protections are insufficient. Employees should understand that uploading sensitive legal content to a public chatbot may jeopardize privilege.
5. Coordinate AI Use With Counsel During Investigations or Litigation
If AI tools are used in connection with internal investigations or anticipated litigation, legal departments should:
- Direct and supervise that use
- Evaluate whether the platform qualifies as a protected agent
- Review privacy terms before uploading sensitive material
- Consider whether protective orders address AI uploads
6. Review Protective Orders and Discovery Protocols
Litigation protective orders increasingly include provisions restricting the uploading of produced documents to AI tools. Companies should ensure compliance before using AI tools to analyze an opposing party’s discovery materials.
How Butler Snow Can Help
Butler Snow uses vetted, enterprise-grade AI tools with contractual protections. However, each client’s AI ecosystem is different.
We can assist with:
- Reviewing and updating AI governance policies
- Auditing AI-related handbook provisions
- Advising on privilege implications of specific platforms
- Developing employee training programs
- Evaluating AI use in internal investigations and litigation
Proactive governance now can reduce discovery exposure later.
If you would like to discuss your organization’s AI policies or litigation risk, please contact your Butler Snow attorney or reach out to: AICommittee@butlersnow.com
We will continue to monitor developments as courts further define the intersection of generative AI, confidentiality, and privilege.
