Kobre & Kim's Team

February 19, 2026

User Beware: AI and the Potential Lack of the Attorney–Client Privilege

Use of artificial intelligence in legal disputes and investigations can unintentionally imperil sensitive information. A recent federal court ruling allowing prosecutors to access AI-generated materials highlights how AI tools’ data-retention practices and terms of use can make communications discoverable. Practical guardrails can help potential litigants preserve privilege, protect legal strategy, and reduce litigation and reputational risk.


A recent decision from a Manhattan federal court highlights a fast-emerging risk for companies and executives using artificial intelligence tools in connection with legal disputes and investigations. In pretrial proceedings, an executive facing a high-profile fraud case used Anthropic’s AI tool (Claude) to create reports outlining his defense strategy and potential legal arguments. When federal agents executed a search warrant, they seized electronic devices containing those documents. The Court held that law enforcement could review and use those documents, observing:

In the months and years to come, we will inevitably see many more fights regarding the use of artificial intelligence and the circumstances when such queries and the documents that they generate will and will not be protected from disclosure. 

These developments underscore the risks for companies and individuals as AI tools become embedded in everyday workflows.  In particular, unsupervised or informal use in sensitive legal contexts can undermine privilege, complicate trial strategy, and create reputational exposure.

How to Mitigate Risk

  1. Treat AI Tools as Potential Third Parties for Privilege Purposes. Legal and compliance teams should vet AI vendors and their terms of use and assume that inputs to non-enterprise tools may not remain confidential, jeopardizing a litigant’s ability to claim privilege. They should also bear in mind that, like traditional search engines, AI tools may be subject to search warrants for stored user communications (which may be retained for years or even indefinitely absent a retention policy providing otherwise), and that law enforcement may also seek “reverse” or “keyword” warrants requiring AI companies to identify users who searched for specific topics.
  2. Establish Clear Protocols for AI Use in Legal Contexts. Companies should implement policies that restrict the use of public or third-party AI tools for preparing materials related to litigation, regulatory inquiries, audits, or internal investigations. Sensitive legal work should be routed through counsel-approved platforms and workflows to preserve privilege and work-product protections.
  3. Centralize Control of Legal Strategy and Case Development. Companies should encourage employees and executives to consult with attorneys before using AI tools to prepare legal narratives, timelines, or analyses related to legal disputes.

As regulators and courts scrutinize how AI is used in legal matters, it is essential that companies implement guardrails to protect sensitive and confidential information.