On February 10, 2026, a U.S. federal judge ruled that thirty-one documents generated by a defendant using a generative artificial intelligence (AI) tool, and later shared with his lawyers, were not protected by privilege.
The use of generative AI to summarize advice, analyze facts, and draft legal materials is becoming increasingly common. Still, this case shows how it can also expose clients to legal risk.
Attorney-client privilege protects confidential communications between a lawyer and a client made for the purpose of obtaining legal advice. Work product protection applies to materials prepared in anticipation of litigation. Both doctrines depend on a recognized legal relationship.
If sensitive information is entered into a consumer-grade AI platform, the legal structure underpinning these protections may be compromised. As a result, privilege may be lost or may not arise in the first place. The recent decision in United States v. Heppner highlights this risk and its potential implications in jurisdictions beyond the United States.
In 2025, Bradley Heppner was the target of a U.S. federal investigation for alleged securities and wire fraud. He had already hired lawyers, but on his own initiative, and not on their advice, he turned to a public generative AI tool to help “organize his defense.”
He entered detailed prompts into the AI platform, including information obtained through discussions with his lawyers. He then used the AI-generated outputs to structure his thoughts and prepare for future conversations with counsel.
Upon Heppner’s arrest, the FBI seized thirty-one AI-generated documents from his electronic devices. His lawyers claimed that these documents were protected by both the attorney-client privilege and the work product doctrine.
At the hearing on February 10, 2026, Judge Rakoff of the Southern District of New York rejected those arguments.
The Court determined that because Claude AI, the tool Heppner had used, is not a licensed attorney, the AI-generated documents did not constitute communications between Heppner and his counsel. Instead, they were treated as exchanges between non-attorneys and therefore did not qualify for attorney-client privilege.
The Court also relied on Claude’s privacy policy. Users had agreed that the provider could collect both “inputs” and “outputs,” use them to train the model, and disclose them to third parties, including government authorities. On that basis, the Court found there was no reasonable expectation of confidentiality.
In addition, the Court found that, even if the documents had been prepared in anticipation of litigation, they would not be protected as work product because they were not prepared by counsel or at counsel’s direction.
United States v. Heppner illustrates how courts are likely to approach AI-generated documents and sets out the factors that can place such materials outside legal protection. For clients and lawyers alike, the case underscores that using AI tools to prepare or discuss legal advice can directly affect privilege, and it confirms that AI tools cannot replace lawyers or independently generate privileged legal advice.
In the UK, the Courts and Tribunals Judiciary of England and Wales has issued Guidance for responsible use of AI in Courts and Tribunals, warning that uploading privileged material to a consumer-grade AI platform is likely to be treated as a disclosure inconsistent with confidentiality. The guidance is explicit: “Any information that you input into a public AI chatbot should be seen as being published to the whole world.”
While the Heppner decision lacks binding authority in England and Wales, an English court would likely follow a comparable analytical framework. It would review the platform’s terms of use, examine whether the provider can store or reuse the data, and ask whether any disclosure could realistically be described as “limited.”
English case law also confirms that documents do not become privileged simply because they are sent to lawyers, per Imerman v Tchenguiz [2009] EWHC 2901.
Further, English courts have historically taken a restrictive approach to extending privilege. In Three Rivers District Council v Governor and Company of the Bank of England [2001] UKHL 16, the House of Lords clarified that legal advice privilege protects only communications between lawyer and client, and that “client” is interpreted narrowly. If AI tools sit outside that defined relationship, privilege may fail even before questions of third-party disclosure arise.
Taken together, these principles suggest that, on similar facts, an English court would likely reach a result broadly aligned with the U.S. decision in Heppner.
At the EU level, the position is, if anything, more restrictive. The EU doctrine of legal professional privilege (LPP) has developed through the case law of the European Court of Justice (“the Court”).
In AM & S Europe Ltd v Commission, the Court held that privilege applies only to communications with independent external lawyers and only where those communications are made for the purposes and in the interests of the client’s rights of defense. That approach was reaffirmed in Akzo Nobel Chemicals Ltd v Commission, where the Court drew a firm line between independent external lawyers and in-house counsel, finding that the latter are not protected in Commission competition investigations.
This framework has direct implications for AI use. Under EU law, privilege attaches only within a recognized channel: communication with independent external counsel for defense purposes. If a client independently uses a generative AI tool to analyze facts, structure arguments, or prepare a strategy, as in Heppner, that material will not fall within EU legal professional privilege at all.
The EU context is further complicated by the General Data Protection Regulation (GDPR). When a lawyer inputs client personal data into a consumer AI tool, the lawyer acts as a data controller and the AI provider as a data processor. Article 28 GDPR requires a formal data processing agreement setting out the processor’s obligations, security measures, and restrictions on further processing. The standard terms and conditions of consumer AI tools typically do not satisfy these requirements, as they often permit the use of data for model training and the disclosure of data to third parties.
Where the AI provider is based outside the European Economic Area, Chapter V GDPR transfer rules apply. This requires either an adequacy decision or appropriate safeguards, such as standard contractual clauses, and the lawyer must assess whether the transfer can be carried out lawfully under the recipient country’s legal framework.
Therefore, lawyers and clients face a dual risk: loss of privilege and potential regulatory exposure under data protection law. The EU AI Act adds a third layer of compliance. AI systems that assist in interpreting facts or applying law may qualify as high-risk, triggering obligations regarding risk management, transparency, and human oversight. Lawyers who deploy such tools on behalf of clients may be classified as “deployers” and bear corresponding compliance responsibilities.
One of the most practically significant aspects of Judge Rakoff’s opinion in Heppner is what he left open: whether the result would have been different if counsel had directed the client to use the AI tool.
Under United States v. Kovel, 296 F.2d 918 (2d Cir. 1961), a lawyer may extend attorney-client privilege to third-party agents whose assistance is necessary to the lawyer’s provision of legal services. Classic examples include accountants retained for tax advice, translators, private investigators, and medical experts. Judge Rakoff noted that if defense counsel had instructed Heppner to use Claude as part of the legal strategy:
“Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”
The answer likely depends on the distinction between consumer-grade and enterprise-grade AI deployments, as the latter generally involve contractual confidentiality guarantees, no-training clauses, compliant data processing agreements, restricted third-party disclosure, and security certifications.
In Heppner, the court’s analysis was explicitly grounded in the privacy policy of the consumer version of Claude AI, which permitted data collection, use for training, and disclosure to third parties, including government authorities. That analysis would not apply, or would apply very differently, to an enterprise deployment with more stringent safeguards.
The practical takeaway is clear. Counsel-directed use of contractually secured enterprise AI tools may preserve privilege and satisfy professional secrecy obligations, but consumer-grade tools used independently, whether by client or lawyer, almost certainly do not.
The Unifying Principle
In most jurisdictions, consumer-grade AI platforms are or are likely to be regarded as third parties for legal purposes.
AI does not give legal advice and cannot replace qualified counsel. Where AI tools are used in the ordinary course of business, users need to understand how their data is handled. This means checking the platform’s privacy policy to see whether inputs and outputs are stored, reused, or shared, and whether model training access can be disabled.
While such safeguards may enhance control over data, they do not solve the privilege problem. As Heppner shows, once information is disclosed outside the protected lawyer-client relationship, there is a real risk that a court will find that confidentiality has been lost.
Some paid or enterprise versions of AI tools offer enhanced privacy controls compared to free consumer models. However, even these features may not eliminate legal risk. The guidance issued by the Courts and Tribunals Judiciary in England and Wales cautions that judges should approach AI use on the assumption that, “even with history turned off, data entered is being disclosed.”
For clients, the safest approach is to involve external counsel before using AI tools to analyze facts, prepare defense strategies, or summarize legal advice. For legal teams, the task becomes to anticipate this risk and advise clients accordingly.
The analysis in this article has proceeded on the assumption that a human (a client or a lawyer) makes a conscious decision to input information into an AI tool, usually a large language model (LLM). Some would say that assumption is already becoming outdated.
The issue discussed above is set to intensify with the rise of autonomous AI agents, systems capable of independently planning multi-step tasks, calling external services, and coordinating with other agents.
In Heppner, there was at least one identifiable moment at which a person chose to share information with a third-party platform. In an agentic workflow, client data may be automatically and invisibly transmitted to, processed by, and stored across multiple external services, without the supervising lawyer being able to audit what was shared, with whom, or when.
The legal frameworks examined in this article were designed for a world in which a person decides to share information with another identifiable person or entity. Autonomous agents do not fit neatly into that world. Unlike a prompt typed into a chat box, an agent does not pause for consent. Will privilege, as we know it, survive the prompt, or will it need to be reengineered for the automation era?
Authors: Anne MacGregor, Hannah Byrne, Uroš Rajić