Client Alert

February 23, 2026

Federal Court Rules Using AI Prompts May Waive Privilege


Picture this. A senior executive is interviewed by in-house counsel in connection with a high-stakes matter. Wanting to better understand the legal issues raised and to prepare to seek further legal advice from counsel, the executive asks an AI tool some follow-up questions, and the tool creates a document memorializing the back-and-forth, which she saves on her computer.

Is the AI-generated document, prepared by the executive in connection with her discussion with counsel, privileged? Likely not, at least in the Southern District of New York. That was the ruling in United States v. Heppner, one of the first judicial opinions to address claims of privilege over AI-generated materials that result from a non-lawyer client’s use of AI. It carries significant implications—for privilege, data privacy and security, and trade secrets.


The Common Use of AI Tools and the Arguments Related to Privilege

With nearly a billion active users of third-party AI chatbots worldwide, it is highly likely that executives, employees, and other clients are using these tools in their everyday lives.

In Heppner, a company executive under criminal investigation had submitted prompts related to the government’s investigation through Claude, Anthropic’s AI chatbot, which then produced AI-generated outputs at the executive’s request. Those outputs were saved as a document on a device the government seized in connection with its investigation.

The executive claimed attorney-client privilege and work-product protection over the AI-generated outputs and asked that the document containing them be segregated from other seized files. The government challenged that claim and moved for a “ruling that documents the defendant generated through an artificial intelligence tool are not privileged.”

The government focused on three facts: (1) Claude is a third-party tool whose terms of service disclaim the creation of any attorney-client relationship, (2) Claude’s privacy policy limits confidentiality by permitting Anthropic to share information with “governmental authorities” and “third parties,” and (3) the executive created the prompts at his own “behest” rather than at counsel’s direction. The government even attached to its motion its own interaction with Claude. It asked, “Can Claude provide legal advice?” to which the tool responded, “… I’m not a lawyer and can’t provide formal legal advice or recommendations.”

Defense counsel argued that the documents were protected by privilege, largely based on the sequence and purpose of the client’s use of the AI tool. Specifically, after the client spoke with his counsel, the client used the AI tool to create reports of those discussions and of the government’s allegations, and did so “for [the] express purpose of talking to counsel.” Defense counsel argued that the executive was only using the tool to assist himself in working with his lawyer and understanding and preparing for litigation.


The Court’s Ruling

Judge Rakoff framed the question presented as follows: “[W]hen a user [of generative AI] communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user’s communications protected by attorney-client privilege or the work product doctrine?” He acknowledged that he appeared to be answering “a question of first impression nationwide[.]”

With respect to the claim of attorney-client privilege, Judge Rakoff found the AI-generated documents “lack at least two, if not all three, elements of the attorney-client privilege.”

First, the documents were not correspondence with counsel, because, as all parties agreed, Claude is not an attorney. Judge Rakoff rejected the suggestion that “whether Claude is an attorney is irrelevant because a user’s AI inputs, rather than being communications, are more akin to the use of other Internet-based software, such as cloud-based word processing applications.” The court reasoned that such interactions “are not intrinsically privileged in any case” and are missing the foundation of privilege, which is a “trusting human relationship.” (Emphasis added). Notably, the analysis appears to presume that the back-and-forth with Claude was a communication with a third party, and thus turned on whether that third party was an attorney.

Next, the court held that the communications underlying the documents were also not confidential, both because Claude is a third-party AI platform and because Anthropic discloses in its Privacy Policy that it collects data on inputs and outputs, uses that data for model training, and shares it with third parties, including governmental regulatory authorities. Judge Rakoff ruled there is no reasonable expectation of privacy in communicating with an AI tool in this context.

Finally, there was no attorney-client privilege because the client did not communicate with Claude for the purpose of obtaining legal advice from Claude—though this was a closer call, as Judge Rakoff noted himself, because the client asserted that he “communicated with Claude for the ‘express purpose of talking to counsel.’” Judge Rakoff considered whether this would be “akin” to bringing within the scope of privilege a “highly trained professional who may act as a lawyer’s agent” in communicating with the client. He ruled it was not, however, because counsel never instructed the client to use Claude.

With respect to the claim of work product, Judge Rakoff focused on whether the client was using generative AI at the direction of counsel and found that he was not. This was fatal to the claim of work product, the court reasoned, because the “purpose of the doctrine is to protect lawyers’ mental processes.” (Emphasis added). Judge Rakoff noted that he was disagreeing in part with a Southern District ruling by a magistrate judge.

It remains an open question how this specific ruling may apply in civil litigation, where the Federal Rules expressly protect work product prepared by a "party or its representative," Fed. R. Civ. P. 26(b)(3) (emphasis added), and do not necessarily require the involvement of an attorney.


The Court’s Footnote Regarding Waiver of Attorney-Client Privilege

Relegated to a footnote in the court’s analysis may be the most significant finding: “[E]ven if certain information that [the client] input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic, just as if he had shared it with any other third party.” Again, this finding presumes that sending a prompt to an AI tool “shar[es] that information with” the AI bot itself—a presumption that future cases involving more secure AI tools (for example, internally secure enterprise tools) may test.


Implications

This ruling has potentially significant consequences for the use of AI by clients. Central to any analysis will be (1) the nature of the tool, (2) the privacy and confidentiality of the tool, (3) the specific contents of the inputs and outputs, and (4) whether counsel directed the client to use the AI tool. Notably, it remains unclear after Heppner whether a client's use of an internal, confidential AI tool can preserve attorney-client privilege. Judge Rakoff's reasoning, particularly his focus on whether "Claude is an attorney," does not appear to depend on whether the tool is public or private. Indeed, the opinion suggested that disclosing privileged information to a third-party AI tool is the same "as if [the client] had shared [the privileged communications] with any other third party."

Beyond the implications for privilege, this ruling also reinforces some of the long-considered risks of leveraging AI tools for the review or analysis of confidential information:

  • Data Privacy and Security: Using third-party AI tools—especially consumer-grade public tools—risks allowing those tools to “train” on the data you supply in your inputs. Such use can therefore pose risks to company privacy and security and violate internal policies governing the same.
  • Trade Secrets: The court held that, under the terms of Claude's privacy policy, communication with that tool could not be considered confidential for purposes of the attorney-client privilege. It is worth considering what other tools that run on text inputs may have similar privacy policies and what other legal protections (for example, trade secret protection) may be undermined by a finding that "communication" with these tools is not confidential.
  • Confidentiality or NDA Obligations: Inputting terms or information learned through confidential agreements or NDAs with third parties risks breaching those obligations.

Although the law here is still developing, the ruling in Heppner underscores the importance of appropriate AI governance and of maintaining current, robust policies addressing the use of AI tools, their Terms of Service, and their Privacy Policies.