This morning, Judge Jed Rakoff of the Southern District of New York issued what appears to be the first federal ruling on whether communications with a generative AI platform are protected by attorney-client privilege or the work product doctrine. The answer: No. And the reasoning should concern every legal professional who has ever typed confidential information into a consumer AI tool.
The case is United States v. Benjamin Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026).
Here is what happened, what the court held, and — most importantly — what you should do about it right now.
I. The Facts
Benjamin Heppner, an executive charged with securities fraud, wire fraud, and related offenses, used the consumer version of Anthropic’s Claude to prepare reports outlining his defense strategy after learning he was the target of a federal investigation. FBI agents seized those AI conversations — approximately thirty-one documents — during a search of Heppner’s home.
Heppner’s lawyers argued the documents were privileged. They said Heppner had entered information learned from counsel, created the documents in order to speak with counsel, and later shared the outputs with his lawyers.
The Government moved to compel production. Judge Rakoff agreed with the Government on every point.
II. What the Court Held
Judge Rakoff’s ruling rests on three independent grounds, each of which should be understood carefully.
First: A general-purpose AI platform is not an attorney.
The court found, unsurprisingly, that Claude is not a lawyer. Without an attorney-client relationship, discussions of legal issues between two non-attorneys are simply not privileged. The court also rejected the analogy to cloud-based word processors, noting that recognized privileges require “a trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship exists between a user and an AI platform.
Second: The communications were not confidential.
This is where the ruling gets sharp. The court examined Anthropic’s privacy policy and found that it provides for collection of both user inputs and AI outputs, use of that data for training, and the right to disclose data to third parties — including “governmental regulatory authorities.” As a result, Heppner could have had “no reasonable expectation of confidentiality” in his communications with Claude. The court emphasized that sharing information with a third-party AI platform is functionally equivalent to sharing it with any other third party: any privilege that might have otherwise attached is waived.
Third: Heppner was not seeking legal advice from Claude, and his lawyer did not direct him to use it.
The court noted that Claude itself disclaims providing legal advice. More critically, Heppner’s counsel conceded that they “did not direct” Heppner to use Claude. The court observed that had counsel directed the use, Claude might arguably have functioned as a Kovel agent — a professional retained by counsel whose work falls within the privilege. But Heppner acted on his own, so that door stayed closed.
On the work product doctrine, the court held that the AI Documents were not “prepared by or at the behest of counsel” and did not “reflect” counsel’s strategy at the time of creation. That the documents later affected counsel’s strategy was insufficient.
III. What the Ruling Does Not Say
This is equally important. Judge Rakoff did not hold that any use of AI tools in legal practice is inherently unprivileged. The opinion contains a significant carve-out: if a lawyer directs the use of an AI tool, the tool could potentially function as a Kovel-type agent within the protection of the privilege.
The ruling is narrowly grounded in the specific facts before the court — a non-lawyer, using a consumer AI product with a permissive privacy policy, acting without attorney direction, in a criminal case where the documents were seized via search warrant rather than produced in discovery. The door for attorney-directed AI use remains open.
IV. The Three Questions Every Legal Team Should Answer Today
If your team is using AI tools — and statistically, your team almost certainly is — this ruling creates an immediate obligation to assess your exposure.
Question 1: Does your AI provider train on your inputs?
This is the threshold issue. Judge Rakoff’s confidentiality analysis hinged on Anthropic’s privacy policy, which permits data collection and training on user inputs and outputs. If your AI provider reserves the right to train on your data or share it with third parties, any expectation of confidentiality is destroyed.
What to Check
Review your provider’s terms of service and data processing agreements. Consumer AI products (ChatGPT’s free tier, Claude’s consumer interface, Gemini’s consumer product) generally do retain and may train on your data. API-based enterprise agreements typically do not, but you need to verify this for your specific contract. Your AI tool should contractually commit to not training on your inputs or outputs, not retaining your data beyond the session, and not sharing your data with third parties except as required by law.
Question 2: Is the AI tool being used by a lawyer, or at the direction of a lawyer?
The court left open the possibility that attorney-directed use of AI could preserve privilege under the Kovel doctrine. This means the who and the how of AI use in your legal department matter enormously.
What to Do
Establish a clear policy that AI tools for legal work are to be used by attorneys or at the explicit direction of attorneys. Document this policy. If non-attorney staff use AI tools for legal tasks, ensure they are doing so under attorney supervision and that this is reflected in your workflows.
Question 3: Does the AI tool disclaim providing legal advice?
Judge Rakoff noted that Claude itself disclaims providing legal advice, which cut against Heppner’s argument that he was seeking legal advice from the platform. While this point was not dispositive on its own, it contributed to the court’s overall analysis.
What It Means
Consumer AI tools are designed to disclaim legal advice. Purpose-built legal AI tools that are designed for use by attorneys in the practice of law occupy a different position. The tool’s intended purpose and the context of its use matter.
V. Practical Steps for GCs and Legal Teams
1. Immediately: Audit Every AI Tool
For each tool your legal department uses, document: whether the provider trains on inputs, the data retention policy, whether the provider can share data with third parties, and whether the tool is used by lawyers or at attorney direction.
2. This Week: Issue Updated AI Usage Guidance
At minimum, prohibit entering privileged or confidential information into any consumer AI tool. Route all legal AI work through enterprise-grade tools with appropriate data protection commitments.
3. This Month: Review Your Outside Counsel Guidelines
If your firms are using AI, their use should be subject to the same data protection standards you require of any vendor handling privileged information. Add AI usage provisions to your outside counsel guidelines and data security questionnaires.
4. Ongoing: Integrate AI into Your Information Governance Framework
Treat AI tools the same way you treat any third-party vendor that handles privileged or confidential data: with a formal assessment, contractual protections, and periodic review.
VI. What This Means for Legal AI
Heppner does not condemn AI in legal practice. It condemns careless AI use in legal practice — specifically, the use of consumer-grade tools that retain data, train on inputs, and are used outside the attorney-client relationship.
The ruling actually clarifies the path forward. Legal teams can continue to leverage AI’s enormous productivity benefits, but they need to use tools that are designed for professional legal use, architecturally committed to data privacy, and operated by lawyers or at their direction.
The difference between a consumer AI chatbot and a purpose-built legal AI platform is the same difference that has always existed between discussing your case with a stranger at a bar and discussing it with your lawyer. The conversation might be about the same topic. The protection could not be more different.
How White Shoe Solves This
We built White Shoe because we saw this collision coming — between consumer AI tools and the confidentiality obligations that define legal practice. White Shoe is Your AI Outside Counsel, the AI operating system for legal teams, designed from the ground up to give legal professionals the full power of generative AI without compromising privilege, confidentiality, or professional responsibility.
Here is how White Shoe addresses every issue Judge Rakoff identified in Heppner.
No training on your data. Period.
White Shoe connects to leading foundation models from Google, OpenAI, Anthropic, and Perplexity exclusively through enterprise API agreements with training explicitly disabled. Your inputs are never used to train any model. Your outputs are never fed back into any training pipeline. No AI provider in our stack has the right to use your data for model improvement. This is not a toggle in a settings menu — it is an architectural commitment baked into every API connection we maintain. If Heppner had used a tool with this architecture, the court’s entire confidentiality analysis would not have applied.
Built by lawyers. For lawyers.
White Shoe is not a consumer chatbot that happens to answer legal questions. It is a professional legal platform with 30+ specialized AI Associates designed for specific legal workflows — contract redlining, compliance analysis, litigation risk modeling, corporate governance, and more. Every interaction is initiated by a legal professional in the course of legal practice. Judge Rakoff left the door open for attorney-directed AI use to qualify for privilege protection. White Shoe is designed to walk through that door.
Enterprise-grade confidentiality architecture.
White Shoe encrypts sensitive company data client-side before storage. Our engineering team never has access to actual client names or privileged information — the database holds only anonymized identifiers. Company Profiles are encrypted so that the organizational context powering your AI outputs remains confidential at every layer of the stack. This is the standard that Heppner now demands.
Your context. Seven surfaces. Always protected.
White Shoe’s intelligence layer, Firm IQ, personalizes every output to your organization’s context, style, and legal domain expertise — and it travels with you across seven product surfaces: the web platform, email (cc:WhiteShoe), Microsoft Word, Chrome, Slack (@WhiteShoe), mobile, and Contract Lifecycle Management. Your privileged context stays within a system that was purpose-built to protect it, no matter where you work.
Stop Risking Privilege. Start Using AI Built for Lawyers.
Set up Firm IQ in under fifteen minutes. White Shoe AI provides enterprise-grade data protection from day one — no training on your data, encrypted Company Profiles, and purpose-built legal AI across every surface where you work.
This article is for informational purposes only and does not constitute legal advice. Consult qualified counsel regarding your specific circumstances.