You’ve been documenting everything with ChatGPT. The comments your boss made about your pregnancy. The times you were passed over for promotion. Your timeline of events before you filed your EEOC complaint.
You thought it was private. You thought you were just organizing your thoughts.
It’s not private, according to a recent decision in the Southern District of New York. And in a lawsuit, your employer can potentially demand every word of it.
On February 10, 2026, federal Judge Jed Rakoff in New York ruled that neither the attorney-client privilege nor the work product doctrine protects documents created using AI tools and later shared with lawyers.
The case involved Bradley Heppner, a former CEO charged with securities fraud. Before his arrest, Heppner used Claude (the consumer version) to prepare 31 documents about his legal situation. He outlined defense strategies, analyzed potential arguments, and created timelines.
Then he shared those AI-generated documents with his defense lawyers.
The government moved to access them. Heppner’s lawyers argued they were privileged. Judge Rakoff disagreed.
His reasoning: The communications were with an AI tool, not an attorney. The AI platform’s privacy policy explicitly stated that user inputs were not confidential and could be disclosed to government authorities. And AI is not a lawyer—it holds no law license, owes no duty of confidentiality, and isn’t subject to professional regulation.
According to Judge Rakoff, the documents were discoverable.
Why this matters in employment cases
The Rakoff decision arose in a criminal case, but its reasoning may apply equally to civil litigation, including employment discrimination cases.
You’re facing workplace retaliation. You ask ChatGPT whether you have a legal claim. You feed it the timeline of events. You describe what your manager said after you complained about discrimination.
All of that may be discoverable.
Your employer can request it in discovery. They can obtain your prompts and the AI’s responses, revealing what you were thinking, what you were researching, and when you started planning legal action.
What employers are already requesting
Employment defense lawyers have caught on. They’re now including AI-specific requests in their discovery demands.
They’re asking for all communications with AI platforms related to the claims. They’re asking when you started using AI to research employment law. They’re asking for drafts of your EEOC complaint created or edited using AI tools.
Here’s what makes this dangerous: The prompts you used often reveal more than your polished final complaint. They show your thought process, your motivations, and your concerns before you made your complaint.
If you asked an AI tool, “Can I get fired for complaining about my boss?” two months before you actually complained, that would look like premeditation. If you asked, “How much can I get in an employment lawsuit?” before filing your EEOC charge, opposing counsel will argue your claim isn’t genuine.
The timing of your AI usage becomes evidence.
The privacy policy problem
Most people using ChatGPT, Claude, or Gemini are using the free consumer versions. Those versions have privacy policies that explicitly disclaim confidentiality.
OpenAI’s terms state that they may use your prompts to train their models. Anthropic’s policy states that inputs may be disclosed to governmental authorities and third parties.
When you agreed to those terms, you agreed that your conversations wouldn’t be confidential. Even if you later send those AI-generated documents to a lawyer, privilege doesn’t attach retroactively. You already disclosed the information to a third party with no confidentiality obligation.
Enterprise versions are different, but not a solution for employees
Enterprise versions of AI tools don’t train on user data and maintain stricter confidentiality. If a lawyer uses enterprise AI to draft work product, there’s a stronger argument for privilege protection.
But employees don’t have enterprise accounts. You’re usually using the free version on your personal device. That creates discoverable records with no privilege protection.
What this means if you’re experiencing workplace discrimination
Using AI to organize your thoughts isn’t the same as talking to a lawyer. Using AI to draft your complaint isn’t always protected under the law. Using AI to research whether you have a case can create a record your employer may be able to access.
The lesson here is simple: until you’ve consulted an actual lawyer, it’s safer not to use AI to document sensitive facts, draft complaints, analyze your legal rights, or create timelines and organize evidence.
And if you’ve already used AI for any of these purposes, tell your lawyer immediately. They need to know what’s out there before your employer requests it in discovery.
The correct sequence
First, consult with an employment lawyer. Have an actual confidential conversation with someone who owes you a duty of confidentiality. That conversation is privileged.
Second, if your lawyer advises you to document facts or create a timeline, follow their specific instructions about how to do that.
Third, if your lawyer decides to use AI tools to help prepare your case, they can do so in a way that preserves work product protection because they’re directing the work.
The key is lawyer first, documentation second. Not the reverse.
How to protect yourself
Talk to a lawyer before using AI for anything related to a potential legal claim. Don’t document sensitive workplace issues using AI platforms. And if you’ve already done this, tell your employment lawyer so they can plan for it.
The broader implication
The Rakoff decision signals how courts may view AI interactions: as communications with third-party services, not private thought processes.
Most users don’t think of it that way. They think of AI as a private assistant, not a potential witness.
That disconnect creates risk. And in employment litigation, where documentation and timing matter enormously, that risk is acute.
We’re navigating this in real time
At Risman & Risman, we’re seeing this issue emerge in employment discrimination cases. Employers are requesting AI conversation histories. They’re looking for inconsistencies between AI-generated drafts and final complaints.
We’re also proactively advising clients on AI risks before they create problematic records.
If you’re experiencing discrimination, retaliation, or harassment at work, we can help you understand your rights and navigate your case properly, including how to document your situation without creating discoverable AI records that can be used against you.
And if you’ve already used AI platforms to document workplace issues or research your legal options, we need to know that so we can address it strategically.
Call us at 212-233-6400 or contact us online for a free confidential consultation.
The rules around AI and litigation are evolving rapidly. But one principle is already clear: your AI conversations aren’t private, and they can potentially be used against you in court.