
Anthropic wants to train Claude on your data

ALSO: OpenAI launches real-time voice API

News of the day

1. Anthropic changes policy: Claude users must decide by September 28 whether they allow their conversations to be used for AI training. Read more

2. OpenAI launches its real-time voice API that processes speech directly, detects laughter and switches languages automatically during conversation. Read more

3. TIME unveils its TIME100 AI 2025 list of the 100 most influential people in AI. Read more

4. Over a billion people use AI: we're entering the Mass Intelligence era, with powerful models now as accessible as a Google search. Read more

Our take

Hi Dotikers!

It was one of the last bastions of privacy in the conversational AI universe. Anthropic, until now exemplary in its policy of not using user data to train its models, has just pulled a 180-degree turn that deserves our full attention.

The change is radical. Users of Claude (Free, Pro, Max, and Claude Code) have until September 28, 2025, to make a choice with heavy consequences: authorize the use of their conversations to improve future AI models, or explicitly opt out. Even more striking: data retention explodes from 30 days to 5 years for those who accept these new terms.

The official line? Anthropic cites collective improvement: your exchanges will help create more powerful models for programming, analysis, and reasoning, while strengthening harmful content detection. The company justifies the 5-year retention by AI's long development cycles: today's models were designed 18 to 24 months ago.

The underlying reality is more stark. In a frantic race where high-quality conversational data has become the essential fuel for AI models, Anthropic can no longer afford to leave this competitive advantage to rivals OpenAI and Google. The company thus joins the pack, abandoning its differentiating stance on data protection.

The devil's in the interface. The acceptance design raises eyebrows: a prominent "Accept" button, with a discreet data-sharing toggle tucked below it and pre-set to "On." Does this choice architecture, denounced by some experts as a "dark pattern," really guarantee the informed consent being promised?

A tense legal backdrop adds another dimension. OpenAI faces a court injunction requiring it to preserve all ChatGPT conversations indefinitely as part of its lawsuit with The New York Times. Is Anthropic anticipating similar constraints?

What's up for Dotika?

As you may know, we recently created our company Dotika to help businesses achieve their objectives using AI.

If you'd like to learn more about our approach, we invite you to read the great article that Silicon Luxembourg dedicated to us (thanks again to them!).
