News of the day

1. Anthropic accidentally exposed its most advanced AI model due to a security blunder. The leaked model marks a significant leap in AI reasoning capabilities. The incident highlights the intense competition between AI companies like Anthropic and OpenAI. Read more

2. Mistral AI unveils Voxtral TTS, an open-weight model that clones voices from three seconds of audio in nine languages, democratizing advanced voice synthesis. Read more

3. Wikipedia bans AI-generated text for article content, allowing limited AI use for human-reviewed copyedits. The policy change reflects community concerns over AI's role in information integrity. Read more

4. OpenAI has added plugins to Codex, integrating third-party services like Box, Figma, and Slack. These plugins bundle reusable workflows, MCP servers, and app integrations into installable packages. The move aims to make Codex more versatile, competing with Anthropic’s Claude Code and Google’s Gemini CLI. Read more

Our take

Hi Dotikers!

Yesterday, we noted that Google was delivering while OpenAI was preparing. Anthropic, for its part, found a third way: announcing in spite of itself.

It was Fortune that broke the story on Thursday. A cybersecurity researcher from the University of Cambridge and an analyst at LayerX Security discovered that Anthropic's content management system had, by default, made nearly 3,000 unpublished internal documents publicly accessible. Among them was a draft blog post describing in detail an unreleased model called Claude Mythos, along with another surprise: details of a private CEO summit planned at an 18th-century English country manor. Anthropic confirmed the incident, attributing it to "human error" in the CMS configuration.

What emerges from the documents is more interesting than the leak itself. Mythos is described as the most powerful model Anthropic has ever developed, placed in a new tier called Capybara, sitting above the current Opus lineup. It reportedly scores "dramatically higher" than Claude Opus 4.6 on tests of coding, academic reasoning, and cybersecurity. That last point is worth pausing on: Anthropic's own draft states that the model poses unprecedented cybersecurity risks, and that the company intends to "act with extra caution" before any broad release. A model flagged as dangerous by its own creators is, at the very least, a sign that they are reading their own safety reports.

Anthropic is currently testing Mythos with a small group of early access customers. No release date has been announced. The company is using the phrase "step change," the same expression circulating internally at OpenAI about Spud. When two companies reach for the same superlative to describe models the public has yet to see, some restraint is in order. Promises of technological breakthroughs cost considerably less than the GPUs required to deliver them.

Alex.

88% resolved. 22% stayed loyal. What went wrong?

That's the AI paradox hiding in your CX stack. Tickets close. Customers leave. And most teams don't see it coming because they're measuring the wrong things.

Efficiency metrics look great on paper: handle time down, containment rate up. But customer loyalty? That's a different story, and it's one your current dashboards probably aren't telling you.

Gladly's 2026 Customer Expectations Report surveyed thousands of real consumers to find out exactly where AI-powered service breaks trust, and what separates the platforms that drive retention from the ones that quietly erode it.

If you're architecting the CX stack, this is the data you need to build it right. Not just fast. Not just cheap. Built to last.

Meme of the day
