News of the day

1. Google forms an elite AI coding team, spurred by a Sergey Brin memo, to compete with Anthropic and develop self-improving models. Read more

2. Moonshot AI launches Kimi K2.6, a 1T-parameter multimodal MoE model excelling at long-horizon coding and agent swarms of up to 300 agents. Read more

3. Anthropic is expanding its data center operations globally, hiring contract specialists in Europe and Australia, a significant step in its international infrastructure buildout. Read more

4. OpenAI's Codex introduces Chronicle, a feature that tracks screen activity for task memory, enhancing future assistance but raising security concerns. Read more

Our take

Hi Dotikers!

Yesterday we walked you through Claude Design, Anthropic Labs' new product that collapses the whole design-to-code workflow into a single conversation with Claude Opus 4.7. Canva, Brilliant, and Datadog are already on board, with testimonials saying that what used to take a week of briefs and review rounds now happens in a single session. A clear signal that Anthropic isn't just winning on coding: they're extending the perimeter around it.

Google, for its part, is visibly feeling the heat. The Information revealed this weekend that DeepMind has assembled a strike team led by Sebastian Borgeaud (former head of pre-training) with one mandate: close the coding gap with Anthropic. An internal assessment openly admits that Anthropic's coding tools are currently better than Google's own, and Sergey Brin didn't try to soften it in his memo to employees: teams must "urgently bridge the gap in agentic execution and turn our models into primary developers" of code. Every Gemini engineer is now required to use internal agents on complex multi-step tasks; Google tracks team-level usage of its internal coding tool "Jetski" and ranks teams accordingly (the same playbook as Meta with its token leaderboards); and Gemini is increasingly being trained on Google's private codebase to speed up the catch-up. Some teams outside DeepMind are even making AI training sessions mandatory.

Brin is also explicit about the endgame: stronger coding is the stepping stone toward AI that can improve itself, and a real coding agent combined with AI that runs math and experiments would eventually automate most of what AI researchers and engineers actually do.

The real question for this week isn't whether Google can catch up on coding. It's whether Google, OpenAI and the rest can catch up on the category itself, which is being redefined in real time.

Alex.

AI Agents Are Reading Your Docs. Are You Ready?

Last month, 48% of visitors to documentation sites across Mintlify were AI agents, not humans.

Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.

This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.

Your docs aren't just helping users anymore. They're your product's first interview with the machines deciding whether to recommend you.

That means: clear schema markup so agents can parse your content, real benchmarks instead of marketing fluff, open endpoints agents can actually test, and honest comparisons that emphasize strengths without hype.
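For the schema markup piece, a minimal sketch of what that could look like on a docs page, using schema.org's APIReference type embedded as JSON-LD (the product name, URL, and property values below are purely illustrative, not from Mintlify):

```html
<!-- JSON-LD block an agent (or search crawler) can parse without
     scraping the page layout; values here are hypothetical examples -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "APIReference",
  "name": "Payments API — Create Charge",
  "description": "Creates a charge against a stored payment method.",
  "programmingModel": "REST",
  "targetPlatform": "HTTP/JSON",
  "url": "https://docs.example.com/api/charges/create"
}
</script>
```

The point of structured markup like this is that an agent gets the endpoint's identity and purpose in one machine-readable object, rather than inferring it from headings and prose.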

Mintlify powers documentation for over 20,000 companies, reaching 100M+ people every year. We just raised a $45M Series B led by @a16z and @SalesforceVC to build the knowledge layer for the agent era.

Meme of the day

