News of the day
1. Google DeepMind's AI co-mathematician achieves record scores on math benchmarks, using agent teams to tackle complex problems and aid human researchers. → Read more
2. Generative AI and autonomous agents are supercharging identity theft in the US, enabling industrial-scale operations from data acquisition to deepfake documents. → Read more
3. Hermes Agent overtakes OpenClaw on OpenRouter, signaling a shift towards self-improving AI agents focused on depth over breadth. → Read more
4. Hollywood writers are now AI trainers, assessing chatbots and annotating data to survive. This gig work is often soul-crushing, with low pay and unstable contracts, highlighting the economic impact on creative professionals. → Read more
Our take
Hi Dotikers!
Google DeepMind just released a paper on its AI co-mathematician, and the setup is pretty striking. The idea is simple: instead of asking a model for an answer, you hand a workspace to a team of agents — a lead coordinator plus sub-agents that write code, dig through the literature, and attempt proofs in parallel, all wrapped in built-in review cycles. Basically the exact pattern we know from Claude Code, but ported to frontier math research.
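To make the pattern concrete, here's a minimal sketch of that coordinator / parallel sub-agents / review loop. Everything here is an illustrative assumption (the function names `prove_attempt`, `review`, and `coordinate` are invented for this sketch), not DeepMind's actual system or API:

```python
# Hedged sketch of the orchestration pattern described above:
# a coordinator fans out proof strategies to sub-agents in parallel,
# then runs a review cycle over the candidates. All names are
# hypothetical stand-ins, not DeepMind's real interfaces.
from concurrent.futures import ThreadPoolExecutor


def prove_attempt(strategy: str) -> str:
    # Stand-in for a sub-agent trying one proof strategy.
    return f"proof sketch via {strategy}"


def review(candidate: str) -> bool:
    # Stand-in for the built-in review cycle: accept or reject.
    return "induction" in candidate


def coordinate(strategies: list[str]) -> tuple[list[str], list[str]]:
    """Lead coordinator: run strategies in parallel, then review."""
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(prove_attempt, strategies))
    accepted = [c for c in candidates if review(c)]
    # Rejected outputs are kept rather than discarded: as the Lackenby
    # story below shows, a human expert may still find gold in them.
    rejected = [c for c in candidates if not review(c)]
    return accepted, rejected


accepted, rejected = coordinate(["induction", "contradiction", "construction"])
```

The point of the toy is the shape, not the content: parallel fan-out, a judging step, and rejected candidates preserved for human inspection.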
The numbers speak for themselves: 48% on FrontierMath Tier 4, a benchmark designed to make models sweat for years, against 19% for raw Gemini 3.1 Pro. More than doubling a score just by orchestrating smartly is rare. Even more telling is the story of Marc Lackenby at Oxford, who cracked an open problem from the Kourovka Notebook thanks to a proof strategy buried in an output that the system's own reviewers had rejected. The machine sometimes produces gold its own judges mistake for sand.
What this confirms is that we've found the recipe that scales: take the agentic architecture born in code and move it elsewhere. Last week it was OpenAI injecting GPT-5-class reasoning and tool calls into realtime voice. This week it's DeepMind doing the same for math. The same grammar (coordination, parallelism, review, iteration) is eating domains one by one.
And against the usual alarmist narrative, Lackenby's story is a useful reminder: value doesn't come from the model alone; it comes from the researcher-plus-agent pairing. The expert keeps the intuition to spot what the machine doesn't know it has found. As long as we frame this as augmentation rather than substitution, the ceiling keeps rising.
Alex.
Someone just spent $236,000,000 on a painting. Here’s why it matters for your wallet.
Late last year, a Klimt sold for the highest price ever paid for modern art at auction.
An outlier, sure, but it wasn't a fluke. U.S. auction sales grew 23.1% in 2025. The $1–5mm segment even grew 40.8% YoY.
Meanwhile, Apollo’s chief economist Torsten Sløk said to expect ‘zero return in the S&P 500 over the coming decade.’
Each environment is unique, but after dot-com, post-war and contemporary art grew about 24% annually for a decade. After 2008, about 11% for 12 years.
It’s also had near-zero correlation with the S&P 500 since ‘95.*
Now, Masterworks lets you invest in shares of artworks featuring legends like Banksy, Basquiat, and Picasso.
$1.3 billion invested across over 500 artworks.
28 sales to date.
Net annualized returns on sold works held for 12+ months include 14.6%, 17.6%, and 17.8%.
Shares can sell quickly, but my subscribers can skip the waitlist:
*Investing involves risk. Past performance is not indicative of future returns. See important Reg A disclosures at masterworks.com/cd.
Meme of the day