News of the day
1. OpenAI launches GPT-Rosalind, a biology-tuned LLM to help researchers navigate complex data and specialized fields, accelerating discovery. → Read more
2. OpenAI's Codex update transforms it into a superapp with background computing, parallel agents, and an in-app browser, moving beyond coding. → Read more
3. Anthropic's CPO resigns from Figma's board as reports suggest Anthropic's new AI model will include competing design tools. → Read more
4. Luma launches Innovative Dreams, an AI production studio with Wonder Project, to create faith-based films using real-time AI agents for enhanced filmmaking. → Read more
Our take
Hi Dotikers!
Yesterday, we watched Anthropic ship Opus 4.7 while keeping Mythos under wraps, citing cyber caution and an openly tiered commercial strategy. Today, OpenAI is playing the same tune with an interesting twist: release a model tailored for biology, and make it clear upfront that not just anyone will get their hands on it.
GPT-Rosalind, announced yesterday, is the first model in OpenAI's Life Sciences series. Named after Rosalind Franklin, it targets biochemistry, genomics, protein design, and translational medicine. On BixBench, a bioinformatics benchmark, the model posts a 0.751 success rate. On LABBench2, it beats GPT-5.4 on six out of eleven tasks, with a marked edge on molecular cloning protocol design. But the number that raises an eyebrow comes from an evaluation run with Dyno Therapeutics on previously unpublished RNA sequences: the model ranks above the 95th percentile of human experts on sequence-to-function prediction. The announced partners, Amgen, Moderna, Thermo Fisher Scientific, and the Allen Institute, set the tone for the target customer.
Access is granted through a trusted access program reserved for qualified U.S. companies, with filters on research mission, governance obligations, and technical safeguards to flag high-risk queries. A month earlier, one hundred scientists signed an open letter calling for tighter controls on the biological data used to train such models. OpenAI clearly read the memo.
Two readings are possible. Either we welcome a new discipline from a lab that hasn't always been known for its restraint, and we acknowledge that biosecurity and restricted access genuinely go hand in hand. Or we note that in practice, gating also serves to reserve frontier models for big pharma budgets while dressing the whole thing up in the language of precaution. Both can be true. The domain-specific strategy, meanwhile, is starting to look a lot more like the future than a generalist GPT-6 whose actual purpose we're still waiting to pin down. Sam Altman is discovering market segmentation, two years behind DeepMind.
Alex.
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity, and help you get more done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
Meme of the day