
News of the day

1. SubQ unveils the first fully subquadratic LLM, breaking transformer limitations with linear compute scaling for massive context windows and lower costs. Read more

2. Stripe's internal AI tool, Protodash, revolutionizes product design by enabling rapid, code-free prototyping for designers and PMs, enhancing collaboration and efficiency. Read more

3. Greg Brockman testifies in Musk v. Altman trial, defending his $30B OpenAI stake and commitment to the company's nonprofit mission. Read more

4. China navigates the complex challenge of advancing AI while ensuring job security, exploring strategies for economic balance and workforce adaptation. Read more

Our take

Hi Dotikers!

Today we're talking about SubQ, and it's potentially a big deal. To understand why, we need to go back to a fundamental flaw in the AI tools you use every day.

The architecture behind ChatGPT, Claude, and Gemini is called the Transformer. And from day one, it has had a well-known problem: its compute cost grows quadratically with context length, so the more information you give it at once, the more it loses the thread, and the more the bill blows up. That's why no one can really afford to dump an entire book or a full codebase into a model. Technically possible, economically out of reach.
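The blow-up is easy to see with a back-of-the-envelope sketch. The two cost functions below are toy stand-ins (not anyone's actual implementation): standard self-attention compares every token with every other token, so compute scales with the square of the context length, while a linear "subquadratic" architecture scales proportionally.

```python
# Toy comparison of how compute grows with context length.
# These are illustrative cost models, not real profiler numbers.
def quadratic_cost(n_tokens):
    return n_tokens * n_tokens  # every token attends to every other token

def linear_cost(n_tokens):
    return n_tokens  # cost proportional to context length

base = 1_000  # compare everything against a 1k-token prompt
for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} tokens: quadratic x{quadratic_cost(n) // quadratic_cost(base):,}, "
          f"linear x{linear_cost(n) // linear_cost(base):,}")
```

Going from 1,000 tokens to 1 million multiplies the quadratic cost by a factor of a million, but the linear cost by only a thousand, which is why pricing at long context looks so different under the two regimes.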

Subquadratic, a young American company with a team of researchers from Meta, Google, Oxford, and Cambridge that just raised 29 million dollars, claims to have cracked that equation. Read this carefully: they're not releasing a model smarter than today's Claude or GPT. On standard benchmarks their SubQ sits neck and neck with the best, no better, no worse. The real shift is elsewhere: at equal quality, 50 times cheaper and 50 times faster at 1 million tokens, and they hold up to 12 million tokens where other models break down well before.

In other words, this isn't a smarter AI, it's an AI whose cost doesn't explode when you give it a lot to digest. And that changes the game as much as a leap in intelligence: a full codebase loaded in one go, a memory that doesn't fray after a few hours, a search across hundreds of documents in a single prompt. Everything today's workarounds have been trying to fake for three years.

A word of caution: several teams have made the same kind of promise before folding when it came time to scale. A preview is still a preview.

The contrast with yesterday is delightful: xAI cut Grok 4.3's prices by optimizing inside the Transformer. Subquadratic is going after the structural cause itself. Two ways to wage the price war, and only the second one really redefines what becomes possible to ask of an AI.

Alex.

Your marketing stack reports to one place now.

Your media buyer opens Slack at 8am. There's already a cross-platform brief in #growth: Google Ads spend vs. ROAS, Meta CPA by campaign, Stripe revenue by channel. Viktor posted it at 6am. Nobody asked for it.

Same colleague caught a spend spike overnight on your brand campaign. Flagged it before anyone logged in. The problem was handled before the first standup.

Your strategist reviews trends. Your account manager checks attribution. Same Slack channel. Same colleague. Before anyone's first coffee.

Google Ads, Meta, Stripe. One message. No Looker. No Data Studio. No dashboard tab left open since Tuesday.

11,000+ teams use Viktor daily. SOC 2 certified. Your data never trains models.

