News of the day
1. 1X launches its World Model, enabling Neo robots to learn new tasks from video and prompts, enhancing real-world understanding and autonomous action generation → Read more
2. OpenAI's upcoming "Sweetpea" audio wearable, rumored to feature muscle sensors and Siri-like controls, is reportedly being prepared for mass production by Foxconn → Read more
3. Google's Veo 3.1 AI now generates videos from reference images, supporting vertical formats and 4K upscaling for greater creative control → Read more
4. Bandcamp has banned AI-generated music, prohibiting tracks created entirely or substantially by generative AI to protect human artists → Read more
Our take
Hi Dotikers!
Remember last October, when 1X Technologies launched NEO, billed as the first consumer-ready humanoid robot? For $20,000 (or $499/month), the startup promised a home assistant capable of folding laundry, organizing shelves, or opening the door for guests. Science fiction was entering our living rooms.
But NEO had a limitation shared by every current humanoid robot: learning. Like its competitors (Figure, Tesla Bot, Agility), it relied on thousands of hours of teleoperation, with humans remotely piloting the robot to demonstrate each task. A slow, expensive, hard-to-scale process.
Three months later, 1X is changing the game.
The company just unveiled the 1X World Model, a new AI architecture. The idea: rather than learning exclusively from robot-collected data, NEO now trains on internet-scale human video. The robot first "imagines" what it needs to do by generating a video prediction, then translates that imagined rollout into real movements.
Why does it work? Because NEO was designed to resemble a human as closely as possible: same size, same proportions, bio-inspired movements. What it sees humans do on video, it can replicate. According to 1X, this approach lets NEO perform tasks it has never seen before, including two-handed manipulation and interactions with humans.
The other advantage: continuous self-improvement. NEO collects its own data, and every advance in generative video models (Sora, Veo) translates directly into better robotic capabilities. A robot that learns like us, by watching and imitating.
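For readers who like to see the pipeline spelled out, here is a minimal conceptual sketch of that "imagine, then act" loop. This is not 1X's code: every name and number in it (VideoWorldModel, InverseDynamicsPolicy, the 16-frame horizon, the 20 joint targets) is a hypothetical placeholder standing in for the real components.

# Minimal conceptual sketch of an "imagine, then act" loop.
# Not 1X's code: all classes, functions, and dimensions are hypothetical placeholders.
import numpy as np

class VideoWorldModel:
    """Stand-in for a generative video model that predicts future frames
    of the robot completing a prompted task."""
    def predict_frames(self, observation: np.ndarray, prompt: str, horizon: int) -> np.ndarray:
        # Placeholder: a real model would generate a rollout of imagined frames.
        return np.repeat(observation[None, ...], horizon, axis=0)

class InverseDynamicsPolicy:
    """Stand-in for a module that converts imagined frames into joint commands."""
    def frames_to_actions(self, frames: np.ndarray) -> np.ndarray:
        # Placeholder: a real policy would infer the motions linking consecutive frames.
        return np.zeros((frames.shape[0], 20))  # e.g. 20 joint targets per imagined step

def imagine_then_act(camera_image: np.ndarray, prompt: str) -> np.ndarray:
    """First 'imagine' the task as predicted video, then translate the frames into actions."""
    world_model = VideoWorldModel()
    policy = InverseDynamicsPolicy()
    imagined = world_model.predict_frames(camera_image, prompt, horizon=16)
    return policy.frames_to_actions(imagined)

if __name__ == "__main__":
    frame = np.zeros((224, 224, 3), dtype=np.float32)  # dummy camera frame
    actions = imagine_then_act(frame, "fold the towel on the table")
    print(actions.shape)  # (16, 20)

The split mirrors what the announcement describes: the video model decides what should happen next, and a separate translation step works out how the body moves to make it happen, which is why better video generation can flow straight into better robot behavior.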
A.
Meme of the day