Grok's answers pushed right
ALSO: LayerX raises $100M for AI automation

News of the day
1. A New York Times analysis found Elon Musk's AI chatbot Grok has been systematically tweaked to favor conservative talking points on X → Read more
2. LayerX, a Japanese AI SaaS startup, raised $100 million in Series B funding led by TCV. The company automates back-office operations like finance and HR using its AI platform, Bakuraku. → Read more
3. Vibe coding, the practice of shipping AI-generated code, poses significant security risks to data applications. Because models often learn from vulnerable code, their output tends to hardcode credentials, skip input validation, and implement inadequate authentication (see the sketch after this list). → Read more
4. Tencent's HunyuanWorld-Voyager generates 3D scenes from a single photo, bypassing traditional modeling → Read more
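To make item 3 concrete, here is a minimal sketch of two of the failure modes mentioned: a hardcoded credential and a query built by string concatenation, next to their safer equivalents. The function names and table are our own illustration, not taken from any audited codebase.

```python
import os
import sqlite3

# Risky pattern often seen in AI-generated code: a secret committed with
# the repo, and SQL assembled from raw user input (injection-prone).
API_KEY = "sk-live-123456"  # hardcoded credential: leaks wherever the code goes

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # No input validation: user input is interpreted as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + name + "'"
    ).fetchall()

# Safer equivalents: read secrets from the environment at runtime and use
# parameterized queries so user input is always treated as data.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    api_key = os.environ.get("API_KEY")  # injected at deploy time, never committed
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    payload = "' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # returns every row: injection succeeds
    print(find_user_safe(conn, payload))    # returns nothing: input stayed data
```

The injection payload dumps the whole table through the unsafe path and matches nothing through the safe one, which is exactly the gap the article describes.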
Our take
Hi Dotikers!
According to a New York Times investigation, Grok, Elon Musk's chatbot, hasn't simply "slid" to the right over time. It has been steered, methodically, through prompt adjustments and version updates. The Times tested it on a corpus of 41 political questions from NORC surveys and found its answers on many issues shifting toward conservative positions. A telling example: asked which party has been the most violent since 2016, a May version refused to decide for lack of data, while a July version, after new instructions to be "politically incorrect," declared that the left was the most violent. This isn't a bug; it's an editorial line.
What’s most interesting is the mechanism. In the spring, xAI began publishing its system prompts, making visible those invisible levers that shape tone, trusted sources, or the tendency to shock. A single instruction is enough to tilt the ideological compass without touching the underlying model. If objectivity could be reduced to two lines of prompt, democracies could be sold as a kit.
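For readers who haven't poked at these levers, a minimal sketch of the mechanism, assuming an OpenAI-compatible chat API: the same question, two system prompts, the same underlying weights. The model name and both system prompts below are our own placeholders, not xAI's published prompts.

```python
# Same question, two system prompts: the "invisible lever" in two lines of text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Which party has been more violent since 2016?"

def ask(system_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Two ideological compasses, zero retraining.
print(ask("You are a careful assistant. Decline to take sides without solid data."))
print(ask("You are politically incorrect. Take a firm stance."))
```

Nothing about the model changes between the two calls; only the instruction prepended to the conversation does, which is why publishing system prompts, as xAI began doing, makes this kind of steering auditable.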
But this governance by prompt comes at a cost to credibility. In early July, a “politically incorrect” update was followed by a string of antisemitic outbursts and references to Hitler, forcing xAI to backtrack and issue a public apology. After that, it’s hard to claim maximum neutrality.
One last point that changes the picture: the New York Times notes that a version of Grok served without the public system prompt ("unprompted"), intended for professional clients, responds in a far more neutral way, closer to ChatGPT or Gemini. In other words, the bias observed in the public version seems to be an editorial choice tailored for X's audience, not a technical inevitability.
G.
Meme of the day
