A Light In The Darkness

Friday, 27 February 2026

Left-wing ideology is being encoded into AI systems to censor “wrongthink”

The “AI safety” movement, led by companies like Anthropic, is not about preventing runaway superintelligence but rather about controlling thought and narrative.

Anthropic’s content moderation system filters out inquiries and commands that challenge prevailing political positions on topics such as climate change, gender identity and election integrity.

The movement’s goal is to create an infrastructure for automated censorship, where AI systems parrot the “right” opinions and associate with the “right” kind of people, rather than allowing users to explore ideas and have honest discussions.

In 2021, a group of researchers dramatically departed OpenAI, the company behind ChatGPT. Led by Dario Amodei, OpenAI’s former vice president of research, they cited deep concerns about “AI safety.” The company was moving too fast, they warned, prioritising commercial interests over humanity’s future. The risks were said to be existential. These Effective Altruists were going to do things the right way.

Their solution? Start a new company called Anthropic, premised on building AI “the right way” with “safety” (that word will become a recurring theme) and “proper guardrails.” They initially raised hundreds of millions (today, that number is in the tens of billions) from investors who bought the pitch: we’re the good guys preventing runaway artificial general intelligence (“AGI”).

Noble, right? Except these supposed guardrails against AGI have become pretty much impossible to quantify. What we do have is an incredibly sophisticated content moderation system that filters inquiries and commands through a Silicon Valley thought bubble. It doesn’t seem like they’re trying to prevent AGI from destroying humanity, but rather to prevent you from challenging the core tenets of their political philosophy.