No Bias, No Bull AI
I’ve spent my career grappling with bias. As an
executive at Meta overseeing news and
fact-checking, I saw how algorithms and AI systems
shape what billions of people see and believe. As a
journalist at CNN, I even briefly hosted a show called “No Bias, No Bull” (easier said than done, as it turned out).
Trump’s executive order on “woke AI” has reignited the debate over bias in AI. The implication is clear: AI systems aren’t just tools; they’re new media institutions, and the people behind them can shape public opinion as much as any newsroom ever did.
But for me, the real concern isn’t whether AI skews left or right; it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.
The fight over political bias misses the deeper issue: transparency. We rarely see which sources shaped an answer, and when links do appear, most people ignore them. An AI answer about the economy, healthcare, or politics sounds authoritative. Even when sources are provided, they’re often just footnotes while the AI presents itself as the expert. Users trust the AI’s synthesis without engaging with the sources, whether the material came from a peer-reviewed study or a Reddit thread.
And the stakes are rising. News-focused interactions with ChatGPT surged 212% between January 2024 and May 2025, while 69% of news searches now end without a click through to the original source. We have seen where this leads: traditional media lost the public’s trust in part by claiming neutrality while harboring clear bias. We’re making the same mistake with AI, accepting its conclusions without understanding their origins or how sources shaped the final answer.
The solution isn’t eliminating bias (impossible), but
making it visible.
Restoring trust requires acknowledging that everyone has a perspective; pretending otherwise destroys credibility. AI offers a chance to rebuild trust through transparency: not by claiming neutrality, but by showing its work.
What if AI didn’t just provide sources as afterthoughts, but made them central to every response, showing both what they say and how they differ: “A 2024 MIT study funded by the National Science Foundation…” or “How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers…”? Even this basic sourcing adds essential context.
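To make that concrete, here is a rough sketch in Python of what a source-first answer format might look like. Everything in it is hypothetical: the class names, fields, and rendering are my own illustrative assumptions, not a description of any existing system's API.

    # Hypothetical sketch: sources as first-class data, not trailing footnotes.
    from dataclasses import dataclass, field

    @dataclass
    class Source:
        title: str                      # e.g. "2024 MIT wage study"
        origin: str                     # publisher, author, or institution
        funding: str | None = None      # who paid for the work, when known
        perspective: str | None = None  # the viewpoint it represents

    @dataclass
    class AttributedAnswer:
        synthesis: str                  # the model's own summary
        sources: list[Source] = field(default_factory=list)

        def render(self) -> str:
            # Lead with the sources and how they differ; the synthesis comes last.
            lines = ["Sources and perspectives:"]
            for s in self.sources:
                funded = f", funded by {s.funding}" if s.funding else ""
                view = f" ({s.perspective})" if s.perspective else ""
                lines.append(f"- {s.title}, {s.origin}{funded}{view}")
            lines += ["", f"Synthesis: {self.synthesis}"]
            return "\n".join(lines)

    answer = AttributedAnswer(
        synthesis="(example) The three readings of the jobs numbers agree on the data but differ on causes.",
        sources=[
            Source("2024 wage study", "MIT", funding="National Science Foundation"),
            Source("Market commentary", "a Wall Street economist", perspective="investor view"),
            Source("Testimony on labor data", "a Fed official", perspective="central bank view"),
        ],
    )
    print(answer.render())

The point of the design is the ordering: the perspectives and who funded them come first, and the model's synthesis comes last, instead of the other way around.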
Some models have made progress on attribution, but we need audit trails that show where the words came from and how they shaped the answer. When anyone can sound authoritative, radical transparency isn’t just ethical; it’s the principle that should guide how we build these tools.
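Here is an equally hypothetical sketch of what one entry in such an audit trail could record: each span of the answer keeps pointers back to the passages that informed it, with a rough weight for how much each contributed. The names and the weighting scheme are assumptions of mine, not how any current model actually works.

    # Hypothetical sketch: tracing each sentence of an answer to its sources.
    from dataclasses import dataclass

    @dataclass
    class Provenance:
        source_id: str   # stable identifier for the cited document
        excerpt: str     # the passage the answer draws on
        weight: float    # rough share of influence on this span, 0..1

    @dataclass
    class AuditedSpan:
        text: str                     # one sentence or clause of the answer
        provenance: list[Provenance]  # where its words came from

    def audit_report(spans: list[AuditedSpan]) -> str:
        # Render a human-readable trail from each sentence back to its sources.
        lines = []
        for span in spans:
            lines.append(f'"{span.text}"')
            for p in sorted(span.provenance, key=lambda p: -p.weight):
                lines.append(f"  <- {p.source_id} ({p.weight:.0%}): {p.excerpt!r}")
        return "\n".join(lines)

A reader, or an auditor, could then see at a glance whether a confident-sounding sentence rests on a peer-reviewed study or a Reddit thread.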
What would make you click on AI sources instead of
just trusting the summary?
Full transparency: I’m developing a project focused on precisely this challenge, building transparency and attribution into AI-generated content. I’d love your thoughts.
- Campbell Brown.