Science Feedback: Building an AI system to combat climate misinformation
Project: Climate Safeguards
Newsroom size: 10 - 20
Solution: An AI-powered platform that monitors and analyses climate misinformation across French TV and radio news programmes
Science Feedback, a fact-checking organisation founded by climate scientists, has spent over a decade combating scientific misinformation across digital platforms. The France-based organisation, known for its rigorous evidence-based approach and its network of scientific experts, has become a leading voice in verifying climate claims on social media and online news outlets.
Yet despite their extensive experience, Science Feedback recognised a critical blind spot in the fact-checking community’s coverage: traditional broadcast media. This realisation led to the development of Climate Safeguards, an ambitious AI-powered project in partnership with Data For Good and QuotaClimat to monitor and analyse climate misinformation across French TV and radio news programmes.
The problem: Unmonitored misinformation at scale
While social media misinformation has received significant attention, traditional broadcast media – despite reaching millions daily – remained largely unchecked. Science Feedback identified this critical gap.
"French TV news programmes reach about two million viewers daily – the same influence as a viral social media claim," explains Charles Terroille, the project lead. "Yet we had no idea how much misinformation existed in these trusted channels or what form it took."
The scale proved daunting: 35,000 two-minute segments weekly made manual fact-checking impossible. Meanwhile, partner organisation QuotaClimat observed that regulatory responses to reported misinformation took months, allowing false narratives to spread unchallenged.
Building the solution: AI as a filter, not a replacement
The team's approach was deliberately measured. Rather than replacing human fact-checkers, they designed AI to act as an intelligent filter, identifying segments likely containing misinformation for human review.
"The goal wasn't to automate fact-checking, but to empower it," notes Terroille. "We needed to maintain credibility, and that requires human expertise verifying every detection."
The technical architecture combines a fine-tuned GPT-4o mini model that scores transcripts for potential misinformation, Whisper to improve transcript quality for flagged segments, Label Studio for human annotation, and Metabase for visualisation and trend analysis.
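The triage logic behind that architecture can be sketched in a few lines. This is a hypothetical illustration only: the `Segment` class, the scoring heuristic, and the 0.5 threshold are assumptions, standing in for the project's actual fine-tuned classifier and its real cut-off.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    channel: str
    transcript: str
    misinfo_score: float = 0.0
    needs_review: bool = False

REVIEW_THRESHOLD = 0.5  # assumed cut-off for sending a segment to human review

def score_segment(segment: Segment) -> float:
    """Stand-in for the fine-tuned GPT-4o mini classifier.
    Here we fake a score by counting suspect phrases."""
    suspect_phrases = ["hoax", "natural cycle", "no consensus"]
    hits = sum(p in segment.transcript.lower() for p in suspect_phrases)
    return min(1.0, hits / len(suspect_phrases))

def triage(segments: list[Segment]) -> list[Segment]:
    """Score every segment; return only those flagged for fact-checkers."""
    flagged = []
    for seg in segments:
        seg.misinfo_score = score_segment(seg)
        if seg.misinfo_score >= REVIEW_THRESHOLD:
            # In the real pipeline, flagged audio would be re-transcribed
            # with Whisper before annotation in Label Studio.
            seg.needs_review = True
            flagged.append(seg)
    return flagged
```

The key design point the sketch captures is that the model never publishes anything: it only narrows 35,000 weekly segments down to a reviewable queue for human experts.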
Crucially, the team leveraged existing expertise. Their data scientist, Charlotte Samson, had previously developed a TikTok misinformation detection tool, while Data For Good provided infrastructure expertise. This in-house capability avoided outsourcing pitfalls, where technical teams often lack journalistic sensibility.
Learning through iteration
The project began with extensive manual analysis. "We fact-checked hundreds of cases manually to understand what we were dealing with," recalls Terroille. This groundwork revealed that broadcast misinformation differs significantly from social media content.
"A model performing brilliantly on social media simply wouldn't work for TV and radio," explains Samson. "The speaking patterns, the transcript quality, the context – everything was different."
The team discovered that comprehensive coverage mattered more than speed. As scientists, they wanted "the full picture, not just scattered detections" to make credible claims about misinformation trends.
Navigating unexpected challenges
Summer 2025 brought an unanticipated test. Climate debates intensified in France, and detected misinformation quadrupled overnight. "We went from about 100 detections monthly to 400, and it stayed that way throughout the summer," Terroille recalls. "We never anticipated that workload."
The team faced an ethical paradox: using energy-intensive AI to combat climate misinformation. Their solution was to filter for climate content first, then apply AI selectively. They're in the process of developing lighter open-source alternatives, though these currently sacrifice some accuracy.
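The "filter first, then apply AI selectively" approach described above amounts to a cheap pre-filter ahead of any model call. A minimal sketch, assuming simple keyword matching (the keyword list and function names are illustrative, not the project's actual implementation):

```python
# Cheap, CPU-only pre-filter: only segments that look climate-related
# are ever sent to the energy-intensive misinformation model.
CLIMATE_KEYWORDS = {"climat", "réchauffement", "co2", "émissions", "canicule"}

def is_climate_related(transcript: str) -> bool:
    """True if the transcript mentions any climate keyword."""
    text = transcript.lower()
    return any(kw in text for kw in CLIMATE_KEYWORDS)

def prefilter(transcripts: list[str]) -> list[str]:
    """Keep only the segments worth passing to the LLM stage."""
    return [t for t in transcripts if is_climate_related(t)]
```

Keyword matching costs almost nothing per segment, so the expensive model runs on a small fraction of the broadcast stream rather than all of it.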
Expanding impact beyond France
The project's replicable framework has enabled rapid international expansion – already operational in Brazil, with Poland and Spain launching soon. "Countries with smaller budgets can implement it," explains Terroille. "Everything's open-source except data costs and human coordination."
Early findings reveal surprising geographic variations: Brazil shows significantly less broadcast climate misinformation than France. "This nuance is crucial," Terroille emphasises. "Misinformation isn't uniform across contexts."
Creating new possibilities
Beyond detection, the system could one day enable real-time journalist support during interviews, and cross-country misinformation analysis. "We could provide channels with the top repeated false claims and dedicated fact-checks," suggests Terroille.
As one of the few large-scale analyses of broadcast media misinformation, the project advances scientific understanding of how misinformation operates across media ecosystems while serving stakeholders – media, regulators, and the public.
Lessons for newsrooms
The Climate Safeguards project shows that responsible AI implementation requires:
Human-in-the-loop design: AI amplifies human expertise rather than replacing it.
Ethical resource use: Targeted application minimising environmental impact.
Contextual adaptation: Solutions must account for medium-specific characteristics.
Sustainable architecture: Replicable, efficient design enables broader adoption.
Comprehensive approach: Understanding phenomena requires systematic, complete analysis.
As misinformation tactics evolve and amplification tools become more accessible, Science Feedback's approach offers a template for combating false narratives while maintaining fact-checking's human credibility. Their success scaling across countries proves that responsible, targeted AI can strengthen journalism's defence against misinformation without compromising ethical principles.
Explore Previous Grantees' Journeys
Find our 2024 Innovation Challenge grantees, their journeys and the outcomes here. This grantmaking programme enabled 35 news organisations around the world to experiment and implement solutions to enhance and improve journalistic systems and processes using AI technologies.
The JournalismAI Innovation Challenge is organised by the JournalismAI team at Polis – the journalism think-tank at the London School of Economics and Political Science – and is powered by the Google News Initiative.
