Shomrim: Teaching AI to spot hidden bias

Project: SourceGuard: AI-powered credibility standard for journalism 

Newsroom size: 10–20

Solution: An AI-powered fact-checking tool that detects more than 30 types of flaws in news articles – such as missing context, unsubstantiated claims, and linguistic bias – helping journalists identify subtle reporting problems.


When journalists at Shomrim, an Israeli investigative newsroom, began developing their AI-powered fact-checking tool, they faced an unexpected philosophical challenge: how do you teach a machine to recognise something that even experienced journalists often miss?

After hours of debating what constitutes a fact and deconstructing journalism's five Ws, the team created a sophisticated AI system that identifies approximately 30 different types of flaws across four categories: omitted factual details, statements without context, linguistic issues, and unsubstantiated claims. As Michael Levi, Shomrim's Head of Data Journalism, explains: "When you deconstruct these elements, you begin to see the invisible manipulation that happens, often unintentionally, in news articles."
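
Shomrim hasn't published the full list of flaw types, but to make the structure concrete, here is a minimal Python sketch of how such a four-category taxonomy might be represented; the individual flaw names are hypothetical illustrations, not SourceGuard's actual labels.

```python
from dataclasses import dataclass

# Illustrative taxonomy built around the four categories described above.
# The specific flaw names are hypothetical examples, not SourceGuard's own.
FLAW_TAXONOMY = {
    "omitted_factual_details": [
        "missing_professional_role",  # e.g. 'Zuckerberg' without 'Facebook CEO'
        "missing_date_or_place",
        "missing_actor",
    ],
    "statements_without_context": [
        "statistic_without_baseline",
        "quote_without_setting",
    ],
    "linguistic_issues": [
        "loaded_adjective",
        "emotive_framing",
    ],
    "unsubstantiated_claims": [
        "assertion_without_source",
        "speculation_presented_as_fact",
    ],
}

@dataclass
class Flag:
    """One flaw the system raises against a span of article text."""
    category: str     # one of the four top-level categories
    flaw_type: str    # a specific flaw within that category
    span: str         # the offending passage
    explanation: str  # why it was flagged, shown to the journalist
```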

The problem: When emotions replace facts

The tool emerged from observing troubling patterns in Israeli journalism that proved global. "While the Israeli media rhetoric, like the people themselves, tends to be informal and warm," Levi notes, "we discovered this isn't just an Israeli problem – it's simply more nuanced in other markets."

The core issue? News outlets, driven by engagement metrics, use emotional content to compensate for missing facts. "Every news outlet is a business," Levi explains. "To make money, they need engagement. To get engagement, they evoke emotions – usually negative ones."

This creates "cognitive gap-filling" – readers unconsciously supplementing missing information. "Readers aren't even aware they're filling in missing information," says Doron Sela, Shomrim's Chief Operating Officer. The team drew inspiration from Aristotle's concept of the enthymeme – leaving information unstated so audiences infer a claim the speaker never has to take responsibility for. In modern journalism, this ancient manipulation technique has found new life.

Building the solution: Strategic sacrifices and unexpected lessons

Shomrim assembled a compact team: a product manager, project supervisor, and three data scientists. Budget constraints meant strategic sacrifices. "We decided UX/UI design wasn't essential initially," Levi recalls. "Our target audience is professional journalists – they don't need fancy interfaces."

This proved problematic. "We learned it's not a 'nice to have' but necessary," Sela admits. "Even professional users need intuitive interfaces."

The system uses OpenAI's GPT-4o mini as its primary language model, though the team tested Claude and Gemini extensively. The infrastructure relies on standard tools: AWS for storage and JSON for structuring parsed articles. Importantly, the team used off-the-shelf models without custom databases or retrieval-augmented generation (RAG), allowing the LLM to analyse articles purely on the basis of its training.
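
Based on that description, a stripped-down version of the pipeline might look like the sketch below. The prompt wording, output schema, and function name are our illustrative assumptions, not Shomrim's actual implementation; the only elements taken from the source are the model choice and the absence of any retrieval layer.

```python
import json

from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a fact-checking assistant for professional journalists. "
    "Analyse the article and report flaws in four categories: omitted "
    "factual details, statements without context, linguistic issues, and "
    "unsubstantiated claims. Respond with a JSON object of the form "
    '{"flags": [{"category": ..., "span": ..., "explanation": ...}]}.'
)

def analyse_article(text: str) -> dict:
    """Send one article to the model and parse its structured verdict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},  # force valid JSON back
        temperature=0.2,  # keep the analysis conservative
    )
    return json.loads(response.choices[0].message.content)
```

Note that no retrieval step appears anywhere: the model sees only the article and the instructions, reflecting the training-data-only constraint described above.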

The challenge of AI freedom

The project's biggest technical challenge involved finding the right balance of AI autonomy. "Initially, we gave the LLM very strict instructions," Levi explains. "Then we realised it could parse articles into factual units on its own, very effectively."

However, giving the AI too much freedom led to what Levi diplomatically calls "less accurate output" – essentially, hallucinations. The team found themselves in a philosophical dilemma: "Sometimes you wonder, maybe the LLM is right and I'm wrong. There's this paradoxical feeling when working with AI."

The solution required months of back-and-forth calibration. "We're now at a balanced point where we have specific instructions that still give room for the LLM to work its magic," Levi says.
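
One way to picture that balance is as a two-pass design: a deliberately loose parsing step, where the model segments the article however it sees fit, followed by a tightly constrained labelling step whose answer space is pinned to the fixed taxonomy. The sketch below is our reconstruction under that assumption, not Shomrim's actual prompts.

```python
from openai import OpenAI

client = OpenAI()

def ask(instructions: str, payload: str) -> str:
    """One model call with a fixed instruction set (sketch; no error handling)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": payload},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

article_text = "..."  # the article under review

# Pass 1 - deliberately loose: the model decides how to segment the text,
# the step the team found it handled very effectively on its own.
units = ask(
    "Split this article into discrete factual units, one per line. "
    "Use your own judgement about where each unit begins and ends.",
    article_text,
)

# Pass 2 - deliberately tight: the answer space is pinned to the fixed
# taxonomy, the kind of constraint that reins in hallucinations.
verdicts = ask(
    "For each factual unit below, answer 'ok' or name exactly one flaw from "
    "this closed list: omitted_factual_details, statements_without_context, "
    "linguistic_issues, unsubstantiated_claims.",
    units,
)
```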

Cultural resistance and the Zuckerberg example

The most significant challenge was internal resistance. When an article about Zuckerberg and Trump omitted Zuckerberg's CEO role, journalists dismissed it as nitpicking.

"Everyone said, 'everybody knows who Mark Zuckerberg is,'" Levi recalls. "But omitting his professional context creates an ominous atmosphere. It becomes personal – just 'Zuckerberg,' not 'Facebook CEO."

This epitomises the "invisible manipulation" the tool exposes. Though journalists were initially sceptical, the tool's consistent flagging awakened what Levi calls "dormant critical muscles." Even explaining the system's logic began changing how they read articles.

The opportunities: Building outward from Shomrim’s newsroom

The team envisions a tiered approach to impact, beginning with enhancing their own newsroom's journalism through better sources, better articles, and increased efficiency. From there, they plan to expand to other newsrooms and journalism schools. As Levi notes, "It's about training future journalists to do better." The ultimate vision is a browser extension for general readers, though this means bridging the gap between public expectations of simple verdicts and the tool's nuanced analysis. "The general public wants a simple good/bad verdict, but that's not what our tool provides," Sela cautions, highlighting the difficulty of democratising technology designed for professional critical analysis.

The future of 'cyborg journalism'

The project represents what Levi calls "cyborg journalism" – humans and machines continuously training each other. The team is implementing feedback mechanisms that let journalists dispute the AI's findings, creating a dialogue that improves both.
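
The details of that mechanism aren't public, but a minimal sketch of an accept/dispute feedback record (all field and function names here are hypothetical) might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FlagReview:
    """A journalist's response to one AI-raised flag."""
    flag_id: str
    journalist: str
    verdict: str         # "accept" or "dispute"
    rationale: str = ""  # the journalist's reasoning, fed back to the team
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def disputed(reviews: list[FlagReview]) -> list[FlagReview]:
    """Disputes become calibration material for the next prompt iteration."""
    return [r for r in reviews if r.verdict == "dispute"]
```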

"This is a mental cyborg, a professional augmentation that makes you aware of things and trains you to see them yourself," Levi explains.

This positions AI not as journalism's replacement but its enhancement. As Sela emphasises: "We're showing AI can improve journalism without replacing it – not in editing or writing, but in sharpening our critical faculties."

By teaching machines to see what humans miss, this newsroom is pioneering a new approach to journalistic integrity in the age of AI.

Lessons for newsrooms

  • Expect cultural resistance: The primary resistance to AI won't be from the technology itself, but from newsroom culture. When AI reveals uncomfortable truths about existing journalistic practices, initial dismissal should be anticipated.

  • Prioritise user experience (UX/UI): Don't underestimate the importance of design, even for professional users. Treating an intuitive interface as optional for internal tools is problematic; even experts require intuitive interfaces to meaningfully engage with AI feedback.

  • Strive for balance in AI freedom: Successful implementation requires careful balance. Too much freedom for the AI can lead to hallucinations, while too little stifles new insights. The right balance allows the AI to identify human-missed patterns while maintaining accuracy.

  • Prepare for professional discomfort: Understand that the most valuable AI applications will challenge professional blind spots. This inherent discomfort is precisely where the tool's true value lies.

Explore Previous Grantees' Journeys

Find our 2024 Innovation Challenge grantees, their journeys and the outcomes here. This grantmaking programme enabled 35 news organisations around the world to experiment and implement solutions to enhance and improve journalistic systems and processes using AI technologies.

The JournalismAI Innovation Challenge is organised by the JournalismAI team at Polis – the journalism think-tank at the London School of Economics and Political Science – and powered by the Google News Initiative.