Scaling local listening: how Radio France used AI to monitor 44 stations simultaneously
The French broadcaster leveraged Google’s NotebookLM to analyse hours of local broadcasts in real time, allowing it to capture the 'pulse of the regions' during the agricultural crisis.
Radio France, the French public service broadcaster, operates a network of 44 local radio stations covering the entire country. This network produces an immense volume of live audio daily, acting as a massive sensor of local sentiment and news. However, the sheer quantity of content makes it difficult for the national newsroom to monitor, analyse, or repurpose this local material in real time.
Under the guidance of Alexandre Barlot, journalist and editorial AI project manager, Radio France undertook an experiment to bridge this gap. By using generative AI, specifically Google's NotebookLM, the team aimed to synthesise live on-air output to inform national coverage, proving that AI can turn ephemeral live radio into structured, actionable data.
The problem: how to leverage their own content in record time
The initiative stemmed from a casual conversation between Barlot and the director of information for France Bleu. The network was preparing for a special broadcast featuring the Prime Minister, Michel Barnier, to discuss the ongoing agricultural crisis.
To challenge the Prime Minister effectively, the editorial team wanted to use testimonials and concerns voiced by farmers on their local stations that very morning. The problem was volume. France Bleu’s morning slot runs from 09:00 to 11:00, and with 44 stations broadcasting simultaneously, this amounted to 88 hours of audio. The interview with the Prime Minister was scheduled for 13:00.
“It is humanly impossible to listen to 44 hours of airtime and deliver a summary two hours later,” explains Barlot. The team needed a way to listen to the entire country at once, extract key quotes, and identify trending topics within a narrow two-hour window.
Building the solution: agile workflows and accessible tools
The team adopted an agile, experimental approach to solve the volume issue. They did not aim for a fully integrated software solution immediately, but rather a "spike": a quick proof of concept to test viability.
The workflow was deliberately kept simple. First, using a tool called Yacast, the team extracted the audio of the 09:00–10:00 slot for all 44 local stations. The files were exported to a Google Drive folder, from which Barlot dragged and dropped the 44 audio files (in MP4 format) directly into NotebookLM. Using a pre-prepared, structured prompt (over a page long), they asked the AI to categorise concerns, extract specific quotes, and identify the production types associated with different regions.
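NotebookLM has no public upload API, so the files were dragged in by hand. Still, the corpus preparation and the structured prompt can be sketched in code. Everything below (folder layout, filenames, prompt wording) is a hypothetical illustration, not the team's actual prompt:

```python
from pathlib import Path

# Hypothetical prompt skeleton in the spirit of the team's page-long
# structured prompt; the wording and categories here are assumptions.
PROMPT_TEMPLATE = """You are analysing {n} hours of French local radio (09:00-10:00 slot).
For each station listed below:
1. Categorise the concerns voiced by farmers (prices, regulation, imports...).
2. Extract verbatim quotes, attributed to the station.
3. Identify the agricultural production types mentioned per region.
Ignore advertising breaks and station jingles.

Stations: {stations}
"""

def build_prompt(audio_dir: str) -> str:
    """Compose the structured prompt from the exported MP4 files."""
    files = sorted(Path(audio_dir).glob("*.mp4"))
    stations = ", ".join(f.stem for f in files)
    return PROMPT_TEMPLATE.format(n=len(files), stations=stations)
```

With one hour of audio per station, the file count doubles as the hour count, which is why the prompt can state the corpus size directly from the folder contents.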
The process was remarkably fast. “In 49 minutes, we managed to export everything and put it in NotebookLM,” Barlot notes. By 11:00, just one hour after the extracted broadcasts ended, the team had delivered a PDF summary and a structured table of regional concerns to the editorial team via WhatsApp.
The experiment relied on a lightweight stack of tools, prioritising accessibility over custom development for this phase:
NotebookLM: This was the central engine. Unlike a general-purpose chatbot that draws on its training data (and sometimes the web), NotebookLM uses Retrieval-Augmented Generation (RAG): it answers questions based only on the specific documents or audio files uploaded to it. This was crucial for ensuring the analysis was strictly based on the morning's radio output.
Yacast: Used for capturing and exporting the broadcast streams.
Microsoft Copilot: Employed briefly to assist with the visual formatting of the final PDF report to ensure it was readable on mobile devices.
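To make the RAG behaviour concrete, here is a toy sketch of the retrieval step, using bag-of-words cosine similarity in place of the learned embeddings a real system like NotebookLM would use. The point is structural: the model only ever sees the chunks returned by retrieval, which is what keeps answers tied to the uploaded corpus.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Crude bag-of-words vector; a real system would use embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k transcript chunks most similar to the question.
    Only these chunks are passed to the language model, so the answer
    is grounded in the uploaded material rather than the open web."""
    q = _vec(question)
    ranked = sorted(chunks, key=lambda c: _cosine(q, _vec(c)), reverse=True)
    return ranked[:k]
```

The example transcripts below are invented; they only illustrate that an off-topic chunk never reaches the model.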
The team behind the project
This was not a massive engineering effort but a targeted editorial operation. The core team consisted of Barlot, who orchestrated the AI interaction, prompt engineering, and editorial verification, and one colleague who handled the technical logistics of extracting, encoding, and managing the 44 distinct audio files.
The skills required were less about coding and more about editorial agility and prompt engineering. They needed to understand the structure of a radio schedule (to tell the AI when ad breaks occurred) and possess the journalistic judgement to know what information would be valuable to the newsroom.
Challenges
While the text summarisation was useful, the team faced significant hurdles regarding precision and data reliability.
Timecodes: NotebookLM struggled to provide accurate timestamps for the quotes it found. In one instance, the AI claimed a quote occurred at 2 minutes, when it actually appeared at 44 minutes. This forced the team to cross-reference transcripts to verify the location of the audio. According to Barlot, timestamp accuracy improved in later versions of the tool.
Hallucination and memory: At the time of the experiment, the tool claimed to have no memory of previous prompts, yet seemingly recalled context from prior sessions during testing, leading to confusing interactions.
Data privacy: During the session, Barlot emphasised a crucial caveat for newsrooms using public AI tools: they do not own the technology and cannot be certain where sensitive data ends up. The team had to be careful not to upload confidential data, restricting the input to public broadcast audio only.
Opportunities they see
The experiment demonstrated that AI could transform local radio from a fleeting medium into a structured database of public sentiment.
Detecting weak signals: By aggregating 44 local feeds, the tool can spot emerging trends that a single editor might miss. For example, if farmers in five different regions mention "overturned signs" simultaneously, it becomes a national story rather than isolated local incidents. Barlot describes this as having “44 sensors on the territory.”
Editorial auditing: Beyond breaking news, the tool offers a way to analyse coverage diversity. It can help newsrooms check whether they are falling into repetitive angles or achieving gender parity in their interviews. This is especially relevant for a public service broadcaster like Radio France, whose mission is to represent the diversity of the country.
Augmenting, not replacing: The goal is to free up journalists to do high-value work. “It increases the journalist's value more than it replaces him,” says Barlot. Instead of spending hours monitoring feeds, journalists receive a synthesised view, allowing them to focus on verification and storytelling.
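The weak-signal detection described above amounts to counting how many independent stations raise the same topic and flagging topics that cross a threshold. A minimal sketch, in which station names, topics, and the five-region threshold are all illustrative:

```python
from collections import defaultdict

def weak_signals(mentions: dict[str, set[str]], threshold: int = 5) -> list[str]:
    """Flag topics raised independently on at least `threshold` stations.
    `mentions` maps a station name to the set of topics heard on its air."""
    counts: dict[str, int] = defaultdict(int)
    for topics in mentions.values():
        for topic in topics:
            counts[topic] += 1
    return sorted(t for t, n in counts.items() if n >= threshold)
```

Using sets per station means a topic counts once per region no matter how often it is repeated on air, which is what turns 44 feeds into "44 sensors on the territory" rather than 44 volumes of noise.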
Lessons for newsrooms
Control your data to control your outputs
The power of tools like NotebookLM lies not in their AI capabilities but in their constraint: NotebookLM only knows what you feed it. This transforms an AI from an unpredictable oracle into a research assistant working exclusively with vetted materials. For journalists, this means applying sourcing expertise before querying the AI, not trusting it to find sources. The quality of analysis depends entirely on the quality of the corpus.
Verification remains non-negotiable, but citation makes it feasible
AI tools that provide source citations transform the verification challenge. Rather than blindly trusting outputs or manually re-researching claims, journalists can immediately check the AI's interpretation against the original material. NotebookLM's clickable references made it possible to verify 44 hours of testimony in a timeframe that wouldn't allow listening to even a fraction of the source audio. Verification shifted from impossible to practical.
The workflow question is harder than the technology question
Radio France's experiment succeeded technically but faces integration challenges. Journalists understand the value but struggle to find time for this new layer of analysis within existing production demands. The lesson: AI adoption requires not just training on tools but rethinking job structures, task allocation, and what gets prioritised. As Barlot puts it: “The idea is really to augment our journalists with information and editorial content. But we also need to ask how we free up time for them to use it.”
———
This case study was produced as part of the 2025 edition of the JournalismAI Discovery course in French. Access all session recordings here.
