Generating Change 2023: What have we seen before?

In 2019, we conducted our first global survey on how newsrooms are using AI in their work. At the time, even some early adopters were at the beginning of their AI integration journeys. Since then, so much has changed in the world of AI and in the ways mediamakers approach and use AI technologies. Some patterns, though, have persisted. This article sheds light on some of them. It is meant to serve as a comparative exercise to help us better understand some of the trends we're seeing.

Thinking ahead, we wonder what could be new or familiar in the next JournalismAI report in a few years. Will relying on generative AI technologies in editorial tasks become an industry norm, for instance? Or could it become a widely unacceptable practice?

Well, first things first. Let's take a look at the findings from this year's report compared to 2019. We wanted to reach a larger and more diverse group of media professionals this year. We collected insights from 105 media organisations from 46 different countries spanning Latin America, sub-Saharan Africa, the Middle East and North Africa (MENA), Asia Pacific, Europe, and North America. This year's sample includes small and large newsrooms, emerging and legacy organisations, and for-profit and non-profit platforms at widely different stages of their AI integration journeys.

Newsroom Definitions of AI Vary Widely

In 2019, respondents defined AI technologies in a range of ways. While some provided scientific definitions, the majority related their operational definitions of AI technologies to their purpose for using them and their roles in their respective organisations. For instance, a respondent in 2019 remarked: 

Technically, I take it to mean machine learning/neural network-driven systems, but for the purposes of newsroom technology I think of it more as any system of automation that’s more than a very simple tool.

Similarly, responses to this year's survey were also diverse and related their definitions to the potential benefits of AI technologies and newsrooms' motivations for integrating them, such as increasing efficiency or better serving the newsroom's audience and mission. Here's an example from this year's survey:

For us, AI represents a group of technologies that can assist and empower [our team] by providing insights and automated support across a range of editorial, operational and communications tasks.

The range of definitions this year and in 2019 echoes once again the fluidity of the term and the complexity of the topic. It's best to approach AI definitions with a degree of flexibility that reflects the complex reality of AI technologies and their applications. For this reason, we continue to refer to AI as an umbrella term for a wide variety of related technologies, acknowledging that many processes described as AI often incorporate more conventional technologies:

Artificial intelligence is a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks normally requiring human intelligence.

Streamlining Processes Remains A Key Motivation for AI Integration

We were curious to know if newsrooms' intentions for integrating AI technologies have changed much over the past five years. When we asked newsrooms about their motivations for using AI in 2019, many said they hoped AI would make journalists' work more efficient. In 2023, this remained a key objective behind using AI for journalists. It makes sense. Journalists are usually passionate about their work, which often entails manual, time-consuming tasks. Automating these tasks would free up journalists to engage in more innovative and creative work, such as investigations and field reporting, which they believe AI cannot perform.

Financial Constraints, Technical Challenges Most Cited

In 2023, journalists continued to cite financial constraints and technical difficulties as key challenges to their AI integration journeys. These challenges are often interrelated. A lack of resources makes it difficult for newsrooms to hire technical personnel, to assess their AI training needs, and to implement training plans, which would also cut into journalists' time. Additionally, respondents in Global South countries mentioned that AI experts who could take the lead on AI strategising are scarce, and those few are often drawn to foreign companies offering higher pay. This is another way these challenges compound one another.

The Question Remains: How To Integrate AI While Upholding Journalistic Values

Ethical concerns about AI integration are not unique to journalism; they extend to almost all industries. But these concerns are perhaps trickier for journalists than for other professionals to grapple with, because journalism is rooted in serving the public interest, while the developers of AI technologies usually work in the profit-driven tech sector and do not necessarily prioritise journalists' concerns. A response from 2019 summarised this tension well:

In general, it is essential to keep in mind that the objective of a high-quality newspaper cannot merely be economic success. If a machine learning algorithm is trained to maximise revenue, the risk of valuing click-bait articles more than investigative research projects is high. Therefore, it should be carefully considered what metrics to optimise for and how to maintain the quality standards.

Journalists continue to grapple with these ethical questions, as many respondents told us recently:

Upholding trust, accuracy, fairness, transparency, and diversity in news content, while mitigating biases and maintaining journalistic integrity, is a priority for us in the era of AI-powered technologies.

How to deal with these challenges has been the topic of much scholarly debate. How can journalists uphold accuracy, accountability, and transparency when AI technologies have a poor record on these very issues? Algorithmic bias is one result of these shortcomings, and it disproportionately affects marginalised communities, potentially causing serious harm (e.g. racial discrimination in facial recognition technologies).

For those reasons, respondents continued to call on technology companies to be more transparent about the training data they use and how their systems work, in an attempt to demystify the black boxes of AI and insist on accountability. As a society, we will continue to grapple with these questions, but we have a long way to go. This is what respondents told us five years ago, and it is what they're saying today.

In our next article, we'll delve into the new findings we uncovered this year, reflecting the strides some organisations have made in AI integration, a potential overall rise in AI literacy, and the emergence of new AI technologies, like genAI, that are more accessible than older AI technologies.

DOWNLOAD THE 2023 JOURNALISMAI REPORT

JournalismAI is a global initiative of Polis - the journalism think tank at the London School of Economics and Political Science (LSE) - and is supported by the Google News Initiative.
