The Optimist’s Guide to the Digital News Report

If you work in the media industry and want to feed your pessimism, the Digital News Report 2025 makes it easy for you, because this is what it tells you: influencers are challenging established media brands left and right, news avoidance is at an all-time high, and it is becoming increasingly difficult (and costly!) to reach audiences because they are spread across ever more platforms – sorted according to political preferences and educational level. Welcome to the journalistic dreariness of the propaganda age! However, if you want to pave the way for journalism’s future, the only thing that helps is to look at things through the optimist’s glasses. And through these, the media world already looks much friendlier. Here are a few encouraging findings from the publication by the Reuters Institute for the Study of Journalism at the University of Oxford, whose material media professionals like to dissect and discuss:

Firstly, trust in established media is stable. This has been true for the global average for three years – this time, the report covers around 100,000 online users in 48 markets – but also for Germany, where the long-term study on media trust conducted by the University of Mainz recently recorded similar figures. Yes, things looked even better in Germany ten years ago. But the figure currently stands at 45 percent (Mainz study: 47 percent), which is respectable by international standards. As elsewhere, public broadcasters perform particularly well. In addition, the researchers note that users of all age groups prefer traditional media brands when they doubt the veracity of information. The oft-repeated narrative of dwindling trust in the media cannot be substantiated this year either – although trust in the media and media usage are two different things.

Secondly, attracting audiences to your own platforms – that can be done. At least, that’s what the Norwegians, Swedes, and Finns have proven. Public broadcasters there have invested heavily in their own video platforms and are very restrictive when it comes to posting their content on platforms such as YouTube or X. The Finnish broadcaster Yle now attracts more users to its platform than all other providers in Finland combined. The study tours to Scandinavia by many media professionals are therefore justified.

Thirdly, energetic journalists can benefit from the influencer trend and successfully start their own businesses. Frenchman Hugo Travers (Hugodecrypte) now reaches as many users aged 35 and under in France as established media brands: 22 percent of them said they had heard of him in the previous week. The audience appreciates the (perceived) authenticity and approachability of such personal brands. The fly in the ointment: many demagogues on the political right have benefited from this so far, and the line between journalism and opinion-making is blurred. Research by the news agency AFP has revealed that politicians in Nigeria and Kenya hired influencers specifically to spread false messages.

Fourthly, willingness to pay remains stable – and there is room for improvement. Okay, the percentage of people who pay for digital journalism averages 18 percent – that could certainly be higher. But it’s also quite something to know that, despite all the free content available online, around one in five people are willing to pay for journalism – in Germany, the figure is 13 percent. The researchers believe that the subscription market is far from exhausted. Where paying is already common practice, the key is to intelligently bundle offerings and create more interesting pricing models that cater to different types of users. Incidentally, regional and local newspapers in Germany stand out in international comparison with their subscription rates. On the one hand, the researchers speculate that this is an expression of federalism and the fact that many users strongly identify with their regions. On the other hand, projects such as data pooling in Drive or Wan-Ifra’s Table Stakes Europe may also have contributed to this success; they encourage the exchange of experiences, networking, and a focus on targeting specific audiences and user needs.

Fifth, text lives on – especially in this part of the world. Yes, there are highly respected experts who predict at AI conferences that the future of journalism lies in chat – specifically, spoken chat. People would rather talk and listen than write and read, they say. Elsewhere, media professionals complain that young users only digest short-form video, if they pay any attention to journalism at all. The figures do not support these claims. Text is still the most important format for 55 percent of users worldwide. This is different in some countries in Asia and Africa, which could also have to do with lower literacy rates. But it is definitely still worthwhile for media companies to invest in first-class texts. There is ample evidence that young people also enjoy listening to long podcasts or binge-watching series. Only one thing does not work today and will work less and less as AI delivers decent quality: poor text.

Sixth, the audience is smarter than many journalists believe. When it comes to the use of AI, for example, respondents expect pretty much what is predicted or feared in the industry: journalism production is likely to become cheaper and even faster, while factual accuracy and trustworthiness will decline. Young consumers in particular are skeptical about the media they use and verify a lot of what they see. In countries such as Thailand and Malaysia, where journalism is largely consumed via TikTok and Facebook, users are very well aware that they may be exposed to lies or fantasy news on these platforms. When it comes to “fake news,” 47 percent of respondents consider online influencers and politicians to be the greatest threat, which is likely a realistic assessment. And many users worry that they could miss important stories if media companies personalize their offerings more in order to turn these users into loyal customers.

Incidentally, what respondents worldwide want from journalism is: more impartiality, factual accuracy, transparency, and original research and reporting. Media researchers couldn’t have put it better themselves.

This column was published in German for the industry publication Medieninsider on 17th June 2025.


Peter Archer, BBC: “What AI doesn’t change is who we are and what we are here to do”

The BBC’s Director of Generative AI talks about the approach of his organization to developing AI tools, experiences with their usage and the rampant inaccuracies AI assistants produce – and what is needed to remedy them. This interview was conducted for the EBU News Report “Leading Newsrooms in the Age of Generative AI” that will be published by the European Broadcasting Union.

BBC research recently revealed disturbing inaccuracies when AI assistants provided news content drawing on BBC material. Roughly every second piece had issues. Did you expect this?

We expected to see a degree of inaccuracy, but perhaps not as high as we found. We were also interested in the range of errors AI assistants make, including factual errors, but also lack of context and the conflation of opinion and fact.

It was also interesting that none of the four assistants that we looked at – ChatGPT, Copilot, Gemini, and Perplexity – were much better or worse than any of the others, which suggests that there is an issue with the underlying technology.  

Has this outcome changed your view on AI as a tool for journalism?  

With respect to our own use of AI, it demonstrates the need to be aware of the limitations of AI tools.

We’re being conservative about the use of generative AI tools in the newsroom and our internal guidance is that generative AI should not be used directly for creating content for news, current affairs or factual content.

But we have identified specific use cases like summaries and reformatting that we think can bring real value.

We are not currently allowing third parties to scrape our content to be included in AI applications. We allowed ChatGPT and the other AI assistants to access our site solely for the purpose of this research. But, as our findings show, making content available can lead to distortion of that content.  

You emphasised that working with the AI platforms is critical to tackling this challenge. Will you implement internal consequences, too?

Generative AI poses a new challenge – because AI is being used by third parties to create content, like summaries of the news.

I think this new intersection of technology and content will require close working between publishers and technology companies, both to help ensure the accuracy of content and to make the most of the immense potential of generative AI technology.

So, you think the industry should have more self-confidence? 

Publishers, and the creative and media industries more broadly, are critical to ensuring generative AI is used responsibly. The two sectors – AI and creative industries – can work together positively, combining editorial expertise and understanding of the audience with the technology itself.

More broadly, the media industry should develop an industry position – what it thinks on key issues. The EBU can be a really helpful part of that. In the UK, regulators like Ofcom are interested in the AI space.

We need a constructive conversation on how we collectively make sure that our information ecosystem is robust and trusted. The media sector is central to that.

On the research, we will repeat the study, hopefully including other newsrooms. Because I’m fascinated to see two things: Do the assistants’ performances change over time? And do newsrooms of smaller languages see the same issues or maybe more? 

Do you think the media industry in general is behaving responsibly towards AI? Or what do you observe when you look outside of your BBC world?  

On the whole yes, and it’s great to see different perspectives as well as areas of common interest. For example, I think everybody is now looking at experiences like chat assistants.

There’s so much to do that it would be fantastic to identify common priorities across the EBU group, because working on AI can be hard and costly, and where we can collaborate, we should.

That said, we have seen some pretty high-profile mistakes in the industry – certainly in the first 12 to 18 months after ChatGPT launched – and excitement occasionally outpaced responsible use.

It’s also very helpful to see other organizations testing some of the boundaries because it helps us and other public service media organizations calibrate where we are and what we should be doing.  

There are huge hopes in the industry to use generative AI to make journalism more inclusive, transcend format boundaries to attract different audiences. Are these hopes justified?  

I’m pretty bullish. The critical thing is that we stay totally aligned to our mission, our standards, and our values. AI changes a lot, but what it doesn’t change is who we are and what we’re here to do.

One of the pilots we’re looking at how to scale takes audio content – in this example, a football broadcast – and uses AI to transcribe it, create a summary, and then a live text page.

Live text updates and pages on football games are incredibly popular with our audiences, but currently there are only so many games we can create a live page for. The ability to use AI to scale that so we can provide a live text page for every football game we cover on radio would be amazing.

One of the other things that we’re doing is going to the next level with our own BBC large language model that reflects the BBC style and standards. This approach to constitutional AI is really exciting. It’s being led out of the BBC’s R&D team – we’re incredibly lucky to have them.  

Do you have anything fully implemented yet?  

The approach that we’ve taken with generative AI is to do it in stages. In a number of areas, like the football example, we are starting small with working, tactical solutions that we can increase the use of while we work on productionised versions in parallel.

Another example is using AI to create subtitles on BBC Sounds. Again, here we’ve got an interim solution that we will use to provide more subtitles to programmes while in parallel we create a productionised version that is much more robust and easier to scale across all audio.

A key consideration is creating capabilities that can work across multiple use cases not just one, and that takes time.  

What is your position towards labelling?  

We have a very clear position: We will label the use of AI where there is any risk that the audience might be materially misled.

This means any AI output that could be mistaken for real is clearly labelled. This is particularly important in news where we will also be transparent about where AI has a material or significant impact on the content or in its production – for example if an article is translated using AI.

We’re being conservative because the trust of our audience is critical.  

What’s the internal mood towards AI? The BBC is a huge organization, and you are probably working in an AI bubble. But do you have any feel for how people are coming on board?  

One of the key parts of my role is speaking to teams and divisions and explaining what AI is and isn’t and the BBC’s approach.

Over the last 12 months, we’ve seen a significant increase in uptake of AI tools like Microsoft Copilot and many staff are positive about how AI can help them in their day-to-day work.

There are of course lots of questions and concerns, particularly as things move quickly in AI.

A key thing is encouraging staff to play with the tools we have so they can understand the opportunities and limitations. Things like Microsoft Copilot are now available across the business, also Adobe Firefly, GitHub Copilot, very shortly ChatGPT.

But it’s important we get the balance right and listen carefully to those who have concerns about the use of AI.

We are proceeding very carefully because at the heart of the BBC is creativity and human-led journalism with very high standards of editorial. We are not going to put that at risk.  

What’s not talked about enough in the context of generative AI and journalism? 

We shouldn’t underestimate the extent to which the world is changing around us. AI assistants, AI overviews are here to stay.

That is a fundamental shift in our information landscape. In two or three years’ time, many may be getting their news directly from Google or Perplexity.

As our research showed, there are real reasons for concern. And there is this broader point around disinformation. We’ve all seen the Pope in a puffer jacket, right? And we’ve all seen AI images of floods in Europe and conflict in Gaza.

But we’re also starting to see the use of AI at a very local level that doesn’t get much exposure but could nevertheless ruin lives.

As journalists, we need to be attuned to the potential misinformation on our doorstep that is hard to spot.  

This interview was published by the EBU on 26th March 2025.