Let’s talk more about what quality journalism truly means!

As a rapporteur for WAN-IFRA’s World News Media Congress 2025 in Krakow and a member of its Expert Panel, Alexandra had the honor of sharing her key insights on stage in the final wrap-up, together with co-experts Jeremy Clifford (UK) and Chris Janz (AUS). This is the written-up version:

🏄 It’s about strategy: No matter which technology or platform you are using, it won’t help you if you don’t know your mission and the needs of your audiences. And when you have a strategy, follow it – and cut down on the rest.

🏄 It’s about direct and loyal relationships with users and customers: Give people more reasons to come directly to your site and engage, to download your app, to subscribe to your products, to attend your events. In an AI-mediated environment, where referrals from search decline and your brand loses further visibility, this is the only way to make your business sustainable.

🏄 It’s about brand: Trust is rooted in brands. These could be personal brands or organizational brands. Double down on clarifying and delivering the value proposition of your brand. Young people tend to be less loyal or even brand agnostic. Put particular effort into attracting and retaining the next generations of users by understanding their needs.

🏄 It’s about emotion: In a sea of choices, signals that trigger emotional responses matter. Feeling connected is a human need. When so much of life is dominated by technology, people are even more likely to look for authenticity. Particularly young people want to be listened to, not talked down to.

🏄 It’s about place: In a globalized, sometimes confusing world, many people are looking for meaning and human connection in their communities. Much of political polarization is fueled by the rural-urban divide: people outside the political centres often feel unrepresented in public debates and policy making. There is potential for excellent storytelling away from where power is concentrated. Local journalism matters.

🏄 It’s about journalism: In an age when content can be produced at scale by AI, we need to move journalism up the value chain, as SVT’s Director General Anne Lagercrantz put it in a recent interview. And every news organization needs to explore and talk more about what that means for them. We don’t talk about what we mean by quality journalism nearly enough.
 

Anne Lagercrantz, SVT: “Journalism has to move up the value chain”

Anne Lagercrantz is the Director General of SVT Swedish Television. Alexandra talked to her about how generative AI has created more value for audiences, SVT’s network of super users, and what will make journalism unique as opposed to automated content generation.

Anne, many in the industry have high hopes that AI can do a lot to improve journalism, for example by making it more inclusive and appealing to broader audiences. Looking at SVT, do you see evidence for this?  

I can see some evidence in the creative workflows. We just won an award for our Verify Desk, which uses face recognition and geo positioning for verification.  

Then, of course, we provide automated subtitles and AI-driven content recommendations. In investigative journalism, we use synthetic voices to ensure anonymity.  

I don’t think we reach a broader audience. But it really is about being inclusive and engaging.

In our interview for the 2024 report, you said AI hadn’t been transformative yet for SVT. What about one year later? 

We’re one step further towards the transformative. For example, when I look at kids’ content, we now use text-to-video tools that are good enough for real productions. We used AI tools to develop games, and then we built a whole show around them.

So, we have transformative use cases but it hasn’t transformed our company yet.  

What would your vision be? 

Our vision is to use AI tools to create more value for the audience and to be more effective.  

However – and I hear this a lot from the industry – we’re increasing individual efficiency and creativity, but we’re not saving any money. Right now, everything is more expensive.  

Opinions are split on AI and creativity. Some say that the tools help people to be more creative, others say they are making users lazy. What are your observations?  

I think people are truly more creative. Take the Antiques Roadshow as an example, an international format that originated at the BBC.  

We’ve run it for 36 years. People present their antiques and have experts estimate their value. The producers used to work with still pictures but with AI support they can animate them.  

But again, it’s not the machine, it’s the human and the machine together.  

You were a newsroom leader for many, many years. What has helped to bring colleagues along and have them work with AI?  

I think we cracked the code. What we’ve done is, we created four small hubs: one for news, one for programmes, one for the back office and one for product. And the head of AI is holding it all together.  

The hubs consist of devoted experts who have designated time for coaching and experimenting with new tools. And then there’s a network of super users; we have 200 in the news department alone.

It has been such a great experience to have colleagues learn from each other.  

It’s a top-down movement but bottom-up as well. We combine that with training and AI learning days with open demos. Everyone has access and the possibility to take part.

We’ve tried to democratize learning. What has really helped to change attitudes and culture was when we created our own SVTGPT, a safe environment for people to play around in. 

What are the biggest conflicts about the usage of AI in the newsroom? 

The greatest friction comes from having enthusiastic teams and co-workers who want to explore AI tools when there are no legal or financial frameworks in place.

It’s like curiosity and enthusiasm meeting GDPR or privacy. And that’s difficult because we want people to explore, but we also want to do it in a safe manner. 

Would you say there’s too much regulation?  

No, I just think the AI is developing at a speed we’re not used to. And we need to find the time to have our legal and security department on board.  

Also, the market is flooded with new tools. And of course, some people want to try them all. But it’s not possible to assess quickly whether they’re safe enough. That’s when people feel limited.

No one seems to be eager to talk about ethics any longer because everyone is so busy keeping up and afraid of missing the boat. 

Maybe we are in a good spot because we can experiment with animated kids’ content first. That’s different from experimenting with news where we are a lot more careful.  

Do you get audience reaction when using AI?  

There are some reactions, more curious than sceptical.  

What also helps is that the Swedish media industry has agreed on AI transparency recommendations, saying that we will tell the audience when AI has had a substantial influence on the content. It could be confusing to label every tiny thing.

Where do you see the future of journalism in the AI age now with reasoning models coming up and everyone thinking, oh, AI can do much of the news work that has been done by humans before? 

I’m certain that journalism has to move up the value chain to investigation, verification and premium content.

And we need to be better in providing context and accountability.  

Accountability is so valuable because it will become a rare commodity. If I want to contact Facebook or Instagram, it’s almost impossible. And how do you hold an algorithm accountable?  

But it is quite easy to reach an editor or reporter. We are close to home and accountable. Journalists will need to shift from being content creators and curators to meaning makers.  

We need to become more constructive and foster trust and optimism.  

Being an optimist is not always easy these days. Do you have fears in the face of the new AI world? 

Of course. One is that an overreliance on AI will lead to a decline in critical thinking and originality.  

We’re also super aware that there are a lot of hallucinations. Also, that misinformation could undermine public trust, and that it is difficult to balance innovation with an ethical AI governance.  

Another fear is that we are blinded by all the shiny new things and that we’re not looking at the big picture.  

What do you think is not talked about enough in the context of journalism and AI? 

We need to talk more about soft values: How are we as human beings affected by new technology?  

If we all stare at our own devices instead of looking at things together, we will see loneliness and isolation rise further.  

Someone recently said we used to talk about physical health then about mental health, and now we need to talk about social health, because you don’t ever need to meet anyone, you can just interact with your device. I think that’s super scary.  

And public service has such a meaningful role in sparking conversations, getting people together across generations.  

Another issue we need to talk more about is: if there is so much personalization and everyone has their own version of reality, what will we put in the archives? We need a shared record.

This interview was published by the EBU on 16th April 2025 as an appetizer for the EBU News Report “Leading Newsrooms in the Age of Generative AI”.

Kasper Lindskow, JP Politiken Media Group: “Generative AI can Give Journalists Superpowers”

Kasper Lindskow is the Head of AI at the Danish JP/Politiken Media Group, one of the front runners in implementing GenAI-based solutions in the industry. He is also co-founder of the Nordic AI in Media Summit, a leading industry conference on AI. Alexandra spoke to him about how to bring people along with new technologies, conflicts in the newsroom, and how to get the right tone of voice into the journalism.

Kasper, industry insiders regard JP/Politiken as a role model in implementing AI in its newsrooms. Which tools have been the most attractive for your employees so far?  

We rolled out a basic ChatGPT clone in a safe environment to all employees in March 2024 and are in the process of rolling out more advanced tools. The key for us has been to “toolify” AI so that it can be used broadly across the organization, also for the more advanced stuff.  

Now, the front runners are using it in all sorts of different creative ways. But we are seeing the classic cases being used most widely, like proofreading and adaptation to the writing guides of our different news brands, for example suggesting headlines.  

We’ve seen growing use of AI also for searching the news archive and writing text boxes.  

Roughly estimated, what’s the share of people in your organization who feel comfortable using AI tools on a daily basis? 

Well, the front runners are experimenting with them regardless of whether we make tools available. I’d estimate this group to be between 10 and 15 percent of newsroom staff. I’d say we have an equally small group who are not interested in interacting with AI at all.  

And then we have the most interesting group, between 70 and 80 percent or so of journalists, who are interested and have tried to work with AI a little bit.

From our perspective, the most important part of rolling out AI is to build tools that fit that group to ensure a wider adoption. The potential is not in the front runners but in the normal, ordinary journalists. 

This sounds like a huge, expensive effort. How large is your team?  

We are an organization of roughly 3,000 people. Currently we are 11 people working full-time on AI development in the centralized AI unit, plus two PhDs. That’s not a lot. But we also work with local AI hubs in different newsrooms, so people there spend time working with us.

This is costly. It does take time and effort, in particular if you want high quality and you want to ensure everything aligns with the journalism.  

I do see a risk here of companies underinvesting and only doing the efficiency part and not aligning it with the journalism. 

Do you have public-facing tools and products? 

In recommender systems we do, because that’s about personalizing the news flow. That’s public facing and enabled by metadata.  

We’re also activating metadata in ways that are public facing, for example in “read more” lists that are not personalized.

But in general, we’re not doing anything really public facing with generative AI that does not have humans in the loop yet. 

What are the biggest conflicts around AI in your organization or in the newsroom? 

Most debates are about automated recommender systems. Because sometimes they churn out stuff that colleagues don’t find relevant.  

But our journalists have very different reading profiles from the general public. They read everything and then they criticize when something very old turns up.  

And then, of course, you have people thinking: “What will this do to my job?”  

But all in all, there hasn’t been much criticism. We are getting a lot more requests like: “Can you please build this for me?” 

What do you think the advancement of generative AI will do to the news industry as a whole? 

Let’s talk about risks first. There’s definitely a risk of things being rolled out too fast. This is very new technology. We know some limitations, others we don’t.  

So, it is important to roll it out responsibly at a pace that people can handle and with the proper education along the way.  

If you roll it out too fast there will be mistakes that would both hurt the rollout of AI and the potential you could create with it, impacting the trustworthiness of news.  

Another risk is not taking the need to align these systems with your initial mission seriously enough. 

Some organizations struggle with strategic alignment, could you explain this a bit, please?  

Generative AI has a well-known tendency to gravitate towards the median in its output – meaning that if you build a fast prototype with a small prompt and roll it out, your articles tend to become dull, ordinary and average.

It’s not necessarily a tool for excellence. It can be but you really need to do it right. You need to align it with the news brand and its particular tone of voice, for example. That requires extensive work, user testing and fine-tuning of the systems underneath.  

If we don’t take the journalistic work seriously, either because we don’t have resources to do it or because we don’t know it or move too fast, it could have a bad impact on what we’re trying to achieve. Those are the risk factors that we can impact ourselves. 

The other risks depend on what happens in the tech industry? 

A big one is when other types of companies begin using AI to do journalism. 

You mean companies that are not bound by journalistic values? 

If you’re not a public service broadcaster but a private media company, for the past 20 years you’ve experienced a structural decline.  

If tech giants begin de-bundling the news product even further by competing with journalists, this could accelerate the structural decline of news media.  

But we should talk about opportunities now. Because if done properly, generative AI in particular has massive potential. It can give journalists superpowers.  

Because it helps to enrich storytelling and to automate the boring tasks? 

We are not there yet. But once you have done your news work of finding the story, generative AI is close to having the potential to tell that story across different modalities.

And to me that is strong positive potential for addressing different types of readers and audiences. 

We included a case study on Magna in the first EBU News Report which was published in June 2024. What have your biggest surprises been since then? 

My biggest positive surprise is the level of feedback we are getting from our journalists. They’re really engaging with these tools. It’s extremely exciting for us as an AI unit that we are no longer working from assumptions but we are getting this direct feedback.  

I am positively surprised but also cautious about the extent to which we have been able to adapt these systems to our individual news brands. Our tool Magna is a shared infrastructure framework for everyone.  

But when you ask it to perform a task it gives very different output depending on the brand you request it for. You get, for example, a more tabloid-style response for Ekstra Bladet and a more sophisticated one for our upmarket Politiken.  

A lot of work went into writing very different prompts for the different brands.  

What about the hallucinations everyone is so afraid of? 

This was another surprise. We thought that factuality was going to be the big issue. We had many tests and found out that when we use it correctly and ground it in external facts, we are seeing very few factual errors and hallucinations.  

Usually, they stem from an article in the archive that is outdated because something new happened, not because of any hallucinations inside the model.  

The issue is more getting the feel right in the output, the tone of voice, the angles that are chosen in this publication that we’re working with – everything that has to do with the identity of the news brand.  

This interview was published by the EBU as an appetizer for the News Report “Leading Newsrooms in the Age of Generative AI” on 8th April 2025.

Prof. Pattie Maes, MIT: “We don’t have to simplify everything for everybody”

Prof. Pattie Maes and her team at the MIT Media Lab conduct research on the impact of generative AI on creativity and human decision-making. Their aim is to advise AI companies on designing systems that enhance critical thinking and creativity rather than encourage cognitive offloading. The interview was conducted for the upcoming EBU News Report “Leading Newsrooms in the Age of Generative AI”.

It is often said that AI can enhance people’s creativity. Research you led seems to suggest the opposite. Can you tell us about it?  

You’re referring to a study where we asked college students to write an essay and had them solve a programming problem.  

We had three different conditions: One group could use ChatGPT. Another group could only use search without the AI results at the top. And the third group did not have any tool.  

What we noticed was that the group that used ChatGPT wrote good essays, but they expressed less diversity of thought, were more similar to one another and less original. 

Because people put less effort into the task at hand? 

We have seen that in other experiments as well: people are inherently lazy. When they use AI, they don’t think as much for themselves. And as a result, you get less creative outcomes.  

It could be a problem: if, say, programmers at a company all use the same co-pilot to help them with coding, they won’t come up with new ways of doing things.

As AI data increasingly feeds new AI models, you will get more and more convergence and less improvement and innovation.  

Journalism thrives on originality. What would be your advice to media managers? 

Raising awareness can help. But it would be more useful if we built these systems differently.  

We have been building a system that helps people with writing, for example. But instead of doing the writing for you, it engages you, like a good colleague or editor, by critiquing your writing, and occasionally suggesting that you approach something from a different angle or strengthen a claim.  

It’s important that AI design engages people in contributing to a solution rather than automating things for them.

Sounds like great advice for building content management systems. 

Today’s off-the-shelf systems use an interface that encourages people to say: “write me an essay on Y, make sure it’s this long and includes these points of view.”  

These systems are designed to provide a complete result. We have grammar and spelling correctors in our editing systems, but we could have AI built into editing software that says, “over here your evidence or argument is weak.”  

It could encourage the person to use their own brain and be creative. I believe we can design systems that let us benefit from human and artificial intelligence.  

But isn’t the genie already out of the bottle? If I encouraged students who use ChatGPT to use a version that challenges them, they’d probably say: “yeah, next time when I don’t have all these deadlines”.   

We should design AI systems that are optimised for different goals and contexts, like an AI that is designed like a great editor, or an AI that acts like a great teacher.  

A teacher doesn’t give you the answers to all the problems, because the whole point is not the output the person produces, it is that they have learned something in the process.  

But certainly, if you have access to one AI that makes you work harder and another AI that just does the work for you, it is tempting to use that second one. 

Agentic AI is a huge topic. You did research on AI and agents as early as 1995. How has your view on this evolved since? 

Back when I developed software agents that help you with tasks, we didn’t have anything like today’s large language models. They were built by hand for a specific application domain and were able to do some minimal learning from the user.  

Today’s systems are supposedly AGI (artificial general intelligence) or close to it and are billed as systems that can do everything and anything for us.  

But what we are discovering in our studies is that they do not behave the way people behave. They don’t make the same choices, don’t have that deeper knowledge of the context, that self-awareness and self-critical reflection on their actions that people have.  

A huge problem with agentic systems will be that we think they are intelligent and behave like us, when they don’t. And it’s not just because they hallucinate.

But we want to believe they behave like humans? 

Let me give you an example. When I hired a new administrative assistant, I didn’t immediately give him full autonomy to do things on my behalf.  

I formed a mental model of him based on the original interview and his résumé. I saw “oh, he has done a lot of stuff with finance, but he doesn’t have much experience with travel planning.” So when some travel had to be booked, I would tell him, “Let me know the available choices so that I can tell you what I value and help you make a choice.”  

Over time my mental model of the assistant develops, and his model about my needs and preferences. We basically learn about each other. It is a much more interactive type of experience than with AI agents.  

These agents are not built to check and say, “I’m not so confident making this decision. So, let me get some input from my user.” It’s a little bit naïve that AI agents are being portrayed as “they are ready to be deployed, and they will be wonderful and will be able to do anything.”  

It might be possible to build agents that have the right level of self-awareness, reflection and judgment, but I have not heard many developers openly think about those issues. And it will require a lot of research to get it right.  

Is there anything else your research reveals about the difficulties with just letting AI do things for us? 

We have done studies on decision making with AI. What you expect is that humans make better decisions if they are supported by an AI that is trained on a lot of data in a particular domain.  

But studies showed that was not what happened. In our study, we let people decide whether some newspaper headline was fake news or real news. What we found was when it’s literally just a click of a button to get the AI’s opinion, many people just use the AI’s output.  

There’s less deep engagement and thinking about the problem because it’s so convenient. Other researchers got similar results with experiments on doctors evaluating medical diagnoses supported by AI, for example. 

You are telling us that expectations of AI support are overblown?

I am an AI optimist. I do think it is possible to integrate AI into our lives in a way that it has positive effects. But we need to reflect more about the right ways to integrate it.  

In the case of the newspaper headlines we did a study that showed that if AI first engages you in thinking about a headline and asks you a question about it, it improves people’s accuracy, and they don’t accept the AI advice blindly.  

The interface can help with encouraging people to be a little bit more mindful and critical.  

This sounds like it would just need a little technical fix.  

It is also about how AI is portrayed. We talk about these systems as artificial forms of intelligence. We are constantly told that we’re so close to AGI. These systems don’t just converse in human-like ways, but with an abundance of confidence.

All of these factors trick us into perceiving them as more intelligent, more capable and more human than they really are. But they are more what Emily Bender, a professor at the University of Washington, called “stochastic parrots”.  

LLMs (large language models) are like a parrot that has just heard a lot of natural language by hearing people speak and can predict and imitate it pretty well. But that parrot doesn’t understand what it’s talking about.  

Presenting these systems as parrots rather than smart assistants would already help by reminding people to constantly think “Oh, I have to be mindful. These systems hallucinate. They don’t really understand. They don’t know everything.”  

We work with some AI companies on some of these issues. For example, we are doing a study with OpenAI on companion bots and how many people risk becoming overly attached to chat bots.  

These companies are in a race to get to AGI first, by raising the most money and building the biggest models. But I think awareness is growing that if we want AI to ultimately be successful, we have to think carefully about the way we integrate it in people’s lives.  

In the media industry there’s a lot of hope that AI could help journalism to become more inclusive and reach broader audiences. Do you see a chance for this to happen? 

These hopes are well-founded. We built an AI-based system for kids and older adults who may have trouble processing language that the average adult can process.  

The system works like an intra-language translator – it takes a video and translates it into simpler language while still preserving the meaning.  

There are wonderful opportunities to customize content to the abilities and needs of the particular user. But at the same time, we need to keep in mind that the more we personalize things, the more everybody would be in their own bubble, especially if we also bias the reporting to their particular values or interests.  

It’s important that we still have some shared media, shared news and a shared language, rather than creating this audience of one where people can no longer converse with others about things in the world that we should be talking about. 

This connects to your earlier argument: customisation could make our brains lazy.  

It is possible to build AI systems that have the opposite effect and challenge the user a little bit. This would be like being a parent who unconsciously adjusts their language for the current ability of their child and gradually introduces more complex language and ideas over time.  

We don’t have to simplify everything for everybody. We need to think about what AI will do to people and their social and emotional health and what artificial intelligence will do to natural human intelligence, and ultimately to our society.  

And we should have talks about this with everybody. Right now, our AI future is decided by AI engineers and entrepreneurs, which in the long run will prove to be a mistake. 

The interview was first published by the EBU on 1st April 2025.

Peter Archer, BBC: “What AI doesn’t change is who we are and what we are here to do”

The BBC’s Director of Generative AI talks about the approach of his organization to developing AI tools, experiences with their usage and the rampant inaccuracies AI assistants produce – and what is needed to remedy them. This interview was conducted for the EBU News Report “Leading Newsrooms in the Age of Generative AI” that will be published by the European Broadcasting Union.

BBC research recently revealed disturbing inaccuracies when AI agents provided news content and drew on BBC material. About every second piece had issues. Did you expect this?  

We expected to see a degree of inaccuracy, but perhaps not as high as we found. We were also interested in the range of areas where AI assistants struggle, including factual errors, but also lack of context and the conflation of opinion and fact.

It was also interesting that none of the four assistants that we looked at – ChatGPT, Copilot, Gemini, and Perplexity – were much better or worse than any of the others, which suggests that there is an issue with the underlying technology.  

Has this outcome changed your view on AI as a tool for journalism?  

With respect to our own use of AI, it demonstrates the need to be aware of the limitations of AI tools.

We’re being conservative about the use of generative AI tools in the newsroom and our internal guidance is that generative AI should not be used directly for creating content for news, current affairs or factual content.

But we have identified specific use cases like summaries and reformatting that we think can bring real value.

We are not currently allowing third parties to scrape our content to be included in AI applications. We allowed ChatGPT and the other AI assistants to access our site solely for the purpose of this research. But, as our findings show, making content available can lead to distortion of that content.  

You emphasised that working with the AI platforms was critical to tackling this challenge. Will you implement internal consequences, too?

Generative AI poses a new challenge – because AI is being used by third parties to create content, like summaries of the news.

I think this new intersection of technology and content will require close working between publishers and technology companies to both help ensure the accuracy of content but also to make the most of the immense potential of generative AI technology.  

So, you think the industry should have more self-confidence? 

Publishers, and the creative and media industries more broadly, are critical to ensuring generative AI is used responsibly. The two sectors – AI and creative industries – can work together positively, combining editorial expertise and understanding of the audience with the technology itself.

More broadly, the media industry should develop an industry position – what it thinks on key issues. The EBU can be a really helpful part of that. In the UK, regulators like Ofcom are interested in the AI space.

We need a constructive conversation on how we collectively make sure that our information ecosystem is robust and trusted. The media sector is central to that.

On the research, we will repeat the study, hopefully including other newsrooms, because I’m fascinated to see two things: Do the assistants’ performances change over time? And do newsrooms working in smaller languages see the same issues, or maybe more?

Do you think the media industry in general is behaving responsibly towards AI? Or what do you observe when you look outside of your BBC world?  

On the whole yes, and it’s great to see different perspectives as well as areas of common interest. For example, I think everybody is now looking at experiences like chat assistants.

There’s so much to do that it would be fantastic to identify common priorities across the EBU group, because working on AI can be hard and costly, and where we can collaborate, we should.

That said, we have seen some pretty high-profile mistakes in the industry – certainly in the first 12 to 18 months after ChatGPT launched – and excitement occasionally outpaced responsible use.

It’s also very helpful to see other organizations testing some of the boundaries because it helps us and other public service media organizations calibrate where we are and what we should be doing.  

There are huge hopes in the industry to use generative AI to make journalism more inclusive and to transcend format boundaries to attract different audiences. Are these hopes justified?

I’m pretty bullish. The critical thing is that we stay totally aligned to our mission, our standards, and our values. AI changes a lot, but what it doesn’t change is who we are and what we’re here to do.

One of the pilots that we’re looking at how to scale takes audio content – in this example, a football broadcast – and uses AI to transcribe it, create a summary and then a live text page.

Live text updates and pages on football games are incredibly popular with our audiences, but currently there are only so many games we can create a live page for. The ability to use AI to scale that so we can provide a live text page for every football game we cover on radio would be amazing.

One of the other things that we’re doing is going to the next level with our own BBC large language model that reflects the BBC style and standards. This approach to constitutional AI is really exciting. It’s being led out of the BBC’s R&D team – we’re incredibly lucky to have them.  

Do you have anything fully implemented yet?  

The approach that we’ve taken with generative AI is to do it in stages. In a number of areas, like the football example, we are starting small with working, tactical solutions that we can increase the use of while we work on productionised versions in parallel.

Another example is using AI to create subtitles on BBC Sounds. Again, here we’ve got an interim solution that we will use to provide more subtitles to programmes while in parallel we create a productionised version that is much more robust and easier to scale across all audio.

A key consideration is creating capabilities that can work across multiple use cases not just one, and that takes time.  

What is your position towards labelling?  

We have a very clear position: We will label the use of AI where there is any risk that the audience might be materially misled.

This means any AI output that could be mistaken for real is clearly labelled. This is particularly important in news where we will also be transparent about where AI has a material or significant impact on the content or in its production – for example if an article is translated using AI.

We’re being conservative because the trust of our audience is critical.  

What’s the internal mood towards AI? The BBC is a huge organization, and you are probably working in an AI bubble. But do you have any feel for how people are coming on board?  

One of the key parts of my role is speaking to teams and divisions and explaining what AI is and isn’t and the BBC’s approach.

Over the last 12 months, we’ve seen a significant increase in uptake of AI tools like Microsoft Copilot and many staff are positive about how AI can help them in their day-to-day work.

There are of course lots of questions and concerns, particularly as things move quickly in AI.

A key thing is encouraging staff to play with the tools we have so they can understand the opportunities and limitations. Things like Microsoft Copilot are now available across the business, also Adobe Firefly, GitHub Copilot, very shortly ChatGPT.

But it’s important we get the balance right and listen carefully to those who have concerns about the use of AI.

We are proceeding very carefully because at the heart of the BBC is creativity and human-led journalism with very high standards of editorial. We are not going to put that at risk.  

What’s not talked about enough in the context of generative AI and journalism? 

We shouldn’t underestimate the extent to which the world is changing around us. AI assistants, AI overviews are here to stay.

That is a fundamental shift in our information landscape. In two or three years’ time, many may be getting their news directly from Google or Perplexity.

As our research showed, there are real reasons for concern. And there is this broader point around disinformation. We’ve all seen the Pope in a puffer jacket, right? And we’ve all seen AI images of floods in Europe and conflict in Gaza.

But we’re also starting to see the use of AI at a very local level that doesn’t get much exposure but could nevertheless ruin lives.

As journalists, we need to be attuned to the potential misinformation on our doorstep that is hard to spot.  

This interview was published by the EBU on 26th March 2025.

Tav Klitgaard, CEO Zetland: “We don’t like perfect, because perfect is not trustworthy”

The Danish news outlet Zetland is one of the few big success stories among European digital media brands. It was profitable three years after launching, attracts a comparatively young audience and is set to launch a new brand in Finland in January 2025. I spoke to its CEO Tav Klitgaard about how to engage audiences, working business models and the future of journalism in an AI-supported world.

Tav, interviews shouldn’t begin with praise, but Zetland is an outstanding success story in digital media. Your team founded it in 2016, and it was profitable three years later. Today you have more than 40,000 digital subscribers. What do you do that others don’t?

An advantage was that we did not have any print legacy when we started. We had the privilege of sitting down and thinking really hard about what news media means. Among other things, we found out that it means journalism is an experience. You have the content and then you have the distribution. Those two together create an experience. The value does not lie in the journalism. The value lies in the moment when the journalism becomes an experience that changes something in your head.

But you seem to be very proud of your journalism?

Sure, we are! But existing companies way too often produce journalism from a sender’s perspective. We always try to have a receiver perspective. I would see this as the key reason for our success.

Zetland doesn’t do breaking news but publishes just a few in-depth stories a day; it focuses on explanation and analysis and has offered everything in audio format from the beginning.

Our first principle is that we are our members. This is why we came up with audio, because we asked them and they said: ‘Well, I really would want to consume your articles, but I’ve been looking into a screen for 10 hours today and I’m tired of it.’ We said, then audio could be a thing for you. And it turned out we were right.

In the age of generative AI, converting stuff to audio will be very, very easy. Won’t you lose your competitive advantage when everyone can just press the audio button everywhere?

I believe the last frontier against AI is personality. Audio is awesome at creating an intimate relationship. So, when we create a human audio product, we don’t use an AI robot voice, because the problem with that is that it’s too good. It’s perfect. We don’t like perfect because perfect is not trustworthy. You should not be perfect, you should be a human. And that’s what we are doing in all our products, creating something that is human.

Managers from traditional news outlets envy you because your audience skews young.

We are not a news outlet for young people, but we do have a pretty young demographic. About 50 percent of our audience is in their 20s and 30s. And we believe that the way you build trust with a younger audience is to be human. It’s a giga trend in the world that trust is moving from authorities to persons. That’s also the reason behind the success of Instagram or TikTok. That’s why we always focus on the tone of voice and the storytelling. We imagine ourselves to be your friend who gets into the car with you and tells you the story from the passenger seat. The world is super interesting. But there needs to be energy and engagement behind the stories we tell.

Part of your distribution model is that people need to pay for a membership, but they can share the story with as many people as they like. Don’t you fear that many free riders are taking advantage of you?

That’s right, our readers can share everything for free. Actually, the more members share our content, the happier we are. It proves to us and themselves that it has value to them, and it means more people get to know us. Journalism is great when it is discussed, and it should be easy for our members to get someone to discuss it with. It’s also great for our sources that they can freely share what they told us in their own network.

A Zetland membership is pretty expensive compared to other digital subscriptions though. 

Yeah, it costs around 18 or 19 euros per month. I keep hearing: Young people don’t want to pay for news. That is not true. You have to look at the user needs. If people don’t want to pay, it’s because your product is not valuable to them. Look at, let’s say, a person who is 25 years old. She has a strong need to understand the world. Who am I in this world? What does society mean for me? What do I mean for society? The key is to not require a whole lot of prior knowledge for her to understand the world but to tell her super interesting stories about the world. Younger audiences are underserved by the media, at least in Denmark. If you’re 60 and a doctor and live in Copenhagen, well, you have a plethora of options. If you’re 26 and a nurse working at a rural hospital, you don’t have a lot of places to go to in the media world. So, what happens? You end up on TikTok. The right price is whatever value the product gives to the user. Our average member spends more than seven hours per month with us. I think €18.50 is actually very cheap for seven hours of value.

Are you still growing or have you reached a ceiling with your particular audience?

We are growing very much. On the group level, we will have a revenue growth of at least 40 percent this year and I pretty conservatively project that to be the case next year, too. It’s not a 40 percent growth in Denmark, but it’s a 40 percent for the group which consists of journalism outlets in Denmark and now in Finland. And then we also sell other things, for instance, we sell books and technology.

So, you’re not only a media and journalism company, but also a tech company.

Exactly. The day before ChatGPT was launched, we launched our transcription service. That means we have been working with large language models and generative AI from very early on. The number one use case people think about when thinking about AI and journalism is transcription. So, we built a transcription service that, for the first time ever, worked in Danish. That is basically contributing almost a quarter of our revenue this year. We also sell our distribution technology. We license the website and app and CMS that we built for Zetland to other media companies. It’s not something that we do to become filthy rich, but we need to be tech-savvy. Spotify is spending a gazillion dollars on tech development, and we need to be able to compete with Spotify.

You are planning to scale the Zetland concept internationally? Tell us about the Finnish project that you made headlines with recently.

The Finland case is super exciting for us. Three or four years ago we decided that we would begin the international journey. My background is in tech, and in the tech industry we always say: if you have product-market fit, the next thing you need to do is scale. It’s not as easy as translating something, but we asked ourselves if the concept was replicable outside of Denmark. In the beginning of 2024, we hired a founding team in Finland and tasked them with creating a splash in the market to test whether our assumptions were right: that there is no big difference between Finnish people and Danish people in terms of what user needs they have. We talked about our mission of quality journalism and then said: If you’re willing to pay for this, we’re willing to build it. That’s what we told them in September and October. What happened was that 10,000 Finns decided to prepay a subscription worth around 100 euros, which was much more than we had anticipated. We got 10,000 Finns to pay for something that does not exist!

When will it start to exist?

We are currently hiring a ton of people in Helsinki, a lot of journalists, and then we will start publishing the Finnish version of Zetland on 15th January.

What will you name it?

Zetland in Finland is called Uusi Juttu, meaning something like “The new thing”. Check it at uusijuttu.fi.

Do you have other markets where you have these kinds of assumptions or is this a Nordic thing? After all, the willingness to pay for journalism is much lower in other regions of Europe.

I think what we have learned to do in Denmark is very usable in a lot of different markets in Europe. It could also be outside of Europe, but it’s going to take us some time, some partners, and some money to be able to prove that I’m right.

Of course, I have to ask you about Germany now.

Well, Germany is definitely interesting, and it’s close to Denmark. If anyone who reads this thinks they want to build that in Germany, please reach out, because it’s also obvious for us that we are not going to be able to do it alone. We would need German partners who agree to our mission and are awesome journalists, tech people, and businesspeople.

Is there still some advice you could give to legacy media, or do you think they’re just lost?

If you have a print paper, you have to really, really think about why you have a print paper. Most managers say: because it’s profitable. This means they do not focus 100 percent on the future and will innovate at a much slower pace.

What is the future of journalism in the age of AI?

I think there is a golden future for journalism. I think that the user needs that journalism fills are very much there, also among younger audiences. People need someone with feelings and with human intent to tell them about what’s going on. Plus, I believe that besides information, people want community and a sense of belonging. And I think journalism is wonderful at filling these needs. That’s why I believe there is a golden future.

So, it will be a golden future for less journalism, a lower volume at least.

Yes, I think there has been a lot of work within journalism that has really not been super creative, and that will go away.

Interview: Alexandra Borchardt

This text was published in German and English by the industry publication Medieninsider on 5th January 2025. 

 

Nieman Lab Prediction 2025: Newsrooms Reinvent Their Political Journalism

In traditional newsrooms, political journalists tend to be those who call the shots. Even in the absence of statistics, it’s safe to bet that the majority of editors-in-chief used to cover politics before rising to the top job. This has shaped pretty much all of journalism. The “he said, she said” variety of news coverage that makes up a large part of political reporting has pervaded other subject areas as well. The attempt to give opposing parties a voice led to so-called “both-sides journalism”, which operates under the assumption that, in the marketplace of ideas and opinions, those that serve people best will survive.

But the past few years have already demonstrated that this kind of journalism is not sustainable. First and foremost, it doesn’t serve humanity well in the face of imminent and severe threats like climate change or attacks on democratic institutions, where bothsidesism is not an option. Also, newsroom metrics have shown again and again that audiences tend to be put off by news content that just amplifies the opinions and intentions of decision makers without linking them to people’s lives. News avoidance is real and has been growing.

“What if reporting on racist, misogynist, dehumanizing opinions and comments has the opposite effect from what most journalists intend — normalizing propaganda and even making political candidates seem interesting?”

The result of the 2024 U.S. election and the rise of authoritarian-leaning extremists in other democracies should have served as the final wake-up call for political journalism. What if the media’s calling out of those who don’t respect democracy and its institutions doesn’t deter people from voting exactly those politicians into office? What if reporting on racist, misogynist, dehumanizing opinions and comments has the opposite effect from what most journalists intend — normalizing propaganda and even making political candidates seem interesting? And what if newsrooms that complain about political polarization have contributed their fair share to it themselves? Polarization has been a successful business model for journalism, after all. These are hard questions that demand answers.

If they want to stay relevant in serving the public, newsrooms will have to double down on studying the impact of their political journalism and think about the consequences. Otherwise, they will continue to preach to the converted and fail in their mission to inform people about real threats to their livelihoods. While there is no quick recipe for disrupting and reinventing political journalism, some of the following ingredients might help to develop a strategy and improve the result:

First, studying human behavior. There is plenty of research and evidence out there on how propaganda works, how those in or aspiring to power use the media to amplify it, and how people react to it. If journalists don’t want to be tools in the hands of those ready to abolish press freedom and erode democratic institutions, they better familiarize themselves with these mechanisms. Insights from communication and behavioral psychology should be part of all journalism education and shape newsroom debates. It has become obvious that values and emotions like a sense of justice, pride, shame, and fear shape people’s voting decisions often more than rational choice theory would suggest. Newsrooms must account for that.

Second, chasing data, not just quotes. For political journalists, quotes are data; for other people, not so much. They deserve to know what happened, not what someone says they might want to see happen or intends to make happen once in power. Data journalism — increasingly improved by the capabilities of artificial intelligence — provides plenty of opportunities to paint pictures of the real world instead of the world of intentions and declarations. Political journalism can be more interesting when people see how politicians have actually performed in contexts where they were responsible. Needless to say, data journalism needs to be made engaging to appeal to a variety of audiences.

Third, connecting reporting to people’s everyday lives. Politicians have an agenda and journalists are often swayed by it; people are likely to have different ones. Observers might have been baffled that voters didn’t give the Biden administration credit for the strong state of the American economy, but apparently all many people saw before casting their vote was their rising cost of living. Most people care deeply about issues like housing, personal security, the education of their children, health, and care for aging relatives. Only, most of these issues are linked to citizens’ immediate surroundings, their communities. Unsurprisingly, local news tops the list of interests across all age groups when people are asked about their journalism preferences, as the 2024 Digital News Report revealed. But with diminishing investment in local journalism, many of these topics have been undercovered in recent years. A disconnect between political journalism and people’s lives has emerged that needs to be remedied.

Fourth, choosing appropriate formats. Modern newsrooms target different audiences with different formats on the platforms these audiences engage with. Political journalism is still too focused on the audiences it has traditionally served. It is often made for well-educated groups and decision makers. If newsrooms really want to reach people beyond the community of like-minded news consumers, they need to explore how these audiences can be attracted. There are high hopes in the industry that artificial intelligence can assist in making journalism more appealing and inclusive by transcending formats — converting content to text, video, audio, interactive chat, or even graphic novel at the push of a button. It is too early to tell how this will affect news consumption and audience figures in the real world, but many media leaders expect opportunities for stronger news uptake.

Fifth, learning from other fields of journalism. Political journalists tend to be aware of their importance in the internal hierarchy. Many of them feel proud to do “the real thing” instead of covering entertainment, sports, personal finance, and the like. This might help them to digest the fact that colleagues in other fields score higher in the audience metrics department. But it’s exactly these colleagues political journalists could learn from to improve their own game. They could ask the science desk how best to deal with data and how to break down complex matters into digestible formats. They might get some advice on humanizing stories from those reporting on sports or celebrities. They could learn from investigative reporters how to pace themselves when seemingly sensational material is at hand and how to cooperate with others. And they could practice churning out the occasional service story. In fact, the whole newsroom should be interested in improving political journalism, since at times politics is part of most subject matters.

If journalism wants to maintain its legitimacy, relevance, and impact — particularly in an age when artificial intelligence will make content production ubiquitous — it needs to urgently rethink political journalism. Making it appealing to broader audiences and attracting them to engage with it might be no less than a matter of its survival. Many media leaders are aware of this. Chances are that in 2025 newsrooms will finally rethink the paradigm of political journalism.

This text was published by Harvard University’s Nieman Lab in their Journalism Predictions for 2025 series. 

AI Labels in Journalism: Why Transparency Doesn’t Always Build Trust

The use of artificial intelligence in journalism requires sensitivity toward the audience. Trust is lost quickly. Transparency is supposed to remedy this. But labeling could even have a negative impact. This column discusses what to do.

In the case of Sports Illustrated, the issue was obvious. When it leaked out that some columns and reports at the renowned American sports magazine were not produced by clever minds but by large language models, it cost the publication plenty of subscriptions and ultimately CEO Ross Levinsohn his job. Newsrooms that use journalist imitations made by artificial intelligence are therefore better off doing so openly; a clear transparency notice is needed. The Cologne-based Express, for example, uses a disclaimer for its avatar reporter Klara Indernach. But even when stated openly, things can go wrong. The radio station Off Radio in Krakow, which had proudly announced that it would present its listeners with a program run solely by AI, had to abandon the experiment after a short time. An avatar presenter had conducted a fictitious interview with literature Nobel Prize winner Wislawa Szymborska and asked her about current affairs – only the author had passed away in 2012. The audience was horrified.

Nevertheless, transparency and an open debate about whether, when and to what extent newsrooms use AI when creating content are currently seen as a kind of silver bullet in the industry. Most ethical guidelines on the editorial use of AI are likely to contain a paragraph or two on the subject. There is great fear of damaging one’s own brand through careless use of AI and of further undermining media trust, which has been eroding in many places. So, it feels safer to point out that this or that summary or translation was generated by language models. How this is received by readers and users, however, has hardly been researched – and it is also controversial among industry experts. While some are in favor of labels similar to those used for food, others point out that alerts like these could make the public even more suspicious. After all, the words “AI-assisted” could also be interpreted as editors wanting to shirk their responsibility in case of mistakes.

We also know from other areas that too much transparency can diminish trust just as much as too little. A complete list of all mishaps and malpractice displayed in the foyer of a hospital would probably deter patients rather than inspire confidence. If you read a warning everywhere, you either flee or stop looking. Rachel Botsman, a leading expert on the subject, defines trust as “a confident relationship with the unknown”. Transparency and control do not strengthen trust, but rather make it less necessary because they reduce the unknown, she argues.  

Much more important for building trust are good experiences with the brand or with the individuals who represent it. To achieve this, an organization needs to communicate openly about the steps it takes and the processes it has in place to prevent mishaps. In aviation, this includes redundant technology, a two-person cockpit, and fixed procedures; in newsrooms, the four-eyes principle and the two-source rule. When people trust a media brand, they simply assume that the company structures and regularly checks all its processes to the best of its knowledge, experience, and competence. If AI is highlighted as a special case, the impression could creep in that the newsroom doesn’t quite trust the technology itself.

Felix Simon, a researcher at the Reuters Institute in Oxford, therefore considers general transparency rules to be just as impractical as the widely used “human in the loop” principle, which requires that a person always does the final check. In a recent essay, he writes that it is a misconception that the public’s trust can be won back with these measures alone.

Many journalists also do not realize how strongly their organization’s reporting on artificial intelligence shapes their audience’s relationship with it. Anyone who constantly reads and hears in interviews, essays, and podcasts what devilish forces humanity is supposedly being exposed to will hardly be open-minded about the technology when the otherwise esteemed newsroom suddenly starts placing AI references everywhere. As expected, respondents in surveys tend to be skeptical when asked about the use of AI in journalism – not least as a consequence of the media’s own reporting.

It is therefore important to strengthen the skills of reporters so that they approach the topic of AI in a multi-layered way and provide constructive insights instead of meandering between hype and doomsday scenarios. The humanization of AI – whether through avatar reporters or simply in the choice of words – does not exactly help to give the audience a realistic picture of what language and computing models can and cannot do.

People’s impression of AI will also be strongly shaped by their own experiences with it. Even today, there is hardly a student who does not use tools such as ChatGPT from time to time. Those who program for a living make use of the lightning-fast models, and AI is increasingly becoming an everyday tool for office workers, just like spell checking, Excel calculations, or voice input. However, it will become less and less obvious which AI sits behind which tool, as tech providers bundle it into the service package much like the autofocus in a smartphone camera. AI labels could therefore soon seem like a relic from a bygone era.

At a recent conference in Brussels hosted by the Washington-based Center for News, Technology & Innovation, one participant suggested that media organizations should consider labeling man-made journalism. What at first sounds like a joke actually has a serious background. The industry needs to quickly realize how journalism can retain its uniqueness and relevance in a world of rapidly scaling automated content production. Otherwise, it will soon have bigger problems than the question of how to characterize AI-supported journalism in individual cases.   

This text was published in German in the industry publication Medieninsider, translated by DeepL and edited by the author – who secretly thinks that this disclaimer might make her less vulnerable to criticism of her mastery of the English language.

Trusted Journalism in the Age of Generative AI

Media strategist Lucy Küng regards generative AI as quite a challenge for media organizations, particularly since many of them have not yet fully mastered digital transformation. But she also has some advice in store: “The media industry gave away the keys to the kingdom once – that shouldn’t happen again”, she said in an interview conducted for the 2024 EBU News Report “Trusted Journalism in the Age of Generative AI”. Ezra Eeman, Director for Strategy and Innovation at the Netherlands’ public broadcaster NPO, thinks that media organizations have a moral duty to be optimists about the technology, which will increase their opportunities to fulfill their public service mission. These are just two voices; many more are to come.

The report, which is based on about 40 extensive interviews with international media leaders and experts, discusses the opportunities and risks of generative AI with a special focus on practical applications, management challenges, and ethical considerations. The team of authors includes Felix Simon (Oxford Internet Institute), Kati Bremme (France Télévisions), and Olle Zachrison (Sveriges Radio); Alexandra is the lead author. In the run-up to and following publication, the EBU will publish a series of interviews. They will be shared here:

Nic Newman, Senior Research Associate, Reuters Institute: “Transparency is important, but the public does not want AI labels everywhere”, published on 28th June 2024.

Sarah Spiekermann, Professor at WU Wien: “We need to seriously think about the total cost of digitization”, published on 13th June 2024.

Kai Gniffke, Director General SWR, Chair ARD: “AI is an incredible accelerator of change … It’s up to us to use this technology responsibly”, published on 3rd June 2024.

Jane Barrett, Global Editor at Reuters: “We have to educate ourselves about AI and then report the hell out of it”, published on 16th May 2024.

Ezra Eeman, Strategy and Innovation Director NPO: “We have a moral duty to be optimists”, published on 17th April 2024.

Lucy Küng, independent media strategist: “The media industry gave away the keys to the kingdom once – that shouldn’t happen again”, published on 27th March 2024.

Nieman Lab Prediction 2024: Everyone in the Newsroom Gets Training

Up to now, the world’s newsrooms have been populated by roughly two phenotypes. On the one hand, there have been the content people (many of whom would never call their journalism “content,” of course). These include seasoned reporters, investigators, or commentators who spend their time diving deep into subjects, doing research and analysis, and cultivating sources, and who usually don’t want to be bothered by “the rest.”

On the other hand, there has been “the rest.” These are the people who understand formats, channel management, metrics, editing, products, and audiences, and are ever on the lookout for new trends to help the content people’s journalism thrive and sell. But with the advent of generative AI, taking refuge in the old and surprisingly stable world of traditional journalism roles will not be an option any longer. Everyone in the newsroom has to understand how large language models work and how to use them — and then actually use them. This is why 2024 will be the year when media organizations will get serious about education and training.

“We have to bridge the digital divide in our newsrooms,” says Anne Lagercrantz, vice CEO of Swedish public broadcaster SVT. This requires educating and training all staff, even those who until now have shied away from observing what is new in the industry. While in the past it was perfectly acceptable for, say, an investigative reporter not to know the first thing about SEO, TikTok algorithms, or newsletter open rates, now everyone involved with content needs to be aware of the capabilities, deficiencies, and mechanics of large language models, reliable fact-checking tools, and the legal and ethical responsibilities that come with their use. Additionally, AI has the potential to transform good researchers and reporters into outstanding ones, serving as a powerful extension of the human brain. Research from Harvard Business School suggested that consultants who extensively used AI finished their tasks about 25% faster and outperformed their peers by 40% in quality. It will be in the interest of everyone, individuals and their employers alike, that no one falls behind.

But making newsrooms fit for these new challenges will be demanding. First, training requires resources and time. But leadership might be reluctant to free up both or tempted to invest in flashy new tools instead. Many managers still fall short of understanding that digital transformation is more a cultural challenge than it is a tech challenge.

Second, training needs trainers who know their stuff. These are rare finds at a time when AI is evolving as rapidly as it is over-hyped. You will see plenty of consultants out there, of course. But it will be hard to tell those who really know things from those who just pretend in order to get a share of the pie. Be wary when someone flashes something like “the ten must-have AI tools”, warns Charlie Beckett, founder of the JournalismAI project at the London School of Economics.

Third, training can be a futile exercise when it is not paired with doing. With AI in particular, the goal should be to establish a culture of experimentation, collaboration, and transparency rather than making training a mechanical exercise. Technological advances will come much faster than even the most proficient trainer could foresee.

Establishing a learning culture around the newsroom should therefore be a worthwhile goal for 2024 and an investment that will pay off in other areas as well. Anyone who is infected with the spirit of testing and learning will likely stretch their minds in areas other than AI, from product development to climate journalism. So many of today’s challenges for newsrooms require constant adaptation, working with data, and building connections with audiences who are more demanding, volatile, and impatient than they used to be. It is important that every journalist embraces at least some responsibility for the impact of their journalism.

It is also time that those editorial innovators who tend to run into each other at the same conferences open their circles to include all of the newsroom. Some might discover that a few of their older colleagues of the content-creator-phenotype could teach them a thing or two as well — for example, how to properly use a telephone. In an age when artificial fabrication of text, voice, or image documents is predicted to evolve at a rapid pace, the comeback of old-style research methods and verification techniques might become a thing. But let’s leave this as a prediction for 2025.

This post was published by Harvard University’s Nieman Lab in their Journalism Predictions 2024 series on 7th December 2023.