Anne Lagercrantz, SVT: “Journalism has to move up the value chain”

Anne Lagercrantz is the Director General of SVT Swedish Television. Alexandra talked to her about how generative AI has created more value for audiences, SVT’s network of super users, and what will make journalism unique as opposed to automated content generation.

Anne, many in the industry have high hopes that AI can do a lot to improve journalism, for example by making it more inclusive and appealing to broader audiences. Looking at SVT, do you see evidence for this?  

I can see some evidence in the creative workflows. We just won an award for our Verify Desk, which uses face recognition and geo positioning for verification.  

Then, of course, we provide automated subtitles and AI-driven content recommendations. In investigative journalism, we use synthetic voices to ensure anonymity.  

I don’t think we reach a broader audience. But it really is about being inclusive and engaging.

In our interview for the 2024 report, you said AI hadn’t been transformative yet for SVT. What about one year later? 

We’re one step further towards the transformative. For example, when I look at kids’ content, we now use text-to-video tools that are good enough for real productions. We used AI tools to develop games, and then we built a whole show around them.

So, we have transformative use cases but it hasn’t transformed our company yet.  

What would your vision be? 

Our vision is to use AI tools to create more value for the audience and to be more effective.  

However – and I hear this a lot from the industry – we’re increasing individual efficiency and creativity, but we’re not saving any money. Right now, everything is more expensive.  

Opinions are split on AI and creativity. Some say that the tools help people to be more creative, others say they are making users lazy. What are your observations?  

I think people are truly more creative. Take the Antiques Roadshow as an example, an international format that originated at the BBC.  

We’ve run it for 36 years. People present their antiques and have experts estimate their value. The producers used to work with still pictures but with AI support they can animate them.  

But again, it’s not the machine, it’s the human and the machine together.  

You were a newsroom leader for many, many years. What has helped to bring colleagues along and have them work with AI?  

I think we cracked the code. What we’ve done is, we created four small hubs: one for news, one for programmes, one for the back office and one for product. And the head of AI is holding it all together.  

The hubs consist of devoted experts who have designated time for coaching and experimenting with new tools. And then there’s a network of super users; we have 200 in the news department alone.

It has been such a great experience to have colleagues learn from each other.  

It’s a top-down movement but bottom-up as well. We combine that with training and AI learning days with open demos. Everyone has access and the possibility to take part.

We’ve tried to democratize learning. What has really helped to change attitudes and culture was when we created our own SVTGPT, a safe environment for people to play around in. 

What are the biggest conflicts about the usage of AI in the newsroom? 

The greatest friction comes from having enthusiastic teams and co-workers who want to explore AI tools, but then there are no legal or financial frameworks in place.

It’s like curiosity and enthusiasm meeting GDPR or privacy. And that’s difficult because we want people to explore, but we also want to do it in a safe manner. 

Would you say there’s too much regulation?  

No, I just think AI is developing at a speed we’re not used to. And we need to find the time to get our legal and security departments on board.

Also, the market is flooded with new tools. And of course, some people want to try them all. But it’s not possible to assess quickly whether they’re safe enough. That’s when people feel limited.

No one seems to be eager to talk about ethics any longer because everyone is so busy keeping up and afraid of missing the boat. 

Maybe we are in a good spot because we can experiment with animated kids’ content first. That’s different from experimenting with news where we are a lot more careful.  

Do you get audience reaction when using AI?  

There are some reactions, more curious than sceptical.  

What also helps is that the Swedish media industry has agreed upon AI transparency recommendations, saying that we will tell the audience when AI has had a substantial influence on the content. It could be confusing to label every tiny thing.

Where do you see the future of journalism in the AI age now with reasoning models coming up and everyone thinking, oh, AI can do much of the news work that has been done by humans before? 

I’m certain that journalism has to move up in the value chain to investigation, verification and premium content.  

And we need to be better in providing context and accountability.  

Accountability is so valuable because it will become a rare commodity. If I want to contact Facebook or Instagram, it’s almost impossible. And how do you hold an algorithm accountable?  

But it is quite easy to reach an editor or reporter. We are close to home and accountable. Journalists will need to shift from being content creators and curators to meaning makers.  

We need to become more constructive and foster trust and optimism.  

Being an optimist is not always easy these days. Do you have fears in the face of the new AI world? 

Of course. One is that an overreliance on AI will lead to a decline in critical thinking and originality.  

We’re also super aware that there are a lot of hallucinations. Also, that misinformation could undermine public trust, and that it is difficult to balance innovation with ethical AI governance.

Another fear is that we are blinded by all the shiny new things and that we’re not looking at the big picture.  

What do you think is not talked about enough in the context of journalism and AI? 

We need to talk more about soft values: How are we as human beings affected by new technology?  

If we all stare at our own devices instead of looking at things together, we will see loneliness and isolation rise further.  

Someone recently said we used to talk about physical health then about mental health, and now we need to talk about social health, because you don’t ever need to meet anyone, you can just interact with your device. I think that’s super scary.  

And public service has such a meaningful role in sparking conversations, getting people together across generations.  

Another issue we need to talk more about is: if there is so much personalization and everyone has their own version of reality, what will we put in the archives? We need a shared record.

This interview was published by the EBU on 16th April as an appetizer for the EBU News Report “Leading Newsrooms in the Age of Generative AI”. 

Kasper Lindskow, JP/Politiken Media Group: “Generative AI can Give Journalists Superpowers”

Kasper Lindskow is the Head of AI at the Danish JP/Politiken Media Group, one of the front runners in implementing GenAI-based solutions in the industry. He is also co-founder of the Nordic AI in Media Summit, a leading industry conference on AI. Alexandra spoke to him about how to bring people along with new technologies, conflicts in the newsroom, and how to get the tone of voice right in the journalism.

Kasper, industry insiders regard JP/Politiken as a role model in implementing AI in its newsrooms. Which tools have been the most attractive for your employees so far?  

We rolled out a basic ChatGPT clone in a safe environment to all employees in March 2024 and are in the process of rolling out more advanced tools. The key for us has been to “toolify” AI so that it can be used broadly across the organization, also for the more advanced stuff.  

Now, the front runners are using it in all sorts of different creative ways. But we are seeing the classic cases being used most widely, like proofreading and adaptation to the writing guides of our different news brands, for example suggesting headlines.  

We’ve seen growing use of AI also for searching the news archive and writing text boxes.  

Roughly estimated, what’s the share of people in your organization who feel comfortable using AI tools on a daily basis? 

Well, the front runners are experimenting with them regardless of whether we make tools available. I’d estimate this group to be between 10 and 15 percent of newsroom staff. I’d say we have an equally small group who are not interested in interacting with AI at all.  

And then we have the most interesting group, between 70 and 80 percent or so of journalists, who are interested and have tried to work with AI a little bit.

From our perspective, the most important part of rolling out AI is to build tools that fit that group to ensure a wider adoption. The potential is not in the front runners but in the normal, ordinary journalists. 

This sounds like a huge, expensive effort. How large is your team?  

We are an organization of roughly 3,000 people. Currently we are 11 people working full-time on AI development in the centralized AI unit, plus two PhDs. That’s not a lot. But we also work with local AI hubs in different newsrooms, so people there spend time working with us.

This is costly. It does take time and effort, in particular if you want high quality and you want to ensure everything aligns with the journalism.  

I do see a risk here of companies underinvesting and only doing the efficiency part and not aligning it with the journalism. 

Do you have public-facing tools and products? 

In recommender systems we do, because that’s about personalizing the news flow. That’s public facing and enabled by metadata.  

We’re also activating metadata in ways that are public facing, for example in “read more” lists that are not personalized.
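Editor’s note: the interview does not describe how these lists are built. As a rough sketch only, assuming a simple tag-overlap ranking over editorial metadata (an illustration, not JP/Politiken’s actual method), the Python snippet below shows how metadata alone can drive a non-personalized “read more” list.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    article_id: str
    title: str
    tags: set = field(default_factory=set)  # editorial metadata, e.g. topic tags

def read_more(current: Article, archive: list, limit: int = 3) -> list:
    """Rank archive articles by how many topic tags they share with the
    current article. No user data is involved, so every reader sees the
    same list (non-personalized)."""
    scored = [
        (len(current.tags & other.tags), other)
        for other in archive
        if other.article_id != current.article_id
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [a.title for score, a in scored[:limit] if score > 0]

# Invented example data, purely for illustration.
story = Article("a1", "Harbour bath reopens", {"copenhagen", "harbour", "leisure"})
archive = [
    Article("a2", "New bridge across the harbour", {"copenhagen", "harbour", "traffic"}),
    Article("a3", "Election results in Jutland", {"politics", "jutland"}),
]
print(read_more(story, archive))  # ['New bridge across the harbour']
```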

But in general, we’re not doing anything really public facing with generative AI that does not have humans in the loop yet. 

What are the biggest conflicts around AI in your organization or in the newsroom? 

Most debates are about automated recommender systems. Because sometimes they churn out stuff that colleagues don’t find relevant.  

But our journalists have very different reading profiles from the general public. They read everything and then they criticize when something very old turns up.  

And then, of course, you have people thinking: “What will this do to my job?”  

But all in all, there hasn’t been much criticism. We are getting a lot more requests like: “Can you please build this for me?” 

What do you think the advancement of generative AI will do to the news industry as a whole? 

Let’s talk about risks first. There’s definitely a risk of things being rolled out too fast. This is very new technology. We know some limitations, others we don’t.  

So, it is important to roll it out responsibly at a pace that people can handle and with the proper education along the way.  

If you roll it out too fast there will be mistakes that would both hurt the rollout of AI and the potential you could create with it, impacting the trustworthiness of news.  

Another risk is not taking the need to align these systems with your initial mission seriously enough. 

Some organizations struggle with strategic alignment. Could you explain this a bit, please?

Generative AI has a well-known tendency to gravitate towards the median in its output – meaning that if you build a fast prototype with a small prompt and roll it out, your articles tend to become dull, ordinary and average.

It’s not necessarily a tool for excellence. It can be but you really need to do it right. You need to align it with the news brand and its particular tone of voice, for example. That requires extensive work, user testing and fine-tuning of the systems underneath.  

If we don’t take the journalistic work seriously, either because we don’t have resources to do it or because we don’t know it or move too fast, it could have a bad impact on what we’re trying to achieve. Those are the risk factors that we can impact ourselves. 

The other risks depend on what happens in the tech industry? 

A big one is when other types of companies begin using AI to do journalism. 

You mean companies that are not bound by journalistic values? 

If you’re not a public service broadcaster but a private media company, for the past 20 years you’ve experienced a structural decline.  

If tech giants begin de-bundling the news product even further by competing with journalists, this could accelerate the structural decline of news media.  

But we should talk about opportunities now. Because if done properly, generative AI in particular has massive potential. It can give journalists superpowers.  

Because it helps to enrich storytelling and to automate the boring tasks? 

We are not there yet. But generative AI is close to having the potential to, once you have done your news work and found the story, tell that story across different modalities.

And to me that is strong positive potential for addressing different types of readers and audiences. 

We included a case study on Magna in the first EBU News Report, which was published in June 2024. What have your biggest surprises been since then?

My biggest positive surprise is the level of feedback we are getting from our journalists. They’re really engaging with these tools. It’s extremely exciting for us as an AI unit that we are no longer working from assumptions but we are getting this direct feedback.  

I am positively surprised but also cautious about the extent to which we have been able to adapt these systems to our individual news brands. Our tool Magna is a shared infrastructure framework for everyone.  

But when you ask it to perform a task it gives very different output depending on the brand you request it for. You get, for example, a more tabloid-style response for Ekstra Bladet and a more sophisticated one for our upmarket Politiken.  

A lot of work went into writing very different prompts for the different brands.  
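Editor’s note: Magna’s internals are not shown in the interview. As a minimal sketch under stated assumptions (the brand descriptions and template are invented; the real prompts are described as far more elaborate), the snippet below illustrates the basic idea of wrapping one shared task in brand-specific style instructions, so the same request yields a tabloid-style result for Ekstra Bladet and a more restrained one for Politiken.

```python
# Minimal sketch: one shared task plus brand-specific style instructions.
# The style descriptions are invented for illustration only.
BRAND_STYLE = {
    "ekstra_bladet": (
        "Write in a punchy tabloid register: short sentences, vivid verbs, "
        "headlines that grab attention."
    ),
    "politiken": (
        "Write in a restrained, analytical register: precise wording, "
        "no sensationalism, headlines that state the substance of the story."
    ),
}

def build_prompt(brand: str, task: str, article_text: str) -> str:
    """Combine the shared task with the style guide of the requested brand."""
    return (
        "You are an editorial assistant for this news brand.\n"
        f"Style guide: {BRAND_STYLE[brand]}\n"
        f"Task: {task}\n"
        f"Article:\n{article_text}"
    )

# The same task produces a different prompt per brand, so the model's
# output inherits the brand's tone of voice.
print(build_prompt("politiken", "Suggest three headlines.", "...article body..."))
```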

What about the hallucinations everyone is so afraid of? 

This was another surprise. We thought that factuality was going to be the big issue. We ran many tests and found that when we use it correctly and ground it in external facts, we see very few factual errors and hallucinations.

Usually, they stem from an article in the archive that is outdated because something new happened, not because of any hallucinations inside the model.  
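Editor’s note: a minimal sketch of the grounding idea described above, with invented data and a hypothetical freshness threshold (the interview does not say how outdated archive material is detected). The point is simply that the prompt is restricted to retrieved source text and that potentially stale articles are flagged rather than silently reused.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # hypothetical freshness threshold

def grounded_prompt(question: str, sources: list, today: date) -> str:
    """Build a prompt restricted to the supplied archive text and flag
    sources that may be outdated instead of silently using them."""
    fresh, stale = [], []
    for src in sources:
        (fresh if today - src["published"] <= MAX_AGE else stale).append(src)
    if stale:
        print("Possibly outdated, check for updates:", [s["title"] for s in stale])
    context = "\n\n".join(
        f"{s['title']} ({s['published']}):\n{s['text']}" for s in fresh
    )
    return (
        "Answer using ONLY the sources below. "
        "If they do not cover the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

# Invented example: the 2023 article is flagged, only the 2025 one is used.
sources = [
    {"title": "Bridge project approved", "published": date(2023, 5, 2), "text": "..."},
    {"title": "Bridge opens to traffic", "published": date(2025, 1, 10), "text": "..."},
]
print(grounded_prompt("When did the bridge open?", sources, date(2025, 4, 1)))
```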

The issue is more about getting the feel of the output right: the tone of voice, the angles that are typically chosen in the publication we’re working with – everything that has to do with the identity of the news brand.

This interview was published by the EBU as an appetizer for the News Report “Leading Newsrooms in the Age of Generative AI” on 8th April 2025.

Peter Archer, BBC: “What AI doesn’t change is who we are and what we are here to do”

The BBC’s Director of Generative AI talks about the approach of his organization to developing AI tools, experiences with their usage and the rampant inaccuracies AI assistants produce – and what is needed to remedy them. This interview was conducted for the EBU News Report “Leading Newsrooms in the Age of Generative AI” that will be published by the European Broadcasting Union.

BBC research recently revealed disturbing inaccuracies when AI assistants provided news content and drew on BBC material. About every second piece had issues. Did you expect this?

We expected to see a degree of inaccuracy, but perhaps not as high as we found. We were also interested in the range of different ways AI assistants struggle, including factual errors, but also lack of context and the conflation of opinion and fact.

It was also interesting that none of the four assistants that we looked at – ChatGPT, Copilot, Gemini, and Perplexity – were much better or worse than any of the others, which suggests that there is an issue with the underlying technology.  

Has this outcome changed your view on AI as a tool for journalism?  

With respect to our own use of AI, it demonstrates the need to be aware of the limitations of AI tools.

We’re being conservative about the use of generative AI tools in the newsroom and our internal guidance is that generative AI should not be used directly for creating content for news, current affairs or factual content.

But we have identified specific use cases like summaries and reformatting that we think can bring real value.

We are not currently allowing third parties to scrape our content to be included in AI applications. We allowed ChatGPT and the other AI assistants to access our site solely for the purpose of this research. But, as our findings show, making content available can lead to distortion of that content.  

You emphasised that working with the AI platforms was critical to tackling this challenge. Will you implement internal consequences, too?

Generative AI poses a new challenge – because AI is being used by third parties to create content, like summaries of the news.

I think this new intersection of technology and content will require close working between publishers and technology companies to both help ensure the accuracy of content but also to make the most of the immense potential of generative AI technology.  

So, you think the industry should have more self-confidence? 

Publishers, and the creative and media industries more broadly, are critical to ensuring generative AI is used responsibly. The two sectors – AI and creative industries – can work together positively, combining editorial expertise and understanding of the audience with the technology itself.

More broadly, the media industry should develop an industry position – what it thinks on key issues. The EBU can be a really helpful part of that. In the UK, regulators like Ofcom are interested in the AI space.

We need a constructive conversation on how we collectively make sure that our information ecosystem is robust and trusted. The media sector is central to that.

On the research, we will repeat the study, hopefully including other newsrooms, because I’m fascinated to see two things: Do the assistants’ performances change over time? And do newsrooms working in smaller languages see the same issues, or maybe more?

Do you think the media industry in general is behaving responsibly towards AI? Or what do you observe when you look outside of your BBC world?  

On the whole yes, and it’s great to see different perspectives as well as areas of common interest. For example, I think everybody is now looking at experiences like chat assistants.

There’s so much to do that it would be fantastic to identify common priorities across the EBU group, because working on AI can be hard and costly, and where we can collaborate, we should.

That said, we have seen some pretty high-profile mistakes in the industry – certainly in the first 12 to 18 months after ChatGPT launched – and excitement occasionally outpaced responsible use.

It’s also very helpful to see other organizations testing some of the boundaries because it helps us and other public service media organizations calibrate where we are and what we should be doing.  

There are huge hopes in the industry of using generative AI to make journalism more inclusive and to transcend format boundaries to attract different audiences. Are these hopes justified?

I’m pretty bullish. The critical thing is that we stay totally aligned to our mission, our standards, and our values. AI changes a lot, but what it doesn’t change is who we are and what we’re here to do.

One of the pilots that we’re looking at how to scale is taking audio content, in this example a football broadcast, and using AI to transcribe it, create a summary and then a live text page.

Live text updates and pages on football games are incredibly popular with our audiences, but currently there are only so many games we can create a live page for. The ability to use AI to scale that, so we can provide a live text page for every football game we cover on radio, would be amazing.
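Editor’s note: the BBC’s actual tooling is not described here. The sketch below only illustrates the shape of the pipeline Archer mentions (radio commentary, transcript, summary, live text update); the transcription and summarisation steps are stubbed placeholders, and in practice an editorial check would sit before anything is published.

```python
from dataclasses import dataclass

@dataclass
class LiveUpdate:
    minute: int
    text: str

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for a speech-to-text step over the radio commentary."""
    return "Corner to the home side, header cleared off the line."

def summarise(transcript: str, minute: int) -> LiveUpdate:
    """Placeholder for an LLM step condensing the transcript to one line."""
    return LiveUpdate(minute, transcript.split(".")[0] + ".")

def publish(page: list, update: LiveUpdate) -> None:
    """An editorial check would sit here before the update goes live."""
    page.append(update)

live_page = []
publish(live_page, summarise(transcribe(b"...radio audio..."), minute=63))
print(live_page)
```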

One of the other things that we’re doing is going to the next level with our own BBC large language model that reflects the BBC style and standards. This approach to constitutional AI is really exciting. It’s being led out of the BBC’s R&D team – we’re incredibly lucky to have them.  

Do you have anything fully implemented yet?  

The approach that we’ve taken with generative AI is to do it in stages. In a number of areas, like the football example, we are starting small with working, tactical solutions that we can increase the use of while we work on productionised versions in parallel.

Another example is using AI to create subtitles on BBC Sounds. Again, here we’ve got an interim solution that we will use to provide more subtitles to programmes while in parallel we create a productionised version that is much more robust and easier to scale across all audio.

A key consideration is creating capabilities that can work across multiple use cases, not just one, and that takes time.

What is your position towards labelling?  

We have a very clear position: We will label the use of AI where there is any risk that the audience might be materially misled.

This means any AI output that could be mistaken for real is clearly labelled. This is particularly important in news where we will also be transparent about where AI has a material or significant impact on the content or in its production – for example if an article is translated using AI.

We’re being conservative because the trust of our audience is critical.  

What’s the internal mood towards AI? The BBC is a huge organization, and you are probably working in an AI bubble. But do you have any feel for how people are coming on board?  

One of the key parts of my role is speaking to teams and divisions and explaining what AI is and isn’t and the BBC’s approach.

Over the last 12 months, we’ve seen a significant increase in uptake of AI tools like Microsoft Copilot and many staff are positive about how AI can help them in their day-to-day work.

There are of course lots of questions and concerns, particularly as things move quickly in AI.

A key thing is encouraging staff to play with the tools we have so they can understand the opportunities and limitations. Things like Microsoft Copilot are now available across the business, as are Adobe Firefly and GitHub Copilot, and very shortly ChatGPT.

But it’s important we get the balance right and listen carefully to those who have concerns about the use of AI.

We are proceeding very carefully because at the heart of the BBC is creativity and human-led journalism with very high standards of editorial. We are not going to put that at risk.  

What’s not talked about enough in the context of generative AI and journalism? 

We shouldn’t underestimate the extent to which the world is changing around us. AI assistants, AI overviews are here to stay.

That is a fundamental shift in our information landscape. In two or three years’ time, many may be getting their news directly from Google or Perplexity.

As our research showed, there are real reasons for concern. And there is this broader point around disinformation. We’ve all seen the Pope in a puffer jacket, right? And we’ve all seen AI images of floods in Europe and conflict in Gaza.

But we’re also starting to see the use of AI at a very local level that doesn’t get much exposure but could nevertheless ruin lives.

As journalists, we need to be attuned to the potential misinformation on our doorstep that is hard to spot.  

This interview was published by the EBU on 26th March 2025.

Trusted Journalism in the Age of Generative AI

Media strategist Lucy Küng regards generative AI as quite a challenge for media organizations, particularly since many of them haven’t yet fully mastered digital transformation. But she also has some advice in store: “The media industry gave away the keys to the kingdom once – that shouldn’t happen again”, she said in an interview conducted for the 2024 EBU News Report “Trusted Journalism in the Age of Generative AI”. Ezra Eeman, Director for Strategy and Innovation at the Netherlands’ public broadcaster NPO, thinks that media organizations have a moral duty to be optimists about the technology: it will increase their opportunities to fulfill their public service mission. These are just two voices; many more are to come.

The report, which is based on about 40 extensive interviews with international media leaders and experts, will discuss the opportunities and risks of generative AI with a special focus on practical applications, management challenges, and ethical considerations. The team of authors includes Felix Simon (Oxford Internet Institute), Kati Bremme (France Télévisions), and Olle Zachrison (Sveriges Radio); Alexandra is the lead author. In the run-up to and following publication, the EBU will publish some interviews. They will be shared here:

Nic Newman, Senior Research Associate, Reuters Institute: “Transparency is important, but the public does not want AI labels everywhere”, published on 28th June 2024.

Sarah Spiekermann, Professor, WU Wien: “We need to seriously think about the total cost of digitization”, published on 13th June 2024.

Kai Gniffke, Director General SWR, Chair ARD: “AI is an incredible accelerator of change … It’s up to us to use this technology responsibly”, published on 3rd June 2024.

Jane Barrett, Global Editor at Reuters: “We have to educate ourselves about AI and then report the hell out of it”, published on 16th May 2024.

Ezra Eeman, Strategy and Innovation Director, NPO: “We have a moral duty to be optimists”, published on 17th April 2024.

Lucy Küng, independent Media Strategist: “The media industry gave away the keys to the kingdom once – that shouldn’t happen again”, published on 27th March 2024.