Anne Lagercrantz is the Director General of SVT Swedish Television. Alexandra talked to her about how generative AI has created more value for audiences, SVT’s network of super users, and what will set journalism apart from automated content generation.
Anne, many in the industry have high hopes that AI can do a lot to improve journalism, for example by making it more inclusive and appealing to broader audiences. Looking at SVT, do you see evidence for this?
I can see some evidence in the creative workflows. We just won an award for our Verify Desk, which uses facial recognition and geopositioning for verification.
Then, of course, we provide automated subtitles and AI-driven content recommendations. In investigative journalism, we use synthetic voices to ensure anonymity.
I don’t think we reach a broader audience. But it really is about being inclusive and engaging.
In our interview for the 2024 report, you said AI hadn’t been transformative yet for SVT. What about one year later?
We’re one step further towards the transformative. For example, when I look at kids’ content, we now use text-to-video tools that are good enough for real productions. We used AI tools to develop games, and then we built a whole show around them.
So, we have transformative use cases, but it hasn’t transformed our company yet.
What would your vision be?
Our vision is to use AI tools to create more value for the audience and to be more effective.
However – and I hear this a lot from the industry – we’re increasing individual efficiency and creativity, but we’re not saving any money. Right now, everything is more expensive.
Opinions are split on AI and creativity. Some say that the tools help people to be more creative, others say they are making users lazy. What are your observations?
I think people are truly more creative. Take the Antiques Roadshow, an international format that originated at the BBC, as an example.
We’ve run it for 36 years. People present their antiques and have experts estimate their value. The producers used to work with still pictures, but with AI support they can now animate them.
But again, it’s not the machine, it’s the human and the machine together.
You were a newsroom leader for many, many years. What has helped to bring colleagues along and have them work with AI?
I think we cracked the code. We created four small hubs: one for news, one for programmes, one for the back office and one for product. And the head of AI holds it all together.
The hubs consist of devoted experts who have designated time for coaching and experimenting with new tools. And then there’s a network of super users; we have 200 in the news department alone.
It has been such a great experience to have colleagues learn from each other.
It’s a top-down movement but bottom-up as well. We combine that with training and AI learning days with open demos. Everyone has access and the opportunity to take part.
We’ve tried to democratize learning. What has really helped to change attitudes and culture was when we created our own SVTGPT, a safe environment for people to play around in.
What are the biggest conflicts about the usage of AI in the newsroom?
The greatest friction comes from having enthusiastic teams and co-workers who want to explore AI tools when there are no legal or financial frameworks in place.
It’s like curiosity and enthusiasm meeting GDPR or privacy. And that’s difficult because we want people to explore, but we also want to do it in a safe manner.
Would you say there’s too much regulation?
No, I just think AI is developing at a speed we’re not used to. And we need to find the time to get our legal and security departments on board.
Also, the market is flooded with new tools. And of course, some people want to try them all. But it’s not possible to assess quickly whether they’re safe enough. That’s when people feel limited.
No one seems to be eager to talk about ethics any longer because everyone is so busy keeping up and afraid of missing the boat.
Maybe we are in a good spot because we can experiment with animated kids’ content first. That’s different from experimenting with news where we are a lot more careful.
Do you get audience reaction when using AI?
There are some reactions, more curious than sceptical.
What also helps is that the Swedish media industry has agreed on AI transparency recommendations, saying that we will tell the audience when AI has had a substantial influence on the content. It could be confusing to label every tiny thing.
Where do you see the future of journalism in the AI age, now that reasoning models are emerging and everyone is thinking that AI can do much of the news work previously done by humans?
I’m certain that journalism has to move up the value chain to investigation, verification and premium content.
And we need to be better in providing context and accountability.
Accountability is so valuable because it will become a rare commodity. If I want to contact Facebook or Instagram, it’s almost impossible. And how do you hold an algorithm accountable?
But it is quite easy to reach an editor or reporter. We are close to home and accountable. Journalists will need to shift from being content creators and curators to meaning makers.
We need to become more constructive and foster trust and optimism.
Being an optimist is not always easy these days. Do you have fears in the face of the new AI world?
Of course. One is that an overreliance on AI will lead to a decline in critical thinking and originality.
We’re also super aware that there are a lot of hallucinations, that misinformation could undermine public trust, and that it is difficult to balance innovation with ethical AI governance.
Another fear is that we are blinded by all the shiny new things and that we’re not looking at the big picture.
What do you think is not talked about enough in the context of journalism and AI?
We need to talk more about soft values: How are we as human beings affected by new technology?
If we all stare at our own devices instead of looking at things together, we will see loneliness and isolation rise further.
Someone recently said we used to talk about physical health, then about mental health, and now we need to talk about social health, because you never need to meet anyone; you can just interact with your device. I think that’s super scary.
And public service has such a meaningful role in sparking conversations, getting people together across generations.
Another issue we need to talk more about is: if there is so much personalization and everyone has their own version of reality, what will we put in the archives? We need a shared record.
This interview was published by the EBU on 16th April as an appetizer for the EBU News Report “Leading Newsrooms in the Age of Generative AI”.