Kasper Lindskow is Head of AI at the Danish JP/Politiken Media Group, one of the front-runners in implementing GenAI-based solutions in the industry. He is also co-founder of the Nordic AI in Media Summit, a leading industry conference on AI. Alexandra spoke to him about how to bring people along with new technologies, conflicts in the newsroom, and how to get the right tone of voice in journalism.
Kasper, industry insiders regard JP/Politiken as a role model in implementing AI in its newsrooms. Which tools have been the most attractive for your employees so far?
We rolled out a basic ChatGPT clone in a safe environment to all employees in March 2024 and are in the process of rolling out more advanced tools. The key for us has been to “toolify” AI so that it can be used broadly across the organization, including for the more advanced applications.
Now, the front-runners are using it in all sorts of different creative ways. But we are seeing the classic cases being used most widely: proofreading, adapting texts to the writing guides of our different news brands, and suggesting headlines.
We’ve also seen growing use of AI for searching the news archive and writing text boxes.
Roughly estimated, what’s the share of people in your organization who feel comfortable using AI tools on a daily basis?
Well, the front-runners are experimenting with them regardless of whether we make tools available. I’d estimate this group to be between 10 and 15 percent of newsroom staff. I’d say we have an equally small group who are not interested in interacting with AI at all.
And then we have the most interesting group: the 70 to 80 percent or so of journalists who are interested and have tried to work with AI a little bit.
From our perspective, the most important part of rolling out AI is to build tools that fit that group, to ensure wider adoption. The potential lies not with the front-runners but with the normal, ordinary journalists.
This sounds like a huge, expensive effort. How large is your team?
We are an organization of roughly 3,000 people. Currently, 11 people work full-time on AI development in the centralized AI unit, plus two PhDs. That’s not a lot. But we also work with local AI hubs in the different newsrooms, so people there spend time working with us.
This is costly. It does take time and effort, particularly if you want high quality and want to ensure everything aligns with the journalism.
I do see a risk here of companies underinvesting and only doing the efficiency part and not aligning it with the journalism.
Do you have public-facing tools and products?
In recommender systems we do, because that’s about personalizing the news flow. That’s public-facing and enabled by metadata.
We’re also activating metadata in other public-facing ways, for example in “read more” lists that are not personalized.
But in general, we’re not yet doing anything truly public-facing with generative AI without humans in the loop.
What are the biggest conflicts around AI in your organization or in the newsroom?
Most debates are about automated recommender systems, because sometimes they churn out stuff that colleagues don’t find relevant.
But our journalists have very different reading profiles from the general public. They read everything and then they criticize when something very old turns up.
And then, of course, you have people thinking: “What will this do to my job?”
But all in all, there hasn’t been much criticism. We are getting a lot more requests like: “Can you please build this for me?”
What do you think the advancement of generative AI will do to the news industry as a whole?
Let’s talk about risks first. There’s definitely a risk of things being rolled out too fast. This is very new technology. We know some limitations, others we don’t.
So, it is important to roll it out responsibly at a pace that people can handle and with the proper education along the way.
If you roll it out too fast, there will be mistakes that hurt both the rollout of AI and the potential you could create with it, damaging the trustworthiness of news.
Another risk is not taking the need to align these systems with your initial mission seriously enough.
Some organizations struggle with strategic alignment. Could you explain this a bit, please?
Generative AI has a well-known tendency to gravitate towards the median in its output – meaning that if you build a fast prototype with a small prompt and roll it out, your articles tend to become dull, ordinary, and average.
It’s not necessarily a tool for excellence. It can be, but you really need to do it right. You need to align it with the news brand and its particular tone of voice, for example. That requires extensive work, user testing, and fine-tuning of the systems underneath.
If we don’t take the journalistic work seriously – whether because we don’t have the resources, don’t have the know-how, or move too fast – it could have a bad impact on what we’re trying to achieve. Those are the risk factors that we can influence ourselves.
The other risks depend on what happens in the tech industry?
A big one is when other types of companies begin using AI to do journalism.
You mean companies that are not bound by journalistic values?
If you’re not a public service broadcaster but a private media company, you’ve experienced structural decline for the past 20 years.
If tech giants begin de-bundling the news product even further by competing with journalists, this could accelerate the structural decline of news media.
But we should talk about opportunities now. Because if done properly, generative AI in particular has massive potential. It can give journalists superpowers.
Because it helps to enrich storytelling and to automate the boring tasks?
We are not there yet. But generative AI is close to having the potential to tell a story across different modalities once you have done your news work and found the story.
And to me, that is a strong positive potential for addressing different types of readers and audiences.
We included a case study on Magna in the first EBU News Report which was published in June 2024. What have your biggest surprises been since then?
My biggest positive surprise is the level of feedback we are getting from our journalists. They’re really engaging with these tools. It’s extremely exciting for us as an AI unit that we are no longer working from assumptions but are getting this direct feedback.
I am positively surprised but also cautious about the extent to which we have been able to adapt these systems to our individual news brands. Our tool Magna is a shared infrastructure framework for everyone.
But when you ask it to perform a task it gives very different output depending on the brand you request it for. You get, for example, a more tabloid-style response for Ekstra Bladet and a more sophisticated one for our upmarket Politiken.
A lot of work went into writing very different prompts for the different brands.
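To make that concrete, here is a minimal, hypothetical sketch of how brand-specific prompts might sit on top of one shared framework, in the spirit of what Lindskow describes. The prompt texts, dictionary, and function names are illustrative assumptions, not JP/Politiken’s actual Magna implementation.

```python
# Hypothetical sketch: brand-specific style prompts layered on shared
# scaffolding. All prompt wording here is invented for illustration.

BRAND_STYLE = {
    "Ekstra Bladet": "Punchy, tabloid register: short sentences, direct address.",
    "Politiken": "Sophisticated, upmarket register: measured tone, nuanced framing.",
}

def build_prompt(brand: str, task: str, article: str) -> str:
    """Assemble one request from shared scaffolding plus the brand's style."""
    return (
        f"You are an editorial assistant for {brand}.\n"
        f"House style: {BRAND_STYLE[brand]}\n"
        f"Task: {task}\n"
        f"Article:\n{article}"
    )

# The same task yields very different instructions per brand:
print(build_prompt("Ekstra Bladet", "Suggest three headlines.", "…"))
print(build_prompt("Politiken", "Suggest three headlines.", "…"))
```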
What about the hallucinations everyone is so afraid of?
This was another surprise. We thought that factuality was going to be the big issue. We ran many tests and found that when we use it correctly and ground it in external facts, we see very few factual errors and hallucinations.
Usually, they stem from an article in the archive that is outdated because something new happened, not because of any hallucinations inside the model.
The issue is more about getting the feel of the output right: the tone of voice, the angles chosen by the publication we’re working with – everything that has to do with the identity of the news brand.
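As a rough illustration of the grounding approach described above, the sketch below filters stale archive hits by date so outdated facts do not leak into the output, then constrains the model to the remaining snippets. The field names, date cutoff, and prompt wording are assumptions for illustration only.

```python
# Hypothetical sketch: ground generation in archive facts and drop
# stale articles, since errors typically stem from outdated sources.
from datetime import date, timedelta

def select_grounding(archive_hits: list[dict], max_age_days: int = 365) -> list[str]:
    """Keep only sufficiently recent archive snippets for grounding."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [hit["text"] for hit in archive_hits if hit["published"] >= cutoff]

def grounded_prompt(question: str, snippets: list[str]) -> str:
    """Constrain the model to answer only from the supplied facts."""
    facts = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using ONLY the facts below. If they are insufficient, "
        "say so rather than guessing.\n"
        f"Facts:\n{facts}\n"
        f"Question: {question}"
    )
```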
This interview was published by the EBU as an appetizer for the News Report “Leading Newsrooms in the Age of Generative AI” on 8th April 2025.