Prof. Pattie Maes, MIT: “We don’t have to simplify everything for everybody”

Prof. Pattie Maes and her team at the MIT Media Lab conduct research on the impact of generative AI on creativity and human decision-making. Their aim is to advise AI companies on designing systems that enhance critical thinking and creativity rather than encourage cognitive offloading. The interview was conducted for the upcoming EBU News Report “Leading Newsrooms in the Age of Generative AI”.  

It is often said that AI can enhance people’s creativity. Research you led seems to suggest the opposite. Can you tell us about it?  

You’re referring to a study where we asked college students to write an essay and solve a programming problem.  

We had three different conditions: One group could use ChatGPT. Another group could only use a search engine, without the AI results at the top. And the third group did not have any tool.  

What we noticed was that the group that used ChatGPT wrote good essays, but the essays expressed less diversity of thought, were more similar to one another, and were less original. 

Because people put less effort into the task at hand? 

We have seen that in other experiments as well: people are inherently lazy. When they use AI, they don’t think as much for themselves. And as a result, you get less creative outcomes.  

It could be a problem if, say, all the programmers at a company use the same co-pilot to help them with coding: they won’t come up with new ways of doing things.  

As AI data increasingly feeds new AI models, you will get more and more convergence and less improvement and innovation.  

Journalism thrives on originality. What would be your advice to media managers? 

Raising awareness can help. But it would be more useful if we built these systems differently.  

We have been building a system that helps people with writing, for example. But instead of doing the writing for you, it engages you, like a good colleague or editor, by critiquing your writing, and occasionally suggesting that you approach something from a different angle or strengthen a claim.  

It’s important that AI design engages people in contributing to a solution rather than automating things for them.  

Sounds like great advice for building content management systems. 

Today’s off-the-shelf systems use an interface that encourages people to say: “write me an essay on Y, make sure it’s this long and includes these points of view.”  

These systems are designed to provide a complete result. We have grammar and spelling correctors in our editing systems, but we could have AI built into editing software that says, “over here your evidence or argument is weak.”  

It could encourage the person to use their own brain and be creative. I believe we can design systems that let us benefit from human and artificial intelligence.  
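To make this interaction pattern concrete, here is a minimal sketch of a critique-only writing assistant, assuming an OpenAI-style chat API; the prompt wording, model name, and helper function are illustrative assumptions, not the Media Lab’s actual system.

```python
# Minimal sketch of a critique-only writing assistant (illustrative, not
# the MIT Media Lab system). The system prompt forbids rewriting and asks
# only for editor-style feedback. Assumes the openai package (v1+) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

EDITOR_PROMPT = (
    "You are a critical editor. Never rewrite or complete the user's text. "
    "Point out weak arguments, missing evidence, and places where a "
    "different angle would strengthen the piece. Reply with short, "
    "numbered comments."
)

def critique(draft: str) -> str:
    """Return editor-style comments on a draft instead of a finished text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": EDITOR_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(critique("AI makes everyone more creative, so every newsroom should adopt it."))
```

The design choice sits in the prompt, not the model: the assistant is constrained to respond the way the “good colleague or editor” described above would, leaving the actual writing to the human.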

But isn’t the genie already out of the bottle? If I encouraged students who use ChatGPT to use a version that challenges them, they’d probably say: “yeah, next time when I don’t have all these deadlines”.   

We should design AI systems that are optimised for different goals and contexts, like an AI that is designed like a great editor, or an AI that acts like a great teacher.  

A teacher doesn’t give you the answers to all the problems, because the whole point is not the output the person produces, it is that they have learned something in the process.  

But certainly, if you have access to one AI that makes you work harder and another AI that just does the work for you, it is tempting to use that second one. 

Agentic AI is a huge topic. You did research on AI and agents as early as 1995. How has your view on this evolved since? 

Back when I developed software agents that helped you with tasks, we didn’t have anything like today’s large language models. They were built by hand for a specific application domain and were able to do some minimal learning from the user.  

Today’s systems are supposedly AGI (artificial general intelligence) or close to it and are billed as systems that can do everything and anything for us.  

But what we are discovering in our studies is that they do not behave the way people behave. They don’t make the same choices, don’t have that deeper knowledge of the context, that self-awareness and self-critical reflection on their actions that people have.  

A huge problem with agentic systems will be that we think they are intelligent and behave like us when they don’t. And it’s not just because they hallucinate. 

But we want to believe they behave like humans? 

Let me give you an example. When I hired a new administrative assistant, I didn’t immediately give him full autonomy to do things on my behalf.  

I formed a mental model of him based on the original interview and his résumé. I saw “oh, he has done a lot of stuff with finance, but he doesn’t have much experience with travel planning.” So when some travel had to be booked, I would tell him, “Let me know the available choices so that I can tell you what I value and help you make a choice.”  

Over time my mental model of the assistant develops, and his model about my needs and preferences. We basically learn about each other. It is a much more interactive type of experience than with AI agents.  

These agents are not built to check and say, “I’m not so confident making this decision. So, let me get some input from my user.” It’s a little bit naïve that AI agents are being portrayed as “they are ready to be deployed, and they will be wonderful and will be able to do anything.”  

It might be possible to build agents that have the right level of self-awareness, reflection and judgment, but I have not heard many developers openly think about those issues. And it will require a lot of research to get it right.  

Is there anything else your research reveals about the difficulties with just letting AI do things for us? 

We have done studies on decision making with AI. What you expect is that humans make better decisions if they are supported by an AI that is trained on a lot of data in a particular domain.  

But studies showed that was not what happened. In our study, we let people decide whether some newspaper headline was fake news or real news. What we found was that when it’s literally just a click of a button to get the AI’s opinion, many people just use the AI’s output.  

There’s less deep engagement and thinking about the problem because it’s so convenient. Other researchers got similar results with experiments on doctors evaluating medical diagnoses supported by AI, for example. 

You are telling us that expectations of AI support are overblown? 

I am an AI optimist. I do think it is possible to integrate AI into our lives in a way that it has positive effects. But we need to reflect more about the right ways to integrate it.  

In the case of the newspaper headlines we did a study that showed that if AI first engages you in thinking about a headline and asks you a question about it, it improves people’s accuracy, and they don’t accept the AI advice blindly.  

The interface can help with encouraging people to be a little bit more mindful and critical.  

This sounds like it would just need a little technical fix.  

It is also about how AI is portrayed. We talk about these systems as artificial forms of intelligence. We are constantly told that we’re so close to AGI. These systems don’t just converse in a human-like way, but with an abundance of confidence.  

All of these factors trick us into perceiving them as more intelligent, more capable and more human than they really are. But they are more like what Emily Bender, a professor at the University of Washington, called “stochastic parrots”.  

LLMs (large language models) are like a parrot that has just heard a lot of natural language by hearing people speak and can predict and imitate it pretty well. But that parrot doesn’t understand what it’s talking about.  

Presenting these systems as parrots rather than smart assistants would already help by reminding people to constantly think “Oh, I have to be mindful. These systems hallucinate. They don’t really understand. They don’t know everything.”  

We work with some AI companies on some of these issues. For example, we are doing a study with OpenAI on companion bots and the risk of people becoming overly attached to chatbots.  

These companies are in a race to get to AGI first, by raising the most money and building the biggest models. But I think awareness is growing that if we want AI to ultimately be successful, we have to think carefully about the way we integrate it in people’s lives.  

In the media industry there’s a lot of hope that AI could help journalism to become more inclusive and reach broader audiences. Do you see a chance for this to happen? 

These hopes are well-founded. We built an AI-based system for kids and older adults who may have trouble processing language that the average adult can process.  

The system works like an intra-language translator – it takes a video and translates it into simpler language while still preserving the meaning.  
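The simplification step of such a pipeline could be sketched as follows, assuming the video’s transcript has already been extracted and an OpenAI-style chat API is available; this is an illustrative sketch, not the Media Lab’s implementation.

```python
# Sketch of the simplification step of an intra-language translator:
# rewrite an already-extracted transcript at a target reading level while
# preserving meaning. Illustrative only; assumes the openai package (v1+).
from openai import OpenAI

client = OpenAI()

def simplify(transcript: str, reading_level: str = "a ten-year-old") -> str:
    """Rewrite text in simpler language without dropping key facts."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the text so {reading_level} can follow it. "
                    "Use short sentences and common words, but preserve "
                    "every fact and the original meaning."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```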

There are wonderful opportunities to customize content to the abilities and needs of the particular user. But at the same time, we need to keep in mind that the more we personalize things, the more everybody ends up in their own bubble, especially if we also bias the reporting toward their particular values or interests.  

It’s important that we still have some shared media, shared news and a shared language, rather than creating this audience of one where people can no longer converse with others about things in the world that we should be talking about. 

This connects to your earlier argument: customisation could make our brains lazy.  

It is possible to build AI systems that have the opposite effect and challenge the user a little bit. This would be like a parent who unconsciously adjusts their language to the current ability of their child and gradually introduces more complex language and ideas over time.  

We don’t have to simplify everything for everybody. We need to think about what AI will do to people and their social and emotional health and what artificial intelligence will do to natural human intelligence, and ultimately to our society.  

And we should have talks about this with everybody. Right now, our AI future is decided by AI engineers and entrepreneurs, which in the long run will prove to be a mistake. 

The interview was first published by the EBU on 1st April 2025.

Peter Archer, BBC: “What AI doesn’t change is who we are and what we are here to do”

The BBC’s Director of Generative AI talks about the approach of his organization to developing AI tools, experiences with their usage and the rampant inaccuracies AI assistants produce – and what is needed to remedy them. This interview was conducted for the EBU News Report “Leading Newsrooms in the Age of Generative AI” that will be published by the European Broadcasting Union.

BBC research recently revealed disturbing inaccuracies when AI assistants provided news content and drew on BBC material. About every second piece had issues. Did you expect this?  

We expected to see a degree of inaccuracy, but perhaps not as high as we found. We were also interested in the range of errors AI assistants struggle with: factual errors, but also lack of context and the conflation of opinion and fact.

It was also interesting that none of the four assistants that we looked at – ChatGPT, Copilot, Gemini, and Perplexity – were much better or worse than any of the others, which suggests that there is an issue with the underlying technology.  

Has this outcome changed your view on AI as a tool for journalism?  

With respect to our own use of AI, it demonstrates the need to be aware of the limitations of AI tools.

We’re being conservative about the use of generative AI tools in the newsroom and our internal guidance is that generative AI should not be used directly for creating content for news, current affairs or factual content.

But we have identified specific use cases like summaries and reformatting that we think can bring real value.

We are not currently allowing third parties to scrape our content to be included in AI applications. We allowed ChatGPT and the other AI assistants to access our site solely for the purpose of this research. But, as our findings show, making content available can lead to distortion of that content.  

You emphasised working with the AI platforms was critical to tackle this challenge. Will you implement internal consequences, too? 

Generative AI poses a new challenge – because AI is being used by third parties to create content, like summaries of the news.

I think this new intersection of technology and content will require close working between publishers and technology companies to both help ensure the accuracy of content but also to make the most of the immense potential of generative AI technology.  

So, you think the industry should have more self-confidence? 

Publishers, and the creative and media industries more broadly, are critical to ensuring generative AI is used responsibly. The two sectors – AI and creative industries – can work together positively, combining editorial expertise and understanding of the audience with the technology itself.

More broadly, the media industry should develop an industry position – what it thinks on key issues. The EBU can be a really helpful part of that. In the UK, regulators like Ofcom are interested in the AI space.

We need a constructive conversation on how we collectively make sure that our information ecosystem is robust and trusted. The media sector is central to that.

On the research, we will repeat the study, hopefully including other newsrooms. Because I’m fascinated to see two things: Do the assistants’ performances change over time? And do newsrooms working in smaller languages see the same issues, or maybe more? 

Do you think the media industry in general is behaving responsibly towards AI? Or what do you observe when you look outside of your BBC world?  

On the whole yes, and it’s great to see different perspectives as well as areas of common interest. For example, I think everybody is now looking at experiences like chat assistants.

There’s so much to do that it would be fantastic to identify common priorities across the EBU group, because working on AI can be hard and costly, and where we can collaborate, we should.

That said, we have seen some pretty high-profile mistakes in the industry – certainly in the first 12 to 18 months after ChatGPT launched – and excitement occasionally outpaced responsible use.

It’s also very helpful to see other organizations testing some of the boundaries because it helps us and other public service media organizations calibrate where we are and what we should be doing.  

There are huge hopes in the industry to use generative AI to make journalism more inclusive, transcend format boundaries to attract different audiences. Are these hopes justified?  

I’m pretty bullish. The critical thing is that we stay totally aligned to our mission, our standards, and our values. AI changes a lot, but what it doesn’t change is who we are and what we’re here to do.

One of the pilots we’re looking to scale takes audio content, in this example a football broadcast, and uses AI to transcribe it, create a summary, and then a live text page.

Live text updates and pages on football games are incredibly popular with our audiences, but currently there are only so many games we can create a live page for. The ability to use AI to scale that, so we can provide a live text page for every football game we cover on radio, would be amazing.
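A hypothetical sketch of such a pipeline, transcribing commentary and condensing it into live-text updates, might look like this; it is not the BBC’s implementation, and the openai-whisper model and the naive summariser below are stand-ins for production components.

```python
# Hypothetical audio-to-live-text pipeline (not the BBC's implementation):
# transcribe match commentary locally with openai-whisper, then condense
# the transcript into a handful of live-text updates. In production the
# summarising step would be an LLM constrained by house style and standards.
import whisper  # pip install openai-whisper

def transcribe_commentary(audio_path: str) -> str:
    """Turn broadcast audio into raw text with a local Whisper model."""
    model = whisper.load_model("base")  # small model; illustrative choice
    return model.transcribe(audio_path)["text"]

def to_live_text(transcript: str, max_items: int = 5) -> list[str]:
    """Naive stand-in for an editorial summariser."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return sentences[:max_items]

if __name__ == "__main__":
    for update in to_live_text(transcribe_commentary("match_commentary.mp3")):
        print("LIVE:", update)
```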

One of the other things that we’re doing is going to the next level with our own BBC large language model that reflects the BBC style and standards. This approach to constitutional AI is really exciting. It’s being led out of the BBC’s R&D team – we’re incredibly lucky to have them.  

Do you have anything fully implemented yet?  

The approach that we’ve taken with generative AI is to do it in stages. In a number of areas, like the football example, we are starting small with working, tactical solutions that we can increase the use of while we work on productionised versions in parallel.

Another example is using AI to create subtitles on BBC Sounds. Again, here we’ve got an interim solution that we will use to provide more subtitles to programmes while in parallel we create a productionised version that is much more robust and easier to scale across all audio.

A key consideration is creating capabilities that can work across multiple use cases not just one, and that takes time.  

What is your position towards labelling?  

We have a very clear position: We will label the use of AI where there is any risk that the audience might be materially misled.

This means any AI output that could be mistaken for real is clearly labelled. This is particularly important in news where we will also be transparent about where AI has a material or significant impact on the content or in its production – for example if an article is translated using AI.

We’re being conservative because the trust of our audience is critical.  

What’s the internal mood towards AI? The BBC is a huge organization, and you are probably working in an AI bubble. But do you have any feel for how people are coming on board?  

One of the key parts of my role is speaking to teams and divisions and explaining what AI is and isn’t and the BBC’s approach.

Over the last 12 months, we’ve seen a significant increase in uptake of AI tools like Microsoft Copilot and many staff are positive about how AI can help them in their day-to-day work.

There are of course lots of questions and concerns, particularly as things move quickly in AI.

A key thing is encouraging staff to play with the tools we have so they can understand the opportunities and limitations. Things like Microsoft Copilot are now available across the business, also Adobe Firefly, GitHub Copilot, very shortly ChatGPT.

But it’s important we get the balance right and listen carefully to those who have concerns about the use of AI.

We are proceeding very carefully because at the heart of the BBC is creativity and human-led journalism with very high standards of editorial. We are not going to put that at risk.  

What’s not talked about enough in the context of generative AI and journalism? 

We shouldn’t underestimate the extent to which the world is changing around us. AI assistants, AI overviews are here to stay.

That is a fundamental shift in our information landscape. In two or three years’ time, many may be getting their news directly from Google or Perplexity.

As our research showed, there are real reasons for concern. And there is this broader point around disinformation. We’ve all seen the Pope in a puffer jacket, right? And we’ve all seen AI images of floods in Europe and conflict in Gaza.

But we’re also starting to see the use of AI at a very local level that doesn’t get much exposure but could nevertheless ruin lives.

As journalists, we need to be attuned to the potential misinformation on our doorstep that is hard to spot.  

This interview was published by the EBU on 26th March 2025.

Tav Klitgaard, CEO Zetland: “We don’t like perfect, because perfect is not trustworthy”

The Danish news outlet Zetland is one of the few big success stories among European digital media brands. It was profitable three years after launch, attracts a comparatively young audience, and is set to launch a new brand in Finland in January 2025. I spoke to CEO Tav Klitgaard about how to engage audiences, working business models, and the future of journalism in an AI-supported world.    

Tav, interviews shouldn’t begin with praise, but Zetland is an outstanding success story in digital media. Your team founded it in 2016, and it was profitable three years later. Today you have more than 40,000 digital subscribers. What do you do that others don’t?

An advantage was that we did not have any print legacy when we started. We had the privilege of sitting down and thinking really hard about what news media means. Among other things, we found out that journalism is an experience. You have the content and then you have the distribution. Those two together create an experience. The value does not lie in the journalism itself. The value lies in the moment when the journalism becomes an experience that changes something in your head.

But you seem to be very proud of your journalism?

Sure, we are! But existing companies way too often produce journalism from a sender’s perspective. We always try to have a receiver perspective. I would see this as the key reason for our success.

Zetland doesn’t do breaking news but publishes just a few in-depth stories a day; it focuses on explanation and analysis and has offered everything in audio format from the beginning.

Our first principle is that we are our members. This is why we came up with audio, because we asked them and they said: ‘Well, I really would want to consume your articles, but I’ve been looking into a screen for 10 hours today and I’m tired of it.’ We said, then audio could be a thing for you. And it turned out we were right.

In the age of generative AI, converting stuff to audio will be very, very easy. Won’t you lose your competitive advantage when everyone can just press the audio button everywhere?

I believe the last frontier against AI is personality. Audio is awesome at creating an intimate relationship. So, when we create a human audio product, we don’t use an AI robot voice, because the problem with that is that it’s too good. It’s perfect. We don’t like perfect because perfect is not trustworthy. You should not be perfect, you should be a human. And that’s what we are doing in all our products, creating something that is human.

Managers from traditional news outlets envy you because your audience skews young.

We are not a news outlet for young people, but we do have a pretty young demographic. About 50 percent of our audience is in their 20s and 30s. And we believe that the way you build trust within a younger audience is to be human. It’s a giga-trend in the world that trust is moving from authorities to persons. That’s also the reason behind the success of Instagram or TikTok. That’s why we always focus on the tone of voice and the storytelling. We imagine ourselves to be your friend who gets into the car with you and tells you the story from the passenger seat. The world is super interesting. But there needs to be energy and engagement behind the stories we tell.

Part of your distribution model is that people need to pay for a membership, but they can share the story with as many people as they like. Don’t you fear that many free riders are taking advantage of you?

That’s right, our readers can share everything for free. Actually, the more members share our content, the happier we are. It proves to us and themselves that it has value to them, and it means more people get to know us. Journalism is great when it is discussed, and it should be easy for our members to get someone to discuss it with. It’s also great for our sources that they can freely share what they told us in their own network.

A Zetland membership is pretty expensive compared to other digital subscriptions though. 

Yeah, it costs around 18 or 19 euros per month. I keep hearing: Young people don’t want to pay for news. That is not true. You have to look at the user needs. If people don’t want to pay, it’s because your product is not valuable to them. Look at, let’s say, a person who is 25 years old: she has a strong need to understand the world. Who am I in this world? What does society mean for me? What do I mean for society? The key is not to require a whole lot of prior knowledge for her to understand the world but to tell her super interesting stories about it. Younger audiences are underserved by the media, at least in Denmark. If you’re 60 and a doctor and live in Copenhagen, well, you have a plethora of options. If you’re 26 and a nurse working at a rural hospital, you don’t have a lot of places to go to in the media world. So, what happens? You end up at TikTok. The right price is whatever value the product gives to the user. Our average member spends more than seven hours per month with us. I think €18.50 is actually very cheap for seven hours of value.

Are you still growing or have you reached a ceiling with your particular audience?

We are growing very much. On the group level, we will have a revenue growth of at least 40 percent this year and I pretty conservatively project that to be the case next year, too. It’s not a 40 percent growth in Denmark, but it’s a 40 percent for the group which consists of journalism outlets in Denmark and now in Finland. And then we also sell other things, for instance, we sell books and technology.

So, you’re not only a media and journalism company, but also a tech company.

Exactly. The day before ChatGPT was launched, we launched our transcription service. That means we have been working with large language models and generative AI from very early on. The number one use case people think about when thinking about AI and journalism is transcription. So, we built the first transcription service that has ever worked in Danish. That is basically contributing almost a quarter of our revenue this year. We also sell our distribution technology. We license the website and app and CMS that we built for Zetland to other media companies. It’s not something that we do to become filthy rich, but we need to be tech-savvy. Spotify is spending a gazillion dollars on tech development, and we need to be able to compete with Spotify.

You are planning to scale the Zetland concept internationally? Tell us about the Finnish project that you made headlines with recently.

The Finland case is super exciting for us. Three or four years ago we decided that we would begin the international journey. My background is in tech, and in the tech industry we always say that if you have product-market fit, the next thing you need to do is scale. It’s not as easy as translating something, but we asked ourselves if the concept was replicable outside of Denmark. At the beginning of 2024, we hired a founding team in Finland and tasked them with creating a splash in the market to test whether our assumptions were right: that there is no big difference between Finnish people and Danish people in terms of what user needs they have. We talked about our mission of quality journalism and then said: If you’re willing to pay for this, we’re willing to build it. That’s what we told them in September and October. What happened was that 10,000 Finns decided to prepay a subscription worth around 100 euros, which was much more than we had anticipated. We got 10,000 Finns to pay for something that does not exist!

When will it start to exist?

We are currently hiring a ton of people in Helsinki, a lot of journalists, and then we will start publishing the Finnish version of Zetland on 15th January.

What will you name it?

Zetland in Finland is called Uusi Juttu, meaning something like “The new thing”. Check it at uusijuttu.fi.

Do you have other markets where you have these kinds of assumptions or is this a Nordic thing? After all, the willingness to pay for journalism is much lower in other regions of Europe.

I think what we have learned to do in Denmark is very usable in a lot of different markets in Europe. It could also be outside of Europe, but it’s going to take us some time, some partners, and some money to be able to prove that I’m right.

Of course, I have to ask you about Germany now.

Well, Germany is definitely interesting, and it’s close to Denmark. If anyone who reads this thinks they want to build that in Germany, please reach out, because it’s also obvious for us that we are not going to be able to do it alone. We would need German partners who agree to our mission and are awesome journalists, tech people, and businesspeople.

Is there still some advice you could give to legacy media, or do you think they’re just lost?

If you have a print paper, you have to really, really think about why you have a print paper. Most managers say: because it’s profitable. This means they do not focus 100 percent on the future and will innovate at a much slower pace.

What is the future of journalism in the age of AI?

I think there is a golden future for journalism. I think that the user needs that journalism fills are very much there, also among younger audiences. People need someone with feelings and with human intent to tell them about what’s going on. Plus, I believe that besides information, people want community and a sense of belonging. And I think journalism is wonderful at filling these needs. That’s why I believe that there is a golden future.

So, it will be a golden future for less journalism, a lower volume at least.

Yes, I think that there has been a lot of work within journalism that has really not been super creative, and that will go away.

Interview: Alexandra Borchardt

This text was published in German and English by the industry publication Medieninsider on 5th January 2025. 

 

Nieman Lab Prediction 2025: Newsrooms Reinvent Their Political Journalism

In traditional newsrooms, political journalists tend to be those who call the shots. Even in the absence of statistics, it’s safe to bet that the majority of editors-in-chief used to cover politics before rising to the top job. This has shaped pretty much all of journalism. The “he said, she said” variety of news coverage that makes up a large part of political reporting has pervaded other subject areas as well. The attempt to give opposing parties a voice led to so-called “both-sides journalism”, which operates under the assumption that, in the marketplace of ideas and opinions, those that serve people best will survive.

But the past few years have already demonstrated that this kind of journalism is not sustainable. First and foremost, it doesn’t serve humanity well in the case of imminent and severe threats like climate change or attacks on democratic institutions where bothsidesism is not an option. Also, newsroom metrics have shown again and again that audiences tend to be put off by news content that just amplifies opinions and intentions of decision makers without linking it to people’s lives. News avoidance is real and has been growing.

“What if reporting on racist, misogynist, dehumanizing opinions and comments has the opposite effect from what most journalists intend — normalizing propaganda and even making political candidates seem interesting?”

The result of the 2024 U.S. election and the rise of authoritarian-leaning extremists in other democracies should have served as the final wake-up call for political journalism. What if the media’s calling out of those who don’t respect democracy and its institutions doesn’t deter people from voting exactly those politicians into office? What if reporting on racist, misogynist, dehumanizing opinions and comments has the opposite effect from what most journalists intend — normalizing propaganda and even making political candidates seem interesting? And what if newsrooms that complain about political polarization have contributed their fair share to it themselves? Polarization has been a successful business model for journalism, after all. These are hard questions that demand answers.

If they want to stay relevant in serving the public, newsrooms will have to double down on studying the impact of their political journalism and think about consequences. Otherwise, they will continue to preach to the converted and fail in their mission to inform people about real threats to their livelihoods. While there is no quick recipe to disrupt and reinvent political journalism, some of the following ingredients might help to develop a strategy and improve the result:

First, studying human behavior. There is plenty of research and evidence out there on how propaganda works, how those in or aspiring to power use the media to amplify it, and how people react to it. If journalists don’t want to be tools in the hands of those ready to abolish press freedom and erode democratic institutions, they better familiarize themselves with these mechanisms. Insights from communication and behavioral psychology should be part of all journalism education and shape newsroom debates. It has become obvious that values and emotions like a sense of justice, pride, shame, and fear shape people’s voting decisions often more than rational choice theory would suggest. Newsrooms must account for that.

Second, chasing data, not just quotes. For political journalists, quotes are data; for other people, not so much. Audiences deserve to know what happened, not what someone says they might want to see happening or intends to make happen once in power. Data journalism — increasingly improved by the capabilities of artificial intelligence — provides plenty of opportunities to paint pictures of the real world instead of the world of intentions and declarations. Political journalism can be more interesting when people see how politicians have actually performed in contexts where they were responsible. Needless to say, data journalism needs to be made engaging to appeal to a variety of audiences.

Third, connecting reporting to people’s everyday lives. Politicians have an agenda and journalists are often swayed by it; people are likely to have different ones. Observers might have been baffled that voters didn’t give the Biden administration credit for the strong state of the American economy, but apparently all many people saw before casting their vote was their rising cost of living. Most people care deeply about issues like housing, personal security, the education of their children, health, and care for aging relatives. Only, most of these issues are linked to citizens’ immediate surroundings, their communities. Unsurprisingly, local news tops the list of interests across all age groups when people are asked about their journalism preferences, as the 2024 Digital News Report revealed. But with diminishing investment in local journalism, many of these topics have been undercovered in recent years. A disconnect between political journalism and people’s lives has emerged that needs to be remedied.

Fourth, choosing appropriate formats. Modern newsrooms target different audiences with different formats on the platforms these audiences engage with. Political journalism is still too focused on the audiences it has traditionally served. It is often made for well-educated groups and decision makers. If newsrooms really want to reach people beyond the community of like-minded news consumers, they need to explore how these audiences can be attracted. There are high hopes in the industry that artificial intelligence can assist in making journalism more appealing and inclusive by transcending formats — converting content to text, video, audio, interactive chat, or even graphic novel at the push of a button. It is too early to tell how this will affect news consumption and audience figures in the real world, but many media leaders expect opportunities for stronger news uptake.

Fifth, learning from other fields of journalism. Political journalists tend to be aware of their importance in the internal hierarchy. Many of them feel proud to do “the real thing” instead of covering entertainment, sports, personal finance, and the like. This might help them digest the fact that colleagues in other fields score higher in the audience metrics department. But it’s exactly these colleagues political journalists could learn from to improve their own game. They could ask the science desk how to best deal with data and how to break down complex matters into digestible formats. They might get some advice on humanizing stories from those reporting on sports or celebrities. They could learn from investigative reporters how to pace themselves when seemingly sensational material is at hand and how to cooperate with others. And they could practice churning out the occasional service story. In fact, the whole newsroom should be interested in improving political journalism, since at times politics is part of most subject matters.

If journalism wants to maintain its legitimacy, relevance, and impact — particularly in an age when artificial intelligence will make content production ubiquitous — it needs to urgently rethink political journalism. Making it appealing to broader audiences and attracting them to engage with it might be no less than a matter of its survival. Many media leaders are aware of this. Chances are that in 2025 newsrooms will finally rethink the paradigm of political journalism.

This text was published by Harvard University’s Nieman Lab in their Journalism Predictions for 2025 series. 

AI Labels in Journalism: Why Transparency Doesn’t Always Build Trust

The use of artificial intelligence in journalism requires sensitivity toward the audience. Trust is lost quickly. Transparency is supposed to remedy this. But labeling could even have a negative impact. This column discusses what to do.

In the case of Sports Illustrated, the issue was obvious. When it leaked out that some columns and reports at the renowned American sports magazine were not produced by clever minds but by large language models, it cost the publication plenty of subscriptions and ultimately CEO Ross Levinsohn his job. Newsrooms that use journalist imitations made by artificial intelligence are therefore better off being upfront about it; a clear transparency notice is needed. The Cologne-based Express, for example, uses a disclaimer for its avatar reporter Klara Indernach. But even when stated openly, things can go wrong. The radio station Off Radio in Krakow, which had proudly announced that it would be presenting its listeners with a program controlled solely by AI, had to abandon the experiment after a short time. An avatar presenter had conducted a fictitious interview with literature Nobel Prize winner Wislawa Szymborska and asked her about current affairs – only the author had passed away in 2012. The audience was horrified. 

Nevertheless, transparency and an open debate about whether, when, and to what extent newsrooms use AI when creating content are currently seen as a kind of silver bullet in the industry. Most ethical guidelines on the editorial use of AI contain a paragraph or two on the subject. There is a great fear of damaging one’s own brand through careless use of AI and further undermining the media trust that has been eroding in many places. So, it feels safer to point out that this or that summary or translation was generated by language models. How this is received by readers and users, however, has hardly been researched – and it is also controversial among industry experts. While some are in favor of labels similar to those used for food, others point out that alerts like these could make the public even more suspicious. After all, the words “AI-assisted” could also be interpreted as editors wanting to shirk their responsibility in case of mistakes. 

We also know from other areas that too much transparency can diminish trust just as much as too little. A complete list of all mishaps and malpractice displayed in the foyer of a hospital would probably deter patients rather than inspire confidence. If you read a warning everywhere, you either flee or stop looking. Rachel Botsman, a leading expert on the subject, defines trust as “a confident relationship with the unknown”. Transparency and control do not strengthen trust, but rather make it less necessary because they reduce the unknown, she argues.  

Much more important for building trust are good experiences with the brand or individuals who represent it. To do this, an organization needs to communicate openly about the steps it takes and the processes it has in place to prevent mishaps. In airplanes, this includes redundancy of technology, double manning of the cockpit and fixed procedures; in newsrooms, the four-eyes and two-source principle. When people trust a media brand, they simply assume that this company structures and regularly checks all processes to the best of its knowledge, experience, and competence. If AI is highlighted as a special case, the impression could creep in that the newsroom doesn’t really trust the matter itself.

Felix Simon, a researcher at the Reuters Institute in Oxford, therefore considers general transparency rules to be just as impractical as the widely used “human in the loop” principle, meaning that a person must always do the final check. He writes in a recent essay that it is a misconception that the public’s trust can be won back with these measures alone. 

Many journalists also do not realize how strongly their organization’s reporting on artificial intelligence shapes their audience’s relationship with it. Anyone who constantly reads and hears in interviews, essays and podcasts about what kind of devilish stuff humanity is being exposed to will hardly be open-minded about the technology if the otherwise esteemed newsroom suddenly starts to place AI references everywhere. As expected, respondents in surveys tend to be skeptical when asked about the use of AI in journalism – just as a consequence of the media’s reporting. 

It is therefore important to strengthen the skills of reporters so that they approach the topic of AI in a multi-layered way and provide constructive insights instead of meandering between hype and doomsday scenarios. The humanization of AI – whether through avatar reporters or just in the choice of words – does not exactly help to give the audience a realistic picture of what language and computing models can and cannot do.

People’s impression of AI will also be strongly influenced by their own experiences with it. Even today, there is hardly anyone among students who does not use tools such as ChatGPT from time to time. Even those who program for a living make use of the lightning-fast calculation models, and AI is increasingly becoming an everyday tool for office workers, just like spell checking, Excel calculations or voice input. However, it will become less and less obvious which AI is behind which tools, as tech providers will include them in the service package like the autofocus when taking a picture with a smartphone. AI labels could therefore soon seem like a relic from a bygone era.  

At a recent conference in Brussels hosted by the Washington-based Center for News, Technology & Innovation, one participant suggested that media organizations should consider labeling man-made journalism. What at first sounds like a joke actually has a serious background. The industry needs to quickly realize how journalism can retain its uniqueness and relevance in a world of rapidly scaling automated content production. Otherwise, it will soon have bigger problems than the question of how to characterize AI-supported journalism in individual cases.   

This text was published in German in the industry publication Medieninsider, translated by DeepL and edited by the author – who secretly thinks that this disclaimer might make her less vulnerable to criticism of her mastery of the English language.

Trusted Journalism in the Age of Generative AI

Media strategist Lucy Küng regards generative AI as quite a challenge for media organizations, particularly since many of them haven’t yet mastered digital transformation to the full extent. But she also has some advice in store: “The media industry gave away the keys to the kingdom once – that shouldn’t happen again”, she said in an interview conducted for the 2024 EBU News Report “Trusted Journalism in the Age of Generative AI”. Ezra Eeman, Director for Strategy and Innovation at the Netherlands’ public broadcaster NPO, thinks that media organizations have a moral duty to be optimists about the technology. It will increase the opportunities for them to fulfill their public service mission better. These are just two voices; many more are to come. 

The report, based on about 40 extensive interviews with international media leaders and experts, will discuss the opportunities and risks of generative AI with a special focus on practical applications, management challenges, and ethical considerations. The team of authors includes Felix Simon (Oxford Internet Institute), Kati Bremme (France Télévisions), and Olle Zachrison (Sveriges Radio); Alexandra Borchardt is the lead author. In the run-up to and following publication, the EBU will publish some of the interviews. They will be shared here:

Nic Newman, Senior Research Associate, Reuters Institute: “Transparency is important, but the public does not want AI labels everywhere“, published on 28th June 2024.

Sarah Spiekermann, Professor WU Wien: “We need to seriously think about the total cost of digitalisation“, published on 13th June 2024. 

Kai Gniffke, Director General SWR, Chair ARD: “AI is an incredible accelerator of change … It’s up to us to use this technology responsibly“, published on 3rd June 2024.

Jane Barrett, Global Editor at Reuters: “We have to educate ourselves about AI and then report the hell out of it“, published on 16th May 2024. 

Ezra Eeman, Strategy and Innovation Director NPO: “We have a moral duty to be optimists“, published on 17th April 2024.  

Lucy Küng, independent Media Strategist: “The media industry gave away the keys to the kingdom once – that shouldn’t happen again“, published on 27th March 2024.

Nieman Lab Prediction 2024: Everyone in the Newsroom Gets Training

Up to now, the world’s newsrooms have been populated by roughly two phenotypes. On the one hand, there have been the content people (many of whom would never call their journalism “content,” of course). These include seasoned reporters, investigators, or commentators who spend their time deep diving into subjects, research, analysis, and cultivating sources and usually don’t want to be bothered by “the rest.”

On the other hand, there has been “the rest.” These are the people who understand formats, channel management, metrics, editing, products, and audiences, and are ever on the lookout for new trends to help the content people’s journalism thrive and sell. But with the advent of generative AI, taking refuge in the old and surprisingly stable world of traditional journalism roles will not be an option any longer. Everyone in the newsroom has to understand how large language models work and how to use them — and then actually use them. This is why 2024 will be the year when media organizations will get serious about education and training.

“We have to bridge the digital divide in our newsrooms,” says Anne Lagercrantz, vice CEO of Swedish public broadcaster SVT. This requires educating and training all staff, even those who until now have shied away from observing what is new in the industry. While in the past it was perfectly acceptable for, say, an investigative reporter not to know the first thing about SEO, TikTok algorithms, or newsletter open rates, now everyone involved with content needs to be aware of the capabilities, deficiencies, and mechanics of large language models, reliable fact-checking tools, and the legal and ethical responsibilities that come with their use. Additionally, AI has all the potential to transform good researchers and reporters into outstanding ones, serving as powerful extensions to the human brain. Research from Harvard Business School suggested that consultants who extensively used AI finished their tasks about 25% faster and outperformed their peers by 40% in quality. It will be in the interest of everyone, individuals and their employers, that no one falls behind.

But making newsrooms fit for these new challenges will be demanding. First, training requires resources and time. But leadership might be reluctant to free up both or tempted to invest in flashy new tools instead. Many managers still fall short of understanding that digital transformation is more a cultural challenge than it is a tech challenge.

Second, training needs trainers who understand their stuff. These are rare finds at a time when AI is evolving as rapidly as it is over-hyped. You will see plenty of consultants out there, of course. But it will be hard to tell those who really know things from those who just pretend in order to get a share of the pie. Be wary when someone flashes something like the ten must-have tools in AI, warns Charlie Beckett, founder of the JournalismAI project at the London School of Economics. Third, training can be a futile exercise when it is not paired with doing. With AI in particular, the goal should be to implement a culture of experimentation, collaboration, and transparency rather than making it some mechanical exercise. Technological advances will come much faster than the most proficient trainer could ever foresee.

Establishing a learning culture around the newsroom should therefore be a worthwhile goal for 2024 and an investment that will pay off in other areas as well. Anyone who is infected with the spirit of testing and learning will likely stretch their minds in areas other than AI, from product development to climate journalism. So many of today’s challenges for newsrooms require constant adaptation, working with data, and building connections with audiences who are more demanding, volatile, and impatient than they used to be. It is important that every journalist embraces at least some responsibility for the impact of their journalism.

It is also time that those editorial innovators who tend to run into each other at the same conferences open their circles to include all of the newsroom. Some might discover that a few of their older colleagues of the content-creator-phenotype could teach them a thing or two as well — for example, how to properly use a telephone. In an age when artificial fabrication of text, voice, or image documents is predicted to evolve at a rapid pace, the comeback of old-style research methods and verification techniques might become a thing. But let’s leave this as a prediction for 2025.

This post was published in Harvard’s Nieman Lab’s Journalism Predictions 2024 series on 7th December 2023.  

Interview with Prof. Charlie Beckett on AI: “Frankly, I’ve never seen industry executives so worried before”

LSE professor Charlie Beckett, founder and director of the JournalismAI project, talks about what AI means for journalism, how to tell advice from rubbish, and how the news industry is adjusting to the new challenges.

Medieninsider: Since the launch of ChatGPT, new AI applications relevant to journalism have been announced almost every day. Which one intrigues you the most?

Charlie Beckett: A small newsroom in Malawi that is participating in our AI course for small newsrooms recently built a generative AI-based tool that is practically a whole toolbox. It can be used to simplify newsroom workflows. The idea is to quickly process information and cast it into formats, a kind of super-efficient editorial manager. It’s not one of those sensational applications that help discover deep fakes or unearth the next Watergate as an investigative tool. But I think it’s great: an African newsroom that quickly develops something that makes day-to-day operations easier. I think the immediate future lies in these more mechanical applications. That often gets lost in the media hype. People would rather discuss topics like killer robots.

 

Do you think small newsrooms will benefit most from AI, or will the big players be the winners once again?

The answer is: I don’t know! So far, when it comes to innovation, large newsrooms have benefited the most because they can invest more. But if small newsrooms can find a few tools to help them automate newsletters or analyze data for an investigative project, for example, it can help them tremendously. A ten percent gain in efficiency can be an existential question for them. For local newsrooms AI could prove to be a bridge technology. At least that’s what I hear in conversations.

Because they can do more with fewer people? There is this example from Sweden of a tool that automatically evaluates real estate prices; it has been successful in generating subscriptions because readers love that kind of stuff – just like weather and traffic reports.

At least, that’s what editors at small newsrooms hope. They say they could use AI to produce at least sufficient content to justify the existence of their brand. Reporters could then focus on researching real local stories. We’ll see if that happens. But AI will definitely shape the industry at least as much as online journalism and the rise of social media have.

AI seems to unleash enthusiasm and a spirit of experimentation in the industry, unlike back in the early days of online journalism, when many were sceptical.

The speed of the development is almost breathtaking. In the beginning, we looked at artificially generated images and thought, well, that looks a bit wobbly. Three months later, there were already impressively realistic images. We’re moving through this hype cycle right now. No matter which newsroom in the world I talk to, everyone is at least playing around with AI; by the end of the year at the latest, many will have implemented something.

But you say it’s too early to make predictions?

We’re seeing an extremely fluid development right now. Advertisers don’t yet know what to do, and in the relationship between platform companies and publishers, a lot is up in the air again. In fact, I’ve never experienced anything like this before. It’s clear to everyone that we’re facing a big change.

But isn’t it risky to just wait, and see?

Automation is still very unstable. Setting up new processes at the current level would be like building a house on a volcano. The right process is: let employees experiment, learn, and definitely think about potential impacts. If you’re asking me now, what are the ten tools I need to know, that’s the wrong question.

That’s exactly what I wanted to ask, of course. That’s what a lot of people want to know at the moment, after all. And everyone wants to be the first to publish the ultimate AI manual for newsrooms. So, do you have to be suspicious when someone confidently claims to have solutions?

We are currently collecting information on who is using which tools and what experiences people are having with them. But we are not making recommendations about what the best tool is. I just spoke to the CEO of a major broadcaster. They are doing it this way: In addition to regular meetings and information sessions, they take half an hour a day to simply play around with new tools. If you’re a CEO, of course you must budget for AI. But the budget should be flexible.

Many newsrooms are currently establishing rules for the responsible use of AI. Bayerischer Rundfunk is one example; the person who pushed this was in one of the first cohorts of your LSE Journalism and AI Project.

Establishing rules is a good thing, but at the very beginning it should read: all of this could change. It’s also important to start such a set of rules with a message of encouragement. Any CEO who immediately says we don’t do this and we don’t do that is making a big mistake. The best guidelines are the ones that say: these are our limits, and these are the important questions we should be asking about all applications. Transparency is an important issue: who do I tell what I’m experimenting with? My supervisors, my colleagues, the users? And, of course, a general caution is in order. Currently there are swarms of company representatives out there trying to sell you miracle tools. 90 percent of them are nonsense.

How transparent should you be to the public?

Bloomberg, for example, writes under its texts: This is 100 percent AI-generated. That’s not meant as a warning signal, but as a sign of pride. It’s meant to say: we can handle this technology; you can trust us. I think editors are a bit too worried about that. Today it doesn’t say under texts: “Some of the information came from news agencies” or “The intern helped with the research.” Newsrooms should confidently use transparency notices to show consumers that they want to give them more value. Some brands will continue to have clickbait pages and now fill them with a lot of AI rubbish without disclosing that. But these have probably always produced a lot of garbage.

How does journalism education need to change? Should those who enter the profession because they like to write now be discouraged from doing so because AI will soon be extremely good at it?

The first thing I would say is that not much will change. The qualities and skills we foster in education are deeply human: curiosity, creativity, competencies. In the past 15 years, of course, technical skills have been added. Then again, fundamental things have changed. Today, more than ever, it’s about building relationships with users; it is not just about product development. Journalism is a data-driven, structured process of information delivery. With generative AI, technology fades into the background. You don’t have to learn how to code any longer. But a key skill will be learning how to write excellent prompts. Writing prompts will be like coding, but without the math.

Journalists may feel their core skills challenged by these AI tools, but couldn’t they be a great opportunity to democratize anything that requires language fluency? For example, my students, many of whom are not native speakers, use ChatGPT to edit their resumes.

Maybe we shouldn’t use that big word democratization, but AI could lower barriers and remove obstacles. The lines between disciplines are likely to blur. I used to need data scientists or graphic designers to do certain tasks, now I can do a lot of stuff myself with the right prompts. On the other hand, I’m sceptical. We often underestimate the ways in which inequalities and injustices persist online.

We’ve talked a lot about the opportunities of AI for journalism. What are the biggest risks?

There is, of course, the great dependence on tech companies, and the risk of discrimination. Journalism has to be fact-based and accurate; generative AI can’t deliver that to the same extent. But the biggest risk is probably that the role of the media as an intermediary will continue to dwindle. The Internet has already weakened that role; people can go directly to those offering information. But AI based on language models will answer all questions without people ever encountering the source of the information. This is a massive problem for business models. What kind of regulation will be needed, what commercial agreements, what about copyright? Frankly, I’ve never seen industry executives so worried before.

This is indeed threatening.

It’s existential. First, they said, oh my God, the Internet has stolen our ad revenue. Then they said, oh my God, Twitter has taken attention away from us. And now they’re staring at this thing thinking, why in the world would anyone ever come to my website again? And they have to find an answer to that.

Do journalists have to fear for their jobs?

Media organisations won’t disappear overnight. But there will be more products that look like good journalism. We have a toxic cocktail here that is fascinating but also scary. This cocktail consists of uncertainty, which journalists always love. It also consists of complexity, which is exciting for all intelligent people. The third ingredient is speed, and the old rule applies here: we usually overestimate the short-term consequences and underestimate the long-term effects. Over the 15 years that I’ve been doing this, there have been people who said that 80 percent of media brands will disappear, or that 60 percent of journalists will no longer be needed, or things like that. But today we have more journalism than ever before.

But the dependence on the big tech companies will grow rather than shrink. 

On the one hand, yes. You definitely need friends from this tech world to help you understand these things. On the other hand, suddenly there’s new competition. Google may no longer be this great power we thought it was. New competition always opens opportunities to renegotiate your own position. The media industry must take advantage of these opportunities. I’m on shaky ground here because the JournalismAI initiative is funded by Google. But I think neither Google nor politicians really care about how the media is doing. Probably quite a few politicians would be happy if journalism disappeared. We therefore need to redefine and communicate as an industry what the added value of journalism is for people and society – regardless of previous ideas about journalism as an institution.

Quite a few colleagues in the industry say behind closed doors: “Fortunately, I’m approaching the end of my career; the best years of journalism are behind us.” Would you want to be a journalist again under the current conditions and with the current prospects?

Absolutely. It’s an empirical fact that with all the possibilities today, you can produce better journalism than ever before.


The interview was first published in German by Medieninsider on 9th September 2023 and in English on 14th September 2023.

Humor is constructive – Why laughing about climate change can open paths to solutions

Is it okay to laugh heartily even when the situation is serious? Yes, because it is precisely in these situations that humor can help journalism to make formats interesting even for people who might not care otherwise. A plea for more humor – in everyday life and at work.

Doom-scrolling rarely works. Research shows that journalism on climate change is more likely to have an impact if it not only highlights the many different issues involved, but also offers a few solutions. People who report that they regularly avoid the news would like to see more offerings that give them hope and explain things, rather than having to digest the same drama over and over again. This is also confirmed by the Reuters Institute’s latest Digital News Report. But what about humor? Is it okay to laugh heartily, even when the situation is serious?

One might seek permission posthumously from the great humorists. In the 1942 comedy “To Be or Not to Be,” director Ernst Lubitsch even had his actors joke about concentration camps while World War II raged outside. But it’s not just that joking is allowed – subject to a few rules, of course. Evidence suggests that humor is particularly effective at spurring people to action. This is because jokes convey unpleasant truths in a light way. They hold up a mirror to people without making them feel guilty, and for that very reason invite them to reflect on their behavior.

Laughing at yourself instead of feeling guilty

This also works when it comes to climate change. Matt Winning is a Scottish environmental economist. After work, he often takes to London stages as a stand-up comedian; for a few years now, he has been combining hobby and profession. “We have to make content for people we don’t make content for,” he said in an interview for the report “Climate Journalism That Works: Between Knowledge and Impact.”

His shows, he says, are not so much for environmental professionals, activists, and policy experts as for people who have engaged with climate action only peripherally. He says he is touched when such guests linger after the show to tell him that they have now got rid of their car, given up flying for their summer vacations, or looked into heat pumps. In his book “Hot Mess: What on earth can we do about climate change?” Winning tries to get people to engage with the topic in a playful way.

Maxwell Boykoff and his colleague Beth Osnes are trying out something similar at the University of Colorado Boulder. They initiated the “Inside the Greenhouse” project as a collaboration between the departments of theater and environmental policy. In a 2019 academic article, they published their first findings: a light approach to the issues around climate change, they said, helps students confront their own feelings, especially fears, deal with them creatively, and become better climate communicators.

Why humor can help in the workplace

Professors Jennifer Aaker and Naomi Bagdonas teach humor in management at Stanford Business School. In their book “Humour, Seriously – Why Humour is a Secret Weapon in Work and in Life,” they describe the role that cheerfulness can play in achieving (business) goals. Humor builds community, strengthens problem-solving skills, and fosters resilience. Managers who can laugh at themselves come across as approachable and authentic.

In journalism, young people in particular appreciate humorous formats. It is important to them that content is useful, but they also like it to be fun. A study published in 2021 by the Annenberg School for Communication at the University of Pennsylvania found that young consumers remembered news better when it was presented in a humorous way; more brain regions are activated during laughter, the researchers reported. The rise of TikTok as a channel for news delivery – also documented in the recent Digital News Report – shows how quickly a platform specializing in lighter fare can catch on.

Of course, humor will always be just a complementary form of communication. This is also because only a few people master the craft to perfection. One basic rule, for example: humor works when it punches up or stays among peers. Anyone who makes fun of those perceived to be less powerful is most likely to miss the mark – which is why joking is a tightrope walk for leaders. In any case, what someone laughs at and what jokes he or she makes depends on the cultural context, but it also reveals a lot about character. As Aaker and Bagdonas write, “Humor is a kind of intelligence you can’t fake.”

This text first appeared in German as an Op Ed on Focus online on June 23, 2023. It was translated with www.DeepL.com/Translator and edited. 

Why climate change should be at the heart of modern journalism

The best insurance against misinformation is strong journalism. Professor Alexandra Borchardt explains how climate journalism and the data and verification skills we need to do this properly can transform our newsrooms.

It is often said that an abundance of questionable information drowns out facts. In climate journalism, the strategy should be to do the opposite: make journalism about global warming, its causes, and its remedies so pervasive that everybody everywhere can tell facts and reality from greenwashing and wishful thinking; drown out the misinformation with factual journalism.

This requires rethinking climate journalism: from a “beat” or “specialist subject” to something that frames all our storytelling, particularly business reporting. This is a tough call, of course. Many obstacles hold media organisations back from prioritising investment in climate journalism. Climate issues often lack a newsy angle. They may be complicated and difficult to understand. Coverage may mean expensive travel, and stories can be depressing, politically polarising, and, if the journalism is delivered in a less than spectacular way, may fail to attract big audiences. All of which makes the commitment even harder.

Nevertheless, climate journalism is not optional. Journalists have an ethical responsibility, even a mandate, to inform the public of threats and help them make better decisions for themselves, their children, and their communities. The media has a duty to hold power to account and investigate wrongdoing. And a lot has gone wrong. Far too often, publishers and broadcasters have kept global warming in the silo of science journalism rather than at the heart of wider business and news coverage, even though it has been known for decades that the core issues are primarily economic, with powerful interests at play.

The good news: editors and media managers should know that an investment in climate journalism will generate all sorts of benefits for their organisation. Precisely because climate journalism is so complex, the lessons that newsrooms learn from doing it well can also be applied to other fields. To put it differently: sustainability journalism can make media more sustainable. This is the major conclusion of a report recently published by the European Broadcasting Union: “Climate Journalism That Works – Between Knowledge and Impact.”

It identified seven such benefits:

  • First, climate journalism is about the future. Today’s journalism is too often stuck in the now. It needs to develop strategies to increase its legitimacy in the attention economy. This is especially true for public service media, which is under attack from various political camps. Who else should have a clearer mandate to contribute to the protection of humankind through better journalism? This way, public service media would also meet the needs of younger generations they are struggling to reach. Above all, it is their future.
  • Second, climate protection needs hope. People only act if they believe they can make a difference. In contrast, today’s journalism focuses on conflict, shortcomings, and wrongdoing. Constructive and solutions-oriented journalism offers a way forward. A project called Drive, in which 21 German regional publishers pool their data, recently showed that inspirational pieces were the most valuable digital content when it came to subscriptions.
  • Third, in climate change, it’s what’s done that counts. Today’s journalism still focuses too much on what has been said. The “he said, she said” type of journalism that dominates political reporting is, however, highly unpopular with users. Modern journalism should be based more on data than on quotes. Fact-checking and verification come in right here: both need to become second nature for any journalist. Climate journalism is an excellent training ground.
  • Fourth, climate journalism that works approaches a variety of audiences with respect and in languages they understand. It explains. Today’s journalism often elevates itself above its audience in a know-it-all manner. Journalism must become more diverse and inclusive if it is to reach people, inspire them, and move them to action. This applies to formats and protagonists.
  • Fifth, climate journalism must be rooted in the local. In contrast, today’s journalism too often strives for reach, neglecting the specific needs of communities. To make itself indispensable, journalism should reclaim its importance as a community-building institution. Those who use or even subscribe to a media product often do so because it makes them feel they belong.
  • Sixth, climate journalism must have an impact, otherwise it is meaningless. It should therefore reflect on its own practices and use insights from research, especially from communication sciences and psychology. Today’s journalism does this far too rarely. Journalists tend to be curious but often surprisingly resistant to change. Media companies could gain a lot if their managers and employees developed more of a learning mindset and trained their strategic thinking.
  • Seventh, climate journalism benefits from collaboration. In today’s journalism, old-fashioned competitive thinking still dominates far too often. Yet so much potential could be leveraged through cooperation. This applies to networking within organisations, among desks and regional bureaus, as well as to developing links with external partners from within the industry and beyond. The journalism of the future is collaborative.

This blog post was published in March 2023 by the BBC’s Trusted News Initiative.