AI Labels in Journalism: Why Transparency Doesn’t Always Build Trust

The use of artificial intelligence in journalism requires sensitivity toward the audience. Trust is quickly lost. Transparency is supposed to remedy this – but labeling could even backfire. This column discusses what to do.

In the case of Sports Illustrated, the issue was obvious. When it leaked out that some columns and reports at the renowned American sports magazine had been produced not by clever minds but by large language models, it cost the publication plenty of subscriptions and ultimately CEO Ross Levinsohn his job. Newsrooms that use journalist imitations made by artificial intelligence are therefore better off doing so openly; a clear transparency notice is needed. The Cologne-based Express, for example, uses a disclaimer for its avatar reporter Klara Indernach. But even with open disclosure, things can go wrong. The radio station Off Radio in Krakow, which had proudly announced that it would present its listeners with a program run solely by AI, had to abandon the experiment after a short time. An avatar presenter had conducted a fictitious interview with Nobel laureate in literature Wisława Szymborska and asked her about current affairs – only the author had died in 2012. The audience was horrified.

Nevertheless, transparency and an open debate about whether, when and to what extent newsrooms use AI when creating content are currently seen as a kind of silver bullet in the industry. Most ethical guidelines on the editorial use of AI are likely to contain a paragraph or two on the subject. There is a great fear of damaging one’s own brand through careless use of AI and further undermining media trust, which has been eroding in many places. So it feels safer to point out that this or that summary or translation was generated by language models. How readers and users receive this, however, has hardly been researched – and it is also controversial among industry experts. While some favor labels similar to those used on food, others point out that such alerts could make the public even more suspicious. After all, the words “AI-assisted” could also be read as editors trying to shirk responsibility in case of mistakes.

We also know from other areas that too much transparency can diminish trust just as much as too little. A complete list of all mishaps and malpractice displayed in the foyer of a hospital would probably deter patients rather than inspire confidence. If you see a warning everywhere you look, you either flee or stop looking. Rachel Botsman, a leading expert on the subject, defines trust as “a confident relationship with the unknown”. Transparency and control do not strengthen trust, she argues, but rather make it less necessary because they reduce the unknown.

Much more important for building trust are good experiences with the brand or the individuals who represent it. To achieve this, an organization needs to communicate openly about the steps it takes and the processes it has in place to prevent mishaps. In aviation, this includes redundant technology, a two-person cockpit and fixed procedures; in newsrooms, the four-eyes principle and the two-source rule. When people trust a media brand, they simply assume that the company structures and regularly checks all its processes to the best of its knowledge, experience, and competence. If AI is highlighted as a special case, the impression could creep in that the newsroom doesn’t quite trust the technology itself.

Felix Simon, a researcher at the Reuters Institute in Oxford, therefore considers general transparency rules to be just as impractical as the widely used “human in the loop” principle, which holds that a person must always do the final check. In a recent essay, he writes that it is a misconception that the public’s trust can be won back with these measures alone.

Many journalists also do not realize how strongly their organization’s reporting on artificial intelligence shapes their audience’s relationship with it. Anyone who constantly reads and hears in interviews, essays and podcasts about what kind of devilish stuff humanity is being exposed to will hardly be open-minded about the technology if the otherwise esteemed newsroom suddenly starts to place AI references everywhere. As expected, respondents in surveys tend to be skeptical when asked about the use of AI in journalism – not least as a consequence of the media’s own reporting.

It is therefore important to strengthen the skills of reporters so that they approach the topic of AI in a multi-layered way and provide constructive insights instead of meandering between hype and doomsday scenarios. The humanization of AI – whether through avatar reporters or simply in the choice of words – does not exactly help give the audience a realistic picture of what language and computing models can and cannot do.

People’s impression of AI will also be strongly influenced by their own experiences with it. Even today, there is hardly anyone among students who does not use tools such as ChatGPT from time to time. Even those who program for a living make use of the lightning-fast calculation models, and AI is increasingly becoming an everyday tool for office workers, just like spell checking, Excel calculations or voice input. However, it will become less and less obvious which AI is behind which tools, as tech providers will include them in the service package like the autofocus when taking a picture with a smartphone. AI labels could therefore soon seem like a relic from a bygone era.  

At a recent conference in Brussels hosted by the Washington-based Center for News, Technology & Innovation, one participant suggested that media organizations should consider labeling man-made journalism. What at first sounds like a joke actually has a serious background. The industry needs to quickly realize how journalism can retain its uniqueness and relevance in a world of rapidly scaling automated content production. Otherwise, it will soon have bigger problems than the question of how to characterize AI-supported journalism in individual cases.   

This text was published in German in the industry publication Medieninsider, translated by DeepL and edited by the author – who secretly thinks that this disclaimer might make her less vulnerable to criticism of her mastery of the English language.

Nieman Lab Prediction 2024: Everyone in the Newsroom Gets Training

Up to now, the world’s newsrooms have been populated by roughly two phenotypes. On the one hand, there have been the content people (many of whom would never call their journalism “content,” of course). These include seasoned reporters, investigators, or commentators who spend their time deep diving into subjects, research, analysis, and cultivating sources and usually don’t want to be bothered by “the rest.”

On the other hand, there has been “the rest.” These are the people who understand formats, channel management, metrics, editing, products, and audiences, and are ever on the lookout for new trends to help the content people’s journalism thrive and sell. But with the advent of generative AI, taking refuge in the old and surprisingly stable world of traditional journalism roles will not be an option any longer. Everyone in the newsroom has to understand how large language models work and how to use them — and then actually use them. This is why 2024 will be the year when media organizations will get serious about education and training.

“We have to bridge the digital divide in our newsrooms,” says Anne Lagercrantz, deputy CEO of Swedish public broadcaster SVT. This requires educating and training all staff, even those who until now have shied away from keeping up with what is new in the industry. While in the past it was perfectly acceptable for, say, an investigative reporter not to know the first thing about SEO, TikTok algorithms, or newsletter open rates, now everyone involved with content needs to be aware of the capabilities, deficiencies, and mechanics of large language models, reliable fact-checking tools, and the legal and ethical responsibilities that come with their use. Additionally, AI has the potential to transform good researchers and reporters into outstanding ones, serving as a powerful extension of the human brain. Research from Harvard Business School suggested that consultants who extensively used AI finished their tasks about 25% faster and outperformed their peers by 40% in quality. It will be in the interest of everyone, individuals and their employers alike, that no one falls behind.

But making newsrooms fit for these new challenges will be demanding. First, training requires resources and time. But leadership might be reluctant to free up both or tempted to invest in flashy new tools instead. Many managers still fall short of understanding that digital transformation is more a cultural challenge than it is a tech challenge.

Second, training needs trainers who know their stuff. These are rare finds at a time when AI is evolving as rapidly as it is over-hyped. You will see plenty of consultants out there, of course. But it will be hard to tell those who really know things from those who just pretend in order to get a share of the pie. Be wary when someone flashes something like “the ten must-have AI tools,” warns Charlie Beckett, founder of the JournalismAI project at the London School of Economics. Third, training can be a futile exercise when it is not paired with doing. With AI in particular, the goal should be to establish a culture of experimentation, collaboration, and transparency rather than making it a mechanical exercise. Technological advances will come much faster than the most proficient trainer could ever foresee.

Establishing a learning culture around the newsroom should therefore be a worthwhile goal for 2024 and an investment that will pay off in other areas as well. Anyone who is infected with the spirit of testing and learning will likely stretch their minds in areas other than AI, from product development to climate journalism. So many of today’s challenges for newsrooms require constant adaptation, working with data, and building connections with audiences who are more demanding, volatile, and impatient than they used to be. It is important that every journalist embraces at least some responsibility for the impact of their journalism.

It is also time that those editorial innovators who tend to run into each other at the same conferences open their circles to include all of the newsroom. Some might discover that a few of their older colleagues of the content-creator-phenotype could teach them a thing or two as well — for example, how to properly use a telephone. In an age when artificial fabrication of text, voice, or image documents is predicted to evolve at a rapid pace, the comeback of old-style research methods and verification techniques might become a thing. But let’s leave this as a prediction for 2025.

This post was published in Harvard Nieman Lab’s Journalism Predictions 2024 series on 7th December 2023.

Interview with Prof. Charlie Beckett on AI: “Frankly, I’ve never seen industry executives so worried before”

LSE professor Charlie Beckett, founder and director of the JournalismAI project, talks about what AI means for journalism, how to tell good advice from rubbish, and how the news industry is adjusting to the new challenges.

Medieninsider: Since the launch of ChatGPT, new AI applications relevant to journalism have been announced almost every day. Which one intrigues you the most?

Charlie Beckett: A small newsroom in Malawi that is participating in our AI course for small newsrooms recently built a generative AI-based tool that is practically a whole toolbox: it can be used to simplify newsroom workflows. The idea is to quickly process information and cast it into formats – a kind of super-efficient editorial manager. It’s not one of those sensational applications that help discover deepfakes or unearth the next Watergate as an investigative tool. But I think it’s great: an African newsroom that quickly develops something that makes day-to-day operations easier. I think the immediate future lies in these more mechanical applications. That often gets lost in the media hype. People would rather discuss topics like killer robots.

Do you think small newsrooms will benefit most from AI, or will the big players be the winners once again?

The answer is: I don’t know! So far, when it comes to innovation, large newsrooms have benefited the most because they can invest more. But if small newsrooms can find a few tools to help them automate newsletters or analyze data for an investigative project, for example, it can help them tremendously. A ten percent gain in efficiency can be an existential question for them. For local newsrooms, AI could prove to be a bridge technology. At least that’s what I hear in conversations.

Because they can do more with fewer people? There is this example from Sweden of a tool that automatically evaluates real estate prices; it has been successful in generating subscriptions, because readers love that kind of stuff – just like weather and traffic reports.

At least, that’s what editors at small newsrooms hope. They say they could use AI to produce at least sufficient content to justify the existence of their brand. Reporters could then focus on researching real local stories. We’ll see if that happens. But AI will definitely shape the industry at least as much as online journalism and the rise of social media have.

AI seems to unleash enthusiasm and a spirit of experimentation in the industry, unlike back in the early days of online journalism, when many were sceptical.

The speed of the development is almost breathtaking. In the beginning, we looked at artificially generated images and thought, well, that looks a bit wobbly. Three months later, there were already impressively realistic images. We’re moving through this hype cycle right now. No matter which newsroom in the world I talk to, everyone is at least playing around with AI; by the end of the year at the latest, many will have implemented something.

But you say it’s too early to make predictions?

We’re seeing an extremely fluid development right now. Advertisers don’t yet know what to do, and in the relationship between platform groups and publishers, a lot is out in the open again. In fact, I’ve never experienced anything like this before. It’s clear to everyone that we’re facing a big change.

But isn’t it risky to just wait and see?

Automation is still very unstable. Setting up new processes at the current level would be like building a house on a volcano. The right process is: let employees experiment, learn, and definitely think about potential impacts. If you’re asking me now, what are the ten tools I need to know, that’s the wrong question.

That’s exactly what I wanted to ask, of course. That’s what a lot of people want to know at the moment, after all. And everyone wants to be the first to publish the ultimate AI manual for newsrooms. So, do you have to be suspicious when someone confidently claims to have solutions?

We are currently collecting information on who is using which tools and what experiences they are having with them. But we are not making recommendations about which tool is best. I just spoke to the CEO of a major broadcaster. They are doing it this way: in addition to regular meetings and information sessions, they take half an hour a day to simply play around with new tools. If you’re a CEO, of course you must budget for AI. But it should be flexible.

Many newsrooms are currently establishing rules for the responsible use of AI. Bayerischer Rundfunk is one example; the person who pushed this was in one of the first cohorts of your LSE Journalism and AI Project.

Establishing rules is a good thing, but at the very beginning it should read: all of this could change. It’s also important to start such a set of rules with a message of encouragement. Any CEO who immediately says “we don’t do this, and we don’t do that” is making a big mistake. The best guidelines are the ones that say: these are our limits, and these are the important questions we should ask about all applications. Transparency is an important issue: who do I tell what I’m experimenting with? My supervisors, my colleagues, the users? And, of course, general caution is in order. Currently there are swarms of company representatives out there trying to sell you miracle tools. 90 percent of them are nonsense.

How transparent should you be to the public?

Bloomberg, for example, writes under its texts: this is 100 percent AI-generated. That’s not meant as a warning signal, but as a sign of pride. It’s meant to say: we can handle this technology; you can trust us. I think editors are a bit too worried about that. Today it doesn’t say under texts: “Some of the information came from news agencies” or “The intern helped with the research.” Newsrooms should confidently use transparency notices to show consumers that they want to give them more value. Some brands will continue to run clickbait pages and now fill them with a lot of AI rubbish without disclosing it. But these have probably always produced a lot of garbage.

How does journalism education need to change? Should those who enter the profession because they like to write now be discouraged from doing so because AI will soon be extremely good at it?

The first thing I would say is that not much will change. The qualities and skills we foster in education are deeply human: curiosity, creativity, competence. In the past 15 years, of course, technical skills have been added. Then again, fundamental things have changed. Today, more than ever, it’s about building relationships with users, not just about product development. Journalism is a data-driven, structured process of information delivery. With generative AI, technology fades into the background. You don’t have to learn how to code any longer. But a key skill will be learning how to write excellent prompts. Writing prompts will be like coding, but without the math.

Journalists may feel their core skills challenged by these AI tools, but couldn’t they be a great opportunity to democratize anything that requires language fluency? For example, my students, many of whom are not native speakers, use ChatGPT to edit their resumes.

Maybe we shouldn’t use that big word democratization, but AI could lower barriers and remove obstacles. The lines between disciplines are likely to blur. I used to need data scientists or graphic designers to do certain tasks, now I can do a lot of stuff myself with the right prompts. On the other hand, I’m sceptical. We often underestimate the ways in which inequalities and injustices persist online.

We’ve talked a lot about the opportunities of AI for journalism. What are the biggest risks?

There is, of course, the great dependence on tech companies, and the risk of discrimination. Journalism has to be fact-based and accurate; generative AI can’t deliver that to the same extent. But the biggest risk is probably that the role of the media as an intermediary will continue to dwindle. The Internet has already weakened that role; people can go directly to those offering information. But AI based on language models will answer all questions without people ever encountering the source of the information. This is a massive problem for business models. What kind of regulation will be needed, what commercial agreements, what about copyright? Frankly, I’ve never seen industry executives so worried before.

This is indeed threatening.

It’s existential. First, they said, oh my God, the Internet has stolen our ad revenue. Then they said, oh my God, Twitter has taken attention away from us. And now they’re staring at this thing thinking, why in the world would anyone ever come to my website again? And they have to find an answer to that.

Do journalists have to fear for their jobs?

Media organisations won’t disappear overnight. But there will be more products that will look like good journalism. We have a toxic cocktail here that is fascinating, but also scary. This cocktail consists of uncertainty, which journalists always love. It also consists of complexity, which is exciting for all intelligent people. The third ingredient is speed, and the old rule applies here: we usually overestimate the short-term consequences and underestimate the long-term effects. Over the 15 years that I’ve been doing this, there have been people who have said, 80 percent of media brands will disappear, or 60 percent of journalists will no longer be needed or things like that. But today we have more journalism than ever before.

But the dependence on the big tech companies will grow rather than shrink. 

On the one hand, yes. You definitely need friends from this tech world to help you understand these things. On the other hand, suddenly there’s new competition. Google may no longer be the great power we thought it was. New competition always opens opportunities to renegotiate your own position. The media industry must take advantage of these opportunities. I’m on shaky ground here because the JournalismAI initiative is funded by Google. But I think neither Google nor politicians really care about how the media is doing. Probably quite a few politicians would be happy if journalism disappeared. We therefore need to redefine and communicate as an industry what the added value of journalism is for people and society – regardless of previous ideas about journalism as an institution.

Quite a few colleagues in the industry say behind closed doors, “Fortunately, I’m approaching the end of my career; the best years of journalism are behind us.” Would you want to be a journalist again under the current conditions and with the current prospects?

Absolutely. It’s an empirical fact that with all the possibilities today, you can produce better journalism than ever before.

The interview was first published in German by Medieninsider on 9th September 2023 and in English on 14th September 2023.

Don’t mind the gap: Automated translation could revolutionize journalism – but how?

Newsrooms can fight “fake news” by identifying it, warning about it and correcting it. But they can also fight it with so much trustworthy, factual and well researched journalism that it drowns out the lies. For most of them it’s not an either/or decision, of course; they try to do both. The European Broadcasting Union has recently unveiled a project that caters to the latter: It wants to deliver class en masse and will do so by scaling content across countries and languages using automated translation.

The project promises quite a bit: starting in July, ten public broadcasters from Europe will feed in particularly good pieces on globally important topics such as Covid-19, climate change and migration, which will then be translated by artificial intelligence and made available across Europe. In an eight-month pilot phase, 14 institutions had shared more than 120,000 articles this way. This worked so well that the EU is now helping with a grant. So in the future, citizens could benefit not only from more reliable information, but also from more diversity, if things go well.

In fact, automated translations could revolutionize journalism. If you gave up on software translation a while ago because you found the results rather unsatisfactory, you might want to try it again. Artificial intelligence that works on the principle of deep learning now translates texts like this one into English within seconds. With a little editing, they read – this needs to be said – much better than what one used to get back from translators who knew a foreign language but not necessarily the journalistic form. The AI products are, in the truest sense of the word, frighteningly good.

Admittedly, robots work reliably in only a few languages, but they are learning as we read. And the result will shape journalism – in different directions. On the one hand, the tools open up new possibilities for publishers. Whereas until now only newsrooms from English-speaking countries were able to offer their journalism worldwide, in the future everyone for whom it makes sense commercially or as a matter of mission will be able to do so. Not every media company will be able to turn itself into a New York Times or a Guardian, but the options for Europe-wide news portals are growing rapidly. At the start-up Forum.eu, for example, AI now handles 60 percent of the total translation work, according to co-founder Paul Ostwald’s estimate. The platform makes quality journalism from different countries accessible all over Europe.

Via automated translation, editors could also more easily reach people in their own countries who speak other native languages – for example, hard-to-reach migrant communities. And international research should become much easier if reporters gain better access to original documents this way. The whole thing works not only for written but also for spoken material (which still makes for funny subtitles on TV).

However, newsrooms have already realized that there is not only huge potential for expansion, but also for savings. The Reuters news agency has long been redeploying resources, for example from its German-language service to parts of the world where citizens are in greater need of journalistic scrutiny. And of course this makes sense: instead of sending a German- and an English-speaking colleague to the same press conference in Berlin, an additional colleague in, say, the Philippines can create real added value.

However, it is precisely at this point that things become critical. After all, language is only ever a packaging for content that arises in the context of a culture. The exact same fact can read completely differently depending on who is describing it. When, for example, star conductor Simon Rattle recently announced that he would be joining the Bavarian Radio Symphony Orchestra as chief conductor in 2023, German culture reporters were thrilled. Reading the British Guardian on the same day, one learned that Rattle had extended his contract with the London Symphony Orchestra until 2022, oh, and at some point he would go to Munich. One event, two reporters, two worlds, a translation would not have helped in this case.

A translation tool will not replace a foreign correspondent – but it will make his or her work easier. This is bad news for all those fixers and local journalists around the globe who make sure that journalists get the right information, contacts and access without which they would often be lost on foreign territory. If they are not needed as translators any longer, they might soon be out of their jobs. Already, only a few newsrooms can afford a network of reporters far from home. Easier access to all the world’s languages is likely to accelerate this development – but it has not caused it. 

As with many things that new technology offers, there is one temptation that has to be resisted: the idea that you have to do whatever you can do. Translating content via AI just because it works is not a strategy. What audience do you want to reach with what content, and what do you want it to achieve? Do you have a mission, a business model, or just fun with it? There they are again, those questions that no AI can answer. Meanwhile, beware: AI is increasingly used to translate “fake news” as well.

This column was published on 4th February 2021 with Hamburg Media School in German, then translated with www.DeepL.com/Translator (free version) and edited.

Job Title: Robot Reporter – How Automation Could Help Newsrooms Survive

This text was originally written in German for Hamburg Media School. United Robots translated and published it on Medium in April 2020.

Get out of the office and talk to people!

Every year Nieman Lab at Harvard University asks journalists and journalism researchers around the globe about trends in the industry and what they predict for the year to come. This is what I envisioned for 2020:

News deserts were yesterday. In the year to come, journalism will rediscover the communities it’s meant to serve.

Several factors will contribute to this. One is the ever more urgent need for media organizations to engage with real people in the real world. Journalism has to regain the trust of the citizens it’s made for. And trust develops best through direct engagement. It works particularly well if you can see that the person on the other side is a human being like yourself, making an honest effort to do a difficult, sometimes risky job that’s not even tremendously lucrative.

The other factor is that international journalism has become a winner-take-all environment. For a while, everyone was enthralled with The New York Times and its progress in growing revenue through digital subscriptions, or The Washington Post with its reputation for being at the forefront of tech innovation. But the glamour has worn off. Now even comparatively big news organizations have realized that their successes are not replicable. They’re not the Times or the Post; they can’t build an international audience and invest in all the tech others are craving. They have come to understand that there’s no one-size-fits-all solution — just bits and pieces one can adapt to one’s own needs.

The way forward is to make the best use of the unique position each organization finds itself in. And in many cases, this is the local environment. It’s the place where your audience lives that you’re best equipped to listen to, to engage with, and to serve — the citizens whose lives you can have a real impact on. It’s the place for community building, for creating shared debates and experiences.

While many traditional local news organizations are still struggling with a lack of revenue and resources, there’s also some hope that serving one’s communities will become easier and cheaper if the right approaches are used. First, amid the over-abundance of information, it is becoming more and more acceptable to focus on what one can do best and leave out the rest. Modern news organizations don’t have to be “the paper of record” any longer, because people are recording everything all the time and search engines help them find much of the information they need anyway. Consequently, local newsrooms can afford to develop strategies that center on the needs of their audiences.

Second, there are now more formats than ever available to help build a relationship with these audiences, from newsletters or podcasts with a personal touch to reader events. Some of these formats also help new market entrants: news startups that don’t have to launch as a full-blown effort with a large newsroom, but can instead start with a newsletter that builds engagement and loyalty.

Third, there will be AI solutions and automated news production to cater to the appetite for data-based, locally relevant stories, like the development of real estate prices or updates to local weather forecasts. Fourth, we will see a lot of investment along these lines, particularly since big players like Google and Facebook have discovered local markets as grounds for support, as have foundations.

Hopefully, the focus on local journalism will also bring more talent back into the equation. The future of journalism will lie in unique quality reporting and research. A generation of young journalists was raised in front of computer screens, copying and pasting stories for quick wins in clicks and reach. Many are now savvy in SEO and a variety of storytelling formats. But this has kept them from learning the ropes of in-depth investigations, which require patience, persistence, and communication skills, because they’re about building trust with sources. Picking up the phone and meeting people away from the office might experience a revival. By the way, a video is best shot at the scene, not at the desk.

A new focus on local journalism will bring it back to its core. Let the international winners grab the high-hanging fruit. The low-hanging ones could be right there in front of your doorstep.

This text was published by Nieman Lab on January 3, 2020.