AI Labels in Journalism: Why Transparency Doesn’t Always Build Trust

The use of artificial intelligence in journalism requires sensitivity toward the audience. Trust is quickly lost. Transparency is supposed to be the remedy, but labeling could even backfire. This column discusses what newsrooms should do.

In the case of Sports Illustrated, the issue was obvious. When it leaked out that some columns and reports at the renowned American sports magazine had been produced not by clever minds but by large language models, it cost the publication plenty of subscriptions and ultimately cost CEO Ross Levinsohn his job. Newsrooms that use AI-made imitations of journalists are therefore better off doing so openly, with a clear transparency notice. The Cologne-based Express, for example, uses a disclaimer for its avatar reporter Klara Indernach. Yet even with open labeling, things can go wrong. The radio station Off Radio in Krakow, which had proudly announced that it would present its listeners with a program run solely by AI, had to abandon the experiment after a short time. An avatar presenter had conducted a fictitious interview with Wislawa Szymborska, winner of the Nobel Prize in Literature, and asked her about current affairs – even though the poet had died in 2012. The audience was horrified.

Nevertheless, transparency and an open debate about whether, when and to what extent newsrooms use AI to create content are currently seen as a kind of silver bullet in the industry. Most ethical guidelines on the editorial use of AI are likely to contain a paragraph or two on the subject. There is great fear of damaging one’s own brand through careless use of AI and of further undermining media trust, which has been eroding in many places. So it feels safer to point out that this or that summary or translation was generated by language models. How such labels are received by readers and users, however, has hardly been researched – and it is controversial among industry experts. While some are in favor of labels similar to those used for food, others point out that alerts like these could make the public even more suspicious. After all, the words “AI-assisted” could also be read as editors trying to shirk responsibility in case of mistakes.

We also know from other areas that too much transparency can diminish trust just as much as too little. A complete list of all mishaps and malpractice displayed in a hospital foyer would probably deter patients rather than inspire confidence. If you see warnings everywhere, you either flee or stop looking. Rachel Botsman, a leading expert on the subject, defines trust as “a confident relationship with the unknown”. Transparency and control do not strengthen trust, she argues, but rather make it less necessary because they reduce the unknown.

Much more important for building trust are good experiences with the brand or the individuals who represent it. To achieve this, an organization needs to communicate openly about the steps it takes and the processes it has in place to prevent mishaps. In aviation, these include redundant technology, two-person cockpit crews and fixed procedures; in newsrooms, the four-eyes and two-source principles. When people trust a media brand, they simply assume that the company structures and regularly checks all its processes to the best of its knowledge, experience, and competence. If AI is singled out as a special case, the impression could creep in that the newsroom does not quite trust the technology itself.

Felix Simon, a researcher at the Reuters Institute in Oxford, therefore considers general transparency rules just as impractical as the widely used “human in the loop” principle, under which a person must always perform the final check. In a recent essay, he writes that it is a misconception that the public’s trust can be won back with these measures alone.

Many journalists also fail to realize how strongly their organization’s reporting on artificial intelligence shapes their audience’s relationship with it. Anyone who constantly reads and hears in interviews, essays and podcasts what devilish stuff humanity is being exposed to will hardly be open-minded about the technology when the otherwise esteemed newsroom suddenly starts placing AI notices everywhere. Unsurprisingly, respondents in surveys tend to be skeptical when asked about the use of AI in journalism – not least as a consequence of the media’s own reporting.

It is therefore important to strengthen reporters’ skills so that they approach the topic of AI in a multi-layered way and provide constructive insights instead of meandering between hype and doomsday scenarios. The humanization of AI – whether through avatar reporters or simply through word choice – does not exactly help give the audience a realistic picture of what language and computing models can and cannot do.

People’s impression of AI will also be strongly shaped by their own experiences with it. Even today, there is hardly a student who does not use tools such as ChatGPT from time to time. Those who program for a living also make use of these lightning-fast models, and AI is increasingly becoming an everyday tool for office workers, much like spell checking, Excel calculations or voice input. However, it will become less and less obvious which AI sits behind which tool, as tech providers bundle it into the service package like the autofocus in a smartphone camera. AI labels could therefore soon seem like a relic from a bygone era.

At a recent conference in Brussels hosted by the Washington-based Center for News, Technology & Innovation, one participant suggested that media organizations should consider labeling man-made journalism instead. What at first sounds like a joke makes a serious point: the industry needs to work out quickly how journalism can retain its uniqueness and relevance in a world of rapidly scaling automated content production. Otherwise, it will soon have bigger problems than the question of how to label AI-supported journalism in individual cases.

This text was published in German in the industry publication Medieninsider, translated by DeepL and edited by the author – who secretly thinks that this disclaimer might make her less vulnerable to criticism of her mastery of the English language.

Nieman Lab Prediction 2024: Everyone in the Newsroom Gets Training

Up to now, the world’s newsrooms have been populated by roughly two phenotypes. On the one hand, there have been the content people (many of whom would never call their journalism “content,” of course). These include seasoned reporters, investigators, and commentators who spend their time diving deep into subjects, researching, analyzing, and cultivating sources, and who usually don’t want to be bothered by “the rest.”

On the other hand, there has been “the rest.” These are the people who understand formats, channel management, metrics, editing, products, and audiences, and are ever on the lookout for new trends to help the content people’s journalism thrive and sell. But with the advent of generative AI, taking refuge in the old and surprisingly stable world of traditional journalism roles will not be an option any longer. Everyone in the newsroom has to understand how large language models work and how to use them — and then actually use them. This is why 2024 will be the year when media organizations will get serious about education and training.

“We have to bridge the digital divide in our newsrooms,” says Anne Lagercrantz, deputy CEO of Swedish public broadcaster SVT. This requires educating and training all staff, even those who have so far shied away from keeping up with what is new in the industry. While in the past it was perfectly acceptable for, say, an investigative reporter not to know the first thing about SEO, TikTok algorithms, or newsletter open rates, now everyone involved with content needs to be aware of the capabilities, deficiencies, and mechanics of large language models, reliable fact-checking tools, and the legal and ethical responsibilities that come with their use. Additionally, AI has the potential to transform good researchers and reporters into outstanding ones, serving as a powerful extension of the human brain. Research from Harvard Business School suggests that consultants who used AI extensively finished their tasks about 25% faster and outperformed their peers by 40% in quality. It is in the interest of everyone, individuals and their employers alike, that no one falls behind.

But making newsrooms fit for these new challenges will be demanding. First, training requires resources and time, yet leadership might be reluctant to free up both, or be tempted to invest in flashy new tools instead. Many managers still fail to understand that digital transformation is more a cultural challenge than a technological one.

Second, training needs trainers who know their stuff. These are rare finds at a time when AI is evolving as rapidly as it is over-hyped. You will see plenty of consultants out there, of course, but it will be hard to tell those who really know things from those who just pretend to in order to get a share of the pie. Be wary when someone flashes something like “the ten must-have AI tools,” warns Charlie Beckett, founder of the JournalismAI project at the London School of Economics.

Third, training can be a futile exercise when it is not paired with doing. With AI in particular, the goal should be to establish a culture of experimentation, collaboration, and transparency rather than turning training into a mechanical exercise. Technological advances will come much faster than even the most proficient trainer could foresee.

Establishing a learning culture throughout the newsroom should therefore be a worthwhile goal for 2024 and an investment that will pay off in other areas as well. Anyone infected with the spirit of testing and learning will likely stretch their mind in areas beyond AI, from product development to climate journalism. So many of today’s newsroom challenges require constant adaptation, working with data, and building connections with audiences who are more demanding, volatile, and impatient than they used to be. It is important that every journalist embraces at least some responsibility for the impact of their journalism.

It is also time for those editorial innovators who tend to run into each other at the same conferences to open their circles to the entire newsroom. Some might discover that a few of their older colleagues of the content phenotype could teach them a thing or two as well — for example, how to properly use a telephone. In an age when artificial fabrication of text, voice, or image documents is predicted to evolve at a rapid pace, the comeback of old-style research methods and verification techniques might become a thing. But let’s leave this as a prediction for 2025.

This post was published in the Journalism Predictions 2024 series of Harvard’s Nieman Lab on December 7, 2023.

Job Title: Robot Reporter – How Automation Could Help Newsrooms Survive

This text was originally written in German for Hamburg Media School. United Robots translated and published it on Medium in April 2020.