The use of artificial intelligence in journalism requires sensitivity toward the audience. Trust is quickly lost. Transparency is supposed to be the remedy, but labeling may even backfire. This column discusses what newsrooms should do.
In the case of Sports Illustrated, the issue was obvious. When it leaked that some columns and reports at the renowned American sports magazine had been produced not by clever minds but by large language models, it cost the publication plenty of subscriptions and, ultimately, CEO Ross Levinsohn his job. Newsrooms that deploy AI-generated imitations of journalists are therefore better off doing so openly, with a clear transparency notice. The Cologne-based Express, for example, uses a disclaimer for its avatar reporter Klara Indernach. Yet even when stated openly, things can go wrong. The radio station Off Radio in Krakow, which had proudly announced that it would present its listeners with a program run solely by AI, had to abandon the experiment after a short time. An avatar presenter had conducted a fictitious interview with Wislawa Szymborska, the Nobel laureate in literature, asking her about current affairs, even though the author had died in 2012. The audience was horrified.
Nevertheless, transparency and an open debate about whether, when, and to what extent newsrooms use AI to create content are currently seen as something of a silver bullet in the industry. Most ethical guidelines on the editorial use of AI likely contain a paragraph or two on the subject. The fear is great that careless use of AI will damage one’s own brand and further undermine media trust, which has been eroding in many places. So it feels safer to point out that this or that summary or translation was generated by a language model. How such notices are received by readers and users, however, has hardly been researched, and the question is contested among industry experts. While some favor labels similar to those used for food, others point out that such alerts could make the public even more suspicious. After all, the words “AI-assisted” could also be read as editors trying to shirk responsibility in case of mistakes.
We know from other areas, too, that too much transparency can diminish trust just as much as too little. A complete list of all mishaps and malpractice displayed in a hospital foyer would probably deter patients rather than inspire confidence. Those who see warnings everywhere either flee or stop looking. Rachel Botsman, a leading expert on the subject, defines trust as “a confident relationship with the unknown”. Transparency and control do not strengthen trust, she argues; they make it less necessary, because they reduce the unknown.
Far more important for building trust are good experiences with a brand or with the individuals who represent it. To that end, an organization needs to communicate openly about the steps it takes and the processes it has in place to prevent mishaps. In aviation, these include redundant technology, a two-person cockpit, and fixed procedures; in newsrooms, the four-eyes principle and the two-source rule. When people trust a media brand, they simply assume that the company structures and regularly reviews all of its processes to the best of its knowledge, experience, and competence. If AI is singled out as a special case, the impression could creep in that the newsroom does not quite trust the technology itself.
Felix Simon, a researcher at the Reuters Institute in Oxford, therefore considers general transparency rules to be just as impractical as the widely invoked “human in the loop” principle, according to which a person must always perform the final check. In a recent essay, he writes that it is a misconception that the public’s trust can be won back with these measures alone.
Many journalists also fail to realize how strongly their organization’s reporting on artificial intelligence shapes their audience’s relationship with it. Anyone who constantly reads and hears in interviews, essays, and podcasts what diabolical technology humanity is being exposed to will hardly be open-minded when the otherwise esteemed newsroom suddenly starts placing AI notices everywhere. Unsurprisingly, survey respondents tend to be skeptical when asked about the use of AI in journalism, not least as a consequence of the media’s own reporting.
It is therefore important to strengthen reporters’ skills so that they approach the topic of AI in a nuanced way and provide constructive insights instead of oscillating between hype and doomsday scenarios. The humanization of AI, whether through avatar reporters or merely through word choice, does little to give the audience a realistic picture of what language and computing models can and cannot do.
People’s impression of AI will also be strongly shaped by their own experiences with it. Hardly any student today does not use tools such as ChatGPT at least occasionally. Even professional programmers make use of these lightning-fast models, and AI is increasingly becoming an everyday tool for office workers, much like spell checking, Excel calculations, or voice input. It will, however, become less and less obvious which AI sits behind which tool, as tech providers bundle it into their services the way autofocus is bundled into a smartphone camera. AI labels could therefore soon seem like a relic of a bygone era.
At a recent conference in Brussels hosted by the Washington-based Center for News, Technology & Innovation, one participant suggested that media organizations should instead consider labeling human-made journalism. What sounds like a joke at first has a serious point. The industry needs to figure out quickly how journalism can retain its uniqueness and relevance in a world of rapidly scaling automated content production. Otherwise, it will soon have bigger problems than the question of how to flag AI-supported journalism in individual cases.
This text was published in German in the industry publication Medieninsider, translated by DeepL, and edited by the author, who secretly suspects that this disclaimer might make her less vulnerable to criticism of her mastery of the English language.