Interview with Prof. Charlie Beckett on AI: “Frankly, I’ve never seen industry executives so worried before”

LSE professor Charlie Beckett, founder and director of the JournalismAI project, talks about what AI means for journalism, how to tell good advice from rubbish, and how the news industry is adjusting to the new challenges.

Medieninsider: Since the launch of ChatGPT, new AI applications relevant to journalism have been announced almost every day. Which one intrigues you the most?

Charlie Beckett: A small newsroom in Malawi that is participating in our AI course for small newsrooms recently built a generative AI-based tool that is practically a whole toolbox. It can be used to simplify newsroom workflows. The idea is to quickly process information and cast it into formats, a kind of super-efficient editorial manager. It’s not one of those sensational applications that help discover deep fakes or unearth the next Watergate as an investigative tool. But I think it’s great: an African newsroom that quickly develops something that makes day-to-day operations easier. I think the immediate future lies in these more mechanical applications. That often gets lost in the media hype. People would rather discuss topics like killer robots.

Do you think small newsrooms will benefit most from AI, or will the big players be the winners once again?

The answer is: I don’t know! So far, when it comes to innovation, large newsrooms have benefited the most because they can invest more. But if small newsrooms can find a few tools to help them automate newsletters or analyze data for an investigative project, for example, it can help them tremendously. A ten percent gain in efficiency can be an existential question for them. For local newsrooms, AI could prove to be a bridge technology. At least that’s what I hear in conversations.

Because they can do more with fewer people? There is this example from Sweden of a tool that automatically evaluates real estate prices; it has been successful in generating subscriptions because readers love that kind of stuff – just like weather and traffic reports.

At least, that’s what editors at small newsrooms hope. They say they could use AI to produce at least sufficient content to justify the existence of their brand. Reporters could then focus on researching real local stories. We’ll see if that happens. But AI will definitely shape the industry at least as much as online journalism and the rise of social media have.

AI seems to unleash enthusiasm and a spirit of experimentation in the industry, unlike back in the early days of online journalism, when many were sceptical.

The speed of the development is almost breathtaking. In the beginning, we looked at artificially generated images and thought, well, that looks a bit wobbly. Three months later, there were already impressively realistic images. We’re moving through this hype cycle right now. No matter which newsroom in the world I talk to, everyone is at least playing around with AI; by the end of the year at the latest, many will have implemented something.

But you say it’s too early to make predictions?

We’re seeing an extremely fluid development right now. Advertisers don’t yet know what to do, and in the relationship between platform groups and publishers, a lot is up in the air again. In fact, I’ve never experienced anything like this before. It’s clear to everyone that we’re facing a big change.

But isn’t it risky to just wait and see?

Automation is still very unstable. Setting up new processes at the current level would be like building a house on a volcano. The right process is: let employees experiment, learn, and definitely think about potential impacts. If you ask me now, “What are the ten tools I need to know?”, that’s the wrong question.

That’s exactly what I wanted to ask, of course. That’s what a lot of people want to know at the moment, after all. And everyone wants to be the first to publish the ultimate AI manual for newsrooms. So, do you have to be suspicious when someone confidently claims to have solutions?

We are currently collecting information on who is using which tools and what experiences people are having with them. But we are not making recommendations about which tool is best. I just spoke to the CEO of a major broadcaster. They are doing it this way: in addition to regular meetings and information sessions, they take half an hour a day to simply play around with new tools. If you’re a CEO, of course you must budget for AI. But that budget should be flexible.

Many newsrooms are currently establishing rules for the responsible use of AI. Bayerischer Rundfunk is one example; the person who pushed this was in one of the first cohorts of your LSE Journalism and AI Project.

Establishing rules is a good thing, but the very beginning should read: all of this could change. It’s also important to start such a set of rules with a message of encouragement. Any CEO who immediately says “we don’t do this, and we don’t do that” is making a big mistake. The best guidelines are the ones that say: these are our limits, and these are the important questions we should be asking about all applications. Transparency is an important issue: who do I tell what I’m experimenting with? My supervisors, my colleagues, the users? And, of course, general caution is in order. Currently, there are swarms of company representatives out there trying to sell you miracle tools. 90 percent of them are nonsense.

How transparent should you be to the public?

Bloomberg, for example, writes under its texts: this is 100 percent AI-generated. That’s not meant as a warning signal, but as a sign of pride. It’s meant to say: we can handle this technology; you can trust us. I think editors are a bit too worried about that. Today, texts don’t carry notices like “Some of the information came from news agencies” or “The intern helped with the research.” Newsrooms should confidently use transparency notices to show consumers that they want to give them more value. Some brands will continue to run clickbait pages and now fill them with a lot of AI rubbish without disclosing it. But these have probably always produced a lot of garbage.

How does journalism education need to change? Should those who enter the profession because they like to write now be discouraged from doing so because AI will soon be extremely good at it?

The first thing I would say is that not much will change. The qualities and skills we foster in education are deeply human: curiosity, creativity, competencies. In the past 15 years, of course, technical skills have been added. Then again, fundamental things have changed. Today, more than ever, it’s about building relationships with users, not just about product development. Journalism is a data-driven, structured process of information delivery. With generative AI, technology fades into the background. You no longer have to learn how to code. But a key skill will be learning how to write excellent prompts. Writing prompts will be like coding, but without the math.

Journalists may feel their core skills challenged by these AI tools, but couldn’t they be a great opportunity to democratize anything that requires language fluency? For example, my students, many of whom are not native speakers, use ChatGPT to edit their resumes.

Maybe we shouldn’t use that big word democratization, but AI could lower barriers and remove obstacles. The lines between disciplines are likely to blur. I used to need data scientists or graphic designers to do certain tasks, now I can do a lot of stuff myself with the right prompts. On the other hand, I’m sceptical. We often underestimate the ways in which inequalities and injustices persist online.

We’ve talked a lot about the opportunities of AI for journalism. What are the biggest risks?

There is, of course, the great dependence on tech companies, and the risk of discrimination. Journalism has to be fact-based and accurate; generative AI can’t deliver that to the same extent. But the biggest risk is probably that the role of the media as an intermediary will continue to dwindle. The Internet has already weakened that role; people can go directly to those offering information. But AI that is based on language models will answer all questions without people ever encountering the source of the information. This is a massive problem for business models. What kind of regulation will be needed, what commercial agreements, what about copyright? Frankly, I’ve never seen industry executives so worried before.

This is indeed threatening.

It’s existential. First, they said, oh my God, the Internet has stolen our ad revenue. Then they said, oh my God, Twitter has taken attention away from us. And now they’re staring at this thing thinking, why in the world would anyone ever come to my website again? And they have to find an answer to that.

Do journalists have to fear for their jobs?

Media organisations won’t disappear overnight. But there will be more products that will look like good journalism. We have a toxic cocktail here that is fascinating, but also scary. This cocktail consists of uncertainty, which journalists always love. It also consists of complexity, which is exciting for all intelligent people. The third ingredient is speed, and the old rule applies here: we usually overestimate the short-term consequences and underestimate the long-term effects. Over the 15 years that I’ve been doing this, there have been people who have said, 80 percent of media brands will disappear, or 60 percent of journalists will no longer be needed or things like that. But today we have more journalism than ever before.

But the dependence on the big tech companies will grow rather than shrink. 

On the one hand, yes. You definitely need friends from this tech world to help you understand these things. On the other hand, suddenly there’s new competition. Google may no longer be this great power we thought it was. New competition always opens opportunities to renegotiate your own position. The media industry must take advantage of these opportunities. I’m on shaky ground here because the JournalismAI initiative is funded by Google. But I think neither Google nor politicians really care about how the media is doing. Probably quite a few politicians would be happy if journalism disappeared. We therefore need to redefine and communicate as an industry what the added value of journalism is for people and society – regardless of previous ideas about journalism as an institution.

Quite a few colleagues in the industry say behind closed doors, “Fortunately, I’m approaching the end of my career; the best years of journalism are behind us.” Would you want to be a journalist again under the current conditions and prospects?

Absolutely. It’s an empirical fact that with all the possibilities today, you can produce better journalism than ever before.


The interview was first published in German by Medieninsider on 9th September 2023 and in English on 14th September 2023.

Free speech in the digital age – a constructive approach

Digital platforms have fundamentally changed the way we communicate, express and inform ourselves. This requires new rules to safeguard democratic values. As the Digital Services Act (DSA) awaits adoption by the EU, Natali Helberger, Alexandra Borchardt and Cristian Vaccari explain here how the Council of Europe’s recently adopted recommendation “on the impact of digital technologies on freedom of expression” can complement the implementation of the DSA, which aims to update rules governing digital services in the EU. All three were members of the Council’s expert committee that was set up for this purpose, working in 2020 and 2021.

When Elon Musk announced his original plan to buy Twitter and, in his words, restore freedom of speech on the platform, EC Commissioner Thierry Breton quickly reminded him of the Digital Services Act (DSA). According to the DSA, providers of what it defines as ‘Very Large Online Platforms’ will have to ‘pay due regard to freedom of expression and information, including media freedom and pluralism.’ They will have to monitor their recommendation and content moderation algorithms for any systemic risks to the fundamental rights and values that constitute Europe. A video of Musk and Breton in Austin, Texas, shows Musk eagerly nodding and assuring Breton that “this all is very well aligned with what we are planning.”

But what exactly is well aligned here? What does it mean for social media platforms, such as Twitter, to pay due regard to freedom of expression, media freedom and pluralism? While the DSA enshrines a firm commitment to freedom of expression, it provides only limited concrete guidance on what freedom of expression means in a platform context. So when Musk was nodding along like an eager schoolboy, his intentions may have been sincere, but there is also a realistic chance that he had no concrete idea of what exactly he was agreeing to.

The Council of Europe’s recently adopted recommendation “on the impact of digital technologies on freedom of expression” provides some much-needed guidance.

The leading fundamental rights organisation in Europe

The Council of Europe is the largest international fundamental rights organisation in Europe. Distinct from the European Union, the Council brings together the EU member states and some 20 more European states to develop joint visions on European values and fundamental freedoms, as enshrined in the European Convention on Human Rights and interpreted by the European Court of Human Rights. Article 10 of the ECHR defines freedom of expression as “the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

European media laws and policies have been significantly shaped by the Conventions, recommendations and guidelines of the Council. One of the most recent expert committees of the Council was tasked with preparing a recommendation on the impacts of digital technologies on freedom of expression, as well as guidelines on best practices for content moderation by internet intermediaries. The guidelines are already described here and here. In this post, the rapporteurs and chair of the Committee briefly summarise the key takeaways from the recommendation (for a full list of experts involved in the making of the recommendation, please see here). In so doing, we will explain the guidelines and address the question of how they complement and add to the recently agreed DSA.

A value-based approach

The recommendation lays down principles to ensure that “digital technologies serve rather than curtail” freedom of expression and develops proposals to address the adverse impacts and enhance the positive effects of digital technology on freedom of expression. Here we note a first difference with the DSA. The DSA takes a risk-based approach: for example, Art. 26 requires Very Large Online Platforms to identify the risks and dangers that their recommendation and content moderation algorithms pose for fundamental rights and society. As such, it focuses on the negative implications of technology.

In contrast, the Council of Europe Recommendation takes a value-based approach. It first clarifies that these technologies have an essential, positive role in a democracy by opening up the public sphere to more and diverse voices. According to the Council, the “digital infrastructures of communication in a democratic society” need to be designed “to promote human rights, openness, interoperability, transparency, and fair competition”. This value-based approach to digital technology acknowledges the need to mitigate risks, but goes one step further and demands that states, companies, and civil society actors work together to realize technology’s positive contribution to democracy and fundamental rights. It is vital to note this difference, as a risk-based and a value- and opportunity-based approach will each set the agenda for research and innovation differently.

Digital infrastructure design and the creation of counter-power

Where the DSA takes an application or tool-based approach, the recommendation adopts a broader media ecology perspective. The DSA addresses algorithmic content moderation, news recommenders and curation first and foremost as related to specific digital tools and applications. The recommendation takes a different approach and acknowledges that all those digital tools and applications together form the wider digital communication infrastructure that democracies rely on. According to the recommendation, these digital communication infrastructures should be designed to proactively promote human rights, openness, accessibility, interoperability, transparency and fair competition.

One key recommendation that arises from this media ecology view of digital technology is for states to proactively invest in and create the conditions to enhance economic competition and democratic pluralism in and on digital infrastructures. Other key recommendations include stimulating the digital transformation of news organisations, promoting open-source software, and investing in public service media. The recommendation also explicitly stresses the essential democratic role of local and regional media and the need to tackle concentration in terms of both economic dominance and, crucially, the power to shape public opinion. The recently adopted Council of Europe recommendation on creating a favourable environment for quality journalism complements the document and provides more detail in this particular area.

Transparency, accountability and redress as a joint responsibility of states and internet intermediaries

Transparency and explainability are essential in both the recommendation and the DSA. Like the DSA, the recommendation requires internet intermediaries to provide adequate transparency on the design and implementation of their terms of service and their key policies for content moderation, such as information regarding removal, recommendation, amplification, promotion, downranking, monetisation, and distribution, particularly concerning their outcomes for freedom of expression. The recommendation highlights that such information must ensure transparency on different levels and with different goals, including empowering users, enabling third-party auditing and oversight, and informing independent efforts to counter harmful content online. In other words, transparency is a multi-faceted and multi-player concept.

Having said that, whereas the DSA places the burden of providing transparency primarily on platforms, the Council of Europe’s recommendation also ascribes responsibility to states and regulators. It advocates that states and regulators “should ensure that all necessary data are generated and published to enable any analysis necessary to guarantee meaningful transparency on how internet intermediaries’ policies and their implementation affect freedom of expression among the general public and vulnerable subjects.” States should also “assist private actors and civil society organisations in the development of independent institutional mechanisms that ensure impartial and comprehensive verification of the completeness and accuracy of data made available by internet intermediaries.” This approach complements the DSA in at least two respects: it assigns states a responsibility to ensure the accessibility and usability of such information, and it supports the development of independent systems of quality control (rather than relying exclusively on the mechanisms of Art. 31 DSA).

The extensive transparency mechanisms must be seen in the context of the recommendations on contestability. Transparency can be a value in itself, but as a regulatory tool, transparency obligations are primarily intended to empower subjects to take action. Consequently, the recommendation includes an obligation for states to ensure that any person whose freedom of expression is limited due to restrictions imposed by internet intermediaries must be able to seek timely and effective redress. Interestingly, the recommendation also extends this right to the news media: news providers whose editorial freedom is threatened by terms of service or content moderation policies must also have access to timely and effective redress mechanisms.

Actionable and empowering media literacy

The Council of Europe has a long tradition of supporting and developing media literacy policies, and this recommendation is no exception. The recommendation promotes data and digital literacy to help users understand the conditions under which digital technologies affect freedom of expression, how information of varying quality is procured, distributed and processed and, importantly, what individuals can do to protect their rights. As in other domains, the recommendation stresses the positive role that states can play. States should enable users to engage in informational self-determination and exercise greater control over the data they generate, the inferences derived from such data, and the content they can access. Although it is undeniable that the complexity of digital information environments places a higher burden on citizens to select, filter, and evaluate the content they encounter, the recommendation aims to promote processes and practices that reduce this burden by enhancing user empowerment and control.

Independent research for evidence-based rulemaking

In current regulatory proposals, there is a growing recognition of the role that independent research must play. Among other things, research can help to:

  • identify (systemic) risks to fundamental rights, society and democracy as a result of the use of algorithmic tools,
  • monitor compliance with the rules and responsibilities that pertain to those using those tools,
  • develop insights on how to design technologies, institutions and governance frameworks to promote and realise fundamental rights and public values.

There is also growing recognition of the responsibility of states and platforms to create the conditions for independent researchers to be able to play such an important role. The provisions in Art. 31 of the DSA on access to research data are an example of this new awareness.

The CoE recommendation, too, requires that internet intermediaries enable researchers to access the kinds of high-quality data that are necessary to investigate the individual and societal impacts of digital technologies on fundamental rights. The recommendation goes one step further than the DSA, however, and also emphasises the broader conditions that need to be fulfilled for independent researchers to play such a role. Besides calling for states to provide adequate funding for such research, the recommendation stresses the need to create secure environments that facilitate secure data access and analysis, as well as measures to protect the independence of researchers.

It is worth noting that the recommendation also suggests a new, more general research exception: data lawfully collected for other purposes by internet intermediaries may be processed to conduct rigorous and independent research, under the condition that such research is developed with the goal of safeguarding the substantial public interest in understanding and governing the implications of digital technologies for human rights. Such a research exception goes beyond the scope of Art. 31 DSA and addresses the problem that data access could be restricted because the internet intermediaries’ terms of use and privacy policies that users agree to often fail to include explicit derogations for the re-use of data for research.

Conclusions

In sum, the Council of Europe’s recommendation offers a new vision of what it means to safeguard and at the same time expand freedom of expression in the digital age. There is a fine line between regulating speech and making sure that everyone gets a voice. The recommendation offers several actionable suggestions concerning the design of digital communication infrastructures, transparency and accountability, user awareness and empowerment, and support for the societal role of independent research. As such, the guidelines can be an essential resource for policymakers, civil society, academics, and internet intermediaries such as Google, Meta, Twitter or TikTok.

The latter companies are confronted with a challenging problem: prominent and ambitious regulatory proposals such as the DSA will require internet intermediaries to understand and account for the human rights implications of their technologies, even though they are not the classical addressees of human rights law. Fundamental rights, such as the right to freedom of expression, at least in Europe, apply in the first place to the relationship between states and citizens. Mandating that private actors such as internet intermediaries pay due regard to abstract rights such as the right to freedom of expression raises a host of difficult interpretational questions. More generally, the current European Commission’s focus on requiring the application of digital technology in line with fundamental rights and European values is laudable. Still, there is only limited expertise on how to interpret and implement fundamental rights law in the European Union, which started as, and still is primarily, an economic community. The Council of Europe’s recommendations and guidelines have an important complementary role to play in clarifying what respect for fundamental rights entails in the digital age and suggesting concrete actions to realise this vision.

This article, first published on 14th September 2022, reflects the views of the authors and not those of the Media@LSE blog nor of the London School of Economics and Political Science.