Prof. Pattie Maes, MIT: “We don’t have to simplify everything for everybody”

Prof. Pattie Maes and her team at the MIT Media Lab conduct research on the impact of generative AI on creativity and human decision-making. Their aim is to advise AI companies on designing systems that enhance critical thinking and creativity rather than encourage cognitive offloading. The interview was conducted for the upcoming EBU News Report “Leading Newsrooms in the Age of Generative AI”.

It is often said that AI can enhance people’s creativity. Research you led seems to suggest the opposite. Can you tell us about it?  

You’re referring to a study where we asked college students to write an essay and had them solve a programming problem.  

We had three different conditions: One group could use ChatGPT. Another group could only use search without the AI results at the top. And the third group did not have any tool.  

What we noticed was that the group that used ChatGPT wrote good essays, but they expressed less diversity of thought, were more similar to one another and less original. 

Because people put less effort into the task at hand? 

We have seen that in other experiments as well: people are inherently lazy. When they use AI, they don’t think as much for themselves. And as a result, you get less creative outcomes.  

It could be a problem if, say, all the programmers at a company use the same co-pilot to help them with coding: they won’t come up with new ways of doing things.

As AI data increasingly feeds new AI models, you will get more and more convergence and less improvement and innovation.  

Journalism thrives on originality. What would be your advice to media managers? 

Raising awareness can help. But it would be more useful if we built these systems differently.  

We have been building a system that helps people with writing, for example. But instead of doing the writing for you, it engages you, like a good colleague or editor, by critiquing your writing, and occasionally suggesting that you approach something from a different angle or strengthen a claim.  

It’s important that AI design engages people in contributing to a solution rather than automating things for them.

Sounds like great advice for building content management systems. 

Today’s off-the-shelf systems use an interface that encourages people to say: “write me an essay on Y, make sure it’s this long and includes these points of view.”  

These systems are designed to provide a complete result. We have grammar and spelling correctors in our editing systems, but we could have AI built into editing software that says, “over here your evidence or argument is weak.”  

It could encourage the person to use their own brain and be creative. I believe we can design systems that let us benefit from human and artificial intelligence.  

But isn’t the genie already out of the bottle? If I encouraged students who use ChatGPT to use a version that challenges them, they’d probably say: “yeah, next time when I don’t have all these deadlines”.   

We should design AI systems that are optimised for different goals and contexts, like an AI that is designed like a great editor, or an AI that acts like a great teacher.  

A teacher doesn’t give you the answers to all the problems, because the whole point is not the output the person produces, it is that they have learned something in the process.  

But certainly, if you have access to one AI that makes you work harder and another AI that just does the work for you, it is tempting to use that second one. 

Agentic AI is a huge topic. You did research on AI and agents as early as 1995. How has your view on this evolved since? 

Back when I developed software agents that help you with tasks, we didn’t have anything like today’s large language models. They were built by hand for a specific application domain and were able to do some minimal learning from the user.  

Today’s systems are supposedly AGI (artificial general intelligence) or close to it and are billed as systems that can do everything and anything for us.  

But what we are discovering in our studies is that they do not behave the way people behave. They don’t make the same choices, don’t have that deeper knowledge of the context, that self-awareness and self-critical reflection on their actions that people have.  

A huge problem with agentic systems will be that we think they are intelligent and behave like us, when in fact they don’t. And it’s not just because they hallucinate.

But we want to believe they behave like humans? 

Let me give you an example. When I hired a new administrative assistant, I didn’t immediately give him full autonomy to do things on my behalf.  

I formed a mental model of him based on the original interview and his résumé. I saw “oh, he has done a lot of stuff with finance, but he doesn’t have much experience with travel planning.” So when some travel had to be booked, I would tell him, “Let me know the available choices so that I can tell you what I value and help you make a choice.”  

Over time my mental model of the assistant develops, and so does his model of my needs and preferences. We basically learn about each other. It is a much more interactive type of experience than with AI agents.

These agents are not built to check and say, “I’m not so confident making this decision. So, let me get some input from my user.” It’s a little bit naïve that AI agents are being portrayed as “they are ready to be deployed, and they will be wonderful and will be able to do anything.”  

It might be possible to build agents that have the right level of self-awareness, reflection and judgment, but I have not heard many developers openly think about those issues. And it will require a lot of research to get it right.  

Is there anything else your research reveals about the difficulties with just letting AI do things for us? 

We have done studies on decision making with AI. What you expect is that humans make better decisions if they are supported by an AI that is trained on a lot of data in a particular domain.  

But studies showed that was not what happened. In our study, we let people decide whether a newspaper headline was fake news or real news. What we found was that when it’s literally just a click of a button to get the AI’s opinion, many people simply adopt the AI’s output.

There’s less deep engagement and thinking about the problem because it’s so convenient. Other researchers got similar results with experiments on doctors evaluating medical diagnoses supported by AI, for example. 

You are telling us that expectations of AI support are overblown?

I am an AI optimist. I do think it is possible to integrate AI into our lives in a way that has positive effects. But we need to reflect more on the right ways to integrate it.

In the case of the newspaper headlines, we did a study showing that if the AI first engages you in thinking about a headline and asks you a question about it, people’s accuracy improves, and they don’t accept the AI’s advice blindly.

The interface can help with encouraging people to be a little bit more mindful and critical.  

This sounds like it would just need a little technical fix.  

It is also about how AI is portrayed. We talk about these systems as artificial forms of intelligence. We are constantly told that we’re so close to AGI. These systems don’t just converse in a human-like way, but with an abundance of confidence.

All of these factors trick us into perceiving them as more intelligent, more capable and more human than they really are. But they are more like what Emily Bender, a professor at the University of Washington, has called “stochastic parrots”.

LLMs (large language models) are like a parrot that has just heard a lot of natural language by hearing people speak and can predict and imitate it pretty well. But that parrot doesn’t understand what it’s talking about.  

Presenting these systems as parrots rather than smart assistants would already help by reminding people to constantly think “Oh, I have to be mindful. These systems hallucinate. They don’t really understand. They don’t know everything.”  

We work with some AI companies on some of these issues. For example, we are doing a study with OpenAI on companion bots and how many people risk becoming overly attached to chat bots.  

These companies are in a race to get to AGI first, by raising the most money and building the biggest models. But I think awareness is growing that if we want AI to ultimately be successful, we have to think carefully about the way we integrate it in people’s lives.  

In the media industry there’s a lot of hope that AI could help journalism to become more inclusive and reach broader audiences. Do you see a chance for this to happen? 

These hopes are well-founded. We built an AI-based system for kids and older adults who may have trouble processing language that the average adult can handle.

The system works like an intra-language translator – it takes a video and translates it into simpler language while still preserving the meaning.  

There are wonderful opportunities to customize content to the abilities and needs of the particular user. But at the same time, we need to keep in mind that the more we personalize things, the more everybody ends up in their own bubble, especially if we also bias the reporting to their particular values or interests.

It’s important that we still have some shared media, shared news and a shared language, rather than creating this audience of one where people can no longer converse with others about things in the world that we should be talking about. 

This connects to your earlier argument: customisation could make our brains lazy.  

It is possible to build AI systems that have the opposite effect and challenge the user a little bit. This would be like being a parent who unconsciously adjusts their language for the current ability of their child and gradually introduces more complex language and ideas over time.  

We don’t have to simplify everything for everybody. We need to think about what AI will do to people and their social and emotional health and what artificial intelligence will do to natural human intelligence, and ultimately to our society.  

And we should have talks about this with everybody. Right now, our AI future is decided by AI engineers and entrepreneurs, which in the long run will prove to be a mistake. 

The interview was first published by the EBU on 1st April 2025.

Peter Archer, BBC: “What AI doesn’t change is who we are and what we are here to do”

The BBC’s Director of Generative AI talks about his organization’s approach to developing AI tools, its experience using them, and the rampant inaccuracies AI assistants produce – and what is needed to remedy them. This interview was conducted for the EBU News Report “Leading Newsrooms in the Age of Generative AI” that will be published by the European Broadcasting Union.

BBC research recently revealed disturbing inaccuracies when AI agents provided news content and drew on BBC material. About every second piece had issues. Did you expect this?  

We expected to see a degree of inaccuracy, but perhaps not as high as we found. We were also interested in the range of errors the AI assistants made, including factual errors, but also lack of context and the conflation of opinion and fact.

It was also interesting that none of the four assistants that we looked at – ChatGPT, Copilot, Gemini, and Perplexity – were much better or worse than any of the others, which suggests that there is an issue with the underlying technology.  

Has this outcome changed your view on AI as a tool for journalism?  

With respect to our own use of AI, it demonstrates the need to be aware of the limitations of AI tools.

We’re being conservative about the use of generative AI tools in the newsroom and our internal guidance is that generative AI should not be used directly for creating content for news, current affairs or factual content.

But we have identified specific use cases like summaries and reformatting that we think can bring real value.

We are not currently allowing third parties to scrape our content to be included in AI applications. We allowed ChatGPT and the other AI assistants to access our site solely for the purpose of this research. But, as our findings show, making content available can lead to distortion of that content.  

You emphasised that working with the AI platforms was critical to tackling this challenge. Will you implement internal consequences, too?

Generative AI poses a new challenge – because AI is being used by third parties to create content, like summaries of the news.

I think this new intersection of technology and content will require close working between publishers and technology companies to both help ensure the accuracy of content but also to make the most of the immense potential of generative AI technology.  

So, you think the industry should have more self-confidence? 

Publishers, and the creative and media industries more broadly, are critical to ensuring generative AI is used responsibly. The two sectors – AI and creative industries – can work together positively, combining editorial expertise and understanding of the audience with the technology itself.

More broadly, the media industry should develop an industry position – what it thinks on key issues. The EBU can be a really helpful part of that. In the UK, regulators like Ofcom are interested in the AI space.

We need a constructive conversation on how we collectively make sure that our information ecosystem is robust and trusted. The media sector is central to that.

On the research, we will repeat the study, hopefully including other newsrooms, because I’m fascinated to see two things: Do the assistants’ performances change over time? And do newsrooms working in smaller languages see the same issues, or perhaps more?

Do you think the media industry in general is behaving responsibly towards AI? Or what do you observe when you look outside of your BBC world?  

On the whole yes, and it’s great to see different perspectives as well as areas of common interest. For example, I think everybody is now looking at experiences like chat assistants.

There’s so much to do that it would be fantastic to identify common priorities across the EBU group, because working on AI can be hard and costly, and where we can collaborate, we should.

That said, we have seen some pretty high-profile mistakes in the industry – certainly in the first 12 to 18 months after ChatGPT launched – and excitement occasionally outpaced responsible use.

It’s also very helpful to see other organizations testing some of the boundaries because it helps us and other public service media organizations calibrate where we are and what we should be doing.  

There are huge hopes in the industry that generative AI could make journalism more inclusive and transcend format boundaries to attract different audiences. Are these hopes justified?

I’m pretty bullish. The critical thing is that we stay totally aligned to our mission, our standards, and our values. AI changes a lot, but what it doesn’t change is who we are and what we’re here to do.

One of the pilots we’re looking at scaling takes audio content – in this example, a football broadcast – and uses AI to transcribe it, create a summary and then a live text page.

Live text updates and pages on football games are incredibly popular with our audiences, but currently there are only so many games we can create a live page for. The ability to use AI to scale that so we can provide a live text page for every football game we cover on radio would be amazing.

One of the other things that we’re doing is going to the next level with our own BBC large language model that reflects the BBC style and standards. This approach to constitutional AI is really exciting. It’s being led out of the BBC’s R&D team – we’re incredibly lucky to have them.  

Do you have anything fully implemented yet?  

The approach that we’ve taken with generative AI is to do it in stages. In a number of areas, like the football example, we are starting small with working, tactical solutions whose use we can expand while we work on productionised versions in parallel.

Another example is using AI to create subtitles on BBC Sounds. Again, here we’ve got an interim solution that we will use to provide more subtitles to programmes while, in parallel, we create a productionised version that is much more robust and easier to scale across all audio.

A key consideration is creating capabilities that can work across multiple use cases, not just one, and that takes time.

What is your position towards labelling?  

We have a very clear position: We will label the use of AI where there is any risk that the audience might be materially misled.

This means any AI output that could be mistaken for real is clearly labelled. This is particularly important in news where we will also be transparent about where AI has a material or significant impact on the content or in its production – for example if an article is translated using AI.

We’re being conservative because the trust of our audience is critical.  

What’s the internal mood towards AI? The BBC is a huge organization, and you are probably working in an AI bubble. But do you have any feel for how people are coming on board?  

One of the key parts of my role is speaking to teams and divisions and explaining what AI is and isn’t and the BBC’s approach.

Over the last 12 months, we’ve seen a significant increase in uptake of AI tools like Microsoft Copilot and many staff are positive about how AI can help them in their day-to-day work.

There are of course lots of questions and concerns, particularly as things move quickly in AI.

A key thing is encouraging staff to play with the tools we have so they can understand the opportunities and limitations. Tools like Microsoft Copilot are now available across the business, as are Adobe Firefly and GitHub Copilot, and very shortly ChatGPT.

But it’s important we get the balance right and listen carefully to those who have concerns about the use of AI.

We are proceeding very carefully because at the heart of the BBC is creativity and human-led journalism with very high editorial standards. We are not going to put that at risk.

What’s not talked about enough in the context of generative AI and journalism? 

We shouldn’t underestimate the extent to which the world is changing around us. AI assistants, AI overviews are here to stay.

That is a fundamental shift in our information landscape. In two or three years’ time, many may be getting their news directly from Google or Perplexity.

As our research showed, there are real reasons for concern. And there is this broader point around disinformation. We’ve all seen the Pope in a puffer jacket, right? And we’ve all seen AI images of floods in Europe and conflict in Gaza.

But we’re also starting to see the use of AI at a very local level that doesn’t get much exposure but could nevertheless ruin lives.

As journalists, we need to be attuned to the potential misinformation on our doorstep that is hard to spot.  

This interview was published by the EBU on 26th March 2025.

Climate Journalism – What works?

While the war in Ukraine and the pandemic have taken up a lot of space and energy in newsrooms recently, there is hardly any issue that will define our future more than the climate crisis: how it’s reported and received by audiences worldwide and how journalism can spur the debate on how to rebuild our economies in a sustainable way. 

I’m lead author of the upcoming report “Climate Journalism That Works – Between Knowledge and Impact”, which will be published in full in spring 2023. Working with me on this have been Katherine Dunn from the Oxford Climate Journalism Network and Felix Simon from the Oxford Internet Institute. The report will look at how to craft journalism about climate change that is likely to have an impact and resonate with audiences, and how to restructure newsrooms accordingly. It will also include best-practice case studies and Q&As with thought leaders and influencers on what actually works.

These are some key preliminary findings that we presented at the EBU’s annual News Assembly on 12th October 2022:

•    Facts alone don’t help. More facts are not necessarily more convincing.
•    The messenger is often more important than the message. It is a matter of credibility with the audience.
•    It is important to make climate impact part of all the beats in a newsroom – rather than confine it to a dedicated climate desk. All journalists need basic climate literacy.
•    There is no one-size-fits-all model for newsroom organization, language to be used or visual policy. Everyone has to make it fit their resources, values and culture.
•    Images matter a lot, and formats need to fit the particular audience.
•    Leaders experience little resistance when implementing climate strategies. When leadership doesn’t make the topic a priority, a climate desk might flourish, but the rest of the journalism will stay the same.
•    The media have a hard time living up to their own standards when it comes to measuring carbon footprints or making newsrooms more sustainable. Travel is a pain point.
•    There is a lot of material out there on how to communicate the climate challenge successfully, particularly from the field of communication studies. Newsrooms just haven’t used it yet.

Academics doing research on climate communication have discovered that stories are more likely to work if they relate to the here and now instead of the distant future, are tied to a local context, convey agency, are constructive or solutions-oriented, and envision a sustainable future instead of emphasizing sacrifice, crisis, destruction, loss, and disaster. While doom scrolling might capture attention for a brief moment, it also risks driving people into news avoidance.

We also uncovered some indications of how climate change and the environment resonate particularly with younger audiences – and how focusing on sustainability issues could help public service media speak directly to this audience and solve some of their own problems in the process.

Interestingly, the same focus also appeals to young staffers – and attracts young talent – in newsrooms themselves. And there is evidence that these topics energize veteran news reporters and help promote overall diversity. They make journalism broader and more constructive, and help break the dominance of the “he said, she said” type of political reporting that hasn’t served audiences too well anyway.

We will cover all this and more in the next News Report. But you don’t have to wait that long to read our findings. We will be publishing selected Q&As with media leaders, climate journalists and experts in advance of publication. You can read Wolfgang Blau’s take on some of the challenges – and opportunities – for public service newsrooms.

Climate Journalism That Works – Between Knowledge and Impact by Dr Alexandra Borchardt, Katherine Dunn and Felix Simon, will be published on 1 March 2023. This blog was first published on the EBU’s homepage.

Free speech in the digital age – a constructive approach

Digital platforms have fundamentally changed the way we communicate, express and inform ourselves. This requires new rules to safeguard democratic values. As the Digital Services Act (DSA) awaits adoption by the EU, Natali Helberger, Alexandra Borchardt and Cristian Vaccari explain here how the Council of Europe’s recently adopted recommendation “on the impact of digital technologies on freedom of expression” can complement the implementation of the DSA, which aims to update rules governing digital services in the EU. All three were members of the Council’s expert committee that was set up for this purpose, working in 2020 and 2021.

When Elon Musk announced his original plan to buy Twitter and, in his words, restore freedom of speech on the platform, EC Commissioner Thierry Breton quickly reminded him of the Digital Services Act (DSA). According to the DSA, providers of what it defines as ‘Very Large Online Platforms’ will have to ‘pay due regard to freedom of expression and information, including media freedom and pluralism.’ They will have to monitor their recommendation and content moderation algorithms for any systemic risks to the fundamental rights and values that constitute Europe. A video of Musk and Breton in Austin, Texas, shows Musk eagerly nodding and assuring Breton that “this all is very well aligned with what we are planning.”

But what exactly is well aligned here? What does it mean for social media platforms, such as Twitter, to pay due regard to freedom of expression, media freedom and pluralism? While the DSA enshrines a firm commitment to freedom of expression, it provides only limited concrete guidance on what freedom of expression means in a platform context. So when Musk was nodding along like an eager schoolboy, whilst his intentions may have been sincere, there is also a realistic chance that he had no concrete idea of what exactly he was agreeing to.

The Council of Europe’s recently adopted recommendation “on the impact of digital technologies on freedom of expression” provides some much-needed guidance.

The leading fundamental rights organisation in Europe

The Council of Europe is the largest international fundamental rights organisation in Europe. Distinct from the European Union, the Council brings together the EU member states and 20 more European states to develop joint visions on European values and fundamental freedoms, as enshrined in the European Convention on Human Rights and interpreted by the European Court of Human Rights. Article 10 of the ECHR defines freedom of expression as “the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

European media laws and policies have been significantly shaped by the Conventions, recommendations and guidelines of the Council. One of the most recent expert committees of the Council was tasked with preparing a recommendation on the impacts of digital technologies on freedom of expression, as well as guidelines on best practices for content moderation by internet intermediaries. The guidelines are already described here and here. In this post, the rapporteurs and chair of the Committee briefly summarise the key takeaways from the recommendation (for a full list of experts involved in the making of the recommendation, please see here). In so doing, we will explain the guidelines and address the question of how they complement and add to the recently agreed DSA.

A value-based approach

The recommendation lays down principles to ensure that “digital technologies serve rather than curtail” freedom of expression and develops proposals to address the adverse impacts and enhance the positive effects of digital technology on freedom of expression. Here we note a first difference with the DSA. The DSA takes a risk-based approach: for example, Art. 26 requires Very Large Online Platforms to identify the risks and dangers that their recommendation and content moderation algorithms pose for fundamental rights and society. As such it focuses on the negative implications of technology.

In contrast, the Council of Europe Recommendation takes a value-based approach. It first clarifies that these technologies have an essential, positive role in a democracy by opening up the public sphere to more and diverse voices. According to the Council, the “digital infrastructures of communication in a democratic society” need to be designed “to promote human rights, openness, interoperability, transparency, and fair competition”. This value-based approach to digital technology acknowledges the need to mitigate risks, but goes one step further and demands that states, companies, and civil society actors work together to realize technology’s positive contribution to democracy and fundamental rights. It is vital to note this difference, as the choice between a risk-based and a value- and opportunity-based approach will set the agenda for research and innovation.

Digital infrastructure design and the creation of counter-power

Where the DSA takes an application or tool-based approach, the recommendation adopts a broader media ecology perspective. The DSA addresses algorithmic content moderation, news recommenders and curation first and foremost as related to specific digital tools and applications. The recommendation takes a different approach and acknowledges that all those digital tools and applications together form the wider digital communication infrastructure that democracies rely on. According to the recommendation, these digital communication infrastructures should be designed to proactively promote human rights, openness, accessibility, interoperability, transparency and fair competition.

One key recommendation that arises from this media ecology view of digital technology is for states to proactively invest in and create the conditions to enhance economic competition and democratic pluralism in and on digital infrastructures. Other key recommendations include stimulating the digital transformation of news organisations, promoting open-source software, and investing in public service media. The recommendation also explicitly stresses the essential democratic role of local and regional media and the need to tackle concentration in terms of both economic dominance and, crucially, the power to shape public opinion. The recently adopted Council of Europe recommendation on creating a favourable environment for quality journalism complements the document and provides more detail in this particular area.

Transparency, accountability and redress as a joint responsibility of states and internet intermediaries

Transparency and explainability are essential in both the recommendation and the DSA. Like the DSA, the recommendation requires internet intermediaries to provide adequate transparency on the design and implementation of their terms of service and their key policies for content moderation, such as information regarding removal, recommendation, amplification, promotion, downranking, monetisation, and distribution, particularly concerning their outcomes for freedom of expression. The recommendation highlights that such information must ensure transparency on different levels and with different goals, including empowering users, enabling third-party auditing and oversight, and informing independent efforts to counter harmful content online. In other words, transparency is a multi-faceted and multi-player concept.

Having said that, whereas the DSA places the burden of providing transparency in the first place on platforms, the Council of Europe’s recommendation also ascribes responsibility to states and regulators. It advocates that states and regulators “should ensure that all necessary data are generated and published to enable any analysis necessary to guarantee meaningful transparency on how internet intermediaries’ policies and their implementation affect freedom of expression among the general public and vulnerable subjects.” States should also “assist private actors and civil society organisations in the development of independent institutional mechanisms that ensure impartial and comprehensive verification of the completeness and accuracy of data made available by internet intermediaries.” This approach complements the DSA in at least two respects: it assigns states a responsibility to ensure the accessibility and usability of such information, and it supports the development of independent systems of quality control (rather than relying exclusively on the mechanisms of Art. 31 DSA).

The extensive transparency mechanisms must be seen in the context of the recommendations on contestability. Transparency can be a value in itself, but as a regulatory tool, transparency obligations are primarily intended to empower subjects to take action. Consequently, the recommendation includes an obligation for states to ensure that any person whose freedom of expression is limited due to restrictions imposed by internet intermediaries must be able to seek timely and effective redress. Interestingly, the recommendation also extends this right to the news media: news providers whose editorial freedom is threatened due to terms of service or content moderation policies must be able to seek timely and effective redress mechanisms, too.

Actionable and empowering media literacy

The Council of Europe has a long tradition of supporting and developing media literacy policies, and this recommendation is no exception. The recommendation promotes data and digital literacy to help users understand the conditions under which digital technologies affect freedom of expression, how information of varying quality is procured, distributed and processed and, importantly, what individuals can do to protect their rights. As in other domains, the recommendation stresses the positive role that states can play. States should enable users to engage in informational self-determination and exercise greater control over the data they generate, the inferences derived from such data, and the content they can access. Although it is undeniable that the complexity of digital information environments places a higher burden on citizens to select, filter, and evaluate the content they encounter, the recommendation aims to promote processes and practices that reduce this burden by enhancing user empowerment and control.

Independent research for evidence-based rulemaking

In current regulatory proposals, there is a growing recognition of the role that independent research must play. Among other things, research can help to:

  • identify (systemic) risks to fundamental rights, society and democracy as a result of the use of algorithmic tools,
  • monitor compliance with the rules and responsibilities that pertain to those using those tools,
  • develop insights on how to design technologies, institutions and governance frameworks to promote and realise fundamental rights and public values.

There is also growing recognition of the responsibility of states and platforms to create the conditions for independent researchers to be able to play such an important role. The provisions in Art. 31 of the DSA on access to research data are an example of this new awareness.

The CoE recommendation, too, requires internet intermediaries to enable researchers to access the kinds of high-quality data that are necessary to investigate the individual and societal impacts of digital technologies on fundamental rights. The recommendation goes one step further than the DSA, however, and also emphasises the broader conditions that need to be fulfilled for independent researchers to play such a role. Besides calling on states to provide adequate funding for such research, the recommendation stresses the need to create secure environments that facilitate data access and analysis, as well as measures to protect the independence of researchers.

It is worth noting that the recommendation also suggests a new, more general research exception: data lawfully collected for other purposes by internet intermediaries may be processed for rigorous and independent research, on the condition that such research serves a substantial public interest in understanding and governing the implications of digital technologies for human rights. Such a research exception goes beyond the scope of Art. 31 DSA and addresses the problem that data access could be restricted because the internet intermediaries’ terms of use and privacy policies that users agree to often fail to include explicit derogations for the re-use of data for research.

Conclusions

In sum, the Council of Europe’s recommendation offers a new vision of what it means to safeguard and at the same time expand freedom of expression in the digital age. There is a fine line between regulating speech and making sure that everyone gets a voice. The recommendation offers several actionable suggestions concerning the design of digital communication infrastructures, transparency and accountability, user awareness and empowerment, and support for the societal role of independent research. As such, the guidelines can be an essential resource for policymakers, civil society, academics, and internet intermediaries such as Google, Meta, Twitter or TikTok.

The latter companies are confronted with a challenging problem: prominent and ambitious regulatory proposals such as the DSA will require internet intermediaries to understand and account for the human rights implications of their technologies, even though they are not the classical addressees of human rights law. Fundamental rights, such as the right to freedom of expression, at least in Europe, apply in the first place to the relationship between states and citizens. Mandating that private actors such as internet intermediaries pay due regard to abstract rights such as the right to freedom of expression raises a host of difficult interpretational questions. More generally, the current European Commission’s focus on requiring the application of digital technology in line with fundamental rights and European values is laudable. Still, there is only limited expertise on how to interpret and implement fundamental rights law in the European Union, which started as, and still is primarily, an economic community. The Council of Europe’s recommendations and guidelines have an important complementary role to play in clarifying what respect for fundamental rights entails in the digital age and suggesting concrete actions to realise this vision.

This article, first published on 14th September 2022, reflects the views of the authors and not those of the Media@LSE blog nor of the London School of Economics and Political Science.

What’s wrong with the News?

The rise of data analytics has made journalists and their editors confident that they know what the people want. Why, then, did almost one-third of respondents to the Reuters Institute’s latest Digital News Report say that they regularly avoid news altogether?

The British public can’t get enough news about Brexit – at least, that’s what news platforms’ data analytics say. But, according to the Reuters Institute’s latest Digital News Report, 71% of the British public tries to avoid media coverage of the United Kingdom’s impending departure from the European Union. This disparity, which can be seen in a wide range of areas, raises serious questions about news organizations’ increasingly data-driven approach to reporting.

The rise of data analytics has made journalists and their editors confident that they know what people want. And for good reason: with a large share of news consumed on the Internet, media platforms know exactly which stories readers open, how much they read before getting bored, what they share with their friends, and the type of content that entices them to sign up for a subscription.

Such data indicate, for example, that audiences are interested in extraordinary investigative journalism, diet and personal-finance advice, and essays about relationships and family. They prefer stories with a personal angle – say, detailing an affected individual’s fate – rather than reports on ongoing conflicts in the Middle East or city hall coverage. And they are drawn to sensational stories – such as about US President Donald Trump’s scandals and antics – under “clickbait” headlines.

But if newsrooms were really giving audiences what they wanted, it seems unlikely that almost one-third (32%) of respondents in the Digital News Report, the world’s largest ongoing survey of online news consumption, would report that they regularly avoid news altogether. But they did, and that figure is up three percentage points from two years ago.

The most common explanation for avoiding the news media, given by 58% of those who do, is that following it has a negative effect on their mood. Many respondents also cited a sense of powerlessness.

Moreover, only 16% of participants approve of the tone used in news coverage, while 39% disapprove. Young people, in particular, seem fed up with the negativity bias that has long been regarded as a sure-fire way to attract audiences. For many, that bias feels disempowering. Conversations indicate that the problem is compounded for young parents, who want to believe that the world will be good to their children. Younger generations also feel consuming news should be more entertaining and less of a chore.

One reason for the disconnect between the data and people’s self-reported relationship with the news media may be the “guilty pleasure” effect: people have an appetite for voyeurism, but would prefer not to admit it, sometimes even to themselves. So, even as they click on articles about grisly crimes or celebrity divorces, they may say that they want more “quality news.”

When newsrooms indulge readers’ worst impulses, the consequences are far-reaching. Media are integral to holding anyone who wields power or influence accountable and to mobilizing civic engagement. Democracies, in particular, depend on voters being well informed about pressing issues. News organizations thus have a responsibility to report on serious topics, from political corruption to climate change, even if they are unpleasant.

That does not mean that readers’ complaints about media’s negativity bias should be disregarded. On the contrary, if people are to be motivated to confront challenges that are shaping their lives, they should not be made to feel powerless.

This is where so-called solutions journalism comes in. By balancing information about what needs changing with true stories about positive change, news organizations can fulfill their responsibility both to inform and to spur progress. This means occasionally recognizing that over the long term, living standards have improved globally.

Reconnecting with audiences will also require media organizations to broaden their perspectives. In much of the West, it is largely white, male, middle-class journalists who decide what to cover and how. This limits news media’s ability to represent diverse societies fairly and accurately.

In fact, only 29% of Digital News Report respondents agreed that the topics the news media choose “feel relevant” to them. A joint study by the Reuters Institute and the Johannes Gutenberg University in Mainz, Germany, indicates that the key to increasing this share is to increase diversity in newsrooms.

At the same time, news media need to do a better job of contextualizing and otherwise explaining the news. While 62% of Digital News Report respondents feel that media keep them apprised of events, only half believe news outlets are doing enough to help them understand what is happening. At a time when nearly one-third of people think that there is simply too much news being reported, the solution seems clear: do less, better.

This means listening to readers, not just studying the data analytics. It means balancing good news with bad news, and offering clarifying information when needed. It also means representing diverse perspectives. Media organizations that do not make these changes will continue to lose trust and relevance. That is hardly a sound strategy for convincing consumers that their work is worth paying for.

This commentary was published by Project Syndicate on September 11, 2019.