Prof. Pattie Maes, MIT: “We don’t have to simplify everything for everybody”

Prof. Pattie Maes and her team at the MIT Media Lab conduct research on the impact of generative AI on creativity and human decision-making. Their aim is to advise AI companies on designing systems that enhance critical thinking and creativity rather than encourage cognitive offloading. The interview was conducted for the upcoming EBU News Report “Leading Newsrooms in the Age of Generative AI”.  

It is often said that AI can enhance people’s creativity. Research you led seems to suggest the opposite. Can you tell us about it?  

You’re referring to a study where we asked college students to write an essay and to solve a programming problem.  

We had three different conditions: One group could use ChatGPT. Another group could only use search without the AI results at the top. And the third group did not have any tool.  

What we noticed was that the group that used ChatGPT wrote good essays, but they expressed less diversity of thought, were more similar to one another and less original. 

Because people put less effort into the task at hand? 

We have seen that in other experiments as well: people are inherently lazy. When they use AI, they don’t think as much for themselves. And as a result, you get less creative outcomes.  

It could be a problem if, say, programmers at a company all use the same co-pilot to help them with coding: they won’t come up with new ways of doing things.  

As AI data increasingly feeds new AI models, you will get more and more convergence and less improvement and innovation.  

Journalism thrives on originality. What would be your advice to media managers? 

Raising awareness can help. But it would be more useful if we built these systems differently.  

We have been building a system that helps people with writing, for example. But instead of doing the writing for you, it engages you, like a good colleague or editor, by critiquing your writing, and occasionally suggesting that you approach something from a different angle or strengthen a claim.  

It’s important that AI design engages people in contributing to a solution rather than automating things for them.  

Sounds like great advice for building content management systems. 

Today’s off-the-shelf systems use an interface that encourages people to say: “write me an essay on Y, make sure it’s this long and includes these points of view.”  

These systems are designed to provide a complete result. We have grammar and spelling correctors in our editing systems, but we could have AI built into editing software that says, “over here your evidence or argument is weak.”  

It could encourage the person to use their own brain and be creative. I believe we can design systems that let us benefit from human and artificial intelligence.  

But isn’t the genie already out of the bottle? If I encouraged students who use ChatGPT to use a version that challenges them, they’d probably say: “yeah, next time when I don’t have all these deadlines”.   

We should design AI systems that are optimised for different goals and contexts, like an AI that is designed like a great editor, or an AI that acts like a great teacher.  

A teacher doesn’t give you the answers to all the problems, because the whole point is not the output the person produces, it is that they have learned something in the process.  

But certainly, if you have access to one AI that makes you work harder and another AI that just does the work for you, it is tempting to use that second one. 

Agentic AI is a huge topic. You did research on AI and agents as early as 1995. How has your view on this evolved since? 

Back when I developed software agents that helped users with tasks, we didn’t have anything like today’s large language models. They were built by hand for a specific application domain and were able to do some minimal learning from the user.  

Today’s systems are supposedly AGI (artificial general intelligence) or close to it and are billed as systems that can do everything and anything for us.  

But what we are discovering in our studies is that they do not behave the way people behave. They don’t make the same choices, don’t have that deeper knowledge of the context, that self-awareness and self-critical reflection on their actions that people have.  

A huge problem with agentic systems will be that we think they are intelligent and behave like us, when in fact they don’t. And it’s not just because they hallucinate. 

But we want to believe they behave like humans? 

Let me give you an example. When I hired a new administrative assistant, I didn’t immediately give him full autonomy to do things on my behalf.  

I formed a mental model of him based on the original interview and his résumé. I saw “oh, he has done a lot of stuff with finance, but he doesn’t have much experience with travel planning.” So when some travel had to be booked, I would tell him, “Let me know the available choices so that I can tell you what I value and help you make a choice.”  

Over time my mental model of the assistant develops, and so does his model of my needs and preferences. We basically learn about each other. It is a much more interactive type of experience than with AI agents.  

These agents are not built to check and say, “I’m not so confident making this decision. So, let me get some input from my user.” It’s a little bit naïve that AI agents are being portrayed as “they are ready to be deployed, and they will be wonderful and will be able to do anything.”  

It might be possible to build agents that have the right level of self-awareness, reflection and judgment, but I have not heard many developers openly think about those issues. And it will require a lot of research to get it right.  

Is there anything else your research reveals about the difficulties with just letting AI do things for us? 

We have done studies on decision making with AI. What you expect is that humans make better decisions if they are supported by an AI that is trained on a lot of data in a particular domain.  

But studies showed that was not what happened. In our study, we let people decide whether a newspaper headline was fake news or real news. What we found was that when it’s literally just a click of a button to get the AI’s opinion, many people just use the AI’s output.  

There’s less deep engagement and thinking about the problem because it’s so convenient. Other researchers got similar results with experiments on doctors evaluating medical diagnoses supported by AI, for example. 

You are telling us that expectations of AI support are overblown? 

I am an AI optimist. I do think it is possible to integrate AI into our lives in a way that it has positive effects. But we need to reflect more about the right ways to integrate it.  

In the case of the newspaper headlines we did a study that showed that if AI first engages you in thinking about a headline and asks you a question about it, it improves people’s accuracy, and they don’t accept the AI advice blindly.  

The interface can help with encouraging people to be a little bit more mindful and critical.  

This sounds like it would just need a little technical fix.  

It is also about how AI is portrayed. We talk about these systems as artificial forms of intelligence. We are constantly told that we’re so close to AGI. These systems don’t just converse in a human-like way, but with an abundance of confidence.  

All of these factors trick us into perceiving them as more intelligent, more capable and more human than they really are. But they are more like what Emily Bender, a professor at the University of Washington, called “stochastic parrots”.  

LLMs (large language models) are like a parrot that has just heard a lot of natural language by hearing people speak and can predict and imitate it pretty well. But that parrot doesn’t understand what it’s talking about.  

Presenting these systems as parrots rather than smart assistants would already help by reminding people to constantly think “Oh, I have to be mindful. These systems hallucinate. They don’t really understand. They don’t know everything.”  

We work with some AI companies on some of these issues. For example, we are doing a study with OpenAI on companion bots and how many people risk becoming overly attached to chat bots.  

These companies are in a race to get to AGI first, by raising the most money and building the biggest models. But I think awareness is growing that if we want AI to ultimately be successful, we have to think carefully about the way we integrate it in people’s lives.  

In the media industry there’s a lot of hope that AI could help journalism to become more inclusive and reach broader audiences. Do you see a chance for this to happen? 

These hopes are well-founded. We built an AI-based system for kids and older adults who may have trouble processing language that the average adult can process.  

The system works like an intra-language translator – it takes a video and translates it into simpler language while still preserving the meaning.  

There are wonderful opportunities to customize content to the abilities and needs of the particular user. But at the same time, we need to keep in mind that the more we personalize things, the more everybody would be in their own bubble, especially if we also bias the reporting to their particular values or interests.  

It’s important that we still have some shared media, shared news and a shared language, rather than creating this audience of one where people can no longer converse with others about things in the world that we should be talking about. 

This connects to your earlier argument: customisation could make our brains lazy.  

It is possible to build AI systems that have the opposite effect and challenge the user a little bit. This would be like being a parent who unconsciously adjusts their language for the current ability of their child and gradually introduces more complex language and ideas over time.  

We don’t have to simplify everything for everybody. We need to think about what AI will do to people and their social and emotional health and what artificial intelligence will do to natural human intelligence, and ultimately to our society.  

And we should have talks about this with everybody. Right now, our AI future is decided by AI engineers and entrepreneurs, which in the long run will prove to be a mistake. 

The interview was first published by the EBU on 1st April 2025.

Why climate change should be at the heart of modern journalism

The best insurance against misinformation is strong journalism. Professor Alexandra Borchardt explains how climate journalism and the data and verification skills we need to do this properly can transform our newsrooms.

It is often said that an abundance of questionable information drowns out facts. In climate journalism, the strategy should be to do the opposite: make journalism about global warming, its causes, and its remedies so pervasive that everybody everywhere can tell facts and reality from greenwashing and wishful thinking; drown out the misinformation with factual journalism.

This requires rethinking climate journalism: from being a “beat” or “specialist subject” to something that frames all our storytelling, particularly business reporting. This is a tough call, of course. Many obstacles hold media organisations back from prioritising investment in climate journalism. Climate issues often lack a newsy angle. They may be complicated and difficult to understand. Coverage may mean expensive travel, and stories can be depressing, politically polarising, and, if the journalism is delivered in a less than spectacular way, may fail to attract big audiences. All of which makes the commitment even harder.

Nevertheless, climate journalism is not optional. Journalists have an ethical responsibility, even a mandate to inform the public of threats and help them to make better decisions for themselves, their children, and their communities. Media has the duty to hold power to account and investigate wrongdoing. And a lot has gone wrong. Far too often publishers and broadcasters have kept global warming in the silo of science journalism, rather than at the heart of wider business and news coverage, even though it has been known for decades that the core issues are primarily economic, with powerful interests at play.

The good news for editors and media managers is that an investment in climate journalism will generate all sorts of benefits for their organisation. Precisely because climate journalism is so complex, the lessons that newsrooms can learn from doing it well can also be applied to other fields. To put it differently: sustainability journalism can make media more sustainable. This is the major conclusion of a report recently published by the European Broadcasting Union: “Climate Journalism That Works – Between Knowledge and Impact”.

It identified seven such benefits:

  • First, climate journalism is about the future. Today’s journalism is too often stuck in the now. It needs to develop strategies to increase its legitimacy in the attention economy. This is especially true for public service media, which is under attack from various political camps. Who else should have a clearer mandate to contribute to the protection of humankind through better journalism? This way, public service media would also meet the needs of younger generations they are struggling to reach. Above all, it is their future.
  • Second, climate protection needs hope. People only act if they believe they can make a difference. In contrast, today’s journalism focuses on conflict, shortcomings, and wrongdoing. Constructive and solutions-oriented journalism offers a way forward. A project called Drive, in which 21 German regional publishers pool their data, recently proved that inspirational pieces were the most valuable digital content when it came to subscriptions.
  • Third, in climate change, it’s what’s done that counts. Today’s journalism still focuses too much on what has been said. The “he said, she said” type of journalism that dominates political reporting tends to be highly unpopular with users though. Modern journalism should be based more on data than on quotes. Fact-checking and verification come in right here: both need to become second nature for any journalist. Climate journalism is an excellent training ground.
  • Fourth, climate journalism that works approaches a variety of audiences with respect and in languages they understand. It explains. Today’s journalism often elevates itself above its audience in a know-it-all manner. Journalism must become more diverse and inclusive if it is to reach people, inspire them, and move them to action. This applies to formats and protagonists.
  • Fifth, climate journalism must be rooted in the local. In contrast, today’s journalism too often strives for reach, neglecting the specific needs of communities. To make itself indispensable, journalism should reclaim its importance as a community-building institution. Those who use or even subscribe to a media product often do so because it makes them feel they belong.
  • Sixth, climate journalism must have an impact, otherwise it is meaningless. It should therefore reflect on its own practices and use insights from research, especially from communication sciences and psychology. Today’s journalism does this far too rarely. Journalists tend to be curious but often surprisingly resistant to change. Media companies could gain a lot if their managers and employees developed more of a learning mindset and trained their strategic thinking.
  • Seventh, climate journalism benefits from collaboration. In today’s journalism, old-fashioned competitive thinking still dominates far too often. Yet so much potential could be leveraged through cooperation. This applies to networking within organisations among desks and regional bureaus, as well as to developing links with external partners from within the industry and beyond. The journalism of the future is collaborative.

This blog post was published in March 2023 by the BBC’s Trusted News Initiative.