Prof. Pattie Maes, MIT: “We don’t have to simplify everything for everybody”

Prof. Pattie Maes and her team at the MIT Media Lab conduct research on the impact of generative AI on creativity and human decision-making. Their aim is to advise AI companies on designing systems that enhance critical thinking and creativity rather than encourage cognitive offloading. The interview was conducted for the upcoming EBU News Report “Leading Newsrooms in the Age of Generative AI”.  

It is often said that AI can enhance people’s creativity. Research you led seems to suggest the opposite. Can you tell us about it?  

You’re referring to a study in which we asked college students to write an essay and solve a programming problem.  

We had three different conditions: One group could use ChatGPT. Another group could only use search without the AI results at the top. And the third group did not have any tool.  

What we noticed was that the group that used ChatGPT wrote good essays, but those essays expressed less diversity of thought, were more similar to one another and less original. 

Because people put less effort into the task at hand? 

We have seen that in other experiments as well: people are inherently lazy. When they use AI, they don’t think as much for themselves. And as a result, you get less creative outcomes.  

It could be a problem if, say, programmers at a company all use the same co-pilot to help them with coding: they won’t come up with new ways of doing things.  

As AI data increasingly feeds new AI models, you will get more and more convergence and less improvement and innovation.  

Journalism thrives on originality. What would be your advice to media managers? 

Raising awareness can help. But it would be more useful if we built these systems differently.  

We have been building a system that helps people with writing, for example. But instead of doing the writing for you, it engages you, like a good colleague or editor, by critiquing your writing, and occasionally suggesting that you approach something from a different angle or strengthen a claim.  

It’s important that AI design engages people in contributing to a solution rather than automating things for them.  

Sounds like great advice for building content management systems. 

Today’s off-the-shelf systems use an interface that encourages people to say: “write me an essay on Y, make sure it’s this long and includes these points of view.”  

These systems are designed to provide a complete result. We have grammar and spelling correctors in our editing systems, but we could have AI built into editing software that says, “over here your evidence or argument is weak.”  

It could encourage the person to use their own brain and be creative. I believe we can design systems that let us benefit from human and artificial intelligence.  

But isn’t the genie already out of the bottle? If I encouraged students who use ChatGPT to use a version that challenges them, they’d probably say: “yeah, next time when I don’t have all these deadlines”.   

We should design AI systems that are optimised for different goals and contexts, like an AI that is designed like a great editor, or an AI that acts like a great teacher.  

A teacher doesn’t give you the answers to all the problems, because the whole point is not the output the person produces, it is that they have learned something in the process.  

But certainly, if you have access to one AI that makes you work harder and another AI that just does the work for you, it is tempting to use that second one. 

Agentic AI is a huge topic. You did research on AI and agents as early as 1995. How has your view on this evolved since? 

Back when I developed software agents to help people with tasks, we didn’t have anything like today’s large language models. They were built by hand for a specific application domain and were able to do some minimal learning from the user.  

Today’s systems are supposedly AGI (artificial general intelligence) or close to it and are billed as systems that can do everything and anything for us.  

But what we are discovering in our studies is that they do not behave the way people behave. They don’t make the same choices, don’t have that deeper knowledge of the context, that self-awareness and self-critical reflection on their actions that people have.  

A huge problem with agentic systems will be that we think they are intelligent and behave like us, but they don’t. And it’s not just because they hallucinate. 

But we want to believe they behave like humans? 

Let me give you an example. When I hired a new administrative assistant, I didn’t immediately give him full autonomy to do things on my behalf.  

I formed a mental model of him based on the original interview and his résumé. I saw “oh, he has done a lot of stuff with finance, but he doesn’t have much experience with travel planning.” So when some travel had to be booked, I would tell him, “Let me know the available choices so that I can tell you what I value and help you make a choice.”  

Over time my mental model of the assistant develops, and his model about my needs and preferences. We basically learn about each other. It is a much more interactive type of experience than with AI agents.  

These agents are not built to check and say, “I’m not so confident making this decision. So, let me get some input from my user.” It’s a little bit naïve that AI agents are being portrayed as “they are ready to be deployed, and they will be wonderful and will be able to do anything.”  

It might be possible to build agents that have the right level of self-awareness, reflection and judgment, but I have not heard many developers openly think about those issues. And it will require a lot of research to get it right.  

Is there anything else your research reveals about the difficulties with just letting AI do things for us? 

We have done studies on decision making with AI. What you expect is that humans make better decisions if they are supported by an AI that is trained on a lot of data in a particular domain.  

But studies showed that was not what happened. In our study, we let people decide whether a newspaper headline was fake news or real news. What we found was that when it’s literally just a click of a button to get the AI’s opinion, many people just use the AI’s output.  

There’s less deep engagement and thinking about the problem because it’s so convenient. Other researchers got similar results with experiments on doctors evaluating medical diagnoses supported by AI, for example. 

You are telling us that expectations of AI support are overblown? 

I am an AI optimist. I do think it is possible to integrate AI into our lives in a way that it has positive effects. But we need to reflect more about the right ways to integrate it.  

In the case of the newspaper headlines, we did a study that showed that if AI first engages you in thinking about a headline and asks you a question about it, people’s accuracy improves, and they don’t accept the AI’s advice blindly.  

The interface can help with encouraging people to be a little bit more mindful and critical.  

This sounds like it would just need a little technical fix.  

It is also about how AI is portrayed. We talk about these systems as artificial forms of intelligence. We are constantly told that we’re so close to AGI. These systems don’t just converse in a human-like way, but with an abundance of confidence.  

All of these factors trick us into perceiving them as more intelligent, more capable and more human than they really are. But they are closer to what Emily Bender, a professor at the University of Washington, called “stochastic parrots”.  

LLMs (large language models) are like a parrot that has just heard a lot of natural language by hearing people speak and can predict and imitate it pretty well. But that parrot doesn’t understand what it’s talking about.  

Presenting these systems as parrots rather than smart assistants would already help by reminding people to constantly think “Oh, I have to be mindful. These systems hallucinate. They don’t really understand. They don’t know everything.”  

We work with some AI companies on some of these issues. For example, we are doing a study with OpenAI on companion bots and how many people risk becoming overly attached to chatbots.  

These companies are in a race to get to AGI first, by raising the most money and building the biggest models. But I think awareness is growing that if we want AI to ultimately be successful, we have to think carefully about the way we integrate it in people’s lives.  

In the media industry there’s a lot of hope that AI could help journalism to become more inclusive and reach broader audiences. Do you see a chance for this to happen? 

These hopes are well-founded. We built an AI-based system for kids and older adults who may have trouble processing language that the average adult can process.  

The system works like an intra-language translator – it takes a video and translates it into simpler language while still preserving the meaning.  

There are wonderful opportunities to customize content to the abilities and needs of the particular user. But at the same time, we need to keep in mind that the more we personalize things, the more everybody will be in their own bubble, especially if we also bias the reporting to their particular values or interests.  

It’s important that we still have some shared media, shared news and a shared language, rather than creating this audience of one where people can no longer converse with others about things in the world that we should be talking about. 

This connects to your earlier argument: customisation could make our brains lazy.  

It is possible to build AI systems that have the opposite effect and challenge the user a little bit. This would be like a parent who unconsciously adjusts their language to the current ability of their child and gradually introduces more complex language and ideas over time.  

We don’t have to simplify everything for everybody. We need to think about what AI will do to people and their social and emotional health and what artificial intelligence will do to natural human intelligence, and ultimately to our society.  

And we should have talks about this with everybody. Right now, our AI future is decided by AI engineers and entrepreneurs, which in the long run will prove to be a mistake. 

The interview was first published by the EBU on 1st April 2025.

Beyond the headline race: How the media must lead in a polarized world

When US Supreme Court Justice Ruth Bader Ginsburg succumbed to cancer recently, the headline race was on once again. Instead of pausing for a moment to honor a great personality for her leadership and stamina in the quest for justice, most of the news media didn’t miss a beat. Who would President Donald Trump nominate as her successor, and how would that reshape American society? Reporting instantly took second place to speculation and opinion, drowning out the announcement of the 87-year-old’s death in a sea of noise.

The predominant frame for interpreting today’s world is winning and losing, and the media has bought right into it. Being faster, smarter, delivering yet another interpretation, speculation and judgement – a certain breathlessness has always been inherent in journalism. But in pre-digital times, news media only competed against each other. The difference now is that they are up against everything an average smartphone holds. The battle for attention shapes their very existence. And readers are responding by leaving in droves. According to the Reuters Institute’s Digital News Report, one in three people now regularly avoids the news. A rising share of audiences find journalism too overwhelming, too negative, too opinionated with too little relevance for their daily lives. And they believe it can’t always be trusted.

This is bad news – for democracy. In a world of noise, propaganda and misinformation, leadership by independent media that provide the facts is needed more than ever. Studies show that voter turnout is higher, more people run for office and public money is spent more responsibly where local news media keep citizens informed and hold institutions to account. But business models are broken. Platform monopolies have gobbled up advertising money and optimize for attention; too often the media has followed suit.

Now there is no way that media companies can outsmart Google, Facebook and the like. News media have to go where their audiences are. But when opinion is everywhere, quality information becomes a critically important currency. Covid-19 has demonstrated that people crave trustworthy journalism. According to the Edelman Trust Barometer, in the first weeks of the pandemic more people relied on major news organizations than on government agencies or even their own friends and family for information. This is a huge responsibility, but what to do with it?

First of all, listening to audiences is vital. Many journalists still spend more energy on beating the competition than on finding out what their audiences need: more explanation, more solutions, a clear distinction between facts and opinion, less noise, less clickbait and less talking down to people. Instead of indulging in thumbs-up, thumbs-down journalism, more constructive reporting is needed.

The news media cannot go it alone, though. The political sphere needs to secure press freedom; supporting the economic viability of the industry is part of it. And the platform companies that shape today’s communication infrastructure have to take responsibility too. Their algorithms have to optimize for quality content.

Yet blaming Silicon Valley for everything that is going wrong has been the easy way out for too long. A recent study by the Berkman Klein Center for Internet and Society confirmed what other research has already pointed out: the mass media are much more responsible for spreading misinformation – for the most part thought up by political leaders – than social media is. This is bad news and good news at the same time. Bad news, because journalism has not lived up to its potential. Good news, because the media still has plenty of agenda-setting power. Instead of blaming platform companies or foreign meddling for spreading “fake news”, the news media and its leaders should confidently reassert their historic mission to lead through a world of information confusion: that is, to deliver the facts, be transparent about their quest and stimulate serious public conversation. The health of our societies depends on it.