Interview: Thinking through the ethics of AI assistants with Iason Gabriel from Google DeepMind

The Google Workspace Team
AI assistants are in the air at Google. In recent months, we’ve introduced multiple products and features with agentic capabilities, meaning they can act on a user’s behalf (a few examples: Google Cloud’s Agentspace, Gemini’s Deep Research, and custom Gems in Google Workspace with Gemini). Meanwhile, the dream of a universal AI assistant continues to evolve with Project Astra.
But how do we make sure our innovations are both bold and responsible, and where do ethics come in as we explore future possibilities?
Professor Hannah Fry, British mathematician and host of Google DeepMind’s AI podcast, recently interviewed Iason Gabriel, a staff research scientist in DeepMind’s ethics team, about these questions and more. Before joining Google, Gabriel taught moral and political philosophy at Oxford University and worked at the United Nations. His work at the intersection of ethics and artificial intelligence has earned him recognition as one of the leading thinkers in the field, and he was recently featured in the Time 100 AI list. This interview is adapted from a longer conversation on Google DeepMind’s podcast, and it's been edited here for clarity and length.
Professor Hannah Fry: I think it's probably good to start with some definitions. So what do you actually mean by an AI assistant?
Iason Gabriel: So we're all familiar with generative AI and things like ChatGPT and Gemini, but there's this idea that these kinds of base technologies will become more capable down the line. They'll be plugged into different kinds of tools that will allow them to take action in the world. We've seen them becoming more competent at reasoning, and maybe in due course they'll be able to pursue really complicated goals rather than just producing very fluent text.
And so the assistant is a kind of agent, but one that has a special relationship with the user, and which is linked to the user's intentions. So it does what I tell it to do within reason and potentially it can help us on this life journey in a variety of different ways.
Hannah: What types of AI assistants are we talking about here?
Iason: When people talk about AI assistants, they range from things that are almost right in front of us to things that are potentially really powerful and advanced technologies. So the things that are right in front of us are the administrative assistants, for example, interfaces for managing your meetings in your calendar. Some kinds of conversational chatbots are already quite sophisticated and you can imagine having really good learning experiences with an AI assistant that's designed to help you master some skill.
But there are more capable things that could be built and sometimes it's a matter of taking an example of what we have now and just extrapolating into the future. We can imagine a kind of research helper, but can we imagine a research helper that has literally read every scientific paper in the world and has a superhuman ability to synthesize information?
And there’s also the idea of an AI assistant as a coach that helps you achieve your goals and keeps you on track. Maybe it protects you against certain kinds of distractions or mistakes you might make.
And then I think one final thing people have started to talk about is a universal interface or a kind of assistant that moves between different devices. And so maybe it does information retrieval here and gives advice over there. That's really a different world from the one we're in now.


Hannah: You published a paper on the ethics of advanced AI assistants. Can you tell us how the paper came about and what you explored?
Iason: The paper was the result of an almost two-year collaboration and we were really preoccupied with this question of what comes after language models. For us this is a very high-stakes question. We really need to know what's going to happen next so that we can understand what the consequences might be, and then reason backwards to make good decisions in the present moment.
We started to have this dawning realization that there would be an agentic turn, or that you could build all these things on top of language models. And then we just started to ask ourselves questions like “What happens if you have a million or a billion agents in the world?” That's quite a different society from the one we live in now.
We gathered researchers from many different disciplines: economists, sociologists, computer scientists, and human-computer interaction experts. My co-authors, Ariana Manzini and Geoff Keeling, and I pretty much asked everyone to tell us about their own expertise, mapped onto this topic.
We went and spoke to the privacy researchers and we asked: “What do you think privacy looks like in this world where agents are interacting with one another on our behalf? What do you think safety looks like in that world?”
Hopefully the paper captures quite a lot of that wisdom, applied to this one domain.
Hannah: Let's get into some of the knotty philosophical issues in more detail. It feels like there's been a lot of conversation about anthropomorphization recently. Do we actually want to think of our AI as though it's human?
Iason: As people have interacted with these systems, they’ve felt an unexpected magnetism or pull that comes from an AI that’s fluent and very intelligent. And then, of course, there's this question about what the ideal persona should be. Whether we want to think of AI as though it’s human depends on the context. Some kind of baseline anthropomorphic ability, like speaking natural language, is very helpful. We'd all rather talk to our assistant than type out instructions. So it makes it easier to communicate, which is potentially a great thing.
I think there's also a kind of bad situation that we want to avoid, which is essentially people forgetting the nature of the interaction they're in. When you study people who use AI companions, it's quite complicated, and there's actually a lot of evidence that it can be a really beneficial experience.
Hannah: In what way?
Iason: There's evidence that it's improved their sense of mental health and wellbeing. And they report that it has led them to have better interactions with others because they can model discussions in advance. It's a bit like an energy boost to have this kind of companionship.
These things are complex and in our paper we offer some benchmarks. For example, we want these relationships to be conducive to long-term health and we don't want the user's autonomy to be undermined.
But there’s also the bigger question of how do you make sure that users remain anchored and safe within these kinds of environments?
So I imagine the thing that would naturally happen is that we start telling our AI all sorts of things we would never have thought we’d tell a computer before. And that might be okay, but we need to make sure people are safe in that interaction. There are all these safeguards we need to build around it, and probably also some kind of check-in protocol where it pushes back on certain kinds of things. One thing we've been very clear on is that we don't think the AI should actually pretend to be human.
Hannah: I do also wonder a little bit about what happens at the marketplace level of assistants. As with driverless cars, where you want to buy the car that will save your life, you want to buy the assistant that will give you the advantage.
Iason: It's true, but it's not necessarily the case that everything has to be set up through a chaotic market system. The flip side of AI assistants is that they don't have to be engaged in a kind of aggressive, unstructured interaction with one another. It's possible that a coordinated system could work much better. Imagine if our assistants were working for our advantage, but in a kind of collectively joined-up way.
Watch the full YouTube video of the interview for more on the ethics of advanced AI agents.