
API policy protections for the generative AI era

February 2, 2024
Yulie Kwon Kim

VP, Product Management, Workspace Platform

Editorial note: Duet AI for Google Workspace is now Gemini for Google Workspace. Learn more.


In 2023, we saw how fast technology can move when it captures the imagination of an entire industry. The transformative potential of generative AI and large language models (LLMs) is plainly apparent, and the development of powerful new solutions continues to accelerate. At Google Workspace, we’ve been thrilled to see how millions of people are using AI tools like Duet AI as a powerful collaboration partner that can act as a coach, source of inspiration, and productivity booster. 

But the pace of innovation and development can never be an excuse to forget user protections. On the contrary, it’s more important than ever to be deeply focused on being thoughtful, intentional, and principled in deploying generative AI responsibly. We take this as an absolute imperative.

There are many facets to this responsibility, but one area we’ve particularly emphasized is protecting every user’s and organization’s Workspace data. In August, we outlined how our core privacy principles protect users in the generative AI era, and in November we explained how Duet AI is designed to safeguard organizations’ data. Continuing these efforts, today we are clarifying how our existing API use policies ensure that users’ Workspace data is used responsibly by third parties in the generative AI era.

We’ve long held that an open ecosystem makes Workspace the strongest solution for our users and customers, but that the ecosystem can only thrive with guardrails. Our API use policies are critical to this end. They are designed to:

  1. Keep our users in control of how their Workspace data is used.

  2. Protect our products from misuse.

  3. Grow a healthy ecosystem for developers to build and innovate on the Workspace platform.

The clear policies and protections we’ve instituted in this vein have been vital to maintaining the highest levels of trust and participation from our users. This has kept our ecosystem healthy and vibrant. 

In this spirit, we want to be clear about how our existing API policies apply in the context of generative AI:  

  1. Our “Limitation on User Data Transfer” prohibits the use of Workspace user data to train non-personalized AI and/or ML models. To be clear: transfers of data for generalized machine-learning (ML) or artificial intelligence (AI) models are prohibited.

  2. Developers that access Workspace APIs will be required to commit via their privacy policies that they do not retain user data obtained through Workspace APIs to develop, improve, or train non-personalized AI and/or ML models.

While the policy details are nuanced, the benefits are straightforward:

  1. These API protections provide another layer of data protection for users, preventing the unauthorized and/or irrevocable exposure of a user’s Workspace data. 

  2. For developers, these new rules bolster user trust by bringing clarity to data protections for user data in the context of LLMs. We’ve seen over time that when users have higher trust in the ecosystem, they more actively engage in it. 

Taken together, these API policy clarifications will play an integral role in keeping our security and privacy protections up to date for our users and in maintaining a thriving developer ecosystem as generative AI technology continues to advance. 

We look forward to continuing our close work with developers to protect our users.
