A human-in-the-loop approach to collaborating with gen AI

Reena Jana
Head of AI Research & Standards, Trust & Safety, Google
Gen AI is adept at performing complex calculations and consolidating large amounts of information. It can also help you generate a video, develop a blog post, or vibe code. But human input is still vital for guiding and overseeing AI, as humans have unique expertise and experiences that gen AI can’t replicate. One way to combine human and machine intelligence is to use a “human-in-the-loop” approach.
Human-in-the-loop refers to a person who collaborates with AI by using their subject-matter knowledge, situational understanding, and professional judgment to refine output and take advantage of gen AI’s data-gathering and generative potential. The approach is also used in the development of the gen AI tools you rely on. People collaborate with AI models to train them and test their performance over time. They apply human expertise to fine-tune how AI models interpret complex safety guidelines. And by evaluating these models against real-world nuances, people help ensure the technology remains consistently helpful and reliable in every interaction.
The human-in-the-loop approach maximizes the benefits of human-AI collaboration in quality, productivity, and reliability — blending the best of both to help you complete tasks and achieve your goals.
How it works
Human-in-the-loop might seem like a new term, but the approach has a long history: it dates back decades and was first used in the context of earlier, pre-AI automated systems. As an example, think about an airplane’s autopilot. It uses data points from dozens of sensors and signals to keep a plane on course. But it never has the final say.
If sensors fail, or a system malfunctions, or a particularly tricky maneuver is needed, there’s always a skilled, attentive pilot at the controls. Leaning on the strengths of both technology and humans, this approach keeps a safety net in place while decreasing the workload for people.
Gauge potential impact
Every day, we’re using gen AI for everything from polishing an email to reviewing legal contracts to conducting cancer research. And not every output needs a head-to-toe review.
Instead, it’s helpful to consider what impact the output will have and tailor your review to the elements most critical to its purpose. Consider these two factors; a simple sketch of how they can guide the depth of review follows the list:
- Risk. Some decisions are low-stakes and might require less oversight. But if irreversible decisions or large amounts of resources are involved, there’s a greater need for review.
- Scale. A decision that has the potential to affect many people, even in small ways, may benefit from additional scrutiny.
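As a rough illustration of how these two factors can work together, here is a minimal sketch in Python of a review policy, assuming a team scores each AI-assisted draft on risk and scale before deciding how closely to check it. The DraftOutput structure, the risk labels, and the numeric thresholds are assumptions made for this example, not recommendations from this article.

```python
# A minimal sketch of a review policy. The DraftOutput class, risk labels,
# and scale thresholds below are illustrative assumptions, not fixed rules.
from dataclasses import dataclass


@dataclass
class DraftOutput:
    """An AI-generated draft awaiting a decision on how deeply to review it."""
    summary: str
    risk: str   # "low", "medium", or "high" -- e.g., is the decision reversible?
    scale: int  # rough number of people the output could affect


def review_level(draft: DraftOutput) -> str:
    """Suggest how much human review a draft needs, based on risk and scale."""
    if draft.risk == "high" or draft.scale > 1000:
        return "full expert review before use"
    if draft.risk == "medium" or draft.scale > 50:
        return "targeted review of the most critical sections"
    return "quick spot check"


# Example: a contract summary that could commit significant resources.
contract_summary = DraftOutput("Vendor contract summary", risk="high", scale=12)
print(review_level(contract_summary))  # -> full expert review before use
```

The specific thresholds matter less than the habit: a person decides up front how much scrutiny an output needs, rather than giving every output the same treatment.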
Identify critical knowledge and find the right person
Just as a ship requires a navigator to chart the course, a human-in-the-loop approach ensures the AI is always steered by expert judgment.
A good reviewer:
- Understands the context of the task
- Can vouch for the accuracy of the inputs
- Knows how the outputs will be used
- Can vet the logic/approach for new challenges or problems
Because humans are better equipped to deal with uncertainty, the right human in the loop can take potential knowledge gaps into account when deciding how to use AI output. In many cases, the reviewer can be the same person who created the AI-assisted output.
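For teams that want to make that judgment explicit, here is a minimal sketch, assuming the four criteria above are confirmed before an AI-assisted output is approved. The Reviewer class and its field names are hypothetical, invented for this example.

```python
# A minimal checklist sketch. The Reviewer class and field names are
# hypothetical; they simply mirror the four criteria listed above.
from dataclasses import dataclass


@dataclass
class Reviewer:
    name: str
    understands_context: bool        # understands the context of the task
    can_vouch_for_inputs: bool       # can vouch for the accuracy of the inputs
    knows_how_output_is_used: bool   # knows how the outputs will be used
    can_vet_approach: bool           # can vet the logic/approach for new problems


def is_qualified(reviewer: Reviewer) -> bool:
    """Return True only if the reviewer meets all four criteria."""
    return all([
        reviewer.understands_context,
        reviewer.can_vouch_for_inputs,
        reviewer.knows_how_output_is_used,
        reviewer.can_vet_approach,
    ])


# Example: the person who drafted the AI-assisted report reviews it themselves.
author = Reviewer("Ada", True, True, True, True)
print(is_qualified(author))  # -> True
```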
Better together
A human-in-the-loop approach draws on human-AI partnership to get the best of both human expertise and AI’s data-processing power. This collaboration helps increase the accuracy, quality, and usability of output. It also has benefits for human workers. When AI is used well, your energy no longer needs to be consumed by low-importance, time-intensive tasks. And with a human-in-the-loop approach, you’re free to build AI-assisted workflows with the confidence that you’re always in control.



