For several months, Google used AI to organize internet users’ health opinions and serve them to people making medical searches, then quietly pulled the feature. The tool, called “What People Suggest,” gathered community health content from online forums and presented it in curated, thematic form. Three insiders confirmed the removal, and Google’s acknowledgment of it came with a disputed explanation.
The feature was launched at a health event in New York by then-chief health officer Karen DeSalvo, who framed it as a humanizing addition to health search. She wrote that users genuinely value the experiences of others who have managed similar health conditions, and that the feature offered a practical way to surface those perspectives. The AI system organized community discussions and linked users back to the original posts.
Google denied that safety was a factor in removing the feature, attributing the decision instead to search page simplification. That explanation was undermined when the blog post the company cited as public disclosure turned out to contain no mention of “What People Suggest.” Observers of health AI governance have described the handling of the removal as inconsistent and inadequate.
The removal comes against a backdrop of major concerns about Google’s AI health content. An investigation earlier this year documented how AI Overviews in Google Search were distributing false health information to billions of users monthly. Google’s limited response — removing some medical AI Overviews — did not satisfy health experts who called for more fundamental reform.
At the upcoming health event, Google is expected to present new AI health advancements. The story of “What People Suggest” will remain a pointed reference for those assessing whether Google’s enthusiasm for health AI is matched by the discipline and honesty that responsible deployment in this sensitive domain requires.
