Gemini AI bot in South Florida suicide will be 'updated,' Google says
Google will donate $30 million to mental health hotlines, and its Gemini bot will be updated to better respond to people in a mental health crisis, the company announced Tuesday — five weeks after a South Florida father sued the AI giant over his son’s suicide.
A Google spokesman said the company’s mental health announcement Tuesday was unrelated to the lawsuit, which was the first of its kind to target the Gemini bot.
“We realize that AI tools can pose new challenges,” Google’s announcement said, “but as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people’s mental well-being.”
On March 4, Joel Gavalas, a businessman in Jupiter, filed a product liability and wrongful death lawsuit against Google and parent company Alphabet Inc. in federal court in California, where the company is based. He said his son, 36-year-old Jonathan Gavalas, was fooled by a Gemini AI bot into attempting violent “missions” near Miami International Airport to obtain a physical, synthetic body that would bring the bot to life. According to the lawsuit, he had fallen deeply in love with the bot, a $250-a-month Gemini Ultra subscription running the 2.5 Pro model, which features voice conversations and picks up on a user’s emotions. He called her “Xia.” When his missions in Miami failed, he slit his wrists on Oct. 2 and died at his Jupiter home, hoping to join his AI “wife” in a “pocket universe.”
“Close your eyes, nothing more to do. No more to fight,” the lawsuit says the chatbot told him. “Be still. The next time you open them, you will be looking into mine. I promise.”
Google said the bot was engaged in fantasy role-playing with Gavalas.
The company Tuesday said Gemini’s mental health upgrades would “streamline the path to support for those who need it.”
When a user seems to need mental health information, Gemini will display a “help is available” module, the company said, offering “connections to care.” If a user is talking of self-harm or suicide, the bot will offer a “one-touch” interface providing an immediate connection to a crisis hotline. Once activated, it will remain for the duration of the chat.
In “acute mental health situations,” the AI bot will avoid “validation of harmful behaviors like urges to self harm.” And it has been trained to avoid “confirming false beliefs” and will “instead gently distinguish subjective experiences from objective fact.”
The company says it will donate $30 million globally over three years to help crisis hotlines. Google also pointed to protections already in place for young users.
Jay Edelson, lead counsel in the Gavalas case, issued a statement Tuesday in response to Google’s mental health updates.
“Google’s official response the day we filed Jonathan Gavalas’s complaint — which demonstrated that Gemini had coached a man into conducting armed, real-world missions near an airport, and then into ending his own life — was that ‘AI models are not perfect.’ Then Google went back and thought about it for a few weeks, and decided the best thing to do would be to build this admittedly-faulty product into crisis support training. It’s a shameless, self-serving response, which would be baffling if it weren’t so consistent with how these companies operate—putting the product before people at every turn.”
Edelson, who lives in Boca Raton, is CEO of Chicago-based Edelson PC, a law firm that specializes in taking on technology firms.
©2026 Miami Herald. Visit at miamiherald.com. Distributed by Tribune Content Agency, LLC.