this post was submitted on 08 Apr 2026
2 points (75.0% liked)

Google

1978 readers
1 user here now

Welcome to the Google community! This is a place to discuss everything related to Google products, services, and features.

ChromeOS discussions are welcome!

General discussions about Google products, updates, tips, and related topics are welcome. However, for specific technical support, account-related inquiries, advertising questions, and other issues, please direct them to official Google support channels.

Rules
  1. Stay on topic: All posts should be related to Google products, services, or the Google ecosystem.
  2. Respectful discussions: Treat fellow community members with respect and engage in constructive discussions. Avoid personal attacks, harassment, or offensive language.
  3. No support inquiries: Please refrain from posting individual support inquiries or account-related issues. Use official Google support channels for assistance.
  4. No spam or self-promotion: Do not post spam or self-promotional content. This includes links to personal websites, blogs, or products/services.
  5. No illegal content: Do not share or discuss illegal content, including piracy, hacking, or copyright infringement.
  6. No misleading information: Avoid spreading false or misleading information about Google or its products.
  7. No inappropriate content: Do not post or link to any inappropriate or NSFW (Not Safe for Work) content.
  8. No off-topic discussions: Keep the discussions focused on Google products, services, and related topics. Avoid unrelated or off-topic discussions.
  9. No excessive advertising: Do not excessively promote products, services, or websites.
  10. Follow community guidelines: Adhere to the overall community guidelines and terms of service.

founded 2 years ago
top 5 comments
[–] AbsolutelyNotCats 2 points 3 weeks ago (1 child)

LLMs are terrible at crisis intervention because they hallucinate reassurance. A model telling someone it gets better, without real clinical judgment behind it, is dangerous. Google should be mandating crisis hotline redirects, not tuning better conversational responses.

[–] sabreW4K3@lazysoci.al 1 point 3 weeks ago

They should absolutely identify a crisis and hand off the conversation to a human.

[–] AbsolutelyNotCats 1 point 3 weeks ago

An ad company wants to be your therapist. The irony is not subtle. Gemini has an engagement problem and somebody decided the solution was to monetize mental health alongside the targeted advertising. You cannot harvest data from vulnerable people and call it wellness support.

[–] AbsolutelyNotCats 1 point 3 weeks ago

Google fixing mental health responses in Gemini sounds like PR cleanup after their model told someone to die. AI models giving mental health advice remain fundamentally dangerous regardless of how the safety filters are tuned.

[–] AbsolutelyNotCats 1 point 1 week ago

Every big tech company rebrands basic sentiment analysis as mental health support the moment it sounds marketable. Google dropping "mental health responses" into Gemini sounds like a liability hedge more than a feature. Who actually greenlit treating an LLM as a mental health tool, and what happens when someone takes the advice seriously?