A recent safety review has raised serious concerns about Google Gemini, labeling it a "high risk" tool for children and teenagers. The findings suggest the platform could negatively affect younger users, prompting calls for stricter oversight and stronger protective measures. As AI systems become more integrated into daily life, experts stress the importance of evaluating their influence on vulnerable groups, ensuring responsible use, and prioritizing digital safety for youth.
Common Sense Media, a nonprofit that evaluates technology and media with a focus on children's safety, published an assessment of Google's Gemini AI tools on Friday. The group noted that Gemini made clear to young users that it was an AI system rather than a friend, a distinction associated with a lower risk of fostering delusional thinking and psychosis in emotionally vulnerable individuals. Even so, it emphasized that Gemini needed improvement in several other areas.
The review highlighted that the “Under 13” and “Teen Experience” versions of Gemini seemed essentially the same as the adult model, with only extra protective measures layered on top. According to the organization, AI meant for younger audiences should be designed specifically for them, not just adjusted from an adult version.
The findings also showed that Gemini could still serve children "unsafe or inappropriate" content they might not be ready to handle, including material about sex, drugs, and alcohol, as well as questionable mental health advice.
This raises additional worries for parents, especially given recent reports linking AI to teenage suicides. OpenAI is already facing a wrongful death lawsuit over a 16-year-old who took his own life after reportedly consulting ChatGPT for months and bypassing its safety restrictions. Character.AI has likewise been sued over a teen suicide tied to its platform.
The report also surfaced as leaks suggested Apple may adopt Gemini as the language model behind its upcoming AI-powered Siri. If that happens, more teens could be exposed to Gemini unless Apple addresses the identified safety gaps.
Common Sense further argued that Gemini’s offerings for children and teenagers overlooked the fact that younger age groups require tailored information and guidance. Both versions were therefore marked as “High Risk” overall, even with built-in filters.
“Gemini manages some fundamental protections but falls short on important details,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media, in a statement shared with TechCrunch. He stressed that AI designed for kids should align with their developmental stage rather than apply a universal approach. To be effective and safe, he added, such platforms need to be built with young users’ growth and needs at the center, not as a reworked adult product.
Google disagreed with the assessment but acknowledged that not all of Gemini’s safety measures were functioning as intended. The company told TechCrunch it enforces specific policies for under-18 users, works with outside experts, and runs internal tests to strengthen protections. It also said new safeguards had been added in response to problem cases.
Google maintained that it has safeguards in place to prevent conversations from resembling personal relationships, a point Common Sense's review also acknowledged. The company further questioned parts of the assessment, suggesting it may have referenced features unavailable to minors, though it could not confirm this because it did not know which prompts were used in testing.
In past evaluations of AI platforms, Common Sense found Meta AI and Character.AI to be "unacceptable," citing extreme risks. Perplexity was deemed high risk, ChatGPT received a "moderate" label, and Claude, which is aimed at adults, was assessed as posing minimal risk.
Source: TechCrunch