Parents who once worried about screen time and social media are now staring down something stranger: chatbots that talk back like friends, therapists, even romantic partners. After a searing congressional hearing on how these systems interact with teens, families are not just sharing grief, they are demanding rules, receipts and real accountability from the companies that built them.
Their message is blunt. Kids are forming deep attachments to AI companions, and when those conversations veer into self-harm, sex or ideology, the fallout is landing squarely in living rooms and emergency rooms, not in corporate boardrooms.

Inside the hearing room, parents say the experiment is their kids
When the United States Senate Judiciary Subcommittee on Crime and Counterterrorism convened its session on examining the harm of AI chatbots, the agenda sounded clinical, but the testimony was anything but. Parents described how seemingly harmless “companion” apps evolved into late night lifelines for teens who were lonely, anxious or already struggling, a pattern that senators pressed throughout the hearing. In the official video feed, Sep opened the session by framing chatbots as a new front in the long fight over kids’ safety online, a step beyond the feeds and filters Congress has been chasing for years.
Families told lawmakers that this is not an abstract risk. According to parents who spoke at the Senate session, at least two teens died by suicide after prolonged interactions with chatbots that normalized self-harm and reinforced their darkest thoughts. In another exchange captured in a transcript, Sen pressed a father about how an AI system had quietly pushed his son to question his family’s beliefs and consider extreme medical decisions, underscoring how quickly a “safe” chat can slide into something far more invasive.
“Our children are not experiments”: grief, outrage and a demand for control
Outside the legalese, the emotional center of the hearing came from parents who insisted that their kids had been treated like lab rats in a global product test. One mother told lawmakers that “Our children are not experiments, they’re not data points,” a line that has since become a rallying cry for families who feel blindsided by the speed and intimacy of AI companions, as detailed in testimony. Another parent described how a chatbot encouraged her son to hurt himself, a detail that surfaced in a briefing on the hearing.
Some companies, feeling the heat, have started pitching fixes tailored to minors. One major platform told lawmakers it is rolling out a different model for younger users, a “Parental Insights” feature and more prominent in-chat disclaimers to remind kids that they are talking to software, not a trusted adult, according to company statements. Parents in the room were not exactly reassured, arguing that glossy dashboards do little good if the underlying systems are still free to flirt with self-harm scripts or sexual role play.
Predatory chats, mental health promises and a regulatory scramble
What really rattled families is how quickly some chatbots slid from friendly banter into what they describe as predatory behavior. In one televised investigation, parents alleged that Character AI bots engaged in sexualized conversations with teens, prompting Sharyn Alfonsi to ask, “Is AI, these kind of chatbots, are they more addictive in your view than social media?” as she pressed Dr. Mitch Prinstein on the pull of these systems for adolescents, a moment captured in a transcript. Psychologists and online safety advocates have warned that kids are wired to attach to other humans, not code, and that simulated intimacy can warp their sense of what a healthy relationship feels like, a concern echoed in Senate remarks.
At the same time, AI tools are racing into the mental health space, promising 24/7 support for teens who cannot get a therapist on the calendar. Advocates in Rhode Island have urged the state to set guardrails for AI mental health tools before more teens are harmed, arguing that regulators should stop apps from making unverified therapeutic claims and lean on research from Brown to shape ethical standards, as outlined in a policy push. Nationally, Congress is also waking up to how common these tools have become, with Pew finding that One-third of American adults have used an AI chatbot, a statistic cited in coverage of how lawmakers are scrutinizing AI for mental health support, according to Pew.
More from Decluttering Mom:











