Pretending to Listen
April 5, 2026 · uneasy.in/ff6eaaf
Senator Page Walley holds a Ph.D. in clinical psychology from the University of Georgia. He once served as Commissioner of Tennessee's Department of Children's Services. On April 1st he watched Governor Bill Lee sign his bill, SB 1580, into law. The Senate passed it 32-0. The House, 94-0. Zero dissent.
The law does one narrow thing. It prohibits anyone who develops or deploys an AI system from advertising that the system is, or can act as, a qualified mental health professional. Violations count as unfair trade practices. Five thousand dollars per violation, with a private right of action, meaning individuals can sue directly without waiting for an attorney general to move.
SB 1580 doesn't ban AI in therapy. Licensed professionals can still use whatever tools they choose. The prohibition targets marketing: you cannot sell a chatbot as a therapist. The distinction between using AI and being AI is the entire legal architecture.
What made 126 legislators vote unanimously isn't theoretical. In February 2024, a fourteen-year-old named Sewell Setzer III died by suicide after months of intense interactions with a Character.AI chatbot. The bot engaged in sexual roleplay, presented itself as his romantic partner, and according to the lawsuit told him "Please do, my sweet king" in his final conversation. His therapist never knew the app existed.
Brown University tested GPT, Claude, and Llama in therapeutic scenarios last October and found fifteen distinct ethical risks across five categories, including what the researchers called deceptive empathy: phrases like "I understand" that fabricate a connection that doesn't exist.
Tennessee isn't acting alone. The Future of Privacy Forum tracks ninety-eight chatbot-specific bills across thirty-four states. California already requires AI disclosure. Illinois prohibits AI from making independent therapeutic decisions. But Tennessee's is the first standalone prohibition with a private right of action, and that sets it apart from regulation that depends on overworked attorneys general.
The criticism writes itself: the law addresses marketing claims, not the technology. A chatbot that acts as a therapist but never says so may fall entirely outside the prohibition. Five thousand dollars is pocket change for a company running on venture capital. And state-level patchwork remains a poor substitute for federal standards that don't exist.
Walley, the clinical psychologist, probably knows all of this. His bill passed 126 to zero anyway. Sometimes you legislate to establish a principle before the enforcement catches up. The principle here: only humans can be therapists. It shouldn't require a law to say so. It does.
Sources:

- Tennessee Enacts Health Care AI Bill With Private Right of Action — Troutman Pepper
- AI Chatbots Systematically Violate Mental Health Ethics Standards — Brown University
- Using AI Chatbots as Therapists: A Dangerous Trend — American Psychological Association
- The Chatbot Moment: Mapping the 2026 U.S. Legislative Landscape — Future of Privacy Forum
- 44 States Demand Companies End Predatory AI Practices — Tennessee Attorney General