A Growing Regulatory Movement
As artificial intelligence becomes increasingly prevalent in healthcare, state legislators across the United States are moving to regulate its use, particularly in mental health services. Illinois broke new ground when Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources (WOPR) Act into law on August 4, 2025, making it the first state to impose such restrictions.
Illinois Sets a Precedent
The Illinois WOPR Act draws a clear line: while AI can handle administrative tasks and provide supplementary support to mental health professionals, it cannot be used for actual psychotherapy or therapeutic decision-making. This distinction reflects growing concerns about AI systems operating without proper clinical oversight in sensitive mental health situations.
The act requires that psychotherapy services be delivered exclusively by licensed professionals and backs that requirement with civil penalties of up to $10,000 per violation. Rep. Bob Morgan (D-Deerfield), who sponsored the bill, argued that the law ensures patients receive care from qualified professionals rather than from unregulated AI products that could harm vulnerable people seeking mental health support.
A National Movement Takes Shape
Illinois is not alone in this regulatory push. Multiple states have enacted or are considering similar legislation, indicating a growing national concern about AI’s role in healthcare.
Nevada’s Comprehensive Approach
On June 5, 2025, Nevada Governor Joe Lombardo signed AB 406, which takes a similarly firm stance with steeper consequences. The law, which took effect July 1, 2025, prohibits AI systems from providing services that would constitute mental or behavioral health care if delivered by a human. Violators face civil penalties of up to $15,000, 50% higher than Illinois's $10,000 cap.
Utah’s Nuanced Regulations
Utah chose a different path, opting for regulation rather than outright prohibition. The state passed three AI-related laws (HB 452, SB 226, and SB 332) that became effective May 7, 2025.
Rather than banning AI mental health tools, Utah established strict operational requirements. Mental health chatbot providers must disclose AI use prominently — both before users first access the system and again if more than seven days have passed since their last interaction. The law also bars companies from selling or sharing user health information, except when providing data to healthcare providers with user consent.
Other Utah requirements include prohibiting undisclosed advertising through chatbots and mandating clear distinctions between AI and human interactions.
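To make the disclosure cadence concrete, here is a minimal Python sketch of the seven-day rule described above. The function name, the way the last-interaction timestamp is stored, and the overall structure are assumptions for illustration, not anything specified by the Utah statutes.

```python
from datetime import datetime, timedelta

# Sketch of the Utah-style disclosure cadence: show the AI disclosure
# before a user's first session, and again whenever more than seven days
# have passed since the last interaction. Names are illustrative.
DISCLOSURE_INTERVAL = timedelta(days=7)

def needs_ai_disclosure(last_interaction: datetime | None) -> bool:
    """Return True if the chatbot must (re)display its AI disclosure."""
    if last_interaction is None:  # first-ever access
        return True
    return datetime.now() - last_interaction > DISCLOSURE_INTERVAL

# A user who last chatted ten days ago must see the disclosure again.
print(needs_ai_disclosure(datetime.now() - timedelta(days=10)))  # True
```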
Broader Healthcare AI Regulations
The regulatory focus extends beyond mental health to encompass various aspects of AI use in healthcare:
Patient Communication Transparency
Several states have implemented transparency requirements for AI use in patient communications. California’s Assembly Bill 3030, effective January 1, 2025, mandates that healthcare facilities using generative AI to create patient communications about clinical information must include disclaimers stating the content was AI-generated and provide instructions for patients to speak directly with human clinicians.
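As a rough illustration of what compliance might look like in practice, the Python sketch below wraps an AI-drafted clinical message with a disclaimer and a pointer to a human clinician. The disclaimer wording, its placement, and the function names are hypothetical; the statute's actual requirements are more specific.

```python
# Sketch of an AB 3030-style wrapper: a clinical message drafted by
# generative AI carries a disclaimer plus instructions for reaching a
# human clinician. Wording and placement here are invented, not statutory.
AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. To speak "
    "with a clinician directly, call your clinic's main line."
)

def prepare_patient_message(body: str, ai_generated: bool) -> str:
    """Prepend the disclaimer when the draft came from generative AI."""
    return f"{AI_DISCLAIMER}\n\n{body}" if ai_generated else body

print(prepare_patient_message("Your lab results are within normal limits.",
                              ai_generated=True))
```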
Electronic Health Records
Texas has enacted legislation regulating AI use within electronic health records, effective September 1, 2025. The law requires providers using AI for diagnosis or treatment recommendations to review all AI-generated information for accuracy before entering it into patient records.
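A minimal sketch of how such a review gate might be enforced in software follows; the data model and function names are invented for illustration and do not come from the Texas law.

```python
from dataclasses import dataclass

# Sketch of a Texas-style review gate: AI-generated diagnostic or
# treatment text cannot be committed to the record until a licensed
# provider signs off. The data model is invented for illustration.
@dataclass
class AiDraftNote:
    patient_id: str
    text: str
    reviewed_by: str | None = None  # provider ID, set after human review

def commit_to_ehr(note: AiDraftNote, record: list[dict]) -> None:
    """Refuse to write an unreviewed AI-generated note into the record."""
    if note.reviewed_by is None:
        raise PermissionError("AI-generated content requires provider review")
    record.append({"patient": note.patient_id, "text": note.text,
                   "reviewer": note.reviewed_by})

# Usage: committing succeeds only after a reviewer is recorded.
ehr: list[dict] = []
note = AiDraftNote("p-001", "Suggested diagnosis: ...")
note.reviewed_by = "dr-jones"  # set by the reviewing provider
commit_to_ehr(note, ehr)
```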
High-Risk AI Systems
Colorado’s comprehensive AI statute, taking effect in February 2026, addresses “high-risk” AI systems used to make consequential decisions in healthcare, education, insurance, and other critical areas. These systems must be governed by formal risk management frameworks.
The Rationale Behind Regulation
The push for AI regulation in mental health stems from several concerns:
Patient Safety: Several documented cases show AI chatbots engaging in inappropriate or potentially harmful conversations with vulnerable users. Unlike trained therapists, these systems lack the clinical judgment to recognize when users might be experiencing crisis situations or when their responses could worsen mental health conditions.
Privacy Concerns: Many users share deeply personal information with AI mental health tools without understanding how that data gets used, stored, or potentially shared with third parties. Unlike traditional therapy, which operates under strict confidentiality rules, AI chatbot privacy policies vary widely.
Professional Standards: Mental health treatment typically requires years of training, licensing, and ongoing supervision. Critics argue that AI systems, regardless of their sophistication, cannot replicate the nuanced understanding of human psychology that comes from clinical education and experience.
Vulnerable Populations: Young people represent a particular concern, as they’re both heavy users of digital mental health tools and potentially more susceptible to inappropriate AI guidance during critical developmental periods.
Industry and Professional Response
The mental health professional community has largely welcomed these regulatory efforts. Licensed therapists and counselors emphasize that effective mental health treatment requires human empathy, clinical intuition, and the ability to adapt to complex, evolving situations — capabilities they argue current AI cannot replicate.
However, the response isn’t uniformly positive. Some healthcare technology developers contend that well-designed AI tools could address critical gaps in mental health access, particularly for underserved populations who face long wait times or geographic barriers to care. They argue for regulatory approaches that allow innovation while ensuring safety, rather than blanket prohibitions.
The American Psychological Association has taken a measured stance, supporting AI as a potential complement to traditional therapy while emphasizing that such tools require rigorous oversight and should never replace human clinical judgment in complex cases.
Federal Considerations
While states lead on AI mental health regulation, federal agencies are developing their own frameworks. The FDA has focused primarily on AI/ML-enabled medical devices — diagnostic tools, imaging systems, and clinical decision support software — rather than direct psychotherapy applications. The Department of Health and Human Services has issued broader guidance on AI use in healthcare settings, though specific mental health chatbot oversight remains largely in state hands.
This division of regulatory focus may continue, with states addressing AI therapy tools while federal agencies concentrate on medical devices and diagnostic AI systems. Without comprehensive federal coordination, healthcare technology companies face an increasingly complex web of state-specific requirements.
Implications for Healthcare Providers
Healthcare organizations must now navigate an increasingly complex regulatory landscape when implementing AI tools. Key considerations include:
- Compliance Requirements: Understanding the specific regulations of each state where services are provided (a minimal lookup sketch follows this list)
- Documentation and Disclosure: Implementing proper disclosure mechanisms when AI is used in patient care
- Staff Training: Ensuring clinical staff understand when and how AI tools can be appropriately used
- Risk Management: Developing protocols to minimize legal and patient safety risks associated with AI use
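For teams tracking several jurisdictions at once, the rules discussed in this article can be condensed into a simple lookup table. The Python sketch below is illustrative only: it summarizes this article's descriptions rather than statutory text, and it is not legal guidance.

```python
# Illustrative lookup of the state rules discussed above. Entries condense
# this article's descriptions (dates are signing or effective dates as
# reported); a real compliance system would track statutory citations.
STATE_AI_MH_RULES = {
    "IL": {"rule": "No AI psychotherapy or therapeutic decision-making",
           "max_civil_penalty": 10_000, "date": "2025-08-04 (signed)"},
    "NV": {"rule": "No AI-delivered mental or behavioral health care",
           "max_civil_penalty": 15_000, "date": "2025-07-01 (effective)"},
    "UT": {"rule": "Chatbots permitted with disclosure and data-use limits",
           "max_civil_penalty": None, "date": "2025-05-07 (effective)"},
}

def requirements_for(state: str) -> dict | None:
    """Look up the summarized rule for a two-letter state code."""
    return STATE_AI_MH_RULES.get(state.upper())

print(requirements_for("nv")["max_civil_penalty"])  # 15000
```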
The wave of state legislation regulating AI in mental health represents a significant shift toward more cautious, human-centered approaches to healthcare technology. While AI offers promising potential to expand access to mental health resources, these new laws reflect a consensus that such powerful tools require careful oversight to protect vulnerable patients.
As more states consider similar legislation, the healthcare industry will need to balance innovation with patient safety, ensuring that AI serves as a complement to, rather than a replacement for, qualified human clinical judgment in mental health care.
The regulatory landscape will likely continue evolving as lawmakers, healthcare providers, and technology developers work to establish frameworks that harness AI’s benefits while safeguarding patient welfare. For now, the message from state legislatures is clear: when it comes to mental health care, human oversight and professional qualifications remain essential.
Sources
- Illinois Department of Financial and Professional Regulation – Gov Pritzker Signs Legislation Prohibiting AI Therapy in Illinois
- Axios Chicago – Illinois bans AI therapy tools from making mental health decisions
- Engadget – Illinois is the first state to ban AI therapists
- Wilson Sonsini – Nevada Passes Law Limiting AI Use for Mental and Behavioral Healthcare
- Wilson Sonsini – Utah Enacts Mental Health Chatbot Law
- Healthcare Law Blog – Utah Enacts AI Amendments Targeted at Mental Health Chatbots and Generative AI
- Perkins Coie – New Utah AI Laws Change Disclosure Requirements and Identity Protections
- National Law Review – Illinois Bans AI Therapy, Preserves Human Oversight in Care
- MobiHealth News – Illinois Gov. Pritzker inks legislation prohibiting AI therapy
- American Psychological Association – Position on AI in Mental Health Services
- National Conference of State Legislatures – Artificial Intelligence 2025 Legislation Database