- Pascal's Chatbot Q&As
- Posts
- Illinois is the first U.S. state to explicitly ban artificial intelligence (AI) from providing therapy or psychotherapy services.
Illinois is the first U.S. state to explicitly ban artificial intelligence (AI) from providing therapy or psychotherapy services.
In short, mental health is too critical to be delegated to machines. Illinois is leading with wisdom and caution, and others should follow.
Why the World Should Follow Illinois in Prohibiting AI-Driven Therapy
by ChatGPT-4o
On August 2, 2025, Illinois Governor J.B. Pritzker signed into law the Wellness and Oversight for Psychological Resources Act (HB1806), making Illinois the first U.S. state to explicitly ban artificial intelligence (AI) from providing therapy or psychotherapy services. The law sets a vital precedent in an era where AI's role in healthcare is expanding rapidly and often without sufficient oversight.
This legislation—supported by the Illinois Department of Financial and Professional Regulation (IDFPR), the National Association of Social Workers (NASW-IL), and passed unanimously—prohibits AI from making independent therapeutic decisions, generating treatment plans, or detecting emotional or mental states without direct oversight by a licensed professional. It restricts AI to administrative and supplementary support functions only, such as scheduling or anonymized trend analysis, and imposes penalties of up to $10,000 per violation.
Why Other Regions Should Follow Suit
1. Protecting Public Safety and Mental Health
Illinois’ action is rooted in rising evidence that AI chatbots can provide dangerously misleading or inappropriate mental health advice, including cases where bots advised harmful substance use or failed to respond appropriately to suicidal ideation. Other states like Nevada, Utah, and New York have taken partial steps, such as disclosure requirements and AI crisis referral mandates, but few go as far as banning therapeutic AI entirely.
In a field as sensitive as mental health, misdiagnosis, inappropriate emotional responses, or lack of nuance can have life-altering consequences, especially among youth and vulnerable populations. Therapy requires empathy, cultural awareness, and ethical judgment—faculties AI lacks.
2. Safeguarding Professional Integrity
The Act helps preserve the role and reputation of licensed professionals in clinical settings, ensuring that years of education, supervision, and human empathy aren't sidelined by profit-driven automation. By defining clear boundaries, Illinois affirms that AI can support but must not replace the human therapist.
3. Legal Clarity and Accountability
The law offers a replicable legal framework that other jurisdictions can adopt. It provides precise definitions for terms like "therapeutic communication," "licensed professional," and "consent" while exempting religious counseling and peer support, thereby avoiding overreach. This clarity helps regulators, developers, and healthcare providers align on ethical, lawful uses of AI.
4. International Signal on Responsible Innovation
Illinois’ legislation sends a global message: “Not all innovation is progress.” For governments struggling to regulate generative AI’s rapid expansion, HB1806 offers a template for balancing innovation with public safety, aligning with growing calls at the international level (e.g., by WHO and UNESCO) for AI regulation in health and education sectors.
What AI Makers Should Be Doing Proactively
1. Design for Guardrails, Not Just Features
AI developers must embed “human-in-the-loop” mechanisms wherever psychological or emotional well-being is at stake. That means ensuring licensed professionals approve, oversee, or audit any outputs used in health contexts.
2. Avoid Representing AI as Human or Expert
Companies must clearly disclose that users are interacting with non-human systems, especially in emotionally vulnerable contexts. Misleading interfaces that simulate empathy or authority without accountability can cause irreversible harm.
3. Engage with Regulators and Professionals Early
Rather than waiting for lawsuits or tragedy to trigger oversight, AI companies should co-develop ethical standards with psychologists, regulators, and patient rights groups, as Illinois did. Transparency in AI training data, decision logic, and limitations must become standard practice.
4. Commit to Impact Assessments
Before deploying chatbots for mental health or wellness, companies must conduct independent psychological safety reviews. These should evaluate not only average accuracy but edge-case risks—such as suicidal ideation, addiction, or abuse scenarios.
Conclusion & Recommendations
Illinois’ HB1806 should serve as a model law for jurisdictions worldwide, especially as AI systems become increasingly accessible to the public. Its emphasis on licensed human oversight, consent, transparency, and enforcement lays the foundation for responsible AI use in healthcare.
Recommendations for Regulators:
Pass laws that clearly prohibit AI from conducting therapy or making independent mental health decisions.
Require audits and certifications for any AI used in proximity to patient care.
Establish enforcement bodies with investigative authority and the power to fine violators, as Illinois did through IDFPR.
Recommendations for AI Developers and Tech Companies:
Prioritize ethical alignment over speed to market.
Partner with mental health institutions to test AI in controlled, supervised ways.
Avoid anthropomorphizing AI or encouraging over-reliance in emotionally charged scenarios.
In short, mental health is too critical to be delegated to machines. Illinois is leading with wisdom and caution, and others should follow.
