AI Use in Mental Health: What Tech Companies Need to Know About How States Are Legislating

As state legislative sessions wind down, a few states are focusing on legislating artificial intelligence (AI) in health care. Four states—two with laws now in effect and two with bills gaining traction—are focused on AI that provides mental health or support services. Given the well-documented shortage of mental health providers in the United States, it makes sense that innovators and health care providers are looking to expand access to mental health services through AI. In fact, some studies have shown that AI-powered therapy chatbots can improve users’ symptoms.

In Utah and New York, AI mental health chatbots and companions must provide clear and conspicuous disclosures to users that they are communicating with AI. Beyond that commonality, the enacted and proposed laws share few similarities, creating a patchwork of state requirements that developers and deployers of AI tools will have to navigate.

All this state activity could be in vain, however, given a provision in the pending federal budget reconciliation bill that would prohibit states from enforcing any law or regulation governing AI models for ten years from the date of enactment. While Congress has broad authority to preempt state law, it may not be able to use the budget reconciliation process to do so. We will be watching the budget reconciliation negotiations closely to see whether this provision survives.

Enacted Laws

New York

Under New York’s newly enacted budget legislation, Part U adds Article 47, which prohibits any person or entity from operating or providing an “AI companion” to someone in New York unless the model contains a protocol for taking reasonable efforts to detect and address suicidal ideation or expressions of self-harm by the user. That protocol must, at a minimum: (1) detect user expressions of suicidal ideation or self-harm, and (2) refer users to crisis service providers (e.g., suicide prevention and behavioral health crisis hotlines) or other appropriate crisis services when suicidal ideation or thoughts of self-harm are detected. AI companion operators must also provide a “clear and conspicuous” notification—either verbally or in writing—that the user is not communicating with a human; that notification must occur at the beginning of any AI companion interaction and at least every three hours thereafter for continuous interactions. The Attorney General has oversight authority and can impose penalties of up to $15,000 per day on an operator that violates the law. The law takes effect on November 4, 2025.
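For illustration only, the sketch below shows one way a developer might structure these two requirements (the disclosure cadence and a detect-and-refer protocol) in a chat loop. The detection function, crisis resources, and model interface are hypothetical placeholders, and keyword matching stands in for whatever validated detection method an operator would actually deploy; nothing here is legal guidance.

```python
# Minimal compliance sketch for New York's Article 47 requirements, assuming a
# hypothetical chat model object with a reply() method. The self-harm detector
# below is a keyword placeholder, not a real detection protocol.
import time

DISCLOSURE = "You are chatting with an AI companion, not a human."
CRISIS_REFERRAL = (
    "If you are having thoughts of suicide or self-harm, please contact the "
    "988 Suicide & Crisis Lifeline (call or text 988) or another crisis service."
)
REDISCLOSE_SECONDS = 3 * 60 * 60  # re-notify at least every three hours


def detect_self_harm(message: str) -> bool:
    """Placeholder detector; a production protocol would use a validated model."""
    keywords = ("suicide", "kill myself", "self-harm", "end my life")
    return any(k in message.lower() for k in keywords)


class CompanionSession:
    def __init__(self, model):
        self.model = model            # hypothetical underlying chat model
        self.last_disclosure = None   # timestamp of the most recent AI disclosure

    def respond(self, user_message: str) -> list[str]:
        outbound: list[str] = []
        now = time.monotonic()
        # Disclose at the start of the interaction and every three hours after.
        if self.last_disclosure is None or now - self.last_disclosure >= REDISCLOSE_SECONDS:
            outbound.append(DISCLOSURE)
            self.last_disclosure = now
        # Detect and refer: route users expressing self-harm to crisis services.
        if detect_self_harm(user_message):
            outbound.append(CRISIS_REFERRAL)
        outbound.append(self.model.reply(user_message))
        return outbound
```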

Utah

Utah’s mental health chatbot law (H.B. 452) became effective on May 7, 2025, and requires “mental health chatbots” to clearly and conspicuously disclose to the user that the chatbot is AI technology (and not a human) at the beginning of any interaction, before the user accesses features of the chatbot, and any time the user asks or otherwise prompts the chatbot about whether AI is being used (a simplified sketch of these disclosure triggers appears below). The law also prohibits suppliers of mental health chatbots from:

  • Selling or sharing individually identifiable health information or user input with any third party, except if that information is (a) requested by a health care provider with the user’s consent; (b) provided to a health plan of a Utah user upon the user’s request; or (c) shared by the supplier to ensure the effective functionality of the tool, provided that the supplier and the recipient of the information comply with HIPAA (even if they are not a covered entity or business associate).
  • Advertising a specific product or service during the conversation unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and clearly and conspicuously discloses any sponsorships, business affiliations, or agreements that the supplier has with third parties to promote the product or service. The law also prohibits any targeted advertisement based on the user’s input.

The law is clear that these prohibitions do not preclude chatbots from recommending that users seek counseling, therapy, or other assistance as necessary. The Attorney General may impose penalties for violations of this law.
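As a simplified illustration of the three disclosure triggers described above, the sketch below shows where each disclosure might fire in a chatbot’s request handling. The method names and the phrase matching for “is this AI?” questions are hypothetical stand-ins (a real system would use an intent classifier), and any actual implementation would need review against the statute’s “clear and conspicuous” standard.

```python
# Simplified sketch of Utah's three disclosure triggers for a mental health
# chatbot. All names and matching logic here are illustrative assumptions.
AI_DISCLOSURE = "This chatbot is artificial intelligence technology, not a human."

ASKS_ABOUT_AI = (
    "are you ai", "are you a bot", "are you human", "is this ai",
    "am i talking to a person",
)


class DisclosureManager:
    def on_session_start(self) -> str:
        # Trigger 1: the beginning of any interaction.
        return AI_DISCLOSURE

    def before_feature_access(self, feature_name: str) -> str:
        # Trigger 2: before the user accesses a feature of the chatbot.
        return f"{AI_DISCLOSURE} (Shown before accessing: {feature_name}.)"

    def on_user_message(self, message: str) -> str | None:
        # Trigger 3: any time the user asks whether AI is being used.
        if any(phrase in message.lower() for phrase in ASKS_ABOUT_AI):
            return AI_DISCLOSURE
        return None
```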

Finally, the law provides an affirmative defense to liability if the supplier demonstrates that it maintained documentation describing the development and implementation of the AI model in compliance with the law and maintained a policy meeting a long list of requirements, including ensuring that a licensed mental health therapist was involved in the development and review process and establishing procedures that prioritize user mental health and safety over engagement metrics or profit. For the affirmative defense to be available, the policy must be filed with the Utah Division of Consumer Protection.

Ones to Watch

Illinois

Illinois’ proposed “Wellness and Oversight for Psychological Resources Act” is scheduled for a hearing on May 14, 2025. The bill focuses on prohibiting unlicensed or unqualified providers, including AI, from providing therapy or psychotherapy services. Specifically, the bill prohibits a licensed professional from using an AI tool to make independent therapeutic decisions, generate therapeutic recommendations or treatment plans without review and approval by a licensed professional, interact directly with the patient in any form of therapeutic communication, or detect emotions or mental states. While the licensed professional may use AI for administrative or supplementary support, the professional must “maintain[] full responsibility for all interactions, outputs, and data use associated with the system.”

In addition, for any therapy or psychotherapy sessions that are recorded, such as through the use of ambient listening tools, the patient must consent to the AI tool’s use and purpose. No individual or entity may advertise or offer therapy or psychotherapy through an AI tool. Violators face a penalty of up to $10,000 per violation.

“Licensed professional” means “an individual who holds a valid license issued by this State to provide therapy or psychotherapy services, including:

  1.  a licensed clinical psychologist;
  2.  a licensed clinical social worker;
  3.  a licensed social worker;
  4.  a licensed professional counselor;
  5.  a licensed clinical professional counselor;
  6.  a licensed marriage and family therapist;
  7.  a certified alcohol and other drug counselor authorized to provide therapy or psychotherapy;
  8.  a licensed professional music therapist;
  9.  a licensed advanced practice psychiatric nurse […];
  10.  any other professional authorized by this State to provide therapy or psychotherapy services, except for a physician.”

Louisiana

Louisiana’s bill is not specific to mental health and has not progressed as far as the Illinois bill—but it has drawn significant attention for its potential implications for AI use by health care providers. The bill states that health care providers may use AI to assist with “an administrative or analytical task related to providing healthcare services” (e.g., preparing notes, managing appointment scheduling and reminders, or processing billing and insurance claims), but it bans health care providers from using AI to make treatment or diagnosis decisions or to generate therapeutic recommendations or treatment plans without review and approval by a health care professional. Further, health care providers may not use AI to interact directly with a patient in any form related to treatment or diagnosis. A health care provider who violates the proposed law could be fined up to $10,000 per violation.

We will continue to monitor and provide analysis of new state laws as they arise. If you have questions about complying with the above legislation, navigating varying state laws, or the potential impacts of the budget reconciliation bill, please contact us.


https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits

“AI companion” means “a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by: (i) retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion; (ii) asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and (iii) sustaining an ongoing dialogue concerning matters personal to the user.”

“Mental health chatbot” means “an artificial intelligence technology that:

  1.  uses generative artificial intelligence to engage in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health therapist; and
  2.  supplier represents, or a reasonable person would believe, can or will provide mental health therapy or help a user manage or treat mental health conditions.”

“Mental health chatbot” does not include “artificial intelligence technology that only:

  1.  provides scripted output, such as guided meditations or mindfulness exercises; or
  2.  analyzes an individual’s input for the purpose of connecting the individual with a human mental health therapist.”

“Supplier” means a seller, lessor, assignor, offeror, broker or other person who regularly solicits, engages in or enforces consumer transactions, whether or not the person deals directly with the consumer.

“Healthcare provider” means “a person, partnership, limited liability partnership, limited liability company, corporation, or facility licensed or certified by this state to provide healthcare services.”

“Healthcare professional” means “any professional providing healthcare services and treatment licensed in accordance with this Title or permitted to practice in this state through an interstate compact or agreement.”