Manatt Health: Health AI Policy Tracker
Purpose: The purpose of this tracker is to identify key federal and state health AI policy activity and summarize laws relevant to the use of AI in health care. The below reflects activity from July 1, 2025 through October 11th, 2025. This newsletter is published on a quarterly basis.
Activity on AI in health care has been at the forefront of the AI debate during 2025 state legislative sessions and is increasingly being discussed at the federal level: As of October 11th, 2025, 47 states have introduced over 250 AI bills impacting health care, and 21 states have enacted 33 of those bills into law.

After a busy first half of the year, most state legislatures concluded their sessions in the summer and turned their attention to drafting bills for the 2026 legislative session. Notwithstanding the decrease in AI-focused legislation in Q3, numerous states took significant action. And as always, California was one to watch.
While most 2025 legislative sessions have now ended, five states (MA, MI, OH, PA, and WI) remain in session and are actively progressing legislation. We will continue to track legislation in those states.
So far this year, the laws that have passed have primarily focused on four key areas:
1. Use of AI-Enabled Chatbots:
In 2025 to date, six states (California, Utah, New York, Nevada, Texas, and Maine) have passed seven laws focused on the use of AI-enabled chatbots. Actors across the health care ecosystem are rapidly integrating AI chatbots to improve efficiency, enhance patient engagement, and expand access to care, with a particular focus on chatbots’ provision of coaching and mental health support. AI chatbots are also being leveraged in administrative functions (e.g., in support of patient scheduling) and clinical functions (e.g., initial patient triage), alongside a proliferation of general-use AI chatbots and AI companions. States are taking action to legislate these tools in response to concerns that AI chatbots may misrepresent themselves as humans, produce harmful or inaccurate responses, or fail to reliably detect crises.
In the first half of the year, six bills legislating AI-enabled chatbots passed and were signed into law. Of those, three directly address the use of chatbots in the delivery of mental health services (Utah , New York [New York’s budget bill], and Nevada , full summaries in the table below). Two additional laws that passed address concerns about misrepresentation of chatbots as humans (Maine and Utah , full summaries in the table below).
This quarter, Governor Pritzker signed Illinois (effective August 1st, 2025; discussed in further detail below), which contains a provision prohibiting AI systems from directly interacting with clients in any form of therapeutic communication in therapy or psychotherapy settings.
California enacted SB 243 (effective January 1st, 2026), which establishes requirements for companion chatbots made available to residents of California. The bill includes requirements for “clear and conspicuous notification” indicating a chatbot is artificially generated if not apparent to the user and bans deployment of companion chatbots unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content, including referral notifications to crisis service providers such as a suicide hotline or crisis text line. SB 243 requires chatbot operators to comply with more stringent requirements if the user is known to be a minor, including disclosing to minors that they are interacting with AI, providing periodic reminders that the chatbot is artificially generated and that the user should take a break, and taking steps to prevent sexually explicit responses to minors.
California’s legislature additionally passed AB 1064, but the Governor vetoed it in early October. If enacted, AB 1064 would have significantly reshaped how minors in the state interact with AI companion chatbots, as it prohibited operators from making a companion chatbot that is “foreseeably capable” of causing harm (defined broadly) available to anyone under the age of 18. In his veto message, Governor Newsom notes that the “broad restrictions” proposed by AB 1064 may “unintentionally lead to a total ban on the use of these products by minors,” and indicates interest in developing a bill during the 2026 legislative session that builds upon the framework established by SB 243.
Over the course of 2025, a dozen other chatbot bills were introduced but did not pass; these were primarily general chatbot bills (not specific to health care) focused on disclosure requirements. Two bills that did not pass included provisions specific to health care chatbots and/or mental health. We anticipate further activity in this area during the next legislative session.
2. AI in Clinical Care:
In 2025, states introduced over 20 bills establishing guardrails for the use of AI in clinical care, including provider oversight requirements, transparency mandates, and safeguards against bias and misuse of sensitive health data. In Q3, two additional bills focused on the use of AI in clinical care were signed into law, joining the four clinical care laws signed earlier in the year (Texas and , Nevada , and Oregon , full summaries in the table below):
- Illinois , effective August 1st, 2025, prohibits the use of AI systems in therapy or psychotherapy to make independent therapeutic decisions, directly interact with clients in any form of therapeutic communication, or generate therapeutic recommendations or treatment plans without review and approval by a licensed professional. The law also prohibits a chatbot from representing itself as a licensed mental health professional. Due to ambiguities in this law, it may substantially impair use of AI systems for the delivery of mental health services. This law is already gaining traction in other states, as we have recently seen copycat bills introduced in both New York and Pennsylvania.
- California , effective January 1st, 2026, bans developers and deployers of AI tools from indicating or implying that the AI tool possesses a license or certificate to practice a health care profession. The bill additionally bans any advertisement indicating or implying that care offered by an AI tool is being provided by a human who is a licensed or certified health care professional. California AB 489 aligns with two of the bills signed earlier this year (Nevada and Oregon ) that prohibit AI systems from representing themselves as licensed providers; Nevada’s bill focused on AI systems representing themselves as mental or behavioral health care providers, and Oregon’s on nurses.
In Q3, we saw further regulatory action focused on AI in nursing care in New Mexico. On April 8, 2025, New Mexico passed (effective June 20, 2025), establishing that the Board of Nursing may “promulgate rules establishing standards for the use of artificial intelligence in nursing.” In September, New Mexico’s Board of Nursing hosted a public rulemaking hearing, including a discussion of proposed amendments to existing regulation to include AI-focused provisions. The proposed regulation states that nurses remain “accountable for decisions, actions, and intervention derived from or involving” AI tools and are responsible for “maintaining the standards” of nursing practice. The proposed regulation additionally sets forth that AI should be considered a decision-support tool that may augment, but “must not replace the clinical reasoning and judgment of the” nurse. Echoing laws in California, Nevada, and Oregon, the regulation notes that AI systems should “not be labeled as or referred to as a nurse.”
3. AI Use by Payors:
As payors continue to adopt AI for uses ranging from utilization and quality management to fraud detection and claims adjudication, states are focusing on ways to mitigate potential harms to beneficiaries from its use. We saw significant activity in the first half of the year, with approximately 60 bills governing payor use of AI introduced, but only four became law (Arizona , Maryland , Nebraska , and Texas , see full summaries below).
Notably, on October 6th, 2025, Governor Newsom vetoed a California bill (AB 682) that would have established public reporting requirements for managed care plans and health insurers that impose prior authorization or conduct other utilization review or utilization management functions. Among other data points, beginning in 2029, AB 682 would have required managed care plans and health insurers to report the number of contested denied claims that involved AI or the use of predictive algorithms at any stage of processing, adjudication, or review. In vetoing the bill, Governor Newsom cited a desire to avoid duplicative and conflicting reporting requirements for health plans and health insurers given California SB 306, which he signed into law on the same date. While California SB 306 also establishes reporting requirements for health plans and health insurers that impose prior authorization, the law does not contain any AI-specific provisions.
4. Transparency:
In addition to laws that specifically regulate providers, payors and other actors in the health care ecosystem, states are taking action to establish transparency requirements for AI models in use in the state.
During a special session in August, Colorado passed , delaying the implementation date of the state’s sweeping transparency and anti-discrimination law, SB 205, from February 1, 2026 to June 30, 2026. During the regular session, the state legislature had failed to pass , which would have substantially revised SB 205. SB 205 regulates developers and deployers of “high-risk” AI systems that make “consequential decisions,” including health care stakeholders such as hospitals, insurers, and digital health companies. When signing SB 205 into law in 2024, Governor Polis expressed concerns about the law’s approach to mitigating discrimination at a state (rather than federal) level, the complex compliance reporting requirements imposed by the bill, and the potential negative impact on innovation as a result of high regulatory requirements. We expect to see additional efforts to revise SB 205 at the start of Colorado’s 2026 legislative session. See Manatt’s full explanation of this law .
California passed its own broad transparency law, , on September 29th, 2025. Effective January 1, 2026, the law applies only to “large frontier developers.” It requires such developers to write, implement, comply with, and publish frameworks applicable to their frontier AI models that include details on how developers incorporate national, international, and industry-consensus best practices into model development and how developers identify and mitigate the potential for catastrophic risk, as well as descriptions of cybersecurity practices, internal governance practices, and processes to report critical safety incidents. The law also requires large frontier developers to publish transparency reports, and establishes whistleblower protections for employees who are “responsible for assessing, managing, or addressing risk of critical safety incidents.”
See the table below for a full summary of key health AI laws passed in 2025 and for a list of all AI laws passed to date.
Federal Activity
After significant federal activity in Q2, federal action on AI quieted through most of Q3 until recent weeks. In the second quarter of the year, Congress advanced a near-final draft of H.R. 1 (“One Big Beautiful Bill”) that included language that would have barred state or local enforcement of laws or regulations on AI models or systems for up to ten years; however, after significant bipartisan pushback from the states, this moratorium was not enacted. In July, the CY2026 Proposed Medicare Physician Fee Schedule requested public comments on appropriate payment strategies for software as a service and artificial intelligence (see Manatt on Health summary ).
Also in July, the White House released its AI Action Plan. The plan signaled a clear deregulatory and geopolitical posture, including direction to federal agencies to identify and repeal rules that could hinder AI development and to weigh states’ AI regulatory climate when allocating AI-related discretionary funding (see summary ). As directed by the AI Action Plan, in late September, the White House Office of Science and Technology Policy (OSTP) issued a request for information soliciting input on how outdated federal rules may be slowing down the safe adoption of AI. On September 30, President Trump signed an Executive Order (EO) to advance the use of AI in the National Institutes of Health’s (NIH’s) Childhood Cancer Data Initiative (CCDI). The EO directs the to identify opportunities within CCDI to strengthen data platforms and fund research that builds AI-ready infrastructure, advances predictive modeling and biomarker discovery, and optimizes clinical trial processes and participant selection. It also instructs the Department of Health and Human Services (HHS), the Office of Management and Budget (OMB), and the Assistant to the President for Science and Technology (APST) to use existing federal funds to increase investment in CCDI.
In recent weeks, we have seen an uptick in federal activity, with Congress and federal agencies introducing legislation, launching inquiries, and soliciting public comment related to AI and health care. On September 10th, Senator Cruz (R–Texas) introduced the Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and eXperimentation (SANDBOX) Act. The SANDBOX Act would require the director of OSTP to create a “regulatory sandbox program” within one year of enactment. Through a formal process, companies working on AI products could request waivers from federal regulations for an initial period of two years, renewable up to four times for a total of up to ten years of exemption. In addition to oversight by relevant federal agencies and mandated public disclosures on the participant’s website or a similar public platform, the bill requires congressional oversight (including annual reporting), and lawmakers could make successful waivers permanent. On October 9th, the Senate Health, Education, Labor, and Pensions (HELP) Committee hosted a full committee hearing to examine opportunities to leverage AI across health care, education, and the workforce, including to streamline clinical trials and reduce administrative burdens.
On September 11th, the FTC announced it was launching an enforcement inquiry into AI chatbots acting as companions, coming on the heels of numerous news stories highlighting negative impacts of AI chatbots and companions, particularly on young people engaging with them for mental health support. Separately, on September 30th, the FTC issued a request for comment on measuring and evaluating the performance of AI-enabled medical devices.
On September 12th, CMS released an updated version of the CMS Artificial Intelligence Playbook (Version 4), with updates focused on CMS-specific context, guidance, and tools to support AI initiatives in the agency and align with April 2025 Office of Management and Budget memos ( and ) directing federal agency use of and policies related to AI.
On November 6th, the FDA Digital Health Advisory Committee is scheduled to discuss “generative artificial intelligence-enabled digital mental health medical devices.”
For a summary of substantive federal action to date, see the table below.
Self-Regulating Bodies and Accreditation Organizations
In Q3, we saw an increase in guidance and action on the use of AI in health care from self-regulating bodies and other accreditation organizations, as developers, deployers, and users of AI tools in the health care space take action to supplement the patchwork of existing state and federal regulations.
In September, the Utilization Review Accreditation Commission (URAC) released two new accreditation tracks for AI – one intended for and one for in clinical and administrative settings. The accreditation requirements for both tracks focus on security and governance processes and were developed by an advisory council composed of representatives from health, technology and pharmaceutical organizations.
In September, Joint Commission, the oldest national health care accreditation organization, released guidance in partnership with the Coalition for Health AI (CHAI), the largest convener of health organizations on the topic of AI. The guidance focuses on the responsible use of AI in health care, with an emphasis on promoting transparency, ensuring data security, and creating pathways for confidential reporting of AI safety incidents. Among other recommendations, Joint Commission and CHAI specifically recommend that health care organizations implement a process for the voluntary, confidential, and blinded reporting of AI safety incidents. Looking forward, Joint Commission and CHAI state they plan to leverage stakeholder feedback on the guidance to develop “Responsible Use of AI” Playbooks, and Joint Commission will establish a “Responsible Use of AI” certification program based upon the playbooks. We will continue to track the collaboration between Joint Commission and CHAI.
The National Committee for Quality Assurance launched an in July to explore standards for responsible governance in health care and announced it was considering a potential “AI Evaluation” offering, which, if approved, is expected to launch in the first half of 2026.
Looking Ahead
We saw significant activity in Q3 as actors at all levels – state, federal, and self-regulating bodies/accreditation organizations – defined and issued guidance governing the development and use of AI in health care. In the coming months, providers, payors, and other users of AI across the health care ecosystem may want to develop a point of view on the benefits and burdens of these federal and state activities and make it known to federal and state regulators, including by demonstrating the value of their products. In addition, stakeholders should anticipate continued activity in this space and should ensure they have strong governance processes and disclosure protocols in place to comply with existing regulations and in anticipation of forthcoming requirements in Q4 and beyond. We will continue to track state legislation and federal activity in Q4 of this year and expect vigorous action to occur in 2026 when state legislatures reconvene.
Health AI Laws Passed in 2025:
The table below summarizes the health AI laws that passed in 2025. For a full list of all laws prior to and including 2025, please see .
* Laws with an asterisk are those we consider “key state laws.” These are laws that, based on our review, are of greatest significance to the delivery and use of AI in health care because they are broad in scope and directly touch on how health care is delivered or paid for, or because they impose significant requirements on those developing or deploying AI for health care use.
State | Summary |
|---|---|
Arizona* | requires that a health care provider individually, exercising independent medical judgment, review claims and prior authorization requests prior to an insurer denying a claim or prior authorization. The law bans the sole use of any other source to deny a claim or prior authorization. Date Enacted: 5/12/2025 Date Effective: 6/30/2026 |
California* | establishes safeguards for the development of frontier AI models (defined as a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations). Sets requirements on "large frontier developers" (defined as a person who has trained, or initiated the training of, a frontier model and that (together with its affiliates) has annual revenues of at least $500 million in the preceding calendar year). Requires large frontier developers to write, implement, comply with, and publish a frontier AI framework applicable to their models; this framework must include details on: how the developer incorporates national and international standards and industry-consensus best practices; how the developer defines and assesses thresholds used to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk; mitigations to address the potential for catastrophic risks; revisiting and updating the frontier AI framework, including criteria that trigger updates and how the developer determines if frontier models are modified enough to require disclosures; cybersecurity practices; processes to report critical safety incidents; internal governance practices and assessment; and management of catastrophic risk resulting from the internal use of its frontier models. Requires annual updates to the framework. Requires large frontier developers to publicly publish transparency reports, including summaries of assessments of catastrophic risks from the frontier model, prior to or concurrently with deploying a new or substantially modified frontier model. Requires developers to regularly send a summary of any assessment of catastrophic risk or dangerous capabilities resulting from internal use of its frontier models to the Office of Emergency Services. Requires the Department of Technology to issue an annual report with recommendations on needed updates to definitions and thresholds. Establishes a state-led initiative, CalCompute, to support the development and deployment of AI that is safe, equitable, and sustainable. Establishes whistleblower protections for covered employees, defined as employees "responsible for assessing, managing, or addressing risk of critical safety incidents.” Date Enacted: 9/29/2025 Date Effective: 1/1/2026 |
California | mandates that, prior to public release, developers of AI tools publish documentation on their websites detailing the training data used in the development of the system or service. This documentation must include information on the datasets employed and their sources/owners; number of data points in the dataset; a description of how the datasets further the purpose of the AI system; timeframe during which data was collected; a description of the types of data points; whether data includes copyrighted content, personal information, or aggregate consumer information; whether the datasets were purchased or licensed by the developer; an explanation of any modifications made to the datasets by the developer along with the purpose of those modifications; and a statement indicating whether synthetic data was used during development. Provides exemptions for generative AI systems or services with the following purpose: 1) ensuring security and integrity; 2) operation of aircraft in national airspace; or 3) systems developed for national security, military, or defense purposes made available only to a federal entity. Date Enacted: 7/28/2025 Date Effective: 1/1/2026 |
California* | bans developers and deployers of AI systems, programs, devices, or technologies from using “specified terms, letters, or phrases to indicate or imply the possession of a license or certificate to practice a health care profession” without actually having obtained the appropriate license or certificate for that practice or program. Bans use of terms, letters, and phrases in advertising of AI systems that “indicates or implies” that the care offered by the AI technology is being provided by a human who is a licensed or certified health care professional. Date Enacted: 10/11/2025 Date Effective: 1/1/2026 |
Colorado | amends Colorado SB 205 (signed into law in 2024) to delay the original effective date from February 1, 2026 to June 30, 2026. Date Enacted: 8/28/2025 Date Effective: 6/30/2026 |
Illinois* | establishes that a licensed professional (defined as individuals licensed to provide therapy or psychotherapy services in the state) may use AI systems “only to the extent the use meets the definition of permitted use of artificial intelligence systems” (permitted use of AI systems is defined as “use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system”). Prohibits licensed professionals from using AI tools for supplementary support unless the patient or their legal representative is informed of the use of AI and its specific purpose, and the patient or their legal representative provides consent for the use of AI. Prohibits licensed professionals from allowing an AI system to do any of the following: “(1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.” Sets exceptions for religious counseling, peer support, and self-help/educational resources that are publicly available and do not purport to offer therapy or psychotherapy services. Date Enacted: 8/1/2025 Date Effective: 8/1/2025 |
Kansas | prohibits government entities in Kansas from installing or using any AI “platform[s] of concern” on state electronic devices owned by or issued to an employee by a state agency. Platforms of concern include DeepSeek and any AI models controlled directly or indirectly by China (including Hong Kong but not Taiwan), Cuba, Iran, North Korea, Russia, or Venezuela. Date Enacted: 4/8/2025 Date Effective: 7/1/2025 |
Maine* | prohibits the use of artificial intelligence chatbots or similar technologies in trade and commerce in a manner that may mislead or deceive consumers into believing they are interacting with a human being, unless the consumer is clearly and conspicuously notified that they are not engaging with a human being. Date Enacted: 6/12/2025 Date Effective: 6/18/2025 |
Maryland* | requires carriers (including health insurers, dental benefit plans, pharmacy benefit managers that provide utilization review, and any health benefit plans subject to regulation by the state) to ensure that any AI tool used for utilization review bases decisions on medical/clinical history, individual circumstances, and clinical information; does not solely leverage group datasets to make decisions; does not “replace the role of a health care provider in the determination process”; does not result in discrimination; is open for inspection/audit; does not directly or indirectly cause harm; and does not use patient data beyond its intended use. The law mandates that AI tools may not “deny, delay or modify health care services.” Date Enacted: 5/20/2025 Date Effective: 10/1/2025 |
Montana | prohibits the use of AI by government entities to “classify a person or group based on behavior, socioeconomic status, or personal characteristics resulting in unlawful discrimination.” Requires government entities to provide disclosures on any published material posted by AI that has not been reviewed by a human. Date Enacted: 5/5/2025 Date Effective: 10/1/2025 |
Nebraska* | establishes that AI algorithms may not be the “sole basis” of a “utilization review agent’s” (defined as any person or entity that performs utilization review) decision to “deny, delay, or modify health care services” based in whole or in part on medical necessity. The law requires utilization review agents to disclose use of AI in the utilization review process to each health care provider in its network, to each enrollee, and on its public website. Date Enacted: 6/4/2025 Date Effective: 1/1/2026 |
Nevada* | prohibits AI “providers” from “explicitly or implicitly” indicating that an AI system is capable of providing or is providing professional mental or behavioral health care. Prohibits providers of mental and behavioral health care from using or providing AI systems in connection with the direct provision of care to patients. Sets forth that providers may use AI tools to support administrative tasks, provided that the provider 1) ensures that the use complies with all applicable federal and state laws governing patient privacy and security of EHRs, health-related information, and other data, including HIPAA, and 2) reviews the accuracy of any report, data, or information compiled, summarized, analyzed, or generated by AI systems. The law requires the state agency to develop public education material focusing on, amongst other topics, best practices for AI use by individuals seeking mental or behavioral health care or experiencing a mental or behavioral health event. Additionally, the law prohibits all public schools (including charter schools or university schools) from using AI to “perform the functions and duties of a school counselor, school psychologist, or school social worker” as related to student mental health. Date Enacted: 6/5/2025 Date Effective: Upon passage and approval for the purpose of adopting any regulations and performing any other necessary preparatory administrative tasks to carry out provisions of this act; 7/1/2025 for all other purposes. |
New Mexico | establishes that the Board of Nursing shall “promulgate rules establishing standards for the use of artificial intelligence in nursing.” Date Enacted: 4/8/2025 Date Effective: 6/20/2025 |
New York* | prohibits any person or entity from operating or providing an “AI companion” to someone in New York unless the model contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm expressed by the user. Requires protocols to, at a minimum: (1) detect user expressions of suicidal ideation or self-harm, and (2) refer users to crisis service providers (e.g., suicide prevention and behavioral health crisis hotlines) or other appropriate crisis services when suicidal ideation or thoughts of self-harm are detected. Requires that AI companion operators provide a “clear and conspicuous” notification, either verbally or in writing, that the user is not communicating with a human; that notification must occur at the beginning of any AI companion interaction and at least every three hours thereafter for continuous interactions. Sets forth that the Attorney General has oversight authority and can impose penalties of $15,000/day on an operator that violates the law. Date Enacted: 5/9/2025 Date Effective: 11/5/2025 |
Oregon* | mandates that “nonhuman” entities, including AI tools, may not use the title of nurse or similar titles, including advanced practice registered nurse, certified registered nurse anesthetist, clinical nurse specialist, nurse practitioner, medication aide, certified medication aide, nursing aide, nursing assistant, or certified nursing assistant. Date Enacted: 6/24/2025 Date Effective: 1/1/2026 |
Texas* | sets requirements for government agency and non-governmental use of AI. Requirements for government agencies include: mandating that government agencies using AI systems that interact with consumers clearly and conspicuously disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an AI system; prohibiting government entities from using AI systems that produce social scoring, or from developing or deploying an AI system that uses biometric identifiers to uniquely identify individuals if that use infringes on constitutional rights; and establishing an AI Regulatory Sandbox Program and creating the “Texas Artificial Intelligence Council.” Requirements for non-governmental developers and deployers of AI include: prohibiting deployers from deploying AI systems that aim to “incite or encourage” a user to commit self-harm, harm another person, or engage in criminal activity, and prohibiting development or deployment of AI systems that discriminate. An AI system deployed in relation to health care services or treatments must be disclosed by the provider to the recipient of health services or their personal representative on the date of service, except in emergencies, when the provider shall disclose as soon as reasonably possible. Date Enacted: 6/22/2025 Date Effective: 1/1/2026 |
Texas* | prohibits a utilization review agent’s use of an automated decision system (defined as an algorithm or AI that makes, recommends, or suggests certain determinations) to “make, wholly or partly, an adverse determination.” Adverse determinations are defined as determinations that services are not medically necessary or appropriate, or are experimental or investigational. Sets forth that the use of algorithms, AI, or automated decision systems for administrative support or fraud detection is allowable. Empowers the Commissioner of Insurance to audit and inspect use of these tools. Date Enacted: 6/20/2025 Date Effective: 9/1/2025 |
Texas* | requires providers leveraging AI for diagnostic or other purposes to “review all information created with artificial intelligence in a manner that is consistent with medical records standards developed by the Texas Medical Board.” In addition, a provider using AI for diagnostic purposes must disclose the use of the technology to their patients. Date Enacted: 6/20/2025 Date Effective: 9/1/2025 |
Utah* | repealed Utah SB 149’s disclosure provisions and replaced them with disclosure requirements that are similar but apply in narrower scenarios. As with SB 149, the law requires “regulated occupations” to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. However, this disclosure is only required when the generative AI is “high-risk,” which is defined as (a) the collection of personal information, including health, financial, or biometric data, and (b) the provision of personalized recommendations that could be relied upon to make significant personal determinations, including medical, legal, financial, or mental health advice or services. Relatedly, in 2025, passed, which extended the repeal date of SB 149 to July 1, 2027. Date Enacted: 3/27/2025 Date Effective: 5/7/2025 |
Utah* | requires suppliers of “mental health chatbots” to clearly and conspicuously disclose that the chatbot is AI technology and not a human at the beginning of any interaction, before the user accesses features of the chatbot, and any time the user asks or otherwise prompts the chatbot about whether AI is being used. Prohibits “suppliers” of mental health chatbots from:
The law does not preclude chatbots from recommending that users seek counseling, therapy, or other assistance, as necessary. The Attorney General may impose penalties for violations of this law. Finally, the law provides an affirmative defense to liability if the supplier demonstrates that it maintained documentation describing the development and implementation of the AI model in compliance with the law and maintains a policy that meets a long list of requirements, including ensuring that a licensed mental health therapist was involved in the development and review process and establishing procedures that prioritize user mental health and safety over engagement metrics or profit. In order for the affirmative defense to be available, the policy must be filed with the Division of Consumer Protection. Date Enacted: 3/25/2025 Date Effective: 5/7/2025 |
Other: State Activity Laws | Over the past several years, states have sought to understand AI technology before regulating it. For example, states have created councils to study AI and/or created AI-policy positions within government in charge of establishing AI governance and policy. States have additionally tracked use of AI technology within state agencies. These bills reflect states’ interest in the potential role of AI across industries, and potentially in health care. The following passed in 2025: Alabama , Arkansas , California , Delaware , Georgia , Hawaii , Kentucky , Maryland , Maryland , Mississippi , Montana , New York , Oregon , Rhode Island , Texas (certain provisions), Texas , Texas , and West Virginia . |
Key Federal Activity
2025 Activity To-Date | |
|---|---|
White House | |
Congress | Several other introduced bills touch on AI in health care, which we will report on if they gain traction. |
HHS Appointments and Announcements | |
OCR | |
ONC | |
CMS | |
FDA | |
NIH | |
DOJ | Litigation continues over alleged use of AI to deny Medicare Advantage claims. In June 2025, DOJ announced charges against over 300 defendants for participation in health care fraud schemes, with a parallel announcement from CMS on the successful prevention of $4 billion in payments for false and fraudulent claims. |
FTC | |
For questions on the above, please reach out to or . A full list of tracked bills (introduced and passed) from 2024 and 2025—classified by topic category and stakeholder impacted—is available to subscribers; for more information on how to subscribe to Manatt on Health, please reach out to .
New York has subsequently introduced additional chatbot bills.
Harm is broadly defined to include encouraging self-harm, suicidal ideation, disordered eating, consumption of drugs or alcohol, or violence; offering mental health therapy without oversight from a licensed provider; encouraging harm to others or participation in illegal activity; engaging in erotic or sexually explicit interactions; prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or safety; or optimizing engagement over safety guardrails.
“Frontier developer” is defined as a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, a computing power of greater than 10^26 integer or floating-point operations, including computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model. “Large frontier developer” is defined as a frontier developer that together with its affiliates collectively had annual gross revenues in excess of five hundred million dollars ($500,000,000) in the preceding calendar year.
“Catastrophic risk” is defined as a “foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars in damage to, or loss of, property arising from a single incident involving 1) a frontier model providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon, 2) engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense, or 3) evading the control of its frontier developer or user.”
“Critical safety incidents” are defined as: 1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; 2) harm resulting from the materialization of a catastrophic risk; 3) loss of control of a frontier model causing death or bodily injury; or 4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
This analysis was exclusively distributed to subscribers on July 28, 2025.
“Supplier” means a seller, lessor, assignor, offeror, broker or other person who regularly solicits, engages in or enforces consumer transactions, whether or not the person deals directly with the consumer. Utah Code 13-11-3.