Manatt Health: Health AI Policy Tracker
Purpose: The purpose of this tracker is to identify key federal and state health AI policy activity and summarize laws relevant to the use of AI in health care. This edition reflects activity from January 1, 2025 through March 31, 2025. The tracker is published quarterly.
It has been an active start to 2025 – the past 18 months have seen a whirlwind of state legislative activity on AI in health care – and there are no signs of slowing down. In the first three months of 2025 alone, states introduced over 250 AI bills impacting health care stakeholders, well exceeding the number of AI bills introduced in all of 2024 (~100). As of March 31, 2025, forty-two states had introduced relevant legislation and six bills had been signed into law: three in Utah and one each in Kentucky, Mississippi, and New York. These signed laws are primarily focused on transparency/disclosure requirements (Utah HB 452 and Utah SB 226, the latter essentially modifying Utah SB 149, which passed last year) or on mandating that the state inventory departmental use of AI and/or create an AI task force to support future policymaking (Kentucky SB 4, New York SB 822, Mississippi SB 2426). Virginia’s HB 2094 – a bill heavily modeled on a Colorado law passed last year, which would have imposed significant requirements on developers and deployers of high-risk AI systems – was vetoed. See the table below for additional information on all passed bills. Because many legislative sessions end in Q2 2025, we expect the next quarter to be equally active.
At the federal level, AI continues to be discussed as a promising tool to root out fraud and abuse in health care and reduce costs, including by Dr. Oz, who has publicly endorsed the potential use of AI to assist with health care. The Trump administration has thus far taken a strongly deregulatory stance, reversing many Biden-era policies, including revoking President Biden’s executive order on addressing AI risks and replacing it with its own. The few previously adopted regulations that address AI in health care remain in effect but could be repealed or rescinded; in early April, CMS addressed proposed regulatory provisions related to artificial intelligence in the Medicare Advantage final rule.
There are three sections in this report-out:
- Key Takeaways from Q1 State Health-Related AI Activity
- Key Federal Activity
- Summary of all Passed Health-Related AI Bills

The map does not include bills we categorize as “Other: State Activity Laws,” which generally are bills that create councils or task forces to study AI or that relate to narrow state activity (as further discussed below).
Key Takeaways from Q1 Health-Related AI State Activity
Key Takeaway #1: Many states introduced language that was modeled on Colorado SB205, a consumer protection law signed with reservations by Colorado Governor Jared Polis in May 2024.
As a refresher, Colorado’s SB205, effective February 1, 2026, imposes significant requirements on developers and deployers of “high-risk” AI systems. Developers and deployers are required to share a variety of information with one another, the public, and the Attorney General, as well as to protect consumers from algorithmic discrimination and conduct risk assessments. The law’s broad definitions would capture health care stakeholders such as hospitals, insurers, and digital health companies if they are developing or deploying high-risk AI systems and are not exempt. The law exempts AI systems that have been approved by a federal agency (e.g., FDA) or are in compliance with national standards (e.g., ONC), and also exempts HIPAA-covered entities that provide non-high-risk health care recommendations that require a health care provider to take action to implement. There are several other exemptions for small-scale deployers and AI systems acquired by the federal government. In the event of an enforcement action, there is a rebuttable presumption that a developer/deployer used reasonable care if they complied with the requirements set forth in the law and by the Attorney General. For a more detailed overview of Colorado’s SB205, see Manatt on Health’s analysis.
In his letter to the CO General Assembly, Governor Polis expressed concerns about the law’s approach to mitigating discrimination, its complex compliance reporting requirements, and the potential negative impact on innovation resulting from high regulatory requirements, and requested that the General Assembly amend the bill to address his concerns before the law goes into effect in 2026. We understand that such amendments are being discussed, but none have yet been introduced.
Despite these reservations, at least 18 bills heavily modeled on Colorado’s SB205 have been introduced, in CA, CT, GA, IL, MA (four bills), MD (two bills), NE, NM, NY (two bills), RI, TX, and VA (two bills). The majority of these bills mirror CO SB205’s definitions of “high-risk,” “consequential decision,” “developer,” and “deployer,” adopt its developer and deployer requirements, and copy its exemptions (some notable differences are discussed below). Most of the above-listed bills remain in committee.
Virginia’s HB 2094 was the first to reach a Governor’s desk; this bill included additional exemptions that would have made it less likely to impact many health care stakeholders. Specifically, the proposed law exempted: HIPAA-covered entities, as long as the covered entity is providing health care recommendations that (i) are generated by an AI system and (ii) require a health care provider to take action to implement the recommendations; and HIPAA-covered entities providing services utilizing an AI system for administrative, quality measurement, security, or internal cost or performance improvement functions. The bill also exempted any developer/deployer that “facilitates or engages in the provision of telehealth services.”
Even with these changes, the bill was vetoed on March 24, 2025, by Governor Glenn Youngkin. In a letter to the legislature, Governor Youngkin provided his veto rationale, including that HB 2094 would “establish a burdensome artificial intelligence regulatory framework” that would inhibit economic growth in the state – especially for startups and small businesses – and could not adapt to the “rapidly evolving” AI industry. Governor Youngkin emphasized the need for balanced AI regulation and noted existing AI governance efforts stemming from a 2024 measure, which sets forth principles for use of AI in the state and establishes a task force for further study of AI. Governor Youngkin’s letter is reflective of strong industry pushback on HB 2094.
A few states also introduced language expanding the proposed laws’ application to others in the chain from developer to user, namely “integrators” (Virginia, Connecticut, Rhode Island) and “distributors” (Texas). “Integrators” and “distributors” are essentially entities or individuals who do not develop AI tools but who (i) integrate AI tools into existing operations that are available to the market, or (ii) otherwise make the AI system available in the market. Requirements for these integrators/distributors typically focus on ensuring that any AI tools they integrate or distribute avoid discrimination and on requiring these entities to make certain information publicly available. For example, an EMR vendor (not exempt from the law under an ONC-certified exemption) that purchases AI tools to integrate into its EMR may qualify as an “integrator” because it is integrating AI tools into its existing technology systems and making those systems available to its health care clients (note that, in other legislation, EMRs would likely be classified as either a developer or a deployer, so they would still be required to abide by any legislation that does not specify “integrator” or “distributor”).
Key Takeaway #2: In 2025, there has been a significant increase in bills legislating payer use of AI, particularly related to utilization management and physician oversight of adverse decisions. Over 56 bills that would govern payer use of AI were introduced.
We have seen ten times the number of payer use of AI bills introduced in Q1 relative to last year. These bills mainly focus on (1) insurance eligibility and medical necessity/authorization determinations and (2) transparency with the end user and/or the state.
Insurance eligibility and medical necessity/authorization determinations. Last year, California passed SB 1120, which requires that a licensed physician or licensed health care professional retain ultimate responsibility for making individualized medical necessity determinations for each member of a health care service plan or health insurer. While the bill imposes some other requirements, it essentially requires that medical necessity determinations ultimately be made by a licensed physician or health care professional. This law is consistent with the guidance that CMS published regarding Medicare Advantage plans’ use of AI to render clinical coverage determinations.
Almost all of the 2025 proposed bills include language that would require a physician to review a decision (or adverse decision) made by an AI tool related to eligibility determinations and/or medical necessity. Minnesota prohibits any health carrier from using AI when making determinations to either approve or deny a prior authorization request. Some bills prohibit payers from issuing denials, reductions, or terminations of insurance based “solely” on the use of AI (e.g., Massachusetts, Maine, Ohio, Nebraska, Washington, among others). Other bills expressly require review by a human before the decision can be issued. For example, Illinois bans any denial, reduction, or termination of insurance or benefits that results solely from the use of AI and is issued without “meaningful review” by an individual with authority to override the AI system (similar to South Carolina). Massachusetts requires that a medical necessity determination be made only by a health care professional competent to evaluate the AI decision and able to review and consider “the requesting provider’s recommendation, the insured’s medical or other clinical history, as applicable, and individual clinical circumstances.”
Transparency with end user and/or state. A handful of states introduced language that would require payers to submit information on AI use to the state (CA, New York, among others); both Maryland and Maine would require payers to submit quarterly reports (specifics differ, but generally covering the payer’s use of AI, denials, overturned denials, and the person responsible for training the AI tool, among other provisions). A few other states went further, requiring payers to submit AI algorithms and training data sets for review by the relevant department (e.g., two Texas bills, two MA bills, and New York).
A subset of those bills also included provisions outlining disclosure requirements to end users about the payer’s use of AI tools – these were typically focused on disclosing the use of AI in utilization review to members, health care providers, and/or on the payer’s website (e.g., Nebraska, NY, Rhode Island, among others).
Key Takeaway #3: States are grappling with the role of AI in clinical delivery, in particular what provider oversight should be required when using AI tools in clinical decision-making and how providers should communicate the use of AI to patients. Over 20 bills regulating provider use of AI were introduced.
Clinical Use of AI / Physician Oversight: States are increasingly introducing language that would require physicians to provide oversight and review of any AI tools used in clinical decision-making.
For example, Texas requires providers that leverage AI for diagnostic purposes to “review all records created with artificial intelligence to ensure that the data is accurate and properly managed”. Illinois – specifically focused on licensed providers of therapy or psychotherapy services – prohibits AI systems from making independent therapeutic decisions or generating therapeutic recommendations or treatment plans without review and approval by a licensed professional. Louisiana states that health care providers may use AI to assist with “an administrative or analytical task in healthcare services” (e.g., preparing notes, managing appointment scheduling and reminders, or processing billing and insurance claims), but bans health care providers from using AI to make decisions related to treatment and diagnosis, generation of therapeutic recommendations, or direct interactions with a patient related to treatment or diagnosis without clinician review. Texas requires that AI mental health services be approved by the Health and Human Services Commission and be provided by a licensed mental health professional, and requires that a licensed health professional be available at all times to review progress, communicate with individuals, and intervene in cases of harm.
Maryland has a broad bill that explicitly bans health care providers from using AI tools designed “only to reduce costs for a health care provider at the expense of reducing the quality of patient care, delaying patient care, or denying coverage for patient care.”
There are five bills focused on AI in nursing. The majority specifically mandate that AI tools not replace nurses and provide protections for nurses who override AI-generated clinical decisions (Hawaii, Maine, Minnesota, Illinois). One Oregon bill mandates that “nonhuman” entities, including AI tools, may not use the title of nurse or similar titles.
Provider transparency: Numerous states introduced bills similar to CA SB 3030 and Utah SB 149, which passed last year, mandating that providers ensure patients know when AI tools are used.
Utah SB 226, signed into law on March 27, repeals Utah SB 149’s disclosure requirements and replaces them with new disclosure requirements that are similar but required in more limited use cases. As with SB 149, SB 226 requires “regulated occupations” – which include any profession that requires a license or state certification to practice, and would include physicians, nurses, and other health professionals – to “prominently disclose when an individual receiving services is interacting with generative artificial intelligence”; this disclosure must occur before they begin using generative AI for any oral or electronic messaging with an end user. However, unlike SB 149, SB 226 specifies that disclosure is only necessary “if the use of generative artificial intelligence constitutes a high-risk artificial intelligence interaction.” Although this narrows when disclosure is required, “high-risk” is broadly defined – including the collection of personal information, such as health data, and/or the provision of personalized recommendations, including medical advice – and thus would likely implicate many health care stakeholders and much health-related generative AI activity.
Other states introduced bills that require disclosures to the patient if AI-generated clinical communications were not reviewed and approved by a provider and/or require that patients be told how to contact a human health care provider in lieu of communicating directly with AI tools (e.g., Illinois, Nevada, Massachusetts). In addition to disclosing AI use in any patient communications, Indiana requires that a health care provider disclose to a patient the provider’s use of AI technology to make or inform any decision involved in the provision of health care to the patient.
Arizona published a notice of proposed rulemaking (NPRM) in March that, if finalized as written, would require providers to document, as part of written (including electronic) informed consent obtained from a patient, notification of “the extent, if any, to which clinical services are provided through, recorded or documented with, or involve the use of artificial intelligence, machine learning, deep learning, or any other human simulation modality.” This would include, for instance, requiring a disclosure when ambient documentation tools are used. Public comments related to this NPRM can be submitted through April 20 (30 days after NPRM publication); there may also be an oral proceeding with oral comments accepted after April 20.
State/Federal approval for clinical AI tools: Notably, states have begun introducing legislation that mandates AI tools can only be used in clinical settings if they are approved by federal or state agencies and/or that requires developers/deployers to register with the state or obtain licenses from the state in order to operate. Arkansas introduced a bill, since withdrawn, that would have prohibited the use of AI tools in the delivery of health care services or the generation of medical records unless the tool has been approved by the FDA and verified by a “quality assurance laboratory.” North Carolina requires operators or distributors of chatbots that handle health information to obtain a “health information chatbot license”; Texas requires that mental health AI applications be approved by the Health and Human Services Commission; Maryland requires deployers of AI health software to register with the state prior to distribution or operation; and New York requires that developers obtain a license from the state prior to developing or deploying a high-risk AI system (which includes AI systems that influence health). Others were not health-specific but may implicate health care stakeholders (e.g., Nevada, Vermont). While no registration or licensing bills have passed so far in 2025, developers or deployers may soon have to contend with a patchwork of registration and licensing application requirements across the states in which they operate, and providers may need to routinely obtain permission from state agencies to utilize AI tools in clinical practice.
Key Takeaway #4: Multiple states introduced bills that govern the use of AI-enabled chatbots, with some specifically targeting chatbots used in the provision of mental health services. One bill targeting mental health services passed (Utah HB 452).
This year, multiple states have introduced chatbot-disclosure legislation (e.g., Illinois, VA, Hawaii, Idaho, among others). For instance, New York’s legislature introduced two chatbot disclosure bills that expand upon the restrictions and disclosure requirements outlined in the California and New Jersey chatbot laws passed in 2018 and 2020, respectively: one New York bill mandates that “proprietors” of chatbots are responsible for ensuring the chatbot provides accurate information and that proprietors may not waive liability for information provided by a chatbot even if a consumer is notified that they are interacting with a chatbot (note: this bill is not specific to health); the other similarly does not waive liability and prohibits AI chatbots from providing any medical or psychological advice.
North Carolina introduced an expansive chatbot-focused bill, which mandates that operators or distributors of chatbots that handle health information obtain a health information chatbot license from the state Department of Justice, requires disclosures to the user, and sets strict provisions for the protection of user data. Specifically, “covered platforms” are banned from processing data or using chatbots in ways that conflict with the best interests of the user; required to de-identify data and take reasonable care with personal information; required to obtain user consent for all data collection and use; required to store all non-sensitive conversations for at least 60 days; required to provide users with the ability to access and delete their data; and, for all health care and mental health support chatbots, required to utilize “self-destructing messages” that delete 30 days after data has been acquired.
Notably, several states’ chatbot bills directly reference the provision of mental health services. Utah HB 452, which was signed into law on March 25, 2025, requires “mental health chatbots” to clearly and conspicuously disclose that the chatbot is AI technology (and not a human) and prohibits suppliers of mental health chatbots from selling or sharing individually identifiable health information or user input (with some exceptions). California introduced a bill that would ban the deployment of chatbots unless the operator has “implemented a protocol for addressing suicidal ideation, suicide, or self-harm expressed by a user,” which may include referral to crisis services providers such as a suicide hotline or crisis text line. A New York bill included language that would prohibit AI chatbots from providing responses or information that includes any medical or psychological advice (note: subsequent amendments removed this language). Although the term “chatbot” is not directly used, Nevada prohibits AI “providers” from making statements that “explicitly or implicitly” indicate that the AI system is capable of providing, or is a provider of, professional mental or behavioral health care.
***
In addition to the above key takeaways, 2025 saw a few other trends. As in 2024, many bills outline requirements to prohibit discrimination in the development and use of AI tools. States also introduced a high volume of “state activity” bills – those that create councils to study AI, create AI-policy positions within government, and/or track AI technology use within state agencies: over 70 of these bills were introduced in 2024, and over 90 have been introduced in 2025 thus far (for a list of all those that have passed, see “Other: State Activity Laws” below).
Finally, a few states began thinking about the question of liability. Missouri declares AI systems “non-sentient” and mandates that any direct or indirect harm caused by an AI system’s operation, output, or recommendation shall be the responsibility of the owner or user who directed or employed the AI, not of the AI tool itself. “Owner” is broadly defined as “any natural person, corporation, or other legally recognized entity that creates, controls, deploys, operates, or otherwise exercises authority over an AI system.” As summarized above, two NY bills also include liability language mandating that proprietors of chatbots are liable for the information provided by the chatbot. Allocation of liability remains a big question for all stakeholders, and we anticipate further activity here over the year, some of which may be borne out in litigation.
Key Federal Activity
 | Where 2024 Ended | 2025 Activity To-Date |
---|---|---|
White House | | |
Congress | | Bills to: …; several others that touch on AI in health care, which we will report on if they gain traction. |
HHS | See below for relevant updates from HHS divisions. | |
OCR | | |
ONC | | No significant activity. |
CMS | | |
FDA | | |
DOJ | | Ongoing. |
FTC | | |
Passed Bills
Categorizations: “Key state laws” are those that, based on our review, are of greatest significance to the delivery and use of AI in health care because they are broad in scope and directly touch on how health care is delivered or paid or because they impose significant requirements on those developing or deploying AI for health care use. “Additional state laws” are those that were identified as being relevant to AI in the provision of health care or health care services, but were smaller in scope or significance than “key” laws.
Key Health AI Laws (laws passed in 2025 are in bold)
State | Summary |
---|---|
California | requires that a licensed physician or licensed health care professional retain ultimate responsibility for making individualized medical necessity determinations for each member of a health care service plan or health insurer. One of the major requirements of the bill is that health care service plans and health insurers that use AI tools cannot use the tool to “deny, delay, or modify health care services” based upon medical necessity. Said another way, the determinations of medical necessity may only be made by a licensed physician or health care professional. Date Enacted: 9/28/2024 Date Effective: 1/1/2025 |
California | requires that health care providers disclose, via a disclaimer, to a patient receiving clinical information produced by generative AI that the information was generated by AI. In addition, the disclaimer must tell the patient how to contact a “human health care provider” or employee of the health facility. This disclaimer must be included in traditional written communications, such as letters and emails, as well as chat-based technology. Disclaimers are not required if the communications generated by generative AI are “read and reviewed by a human licensed or certified health care provider.” This bill is similar to Utah SB 149. Date Enacted: 9/28/2024 Date Effective: 1/1/2025 |
California | requires developers of generative artificial intelligence systems to publicly post information on the data used to train the AI system, including the source or owners of the datasets, the number of data points in the datasets, a description of the types of data points in the dataset, and whether the datasets include personal information. Developers are defined as those who make AI tools for “members of the public” and specifically exclude “hospital’s medical staff member[s],” though the intention of the exclusion is not entirely clear. Date Enacted: 9/28/2024 Date Effective: 1/1/2025 |
California | requires “covered providers,” i.e., developers of AI systems, to create and make freely available AI detection tools that can identify whether AI content was created or altered by the developer's generative AI system. “Covered Providers” refers to individuals that create, code, or otherwise produce a generative AI system that has over one million monthly visitors or users and is publicly accessible within California. Further, AI-generated content must include embedded metadata (called a “latent disclosure”) that identifies it as being AI-created. Date Enacted: 9/19/2024 Date Effective: 1/1/2025 |
California | prohibits individuals from using undisclosed bots (“automated online account where all or substantially all of the actions or posts of that account are not the result of a person”) to communicate with another person in California with the intent to mislead or knowingly deceive the other person in order to influence purchases or votes. Stipulates that disclosures of bots must be "clear, conspicuous, and reasonably designed," and makes exemptions for online platform service providers. Date Enacted: 9/28/2018 Date Effective: 7/1/2019 |
Colorado | governs developers and deployers of high risk AI systems. High risk AI systems are defined as those that make, or are a substantial factor in making “consequential decision[s],” which are decisions that have “material legal or similarly significant effect on the provision or denial to any consumer” or the costs or terms of health care services or insurance (among other areas). Developers must mitigate algorithmic discrimination and ensure transparency between themselves and deployers, the public, and the Attorney General through information disclosures. Additionally, the law requires deployers to mitigate algorithmic discrimination, implement a risk management program, and complete impact assessments. For more detailed information, please see "CO Enacts 'High Risk' AI Law Regulating Deployers and Developers, Including Health Care Stakeholders" on Manatt on Health . Date Enacted: 5/17/2024 Date Effective: 2/1/2026 |
Colorado | prohibits insurers from using algorithms that rely on external consumer data sources in a way that unfairly discriminates. After a stakeholder process, the commissioner shall adopt rules that: establish when an insurer may use algorithms, detail how to demonstrate the algorithm has been tested for unfair discrimination, outline what information insurers must submit to the commissioner regarding the use of AI models and external consumer data, and mandate that insurers establish and maintain a risk management framework. The Colorado Department of Regulatory Agencies, Division of Insurance has proposed regulations and began the stakeholder process for health insurance in January 2025. Date Enacted: 7/6/2021 Date Effective: 9/6/2021 |
New Jersey | prohibits individuals from using undisclosed bots (“automated online account where all or substantially all of the actions or posts of that account are not the direct result of a person”) to communicate with another person in the state with the intent to mislead or knowingly deceive the other person in order to influence purchases or votes. Stipulates that disclosures of bots must be “clear, conspicuous, and reasonably designed” and imposes escalating civil penalties for multiple violations. Date Enacted: 1/21/2020 Date Effective: 7/18/2020 |
Utah | (“AI Policy Act”) implements disclosure requirements between a deployer and end user. This consumer protection law requires generative AI to comply with basic marketing and advertising regulations overseen by the Division of Consumer Protection of the Utah Department of Commerce. The law requires “regulated occupations” – which encompass over 30 different health care professions in Utah, ranging from physicians, surgeons, dentists, nurses, and pharmacists to midwives, dieticians, radiology techs, physical therapists, genetic counselors, and health facility managers – to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. Disclosures about generative AI likely cannot reside solely in an entity’s terms of use or privacy notice. For more detailed information, please see “Utah Enacts First AI Law – A Potential Blueprint for Other States, Significant Impact on Health Care” on Manatt on Health. In 2025, a bill passed that extended the repeal date of SB 149 to July 1, 2027. Date Enacted: 3/13/2024 Date Effective: 5/1/2024 |
Utah | **repealed Utah SB 149’s disclosure provisions and replaced them with disclosure requirements that are similar but required in narrower scenarios. As with SB 149, the law requires “regulated occupations” to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. However, this disclosure is only required when the generative AI use is “high-risk” (broadly defined, but including the collection of personal information, such as health data, and/or the provision of personalized recommendations, including medical advice). Date Enacted: 3/27/2025 Date Effective: 5/7/2025** |
Utah | **prohibits suppliers of “mental health chatbots” from selling or sharing individually identifiable health information or user input (with some exceptions) and requires them to clearly and conspicuously disclose that the chatbot is AI technology and not a human. Date Enacted: 3/25/2025 Date Effective: 5/7/2025** |
Additional State Laws.
Additional Health AI Laws (laws passed in 2025 are in bold)
State | Summary |
---|---|
Arkansas | requires the Arkansas Department of Health to develop algorithms within the controlled substance database that would alert a practitioner if their patient is being prescribed opioids by more than three physicians within any thirty-day period. The bill includes a caveat that this is only required if funding is available. Date Enacted: 4/8/2015 Date Effective: 4/8/2015 |
California | requires the California State Department of Health Care Services to, in partnership with managed care plans and in consultation with stakeholders, implement a mechanism or algorithm to identify persons with higher risk and more complex care needs. Date Enacted: 10/19/2010 Date Effective: 10/19/2010 |
California | requires that a clinical laboratory director or authorized designee establish, validate, and document criteria by which any clinical laboratory test or examination result is auto-verified (“autoverification” means the use of a computer algorithm in conjunction with automated clinical laboratory instrumentation to review and verify the results of a clinical laboratory test or examination for accuracy and reliability). These criteria must be re-evaluated annually. Requires an authorized person to be responsible for the accuracy and reliability of all test and examination results. Date Enacted: 9/18/2006 Date Effective: 1/1/2007 |
Illinois | mandates the Illinois Department of Healthcare and Family Services to solicit stakeholder input about and subsequently implement an algorithm to facilitate automatic assignment of eligible Medicaid enrollees into managed care entities based on quality scores and other operational proficiency criteria. It also dictates that the algorithm preserve provider-beneficiary relationships and only be used to assign enrollees that have not voluntarily selected a primary care physician and a managed care entity or care coordination entity; the algorithm cannot be used to reassign an individual currently enrolled in a managed care entity. Enrollees are granted a 90-day period after the algorithm's automatic assignment to select a different managed care entity. Date Enacted: 8/26/2016 Date Effective: 1/1/2017 |
Kentucky | outlines requirements related to the use of “assessment mechanisms” (including AI devices) to conduct eye exams or generate prescriptions for contact lenses. Select requirements include: ensuring assessment mechanisms allow for synchronous or asynchronous interaction between the patient and the KY-licensed optometrist, osteopath, or physician; patient age minimums; and pre-visit requirements and patient disclosures, among others. Similar to the Rhode Island law below. Date Enacted: 3/30/2018 Date Effective: 7/13/2018 |
New York | prohibits state agencies or entities acting on behalf of an agency from using or procuring automated decision-making systems in relation to the delivery of any public assistance benefit or in circumstances that impact the rights, civil liberties, safety, or welfare of an individual, unless such utilization is subject to ongoing human review or authorized by law. Requires state agencies to submit an impact assessment, including a description of the objectives of the technology, the data used to train the system, and testing of accuracy, fairness, and potential bias, to the governor and legislature every two years. It also prohibits agency use of tools that alter the rights or benefits of existing employees of the state and/or demonstrate bias, and requires disclosure of information about automated decision-making tools, including a description of the system, software vendors, the data used, and the purpose, among others. Date Enacted: 12/21/2024 Date Effective: 12/21/2024, 12/21/2025 |
Oklahoma | allows physician-approved protocols to utilize or reference “medical algorithms” (note: medical algorithms undefined). Physician-approved protocols are protocols “such as standing orders that describe the parameters of specified situations under which a registered nurse may act to deliver public health services for a client who is presenting with symptoms or needs addressed in the protocol.” Date Enacted: 5/1/2012 Date Effective: 5/1/2012 |
Rhode Island | outlines requirements related to the use of “assessment mechanisms” (including AI devices) to conduct eye exams or generate prescriptions for contact lenses. Select requirements include: ensuring assessment mechanisms allow for synchronous or asynchronous interaction between the patient and the RI-licensed optometrist, osteopath, or physician; patient age minimums; and pre-visit requirements and patient disclosures, among others. Similar to the Kentucky law above. Date Enacted: 6/29/2022 Date Effective: 6/29/2022 |
Utah | mandates Utah's Medicaid agency to apply for a Medicaid and CHIP waiver from CMS to, amongst other initiatives, develop an algorithm to assign new recipients to accountable care plans based upon the plan's performance in relation to quality measures. Date Enacted: 3/26/2013 Date Effective: 5/14/2013 |
Virginia | requires hospitals, nursing homes, and certified nursing facilities to establish and implement policies on access to, and use of, an intelligent personal assistant at their facility. Date Enacted: 3/18/2021 Date Effective: 7/1/2021 |
Other: State Activity Laws. Over the past several years, states have sought to understand AI technology before regulating it. For example, states have created councils to study AI and/or created AI-policy positions within government in charge of establishing AI governance and policy. States have additionally tracked use of AI technology within state agencies. These bills reflect states’ interest in the potential role of AI across industries, and potentially in health care specifically. Note, as well, that some of these laws may no longer be applicable (e.g., if an AI research task force was disbanded after a set number of years, it may no longer be active), but they are included here to provide a more exhaustive list. KY SB 4, MS SB 2426, and NY SB 822 passed this year; all others passed previously.
For questions on the above, please reach out to the authors. A full list of tracked bills (introduced and passed) from 2024 and 2025 – classified by topic category and stakeholder impacted – is available to Manatt on Health subscribers; for more information on how to subscribe to Manatt on Health, please reach out to the Manatt on Health team.
See commentary from NetChoice (an online business trade association focused on free expression and enterprise); from the center-right think tank R Street Institute; from the Chamber of Progress (a technology industry coalition); and from the Center for Data Innovation (a think tank for science and technology policy).
Last year, Utah and California enacted laws with key transparency provisions. CA SB 3030 mandates that health care providers disclose, via a disclaimer, to a patient receiving clinical information generated by generative AI that the information was generated by AI (and provide information on how to contact a human health care provider or employee of the health care facility). Utah SB 149 requires “regulated occupations,” including over 30 different health care professions, to prominently disclose that they are using computer-driven responses before they begin using generative AI for any oral or electronic messaging with an end user. For more information, please see Manatt on Health’s summaries of CA SB 3030 and Utah SB 149.
Arizona Register Volume 31, Issue 12. https://apps.azsos.gov/public_services/register/2025/12/contents.pdf
In 2018 and 2020, respectively, California and New Jersey passed “bot” (chatbot) laws focused on disclosures to the user (neither specific to health care). California requires that a person using a bot to communicate provide a disclosure that is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.” A person is not liable under the California law if they disclose that the communication is being conducted by a bot. New Jersey mandates that a person shall not use an online bot to communicate or interact with a person in the state in connection with the sale or advertisement of any merchandise unless the person discloses in a clear and conspicuous fashion that the communication or interaction is being conducted by or through a bot.
Defined as AI technologies that use generative AI to engage in interactive conversations with a user, similar to conversations the user would have with a mental health therapist, and that are used to provide mental health therapy or to treat mental health conditions; does not include AI technology that provides scripted output or that analyzes an individual’s input for the purposes of connecting the individual to a human mental health therapist.
“Artificial intelligence” is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
Defined broadly, as “artificial intelligence that can generate derived synthetic content, including images, videos, audio, text, and other digital content.”
Proposed amended Regulation at 3 CCR 702-4.