New York’s Safeguards for AI Companions Are Now in Effect

As AI-powered chatbots designed to, or able to, mimic friendship or emotional intimacy continue to develop and spread rapidly, ethical, safety, and regulatory concerns are mounting, as are real-world examples of harm.

In an effort to protect users of “AI companion” chatbots, particularly young people and other vulnerable populations, New York enacted a new law, effective as of November 5, 2025, requiring operators of AI companions to implement certain safeguards and to provide clear notices and reminders to users that they are interacting with artificial intelligence, not a human.

Who is Subject to the Law?

Any person or entity that operates or provides an AI companion in New York is subject to the law. The law defines “AI companions” as systems designed to simulate a sustained human or human-like relationship with users (e.g., intimate, romantic, or platonic companionship) by remembering past interactions to personalize responses, asking emotion-based questions without being prompted, and maintaining ongoing conversations about personal matters. This broad definition captures not only chatbots designed to be AI companions but also chatbots that function as one in practice. The definition expressly excludes customer service bots, productivity tools, and systems used by businesses solely for internal purposes or employee productivity, which would not meet the definition in any event.

An “operator” includes both those who operate an AI companion and those who provide one to users, so the law does not apply only to the developer.

What Safeguards and Notice Obligations Are Required Under the Law?

The law requires operators of AI companions to implement safety protocols that make reasonable efforts to detect and address suicidal ideation or expressions of self-harm by a user interacting with the AI companion. At a minimum, upon detecting such expressions, the operator must refer the user to crisis service providers (e.g., suicide prevention and behavioral health crisis hotlines) or other appropriate crisis services.
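
For engineering teams implementing this safeguard, the sketch below illustrates one possible shape of a detection-and-referral hook. It is a minimal, illustrative example only: the statute does not prescribe any particular technical approach, and the keyword list, function names, and referral text are assumptions (in practice, operators would likely use more sophisticated classifiers and clinically informed referral language).

```python
# Illustrative only: the statute requires "reasonable efforts" to detect
# suicidal ideation or self-harm and a referral to crisis services, but it
# does not prescribe a technical approach. All names below are hypothetical.

CRISIS_REFERRAL = (
    "If you are thinking about harming yourself, please reach out for help: "
    "call or text 988 (Suicide & Crisis Lifeline) or contact a local crisis service."
)

# A naive keyword screen stands in for whatever detection method an operator
# actually uses (e.g., a trained classifier combined with human review).
SELF_HARM_MARKERS = ("kill myself", "end my life", "hurt myself", "suicide")

def screen_message(user_message: str) -> str | None:
    """Return a crisis referral if the message suggests self-harm, else None."""
    text = user_message.lower()
    if any(marker in text for marker in SELF_HARM_MARKERS):
        return CRISIS_REFERRAL
    return None
```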

Operators of AI companions must also provide clear and conspicuous upfront notice and periodic reminders to users (either verbally or in writing) that they are communicating with an AI tool, not a human. Specifically, operators must notify users at the beginning of each interaction and at least every three hours during extended sessions with the AI companion.
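
As a rough illustration of that cadence, the hypothetical sketch below tracks when the AI disclosure was last shown and surfaces it again at the start of a session and at three-hour intervals. The class name and message text are assumptions for illustration, not statutory language.

```python
# Illustrative only: a minimal sketch of the notice cadence the law describes,
# i.e., an AI disclosure at the start of each interaction and at least every
# three hours during a continuing session. Names are hypothetical.

from datetime import datetime, timedelta

AI_DISCLOSURE = "Reminder: you are chatting with an AI companion, not a human."
REMINDER_INTERVAL = timedelta(hours=3)

class DisclosureTracker:
    def __init__(self) -> None:
        self.last_disclosed: datetime | None = None

    def pending_disclosure(self, now: datetime | None = None) -> str | None:
        """Return the disclosure text if one is due, otherwise None."""
        now = now or datetime.now()
        # Due at the start of the session and every three hours thereafter.
        if self.last_disclosed is None or now - self.last_disclosed >= REMINDER_INTERVAL:
            self.last_disclosed = now
            return AI_DISCLOSURE
        return None
```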

What Are the Penalties for Non-Compliance?

While there is currently no private right of action, the New York Attorney General is authorized to seek injunctions and civil penalties of up to $15,000 per day for violations. All fines collected will be directed to a newly established suicide prevention fund, which will support mental health initiatives, education, and awareness programs.

How Does New York’s Law Interact with Existing AI Laws?

There is currently no federal or New York State law or regulation that governs generative AI generally, or chatbots specifically. The landscape is emerging, however, as pressure grows on legislatures and regulators to put guardrails around AI companions, which are front and center in the national conversation on AI as more people turn to technology for their emotional and mental health needs. The new law also builds on other recent New York legislation regulating the use of digital replicas, ensuring that consumers understand when and how they are interacting with technology.

The requirements under the New York law align closely with statutes in other states seeking to regulate the use of AI in emotionally sensitive contexts. Over the course of 2025, several bills regulating AI-enabled chatbots were passed and signed into law. In addition to New York’s law, Nevada and Utah have addressed the use of chatbots in the delivery of mental health services. Two additional laws, in Maine and Utah, address concerns about the misrepresentation of chatbots as humans. Over the summer, Illinois banned AI systems from interacting directly with clients in any form of therapeutic communication in therapy or psychotherapy settings, and, effective January 1, 2026, new requirements will apply to companion chatbots made available to residents of California.

These statutes focus on greater transparency through required disclosures, crisis prevention protocols and crisis referrals, and more robust protections for minor users.

At the federal level, the Federal Trade Commission recently launched an inquiry to understand what steps technology companies have taken to evaluate the safety of their consumer-facing AI-enabled chatbots and to mitigate potential negative impacts on children and teenagers.

Looking ahead, companies should anticipate a varied assortment of state laws converging on transparency, crisis intervention, and limits on AI autonomy, alongside growing federal scrutiny and potential enforcement actions.

Recommended Actions for Compliance

Organizations should assess whether any of their AI offerings qualify as “AI companions” under the New York statute. If they do, they should promptly review their existing safety protocols and update them as needed to comply with the new requirements. Operators should implement monitoring and testing processes to verify that their protocols work as designed, ensure transparency and safety, and minimize harm. Notification systems should also be reviewed to determine how, and how often, users will be presented with clear and conspicuous messaging that they are not interacting with a human.

Operators in New York should also review customer-facing materials for their AI companions (e.g., terms of service, acceptable use policies, and FAQs) to ensure that they align with any system updates. Easy-to-read FAQs or community policies that supplement the terms, along with pop-up disclosures during interactions, can help users and mitigate potential regulatory action over a lack of notice or disclosure. Internal teams, including product development, legal, and customer support, should be trained on the new obligations. AI providers should also consider establishing more robust auditing and ongoing monitoring procedures to verify that their systems are functioning as intended and to demonstrate compliance with regulatory requirements (one illustrative approach is sketched below). To the extent that developers or operators rely on third parties, they should ensure that those downstream vendors also monitor their systems and should contractually require them to do so.
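
As one hypothetical illustration of such record-keeping, the snippet below appends a timestamped entry each time a required disclosure or crisis referral is delivered. The event names, file format, and function are assumptions rather than anything required by the statute.

```python
# Illustrative only: a minimal audit trail of safeguard events, kept so that an
# operator can later show that required notices and referrals were delivered.
# All names and the JSON Lines format are hypothetical choices.

import json
from datetime import datetime, timezone

def log_safeguard_event(event_type: str, session_id: str,
                        path: str = "safeguard_audit.jsonl") -> None:
    """Append a timestamped record such as 'ai_disclosure_shown' or 'crisis_referral_sent'."""
    record = {
        "event": event_type,
        "session_id": session_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```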

Stay informed on guidance from the New York Attorney General and monitor regulatory developments on an ongoing basis.