California Privacy and AI Roundup 2025: What Passed and What Didn’t
California’s 2025 legislative session was one of the busiest yet for privacy and artificial intelligence. Now that the session has come to a close and Governor Newsom’s October 13 sign-or-veto deadline has passed, we’re releasing our annual roundup of the notable bills that passed and those that failed.
Key Takeaways
California continues to leverage its role as the world’s fourth-largest economy to set national policy on emerging technology and consumer protection. Given the rapid integration of AI-powered tools into everyday life, the interplay between the state’s one-party supermajority and its powerful industry voices in technology, entertainment, and beyond took center stage.
In data privacy, the state advanced its yearslong campaign as a leading voice in consumer and online privacy with bills focused on third-party targeted advertising, data brokerage, and children’s privacy. In AI, the signed bills represent something of a “lessons learned” from the 2024 session, which was marked by ambitious but failed attempts at regulation. California now has a signature law targeted at large AI developers and an assortment of other laws responsive to recent developments and headlines on companion AIs and other generative AI tools. At the same time, more ambitious attempts at aggressive regulation did not secure the necessary votes or were vetoed.
Data Privacy
Signed
- (Lowenthal) - Opt-out Preference Signals. Backed by California’s chief privacy regulator, the California Privacy Protection Agency (CPPA), the California Opt Me Out Act requires all web browsers to include a setting that enables consumers to send an opt-out preference signal by January 1, 2027. Opt-out preference signals are designed to communicate to website operators a consumer’s choice to opt out of the sale or sharing of personal data; a brief illustration of how such a signal is transmitted appears after this list. While businesses are already required to honor these signals under the California Consumer Privacy Act, many browsers do not offer the option in their privacy settings, instead requiring users to download browser extensions to broadcast the signal. This law will ensure that consumers can configure any browser to automatically and immediately instruct website operators not to share personal data. The law also grants businesses that develop or maintain browsers functional immunity from liability if a website operator that receives the signal fails to honor it.
- (Wicks) - Age Verification Signals. In the latest attempt to protect minors online, the Digital Age Assurance Act will require operating system providers to implement an age verification interface at account setup beginning January 1, 2027. Providers will in turn be required to send age-bracket signals to mobile applications in covered app stores. This approach contrasts with existing mandatory age verification laws in Texas and Utah, which place the burden of verification on individual website and app owners rather than on operating systems.
- (Becker) - Data Brokers: Data Collection and Deletion. This bill expands California’s data broker law by requiring data brokers to provide additional disclosures at registration, such as whether they collect certain sensitive categories of data, including children’s data, precise geolocation, biometric data, reproductive health data, and government IDs. The amendment is the latest in a series of recent crackdowns on the data broker industry by California lawmakers and the CPPA. Similar federal efforts to regulate data brokers have failed. For example, earlier this year, the Consumer Financial Protection Bureau (CFPB) withdrew its previously proposed rule that would have limited how data brokers can collect, use, and sell private financial data.
- (Hurtado) - Data Breach Notifications. California expanded its longstanding data breach notification law by requiring notification within 30 calendar days of discovery, matching the strictest timelines adopted by other states. The law also requires notifying the California Attorney General within 15 calendar days of providing consumer notice.
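For readers curious about the mechanics behind the Opt Me Out Act, the sketch below shows how a website might detect an opt-out preference signal. It assumes the signal in question is the Global Privacy Control (GPC) specification, which supporting browsers transmit as a Sec-GPC request header and expose to page scripts as navigator.globalPrivacyControl; the function names are hypothetical and this is an illustration only, not drawn from the bill text.

```typescript
// Illustrative sketch: detecting an opt-out preference signal such as
// Global Privacy Control (GPC). Function names are hypothetical.
export {}; // treat this file as a module so the global augmentation below is valid

// Server side: supporting browsers send the request header "Sec-GPC: 1".
function hasOptOutHeader(headers: Record<string, string | undefined>): boolean {
  return headers["sec-gpc"] === "1";
}

// Client side: supporting browsers expose a boolean on the Navigator object.
declare global {
  interface Navigator {
    globalPrivacyControl?: boolean;
  }
}

function hasOptOutSignalInBrowser(): boolean {
  return navigator.globalPrivacyControl === true;
}

// A site that receives the signal would treat it as an opt-out request,
// e.g., by suppressing the sale or sharing of the visitor's personal data.
```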
Vetoed
- (Stern) - Social Media Platform Liability. This bill would have held large social media platforms liable for civil rights violations – including those related to harassment, threats, and discrimination – if their algorithms amplified the offending content. The bill included potential fines of up to $500,000 for reckless violations and $1 million for knowing violations.
Artificial Intelligence
Signed
- (Wiener) - AI Models: Large Developers. In the state’s signature piece of AI legislation to date, the Transparency in Frontier Artificial Intelligence Act (TFAIA) will require large AI developers to publish frameworks detailing safety and risk assessments, and to report catastrophic risk summaries to the Office of Emergency Services by January 1, 2026. SB 53 is a “mulligan” on AI regulation in California, as it is a slimmed-down version of last year's vetoed SB 1047 (Wiener). This law incorporates the June 2025 findings of the Governor's influential Joint California Policy Working Group on AI Frontier Models.
- (Padilla) - Companion Chatbots. Companion chatbot operators will be required to disclose that chatbots are AI-generated – with additional disclosures for minors – and to maintain protocols for preventing suicidal and self-harm content. The bill bears similarities to New York’s recently enacted companion chatbot law, which takes effect on November 5, 2025.
- (Wicks) - California AI Transparency Act. The amendment to last year’s California AI Transparency Act adds provisions requiring large online platforms to enable users to access provenance data in uploaded content starting January 1, 2027. Additionally, capture devices – which can record photographs, audio, or video content – will be required to offer latent disclosures that convey certain provenance data in content captured by the device beginning January 1, 2028.
- (Krell) - AI Defenses. This law will prohibit defendants who developed, modified, or used AI that is alleged to have caused harm from asserting as a defense that the AI autonomously caused such harm.
- (Bonta) - Healthcare: Deceptive AI Systems/Design. This law will prohibit AI systems from falsely implying – through terminology, interactive elements, and post-nominal letters (e.g., M.D. or R.N.) – that users are receiving care from licensed healthcare professionals when no professional oversight is present. The law is similar to Nevada’s new AB 406, which specifically pertains to mental health services, but it does not go as far as Illinois’ new HB 1806, which prohibits the use of AI systems in therapy or psychotherapy to make independent therapeutic decisions or conduct other activities without review and approval by a licensed professional.
- (Aguiar Curry) - Cartwright Act Violations. This law amends the Cartwright Act by expressly prohibiting coercive use or distribution of common pricing algorithms as part of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce. The law also amends complaint requirements such that it will be sufficient to provide factual allegations demonstrating that the existence of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce is plausible. Complainants will not be required to allege facts tending to exclude the possibility of independent action.
Vetoed
- (McNerney) - Employment-Related Automated Decision Systems. This bill would have obligated employers to notify employees, contractors, and job applicants about the use of automated decision systems (ADS) in employment decisions, restricted certain ADS functions, and given workers the right to access data used in disciplinary actions. Nevertheless, certain of these obligations will be covered by the California Privacy Protection Agency’s new AI regulations.
- (Ashby) - AI Technology. This bill would have defined “false impersonation” to include the use of digital replicas for impersonation with fraudulent intent.
- (Bauer-Kahan) - AI Companion Chatbots and Children. This bill would have prohibited companion chatbots from engaging in harmful behaviors toward children, such as encouraging self-harm or substance use. The attempt aligns with a broader regulatory push to protect minors online, as seen with the FTC’s recent inquiry into AI companion chatbots with respect to children and teens.
Held for Next Session
- (Bauer-Kahan) - Automated Decision Systems. A bill that would have required businesses and government agencies to notify individuals when automated decision systems are used to make significant decisions – such as those leading to the provision or denial of housing, education, employment, healthcare, and financial services – has been stalled for a third legislative session in a row. The bill’s designation as a two-year bill signals that it still has legs, and we will continue to track its movement. In the meantime, certain of these obligations will be covered by the California Privacy Protection Agency’s new AI regulations.
What’s Next?
This year’s new laws in privacy and AI might be understood as doubling down on the familiar regulatory tools of transparency and mandated consumer rights. For example, the Transparency in Frontier Artificial Intelligence Act (TFAIA) abandons the robust safety controls featured in its vetoed 2024 predecessor, SB 1047, and embraces transparency as the leading safeguard for frontier AI models. Other AI laws will require disclosures designed to prevent misleading consumers about the level of human involvement in the technology.
Lawmaking is but one of several controls that the state is exerting over AI and privacy. The CPPA has finalized wide-ranging regulations covering AI, privacy risk assessments, and cybersecurity audits. Further, the CPPA and the California Attorney General’s Office continue to exercise their enforcement authority over data privacy with increasing activity.
Manatt will continue to monitor this space closely as lawmakers across the country introduce legislation designed to regulate uses of personal data and AI technologies, and it will provide additional guidance along the way.