Regulation of AI Systems is Already Here – Look to Data Protection Laws

Privacy and Data Security

Amid the scramble to reckon with the rapid advancement of generative artificial intelligence models like ChatGPT, Bard, DALL-E, and MidJourney, and the promise of other AI and machine learning systems (for simplicity, “AI systems”) to come, legal and business professionals across industries are asking whether these systems will be regulated, and if so, how.

A partial answer is that some regulations are here already, in the form of U.S. and international privacy laws. And more are coming. While many government agencies, legislators, and industry groups have called for standardized AI best practices and further research through policy statements, privacy laws impose requirements on data collection, use, and sharing today. Privacy laws, for example, require organizations to publicly and specifically disclose some of their uses of AI systems and allow users to opt out in certain cases. Privacy laws also may require organizations to disclose and respond to individual privacy rights, such as the right to access, delete, limit, and correct personal data fed into AI system prompts, outputs, or large language model (LLM) training sets. In addition, many states and non-U.S. jurisdictions are drafting new regulations and laws governing the use of AI systems from a data protection perspective.

Below, we provide a short summary of some of the most notable requirements:

  • The right to opt out of automated profiling. Since January 2023, Virginia residents have had the right to opt out of any automated processing of their personal data for certain profiling purposes under the Virginia Consumer Data Protection Act (VCDPA). The opt-out right applies to any automated processing by a business to evaluate, analyze or predict personal aspects of an individual, including their economic situation, health, personal preferences, interests, behavior, location or movements, if such profiling is used to produce decisions with legal or similarly significant effects, such as the provision or denial of financial services, education, criminal justice, employment, health care or access to basic necessities. Colorado and Connecticut residents will have a substantially similar right beginning in July 2023. The Colorado Privacy Act’s (CPA) new regulations, in particular, distinguish between “solely automated processing,” “human reviewed automated processing” and “human involved automated processing,” with varying opt-out rights and disclosure obligations for each type of activity.
  • Public disclosures of automated profiling and inferences. California and Virginia currently require specific disclosures of the automated profiling activity discussed above, with Colorado and Connecticut to follow in July 2023. For example, the California Consumer Privacy Act (CCPA) requires businesses to specifically identify the collection of any “inferences” derived from other personal data, defined as any “derivation of information, data, assumptions, or conclusions” from other data. Many AI systems produce such inferences, and their use therefore may trigger a disclosure duty.
  • Consent to use. In Colorado, the CPA will require businesses to obtain express consent from individuals before using AI to analyze their personal data, if resulting inferences indicate sensitive attributes like their race or ethnicity, religious beliefs, mental or physical health condition or diagnosis, sex life or sexual orientation, or citizenship status. Colorado, Connecticut and Virginia laws also explicitly call for opt-in consent for any secondary uses of personal information, which may include building LLMs or AI “learning” sets. Such opt-in consent is generally expected across all privacy and consumer protection laws (like the FTC Act or the GDPR) if the new AI use is incompatible with prior privacy notices. (See Purpose limitation below.)
  • Restrictions on sharing with third-party AI platforms. California and Virginia require, and Colorado and Connecticut will soon also require, businesses to disclose if they share personal information with third parties, including AI platforms. If those platforms do not qualify as traditional vendors or service providers, the laws may offer individuals the right to opt out of such sharing. In such circumstances, specific contract terms between the business and these platforms also may be required.
  • Chatbot disclosure laws. California and New Jersey currently require businesses to disclose the use of automated chat messaging services—chatbots—to end users. Accordingly, many deployments of AI-powered chatbots must be clearly labeled as such.
  • Rights to delete, access and correct. California and Virginia residents, and soon Colorado and Connecticut residents, have the right to request that regulated businesses take various actions regarding the personal data they hold, which can include a broad swath of data from basic contact information to device identifiers, IP addresses, purchasing and browsing histories, visual and audio data, and biometrics. These include the right to request that all personal data be deleted, corrected or disclosed in an access report, subject to various exceptions and limitations. Businesses must evaluate whether these rights will extend to copies of personal data used in training or prompts to an AI system.
  • Purpose limitation, data minimization, and avoidance of dark patterns. The CCPA and similar laws now codify several long-standing privacy principles, including limiting the purposes for which personal data is used (“purpose limitation”) and limiting the collection of personal data to only what is needed to accomplish those specific purposes (“data minimization”). State laws and the FTC also forbid the use of user interfaces that subvert or impair user choice or autonomy, known as “dark patterns.” These principles have newfound relevance for many use cases of AI systems.
  • International law. The General Data Protection Regulation (GDPR) in Europe and the similar UK GDPR generally require organizations to obtain consent before using personal data for decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects on individuals. On April 1, 2023, the Italian Data Protection Authority ordered OpenAI to stop processing Italians’ personal data due to alleged GDPR violations, and reportedly lifted the order once the company addressed the Italian regulator’s concerns. Other privacy regulators in Canada, Germany, France, and Spain have reportedly commenced similar investigations, and the European Data Protection Board has launched a dedicated AI task force “to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.”

What’s to come

The range of potential laws governing AI systems is expansive, reaching beyond privacy to encompass intellectual property, bias and discrimination in many sectors (employment, housing, health care and financial services), technology transactions, and national security. However, data protection authorities are racing ahead of many other regulators. For example, in early 2023 the California Privacy Protection Agency (CPPA) issued an Invitation for Preliminary Comments on Proposed Rulemaking to address opt-out rights related to automated systems and decision-making technology. We can expect the CPPA to provide additional clarity on the use of AI when it issues draft rules, potentially later this year.

New U.S. state privacy laws like those discussed above will affect AI use as they become effective in the months to come, including laws in Utah (effective December 31, 2023), Iowa (January 1, 2024), Tennessee (July 1, 2024), likely Texas (March 1, 2024), Montana (October 1, 2024) and Indiana (January 1, 2026). Efforts by states like Connecticut and Texas to pass new AI laws in 2023 remain ongoing, while other efforts, including California’s AB 331 and Minnesota’s SF 1441, have been unsuccessful.

Future regulation is not limited to the United States. The European Union continues to work on its Artificial Intelligence Act (AIA), which would supplement the GDPR and introduce a risk-based approach to data protection and transparency in AI systems. The AIA is projected for adoption in 2024 at the earliest.

How we can help

In what has become a dynamic and volatile AI environment, it is vital to keep abreast of new legal developments, including new regulations and judicial interpretation. We recommend that organizations of all types and sizes begin creating an internal governance structure. This would start with a permissible use policy, an AI impact assessment framework, and a small team with baseline standards and procedures to address new AI technologies and use cases as they emerge.


ATTORNEY ADVERTISING pursuant to New York DR 2-101(f)

© 2024 Manatt, Phelps & Phillips, LLP. All rights reserved.