To close out 2021, the European Data Protection Board (EDPB) adopted additional General Data Protection Regulation (GDPR) data breach notification guidelines in Guidelines 01/2021 on Examples regarding Personal Data Breach Notification (Guidelines). The Guidelines supplement—and do not replace—the 2018 Article 29 Working Party guidance on breach notification. While the EDPB provides realistic examples and practical guidance for security incidents, the Guidelines oversimplify the complexities of breach notification analysis and what is required to reach a decision point. The Guidelines’ simplistic approach may magnify legal exposure for organizations that respond to a security incident without thorough technical and legal analysis informed by an intensive investigation.
Regulators’ efforts to clarify what can be very challenging matters, such as GDPR breach reporting requirements, should be welcomed; oversimplification, however, brings its own frustrations, risks and challenges. The Guidelines present various practical scenarios that organizations may face today. But it is important that legal and security professionals avoid the temptation to interpret (and implement) the Guidelines as black-and-white rules, and instead recognize that security incident investigations operate in the world of “grey.”
In addition, U.S. companies subject to the GDPR must recognize the distinctions between the U.S. and EU breach notification frameworks when considering the EDPB’s Guidelines. In the United States, a “security breach” typically requires unauthorized access, acquisition and/or use of personal information. In contrast, GDPR focuses on breaches of confidentiality (unauthorized or accidental disclosure or access), integrity (unauthorized or accidental alteration) and availability (unauthorized or accidental access or destruction) of personal data. Further, information protected by most U.S. breach laws is limited to finite data sets, whereas GDPR’s definition of “personal data” is open-ended and not exhaustive: “any information relating to an identified or identifiable natural person.”
Under the GDPR, determining whether breach notification would be required is an intensive multistep inquiry, which requires, at minimum, the following analysis. Each of these questions can be incredibly challenging and expensive to answer, requiring legal and forensic investigators.
- Is personal data protected by the GDPR at issue?
- Has a “breach” (as defined by law) occurred—i.e., has there been a “breach of security leading to the accidental or unlawful destruction, loss, alteration, [or] unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed”?
- Does the incident present risk to the rights and freedoms of the impacted individuals? If yes, then (according to the EDPB), regulators must be notified, and if there is a high risk to the individuals’ rights and freedoms, such individuals must be notified as well.
- What other legal obligations may be triggered? These may include, for example, public reporting to the markets, contractual obligations and more.
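For teams that encode their incident-response playbooks in software, the inquiry above can be sketched as a simple decision flow. This is an illustrative simplification only, not legal advice: the risk levels and function below are hypothetical, and, as discussed throughout, each input to this function can itself require extensive legal and forensic analysis to determine.

```python
# Illustrative sketch of the GDPR notification inquiry described above.
# The enum, function, and risk tiers are hypothetical simplifications;
# real determinations require legal and forensic investigation.
from enum import Enum

class Risk(Enum):
    NONE = 0  # no risk to data subjects' rights and freedoms
    RISK = 1  # some risk: supervisory authority must be notified
    HIGH = 2  # high risk: affected individuals must also be notified

def gdpr_notifications(personal_data_at_issue: bool,
                       breach_occurred: bool,
                       risk: Risk) -> list:
    """Return the GDPR notifications triggered under this sketch."""
    # Steps 1 and 2: no GDPR personal data, or no "breach" as defined
    # by law, means no GDPR notification obligation.
    if not personal_data_at_issue or not breach_occurred:
        return []
    # Step 3: risk threshold drives who must be notified.
    if risk is Risk.NONE:
        return []
    notices = ["supervisory authority"]
    if risk is Risk.HIGH:
        notices.append("data subjects")
    return notices
```

Note that this sketch omits the fourth question entirely: other obligations (market disclosures, contractual notice) follow their own triggers and timelines and cannot be reduced to the same flow.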
With these frameworks in mind, the following information highlights both beneficial and potentially challenging takeaways from the Guidelines.
Beneficial Takeaways From the Guidelines
(1) Proper investigation is necessary to determine whether there was a security breach.
The GDPR’s principles-based method of regulating data handling underscores the importance of taking a careful and thoughtful approach to preparing for, and responding to, security incidents. The Guidelines make clear that a company should not jump from discovering a security event or incident to declaring a “breach” without pausing to investigate; risk cannot be assessed without context or legal and forensic investigation. The Guidelines acknowledge that “a short period of investigation [can be necessary] in order to establish whether or not a breach has in fact occurred.” The investigation should inform the company’s risk analysis and understanding of the what, why, when and how of the incident. Only then can the company truly evaluate the incident’s potential impact and whether it is, indeed, a data breach, and if so, whether notification is required under the GDPR.
(2) Facts matter—and legal counsel does too.
The Guidelines’ scenarios demonstrate the criticality of factual and legal analysis. The Guidelines highlight that not all security incidents are created equal; a single change in fact pattern can alter the company’s legal obligations—a reminder that the determination of whether a security incident is a “security breach” is a legal one. Companies should ensure they conduct this analysis under cover of the attorney-client privilege to protect their deliberations and decision making. The actions taken and words chosen in evaluating and defining a “breach” matter. Perhaps the EDPB put it most simply in its introductory commentary: “As part of any attempt to address a breach[,] the controller and processor should first be able to recognize one.”
(3) Basic security principles remain critical.
Making the assessment more complex, the EDPB expects companies to assess an incident against the company’s technical, administrative and physical controls to determine whether a breach actually occurred. As a reminder, the GDPR requires companies to protect personal data using reasonable technical and organizational controls, measuring what is reasonable based on the importance and sensitivity of the personal data at issue. This means that, in evaluating the incident, the investigation also must assess the controls that are (or are not) in place to determine whether they protect against the vulnerability exploited. This is a good consideration, too, for evaluating security incidents under U.S. law, which often describes a breach in terms of both access or acquisition and a breach of a system’s confidentiality, integrity or availability. But one notable distinction is this: The EDPB makes clear that a lack of sufficient log data may require a company, when conducting the risk assessment, to assume data was accessed even if forensic evidence is not available to prove it.
This approach reiterates the importance of proactive investment in a comprehensive information security program: Stronger security safeguards correlate inversely with legal and operational risk. While acknowledging there is no “one-size-fits-all” approach, the Guidelines recommend some unsurprising technical controls, such as encryption, multifactor authentication, patch management, appropriate logging, reliable backups and security testing. Legal frameworks and regulators increasingly incorporate prescriptive technical controls in defining what constitutes “reasonable” security. Of course, “reasonable” controls may look different depending on the organization and (for purposes of the GDPR) the types and sensitivity of the data the controls are protecting. “It is always better to prevent data breaches by preparing in advance,” the Guidelines state, “since several consequences of them are by nature irreversible.”
Potential Challenges Created by the Guidelines
On the other hand, the Guidelines set forth certain positions that could make it more challenging for companies to navigate a security incident:
(1) The Guidelines may undervalue the forensic investigation.
Although the Guidelines rightfully acknowledge that an investigation is necessary, they also potentially overlook the importance of the forensic investigation:
The breach should be notified when the controller is of the opinion that it is likely to result in a risk to the rights and freedoms of the data subject. Controllers should make this assessment at the time they become aware of the breach. The controller should not wait for a detailed forensic examination and (early) mitigation steps before assessing whether or not the data breach is likely to result in a risk and thus should be notified.
This approach ignores that, in many cases, a risk analysis cannot (meaningfully) be conducted without the forensic investigation. Indeed, the Guidelines concede that identifying a root cause is necessary to fully assess the risks posed by an incident. While it may be possible to make an initial assessment, determining the risk impacts prematurely may result in unnecessary alarm (if the incident turns out to be low risk) or in confusing and inaccurate communications (if the forensic investigation reveals new, material facts that increase the risk). Coupled with EDPB’s recommendation that companies should assume the worst-case scenario (see item 3 below), this approach runs counter to basic investigative principles, endorsing reliance on conjecture rather than fact, which is potentially harmful to both the organization and the data subjects if the company gets it wrong.
(2) Certain recommendations ignore technical realities and threat actor behavior.
In other places, the Guidelines seem to overlook technical realities of incident mitigation and remediation. For example, the EDPB opines that a ransomware attack—where no data was exfiltrated and where the data could be restored—requires notification to a supervisory authority, in part due to the time it took to restore the data. The Guidelines point to the “lack of an electronic backup database” as cause for regulatory notice, notwithstanding that the data ultimately was restored. The EDPB seemingly views five days as too long for a system restoration due to the potential for “some delays” in customer orders. This position appears to ignore the technical realities—and complexities—of the restoration process, particularly in a scenario that does not threaten physical harm (such as a hospital system) or involve sensitive information. Even for organizations with mature security programs, the restoration process can take time to ensure not only system functionality but also security. Restoring a system too quickly can lead to reinfection, thus resulting in increased security risk. This is a consequence of the EDPB’s determination that an incident that impacts the availability of data, even in the absence of any other impact, constitutes a “data breach” under GDPR.
The Guidelines at times also do not take into account threat actor behavior. In particular, in a scenario where there was no data exfiltration, and data was restored within two days, the Guidelines recommend a public disclosure of a ransomware attack on a hospital. In that scenario, a subset of patients were impacted by service disruptions, but the remaining population’s personal data was only encrypted (and ultimately restored). Nonetheless, the EDPB believes the latter population (former patients spanning at least 20 years) should be notified—because, the EDPB reasons, the ransomware represented a high risk to former patients’ rights and freedoms—via public disclosure. Presumably, this guidance is focused on potential challenges in reaching this population as well as transparency for and the protection of impacted data subjects, but this comes at the cost of potentially provoking the threat actor or impeding law enforcement efforts. Threat actors often study how their victims respond to an attack and react accordingly . . . sometimes months later. During this period, law enforcement may be conducting an investigation, and depending on the threat actor group, the company may have restrictions on what it can and cannot say. The EDPB’s recommendations here do not seem to address those considerations.
The EDPB also appears to conclude that any incident impacting the availability of a special category of personal data under the GDPR presents a high risk to the data subjects, even though it is difficult to identify the actual risk to a data subject whose data is unlikely to be needed during the period of unavailability, and the data was (quickly) restored to its original state. There is a difference between current patients whose care could be reasonably expected to be impacted and past patients whose care would not—and describing it as “a residual risk of high severity to the confidentiality of the patient data” does not answer the question. This is not to say that public notice ultimately should not be provided in this scenario, perhaps after the investigation is complete or after working with the applicable supervisory authority or law enforcement organization to reach a degree of comfort that public disclosure will not increase the risk to the data subjects. But in their focus on a strict interpretation of a GDPR breach notification exception, the Guidelines ignore the factors, and the timing, that should be considered before announcing the attack publicly.
(3) Aspects of the EDPB’s approach may increase an organization’s legal risk and could negatively impact data subjects.
While the Guidelines recognize that not every security incident warrants breach notification, it is apparent that the bar for notification, at least to EU supervisory authorities, is relatively low. The Guidelines seemingly take this view: If unsure, notify now, and correct later. In fact, the EDPB states: “When uncertain about the specifics of the illegitimate access, the worse scenario should be considered and the risk should be assessed accordingly.” In theory, any security incident could present a risk to data subjects’ rights and freedoms (the threshold for notifying an EU supervisory authority), and a direction to err on the side of assuming the risk exists effectively requires an organization to inform a supervisory authority over something as simple as a server rack going offline for reasons, and for a duration, not yet known.
This simplistic approach could do more harm than good. Notifying data subjects of an incident that ultimately is innocuous may cause needless panic, stress or concern and could lead to the sort of unnecessary data breach disclosures and notification fatigue that the prior European guidance cautions against. Similarly, in the hospital ransomware scenario above, a public disclosure made in an abundance of caution and without the strategic legal, forensic and—potentially—law enforcement guidance that a GDPR recital suggests might be appropriate could cause a threat actor to take actions causing more harm to data subjects (such as prompting a data dump on the Dark Web).
Likewise, the company may increase its legal risk exposure by opening itself up to regulatory inquiry for an event that did not actually require legal disclosure, or by making misrepresentations or inaccurate statements based on premature findings (once again, words matter). These risks are potentially elevated for companies with U.S. operations, as the United States tends to have a more active and contentious litigation environment, especially in the data security context. Mere announcements of a security incident can result in multiple civil suits within days of the disclosure. This dichotomy raises complex legal considerations, such as whether notice in one jurisdiction impacts obligations in another. For example, breach notification may not be legally required in the United States for many of the scenarios requiring notification in the EU under the Guidelines. This evaluation becomes even more complicated for publicly listed companies, where there are additional disclosure obligations to shareholders.
It should be noted that it is extremely difficult (or impossible) to account for every possible security incident on paper, and for the Guidelines to be practical, the EDPB certainly needed to try to simplify its guidance. Overall, it is critical that companies recognize that the Guidelines are just that: general guidance. They should not be construed as black-and-white, rigid rules. Every security incident is different, and to effectively respond to one requires careful and thorough technical and legal analysis informed by an intensive investigation.
Relatedly, in preparing for an incident, evaluating organizational controls is not a box-checking exercise. Rather, it requires considering the controls in the context of the organization’s business operations and risk appetite. Some controls may be valid in some contexts but impractical or infeasible in others. In developing and maturing the security program, the organization should implement those controls that appropriately manage the risk in the context of the organization’s operations. A control that mitigates only a small amount of risk at a high operational cost, even if it appears on multiple recommended control lists, may not be a control that is appropriate for that organization. Have an honest conversation: How much risk is this control addressing, what risk is it, and what is the operational cost of doing so?
The bottom line in preparing for or responding to any security risk is this: Do not rush to conclusions or through a checklist. Investigate, evaluate, take stock and then act—appropriately.