Personal data: a shapeshifter that changes depending on who uses it

What has become commonly known as the "Deloitte judgment" is a decision of the Court of Justice of the EU in case C-413/23 P, published in early September 2025.
The judges reiterated that the status of personal data is not absolute but depends on the holder's actual ability to identify, or make identifiable, a natural person. In other words, if someone receives pseudonymized data (and therefore does not know who it refers to) but possesses additional information that, when cross-referenced with the data received, reveals the identity of the individuals involved, then that data is once again, for all intents and purposes, "personal" and therefore fully protected by law.
A (non)revolutionary decision

The "Deloitte" ruling has been hailed as "revolutionary," but in reality it is nothing new at all. The Court had affirmed the same principle in the Breyer case, which is also cited in the decision and analyzed further below. Furthermore, and most importantly, the law, for once, was already clear.
Be that as it may, this new decision no longer allows national data protection authorities to apply questionable interpretations of the rules and requires them to address head-on the issue of Big Tech's accumulation of user data.
The difference between personal data, pseudonymized data, and anonymized data

To understand the nature of the problem resolved by the Deloitte ruling, a legal introduction to a topic characterized by a certain confusion is necessary.
The reference regulation is the General Data Protection Regulation (GDPR), which defines personal data as any data or information that, alone or together with other data, identifies or makes identifiable natural persons. This data is subject to a complex and detailed set of requirements to ensure the protection of the rights and freedoms of data subjects.
Enough information can be removed from a piece of personal data (Mario Rossi, engineer, born in Vattelapesca on May 20, 1980) to make it no longer referable to a specific person (Mario, born in 1980). The data thus loses its "personal" nature and becomes freely transferable without any regulatory restrictions.
The devil, however, is in the details: one must be really certain that the situation is as described, considering two possible options.
The first is that whoever receives the data "Mario born in 1980" has no way of recovering the other information initially associated with the person. In this case, the data is anonymized for the person providing it and anonymous for the person receiving it. The law does not apply.
The second option is that the recipient of data initially "disconnected" from the identity of the individuals it refers to can recover that identity by other means with reasonable effort, or possesses other data, perhaps itself anonymous, that, when cross-referenced with the new data, makes the individual identifiable again. The data then becomes entirely personal again as defined by law, and its use is subject to duties and responsibilities.
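This cross-referencing, often called a linkage attack, can be sketched in a few lines of code. The following is a minimal illustration with entirely hypothetical data: a dataset stripped of names is joined with an auxiliary dataset on the remaining quasi-identifiers, and a unique match turns an "anonymous" record back into personal data.

```python
# Illustrative sketch (hypothetical data): two datasets that are each
# harmless on their own can re-identify a person once cross-referenced.

# Dataset as received: names removed, quasi-identifiers retained.
pseudonymized = [
    {"record_id": "A1", "birth_year": 1980, "city": "Vattelapesca", "profession": "engineer"},
    {"record_id": "A2", "birth_year": 1975, "city": "Roma", "profession": "teacher"},
]

# Auxiliary data the recipient already holds (e.g. a public registry extract).
auxiliary = [
    {"name": "Mario Rossi", "birth_year": 1980, "city": "Vattelapesca", "profession": "engineer"},
    {"name": "Anna Bianchi", "birth_year": 1990, "city": "Milano", "profession": "lawyer"},
]

def reidentify(pseudo, aux):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = {}
    for p in pseudo:
        candidates = [
            a["name"] for a in aux
            if (a["birth_year"], a["city"], a["profession"])
               == (p["birth_year"], p["city"], p["profession"])
        ]
        # A unique match ties the stripped record back to a named individual.
        if len(candidates) == 1:
            matches[p["record_id"]] = candidates[0]
    return matches

print(reidentify(pseudonymized, auxiliary))  # {'A1': 'Mario Rossi'}
```

Whether such a join is feasible "with reasonable effort" is exactly the case-by-case test the Court asks data holders and authorities to apply.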
The principle of law expressed by the judgment

To put it simply, the European judges held that when it comes to personal data, 2+2 can equal 4 or 5, because the sum of two individually anonymous databases can result in a personal database, i.e., one with a greater information content than its individual components.
To decide whether the result is 4 or 5, the judges write, it is therefore necessary to verify on a case-by-case basis whether the person who communicated the data actually stripped it of the elements that identify the people and whether the recipient has, equally effectively, the concrete possibility of recreating the informational identity of the individual using reasonable means.
Access to IP-associated data makes the difference

This conclusion is best understood by analyzing the aforementioned Breyer case, which concerned the treatment of the dynamic IP addresses of visitors to the websites of German public administrations.
When a terminal connects to a network, it receives an IP address, which can stay the same over time (static IP) or change with each connection (dynamic IP).
The mobile access operator is certainly able to associate the SIM card (and therefore the contract holder) with the device used and the assigned dynamic IP address. This means that for the operator, the IP address (along with other information) is personal data.
Conversely, whoever manages the network resource being accessed (for example, a newspaper's site) receives only the IP address and some other technical information about the browser used, the operating system, and so on. In this case, the same IP address that was part of a set of personal data for the access operator is anonymous for the operator of the platform hosting the newspaper: re-associating a dynamic IP with the person using it would mean gaining access to the telecommunications operator's information, which is normally not possible.
However, if the user has registered by declaring their identity or has gone through a paywall, then the IP address, even if dynamic, becomes personal data again.
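The Breyer logic can be restated very compactly: the same log entry is anonymous or personal depending on what side information its holder can cross-reference it with. The sketch below uses hypothetical data and names to make that holder-relative test concrete.

```python
# Illustrative sketch (hypothetical data): the same dynamic IP address is
# anonymous or personal depending on what else its holder can link it to.

# What a website sees in its access log: IP, timestamp, user agent.
access_log = [
    {"ip": "203.0.113.7", "time": "2025-09-04T10:00", "agent": "Firefox"},
]

# Held only by the telecom operator: which subscriber had which dynamic IP.
operator_leases = {"203.0.113.7": "subscriber #4421"}

# Held by the website only if the visitor registered with their identity.
site_accounts = {"203.0.113.7": "mario.rossi@example.com"}

def classify(entry, side_info):
    """A log entry is personal data for a holder whose side information
    can tie the IP back to a person; otherwise it stays anonymous."""
    return "personal" if entry["ip"] in side_info else "anonymous"

print(classify(access_log[0], {}))               # anonymous: site with no accounts
print(classify(access_log[0], operator_leases))  # personal: the access operator
print(classify(access_log[0], site_accounts))    # personal: site with registration
```

The classification is thus a property of the holder, not of the IP address itself, which is precisely the point the Court made in Breyer and repeated in the Deloitte ruling.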
The sensitive topic of analytics

This reasoning applies even more strongly to decentralized analytics services.
Anyone who installs a simple plugin to manage access statistics for their site can choose to do so without knowing who is connecting. When a third-party service is used, however, that provider may hold additional information that, as mentioned, "unmasks" the anonymous user.
Here we come to the most critical aspect of the entire discussion on the responsibilities of the links in the data collection chain.
Applying the principle reaffirmed by the Deloitte ruling, in such a case the party processing personal data (and therefore subject to obligations and responsibilities) is the analytics service provider, not the party sending anonymous data (for example, because there is no paywall, because the user is browsing the freely accessible portion of the site, or because they have blocked trackers and name-based profiling tools).
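The allocation of responsibility along the chain follows the same holder-relative test. The sketch below, with hypothetical identifiers and data, shows a site forwarding a purely technical hit to an analytics provider: the record is anonymous for the site, but personal for a provider that can map the visitor ID to an account.

```python
# Illustrative sketch (hypothetical data): the same analytics hit is
# anonymous for the site forwarding it but personal for a provider that
# can cross-reference it with identifying information.

hit = {"visitor_id": "v-789", "page": "/articolo", "browser": "Firefox"}

# The site keeps no identity data at all.
site_side_info = {}

# The provider also runs, say, a login service and can map visitor IDs
# to named accounts.
provider_side_info = {"v-789": "mario.rossi@example.com"}

def is_personal_for(holder_info, record):
    # Under the Deloitte/Breyer reasoning, the record is personal data
    # only for a holder that can reasonably link it to an individual.
    return record["visitor_id"] in holder_info

print(is_personal_for(site_side_info, hit))      # False: anonymous for the site
print(is_personal_for(provider_side_info, hit))  # True: personal for the provider
```

On this reading, the GDPR obligations attach to the provider that holds the linking information, not to every site that merely forwards technical data.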
According to a commonly held interpretation, however, it does not matter whether a website collects only technical data without having information about the user's identity: it is enough that someone in the supply chain can cross-reference that data with other data for the obligations to extend to all links in the chain. Following this interpretation, however, once again rejected by the European Court, it would virtually never be possible to have truly anonymous data. Scientific research into treatments and therapies for diseases, even incurable ones, would then find itself mired in the quicksand of bureaucratic requirements that add nothing to the protection of rights but reduce the hope of a cure for those who suffer. Regulations such as the AI law currently being approved partially solve the problem, which, however, remains structurally unchanged.
What's changing for Big Tech

Until now, Big Tech has built its technological and industrial models on the assumption that the data it receives from external sources is collected anonymously and therefore evades European regulatory obligations. The industry's adoption of systems based on differential privacy and the widespread use of privacy-enhancing technologies among users are attempting to address the problem, but the underlying issues remain the same.
With this ruling (though in reality it was always the case), it is no longer possible to apply this "rule" indiscriminately. It will be necessary to verify on a case-by-case basis whether the flow of information received allows users to remain anonymous or whether, as mentioned, it allows them to be "unmasked."
At the same time, data protection authorities will no longer be able to apply automatic presumptions and treat entities that forward non-personal data to large analytics providers as subject to the regulation. They will have to determine on a case-by-case basis whether those using such services can actually determine who the person behind the screen is.
The impact of reaffirming a long-known but long-neglected principle could be significant.
This will certainly be the case for Big Tech, which will have to address the issue of reviewing the way it interacts with the links in the profiling chain.
It will be the same for national data protection authorities, who will have to decide whether and how to impose sanctions not only on the tech giants, but also on all those who, at the behest of these giants, contribute to satisfying their endless hunger for information by sending countless small, anonymous pieces of our informational body.
repubblica