If Personal Information is Privacy’s Gatekeeper, then Risk of Harm is the Key: A proposed method for determining what counts as personal information

Abstract 

In the late sixties, with the development of automated data banks and the growing use of computers in the private and public sectors, privacy came to be conceptualized as individuals having “control over their personal information” (Westin, 1967). The principles of Fair Information Practices were elaborated during this period and have since been incorporated into data protection laws (“DPLs”) adopted in various jurisdictions around the world. These DPLs protect personal information, which is defined similarly across jurisdictions (such as Europe and Canada) as “information relating to an identifiable individual”. In the U.S., such information is protected through a series of sectoral privacy statutes focused on “personally identifiable information” (or PII), a notion closely related to personal information. Identical or very similar definitions of personal information were already in use in the DPLs of the early seventies, showing that the current definition dates back to that period.

In recent years, with the Internet and the circulation of new types of information, the efficacy of this definition has come into question. Technological developments have produced new identification tools that make it easier to identify individuals, and data-mining techniques and capabilities are reaching new levels of sophistication. In the era of Big Data, almost any data can be interpreted as personal information (any data can, in one way or another, be related to some individual), so the question arises as to how much data, and which data, should be considered personal information.

In Section 1, I elaborate on how a literal interpretation of the definition of personal information is no longer workable. In light of this, I present a proposed approach to interpreting the definition of personal information, under which the ultimate purpose behind DPLs is taken into account. I then demonstrate that the ultimate purpose of DPLs was to protect individuals against a risk of harm triggered by organizations collecting, using, and disclosing their information. In Section 2, I demonstrate how this risk of harm can be subjective or objective, depending on the data handling activity at stake, and I offer a way forward, proposing a decision-tree test useful when deciding whether certain information should qualify as personal information. The objective of my work is to come to a common understanding of the notion of personal information, the situations in which DPLs should be applied, and the way they should be applied. The approach is meant to provide a useful framework under which DPLs remain effective in light of modern Internet technologies.