50 Years of Data Privacy Law in 5 Minutes
The Dawn of Data Privacy
The struggle for data privacy began when Charles W. Bachman designed the first computerized database management system in the early 1960s. With the advent of shared computing, and especially mobile networking, keeping data in the right hands has become something that requires a method and a plan. Until recently, however, protecting the individual’s right to data privacy was seen as an issue affecting primarily the information technology industry, not something an individual consumer could influence. It’s no accident that the most significant privacy legislation for consumer data in the United States, such as the forward-thinking California Consumer Privacy Act (CCPA), has originated either from the tech industry itself or from a state densely populated by information technology workers. Tracing a timeline of privacy legislation in the United States alongside Europe’s since the 1970s reveals interesting regional trends.
One of the first privacy victories for individuals came in 1970, when a prophetic piece of legislation, the Fair Credit Reporting Act (FCRA), was born of concern over “fairness, impartiality, and a respect for the consumer’s right to privacy.” Its purpose was to protect the confidentiality, integrity, and availability of consumer credit information, but it wasn’t until the 2000s that the right to privacy became a point of popular focus. The 2017 Equifax data breach, which exposed the personal information of an estimated 145.5 million Americans (and led to a $700 million settlement agreement between Equifax and the Federal Trade Commission), suggests that the purpose of the FCRA is as relevant today as it was 50 years ago.
By the 1970s, those involved in computing and early networking could see a rising tide for the individual’s right to data privacy. In 1972 the Canadian Departments of Communications and Justice co-authored the Privacy and Computers report. The same year the Swedish Committee on Automated Personal Systems released Data and Privacy, and the National Academy of Sciences published Databanks in a Free Society. Then in 1973 the US Department of Health, Education, and Welfare (HEW) published a groundbreaking report entitled Records, Computers and the Rights of Citizens.
The report—issued by the HEW’s Advisory Committee on Automated Personal Data Systems—predicted that computerized, interconnected databases would soon become the modus operandi for storing the personal information of American citizens. As a precaution the Committee concluded that “special safeguards should be developed to protect against potentially harmful consequences for privacy and due process.” The Committee identified principles to form the basis of these safeguards and codified them into what later became the Fair Information Practices (FIPs):
- There must be no personal data record-keeping systems whose very existence is a secret.
- There must be a way for an individual to find out what information about them is in a record and how it is used.
- There must be a way for an individual to prevent information that was obtained for one purpose from being used or made available for other purposes without consent.
- There must be a way for an individual to correct or amend a record of identifiable information.
- Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.
In the United States, FIPs principles were codified in the Privacy Act of 1974, the year after Records, Computers and the Rights of Citizens was published. While many considered this a victory for individuals’ digital rights, the Privacy Act of 1974 fell significantly short of the recommendations made by the HEW Secretary’s Advisory Committee in one crucial detail: its scope was limited to federal agencies only.
Limited Scope, Limited Protection
Limiting the scope of the Privacy Act was a major victory for U.S. commercial lobbyists, who argued the importance of commerce over the individual’s right to privacy. The restriction contrasted with the Committee’s vision of universally applicable FIPs principles, signaling a turning point for American privacy policies to come. In the decades that followed, the United States repeatedly favored the importance of commerce or national security over the sovereignty of individual rights. Though the roots of FIPs principles extend through virtually all privacy laws, by the 1980s a clear divergence in strategy had emerged between the United States and European approaches to data protection.
Meanwhile, across the Atlantic, the Organisation for Economic Co-operation and Development (OECD) released the Guidelines on the Protection of Privacy and Transborder Flows of Personal Data in 1980. Michael Kirby, then-Chair of the OECD Expert Group on Transborder Data Flows and the Protection of Privacy, noted that one of the primary drivers for the OECD’s privacy guidelines “was anxiety that differing national legal regulations, superimposed on interconnecting communications technology, would produce serious inefficiencies and economic costs, as well as harm to the value of personal privacy.” Although the OECD’s guidelines ensured the trans-border flow of personal data for commerce, they also stressed the importance of privacy by design and by default.
The following year the Council of Europe (COE) ratified the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. The COE Convention proved closer in scope to what the HEW Committee originally envisioned, applying FIPs principles to both the public and private sectors. It set the tone: the breadth and depth of European privacy legislation in the 1980s became a trend that has continued to the present day, ultimately resulting in what may be the most comprehensive data protection and digital privacy regulation to date.
The Rise of GDPR
The architecture of the COE’s Convention extends to the present day. The Charter of Fundamental Rights of the European Union states that “everyone has the right to the protection of personal data concerning him or her.” In 2016 Regulation (EU) 2016/679, better known as the General Data Protection Regulation (GDPR), built on the Charter by establishing seven key principles:
- Lawfulness, fairness and transparency
- Purpose limitation
- Data minimization
- Accuracy
- Storage limitation
- Integrity and confidentiality
- Accountability
GDPR further defined eight individual rights to data protection and digital privacy:
- The right to be informed
- The right of access
- The right to rectification
- The right to erasure
- The right to restrict processing
- The right to data portability
- The right to object
- Rights related to automated decision-making, including profiling
GDPR was greeted with mixed enthusiasm by the security and privacy community, but whatever the lack of industry consensus on its merits, there is no argument about its teeth. To date, GDPR fines have totaled in the hundreds of millions of euros, with some of the heaviest penalties levied against such titans of industry as Google, British Airways, and Marriott, alongside ongoing investigations into Facebook. Resulting fines have the potential to reach upwards of $2.23 billion. While some tech giants such as Apple have a history of embracing privacy as a fundamental human right, others with a more troubling track record, such as Facebook after Cambridge Analytica, have only just begun to warm to the need for data protection by design and by default. Nevertheless, the public’s demand that its digital rights be respected has grown too loud to ignore.
Looking back across the Atlantic, one can see fragmentation beginning in United States privacy legislation. Almost immediately after passing the Privacy Act of 1974, the United States recognized the need for additional privacy regulations but opted for a very different approach than its European counterparts. Rather than passing comprehensive federal legislation to address privacy holistically, the United States began using sector-specific regulations to target narrower areas of concern. The first such sector-specific law was the Family Educational Rights and Privacy Act of 1974 (FERPA), which regulated access to student records. In 1978 Congress passed the Right to Financial Privacy Act (RFPA), requiring government officials to obtain a warrant or subpoena to access financial information.
Privacy via Patchwork
The decades that followed saw a steady stream of U.S. privacy regulations that plugged various holes instead of fixing the structure of data privacy itself:
- 1980 – The Privacy Protection Act (protections for journalists)
- 1984 – The Cable Communications Policy Act
- 1986 – The Electronic Communications Privacy Act (digital surveillance)
- 1988 – The Video Privacy Protection Act (privacy of citizens’ media viewing habits)
- 1994 – The Driver’s Privacy Protection Act
- 1996 – The Health Insurance Portability and Accountability Act (HIPAA)
- 1998 – The Children’s Online Privacy Protection Act (COPPA)
- 1999 – The Gramm-Leach-Bliley Act
- 2001 – The Patriot Act
- 2002 – Creation of the Homeland Security Privacy Office
- 2003 – The Fair and Accurate Credit Transactions Act (FACTA)
- 2003 – The FTC’s National Do Not Call Registry; also, the CAN-SPAM Act
- 2004 – The Intelligence Reform and Terrorism Prevention Act
Today, the U.S. patchwork of privacy regulations has become a dizzying labyrinth for policymakers, law enforcement agencies, security and privacy practitioners, and concerned citizens alike.
Of course, corporations have paid close attention. These days both Amazon and Apple have expressed support for broad-scope federal privacy regulation, but skeptics worry their reasoning may be firmly rooted in protecting their own best interests. In many ways, federal privacy legislation may be the lesser of two evils for big tech firms. Borrowing generously from GDPR, the state of California recently passed Assembly Bill No. 375, better known as the California Consumer Privacy Act (CCPA), arguably the most comprehensive privacy law in the history of the United States. It is fitting that the first state to take such bold steps to protect individuals’ digital rights was California, the center of big tech, and more states have since followed suit.
If stricter regulation is unavoidable it makes sense why many titans of the technology industry would want to throw their support behind federal oversight. In 2018 Victor Tangermann summarized some of the factors at play:
- Federal privacy legislation — if it ever solidifies into an actual Bill — is bound to be shaped by the interest of those companies. Congress listens to tech execs, while consumer-level advocacy groups are locked out of the discussions, as Wired theorizes.
- If weak federal privacy laws are able to supersede stronger state laws like California’s Privacy Act, they could end up benefiting private tech companies, protecting them from more heavy-handed state laws in the future.
- It’s in everyone’s interest to streamline the process of adhering to privacy laws — it’s easier to comply with a single legal framework, rather than 50 different state laws.
How California’s CCPA is a Game-Changer
Privacy practices in the United States have reached a pivotal point. CCPA, signed into law in 2018, takes effect in January 2020, and many residents are lobbying for privacy laws tougher than those supported by the tech industry giants. For the first time since FIPs principles were introduced in 1973, U.S. policymakers are beginning to see the need for more comprehensive privacy measures applicable to both the public and private sectors. This shift in strategy signals the beginning of a convergence between the sector-specific, “plug-the-leak” approach that has been the hallmark of the U.S. and the “severe and sincere” breadth-and-depth approach of the European Union. These trends are cause for cautious optimism, but a long-term data privacy strategy should give personal data the same protections we already offer to other forms of property, such as ideas.
Consider the regulations governing copyright and intellectual property as they apply to a product: if I repackage someone else’s cookies (or a lookalike) under my own label, claiming I made and baked those cookies, I owe the original cookie company money or can be sued for infringement. Yet the same boundaries do not apply to an individual’s data; companies like Google and Facebook use our data, habits, and whereabouts at every moment. These data points are collectively a digital fingerprint. Taking this logic a step further, a person could demand compensation when a company profits from the use of their data; we grant this protection to ideas and to intellectual property, but not to a person’s private consumer data.
Think of how our world would change if companies had to pay a person for use of their most personal information. Our data is perhaps the most private thing about us, deserving well-defined protection from those who might exploit it.
“The right to be let alone is indeed the beginning of all freedom.” – Justice William O. Douglas