Tuesday, 05 June 2012 07:33

Statement to the Standing Committee on Access to Information, Privacy and Ethics

Written by  Teresa Scassa

Below is the statement I made to the House of Commons Standing Committee on Access to Information, Privacy and Ethics on May 31, 2012. The Standing Committee had convened hearings on the following motion:

Be it Resolved: That the Committee study the efforts and the measures taken by Google, Facebook and other social media to protect the personal information of Canadians, and that the Committee report its findings back to the House.

I would like to begin by saying that I think it is very important that more attention be given to data protection and privacy in relation to the activities of social media companies. I do find it somewhat ironic, however, that the Committee’s mandate has been framed in terms of studying the efforts and measures taken by social media companies to protect the personal information of Canadians. It is a bit like studying the efforts made by foxes to protect the lives of the chickens.

I note that to the extent that Google, Facebook and other social media companies attempt to protect the personal information of Canadians, these efforts are shaped by data protection law. The adequacy of our data protection legislation must therefore be a focus of attention. The amendments arising from the first five-year review in 2006 have yet to make it through Parliament; the second five-year review is already late in getting underway. These should be matters for concern, particularly since the data protection environment has changed substantially since the law was first enacted. The current law is particularly weak with respect to enforcement: the Commissioner has no order-making powers and lacks the ability to impose fines or other penalties in the case of particularly egregious conduct.

The focus on social media and privacy has two broad aspects. The first relates to how individuals use these tools to communicate amongst themselves. In this regard we hear concerns about employers accessing Facebook pages, people posting the personal information of others online, criminals exploiting Facebook information, and so on. These are concerns about information that individuals choose to share, the consequences of that sharing, and the norms that should govern this new mode of interpersonal exchange. The second aspect, and the one on which I will focus my attention, is the role of these companies in harvesting – or in facilitating the harvesting of – massive amounts of information about us in order to track our online activity, consumption habits, and even patterns of movement. In this respect, the attention given to large corporations such as Facebook and Google is important, but there are also many other players in the digital environment engaging in these practices.

The business models of social media companies are generally highly dependent upon the personal data of their users. Social networking, search engines, email and many other services are offered to us for free. By hosting our content and tracking our activities, these services are able to extract a significant volume of personal data. The nature and quality of this data is enhanced by new innovations. For example, information about the location and movements of individuals is highly coveted. More and more individuals carry location-enabled smartphones with them and use these devices for social networking and other online activities. Even computer browsers are now location-enabled, and thus information about our location is routinely gathered in the course of ordinary internet activities.

The point is that more and more data of increasingly varied kinds are being sought, collected, used and disclosed. This data is compiled, matched and mined in order to profile consumers for various purposes, including targeted behavioural marketing. In some cases, this data may be shared with third-party advertisers, with application developers or with related companies. Even where the data is de-identified, its fine-textured nature may still leave individuals identifiable, as companies such as AOL and Netflix have learned the hard way. Individuals may also still be identifiable from detailed profile information, and the substantial volumes of information gathered about us make us highly vulnerable to data security breaches of all kinds.

It has become very difficult to protect our personal data, particularly in contexts where privacy preferences are set once (and often by default) and the service is one which we use daily or even multiple times each day. It is often difficult to determine what information is being collected, how it is being shared and with whom. Privacy policies are often too long, unclear, and remote for anyone to actually read and understand. We now enter into a myriad of transactions each day and there simply isn’t time or energy to properly “manage” our data. It is a bit like walking through a swamp and being surrounded by a cloud of mosquitoes. To avoid being bitten we can swat away; we can even use insect repellents or other devices, but in the end we are inevitably going to be bitten, often multiple times.

It is also becoming increasingly difficult to avoid entering this swamp. People use social media to keep family and friends close, regardless of how far apart they live, or because the social network communities have become a part of how their own peer groups communicate and interact. Increasingly businesses, schools, and even governments are developing presences in social media, which give even more impetus to individuals to participate in these environments. Traditional information content providers are also moving to the Internet and to Facebook and Twitter, and are encouraging their readers/viewers/listeners to access their news and other information online and in interactive formats. These tools are rapidly replacing traditional modes of communication.

To date, our main protection from the exploitation of our personal information in these contexts has been data protection law. Data protection laws are premised on the need to balance the privacy interests of consumers with the needs of businesses to collect and use personal data. But in the time since PIPEDA was enacted, this need has become a voracious hunger for more and more data, retained for longer and longer periods of time. The need for data has shifted from information required to complete transactions or to maintain client relationships to a demand for data as a resource to be exploited. This shift risks gutting the consent model on which the legislation is based. This new paradigm deserves special attention and may require different legal norms and approaches.

Under the traditional data protection model, the goal was to enable consumers to make informed choices about their personal data. In the big data context, informed choices are virtually impossible to make. Beyond this, there is an element of servitude that is deeply disturbing. Nancy Obermeyer uses the term “volunteered geoslavery” to describe a context where location-enabled devices report on our movements to any number of companies without us necessarily being aware of this constant stream of data. She makes the point that equipping individuals with sensors that report on their activities leaves them vulnerable to dominance and exploitation; yet this is a growing reality in our everyday lives. Going beyond the simple collection of data, social networking services encourage users to make these sites the hub of their daily activities and communications.

Our personal data is a resource that businesses large and small regularly exploit. The data is used to profile us so as to define our consumption habits, to determine our suitability for insurance or other services, or to apply price discrimination in the delivery of wares or services. We become data “subjects” in the fullest sense of the word. There are few transactions or activities that do not leave a data trail.

As noted earlier, many so-called “free” services such as social networking sites, document sharing sites, cool applications, and even internet searching, are actually premised upon the ability to extract user data. In the 2011 decision of the Quebec Superior Court in St. Arnaud c. Facebook, a judge refused to certify a class action lawsuit against Facebook. To do so would have required classifying the site’s terms of use as a consumer contract, so that Quebec law could override the clause providing that all disputes would be settled under the laws of California and adjudicated by California courts. The Quebec Court found that there was no consumer contract because the Facebook service is entirely free, whereas a consumer contract “is premised on payment and consideration.” The judge found that there was no obligation placed on users that could be regarded as a form of consideration.

This case demonstrates how the provision of personal data is overlooked as an element of the contract between the company and the individual. It is treated instead as a matter governed by tangential privacy policies. This lack of transparency regarding the quid pro quo makes it the consumer’s sole responsibility to manage their personal information. Concerns that excessive amounts of personal information are being collected can then be met by assertions that people simply don’t care about privacy. Regarding the sharing of personal data as part of a consumer contract for services, by contrast, places both competition law and consumer protection concerns much more squarely in the forefront. In my view, it is time to address these concerns explicitly.

Another social harm potentially posed by big data is of course, discrimination. Oscar Gandy has written about this in his most recent book. We understand how racial profiling leads to injustice in the application of criminal laws. Profiling, whether based on race, sex, sexual orientation, religion, ethnicity, socio-economic status or other grounds, is a growing concern in how we are offered goods or services. Through big data, corporations develop profiles of our tastes and consumption habits; they channel these back to us in targeted advertising, recommendations and special promotions. When we search for goods or services, we are presented first with those things which we are believed to want. We are told that profiling is good because it means we don’t have to be inundated with marketing material for products or services that are of little interest. Yet there is also a flip side to profiling. It can be used to characterize individuals as unworthy of special discounts or promotional prices; unsuitable for credit or insurance; uninteresting as a market for particular kinds of products and services. Profiling can and will exclude some and privilege others.

I have argued that big data alters the data protection paradigm, and that social networking services, along with many other “free” internet services are major players in this regard. To conclude my remarks, I would like to focus on the following key points.

1) The collection, use and disclosure of personal information is no longer simply an issue of privacy, but raises issues of consumer protection, competition law, and human rights;

2) The nature and volume of personal information collected from social media sites and other “free” internet services goes well beyond transaction information and relates to the activities, relationships, preferences, interests and location of individuals;

3) Data protection law reform is overdue, and may now require a reconsideration or modification of the consent-based approach, particularly in contexts where personal data is treated as a resource and personal data collection extends to movements, activities and interests;

4) Changes to PIPEDA should include greater powers of enforcement for data protection norms, which might include order-making powers, and the power to levy fines or impose penalties in the case of egregious or repeated transgressions.

Teresa Scassa
