Teresa Scassa - Blog


Some years ago, a reporter from the Toronto Star filed an access to information request to obtain the names of the top 100 physician billers to Ontario’s Health Insurance Program (OHIP). She also sought the amounts billed, and the physicians’ fields of specialization. The information was in the hands of the Ministry of Health and Long-Term Care, and the request was made under Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA). The Ministry refused to disclose the records on the basis that they constituted the physicians’ personal information. An adjudicator with the Ontario Information and Privacy Commissioner’s Office disagreed, and ordered disclosure. An appeal by the Ontario Medical Association (OMA) to the Ontario Divisional Court was unsuccessful (discussed here). On August 3, 2018, the Ontario Court of Appeal dismissed the OMA’s further appeal of that decision.

The relatively brief and unanimous Court of Appeal decision made short work of the OMA’s arguments. The Court found that the adjudicator’s determination that the information was not personal information was reasonable. FIPPA specifically excludes from the definition of personal information “the name, title, contact information or designation of an individual that identifies the individual in a business, professional or official capacity”. The OMA had argued that the disclosure of the names in conjunction with the billing information meant that the disclosure would include personal information that “describes an individual’s finances, income, assets, liabilities…”. FIPPA provides in s. 21(3) that the disclosure of personal information is presumptively an invasion of privacy when it falls within this category. However, the Court found that the billing information constituted “the affected physicians’ gross revenue before allowable business expenses such as office, personnel, lab equipment, facility and hospital expenses.” (at para 25) The Court agreed with the adjudicator that the gross billing information did not reveal the actual income of the physicians. It stated: “where, as here, an individual’s gross professional or business income is not a reliable indicator of the individual’s actual personal finances or income, it is reasonable to conclude not only that the billing information is not personal information as per s. 2(1), but also that it does not describe “an individual’s finances [or] income”, for the purpose of s. 21(3)(f).” (at para 26)

The OMA had resisted disclosure because the billing information might give the public, who might not understand the costs associated with running a medical practice, a distorted idea of the physicians’ personal finances. Ironically, the Court found that billing information and actual income were so different that the billing information did not amount to personal information. The OMA had objected to what it considered to be the OIPC’s changed position on the nature of this type of information; in the past, the OIPC had accepted that this information was personal information and had not ordered disclosure. The Ontario Court of Appeal observed that the adjudicator was not bound to follow precedent; it also observed that there were differences of opinion in past OIPC decisions on this issue, and that no clear precedent existed in any event.

The decision is an important one for access to information. A publicly funded health care system consumes substantial resources, and there is a public interest in understanding, analyzing, critiquing and discussing how those resources are spent. The OMA was understandably concerned that public discussions not centre on particular individuals. However, governments have been moving towards greater transparency when it comes to monies paid to specific individuals and businesses, whether they are contractors or even public servants. As the Court of Appeal noted, FIPPA balances access to information with the protection of personal privacy. The public interest clearly prevailed in this instance.

Published in Privacy

A recent Finding from the Office of the Privacy Commissioner of Canada contains a consideration of the meaning of “publicly available information”, particularly as it relates to social media profiles. This issue is particularly significant given a recent recommendation by the ETHI committee in its Report on PIPEDA reform. PIPEDA currently contains a very narrowly framed exception to the requirement of consent for “publicly available information”. ETHI had recommended amending the definition to make it “technologically neutral”. As I argued here, such a change would make it open season for the collection, use and disclosure of the social media profiles of Canadians.

The Finding, issued on June 12, 2018, came after multiple complaints were filed by Canadians about the practices of a New Zealand-based social media company, Profile Technology Ltd (PTL). The company had obtained Facebook user profile data from 2007 and 2008 under an agreement with Facebook. While its plan might originally have been to create a powerful search engine for Facebook, in 2011 it launched its own social media platform and used the Facebook data to populate that platform with profiles. Individuals whose profiles were created on the site had the option of ‘claiming’ them. PTL also provided two avenues for individuals who wished to delete the profiles. If an email address had been part of the original data obtained from Facebook and was associated with the PTL profile, a user could log in using that email address and delete the account. If no email address was associated with the profile, the company required individuals to set up a helpdesk ticket and to provide copies of official photo identification. A number of the complainants to the OPC indicated that they were unwilling to share their photo IDs with a company that had already collected, used and disclosed their personal information without their consent.

The complainants’ concerns were not simply that their personal information had been taken and used to populate a new social media platform without their consent. They also felt harmed by the fact that the data used by PTL was from 2007-2008, and did not reflect any changes or choices they had since made. One complaint received by the OPC related to the fact that PTL had reproduced a group that had been created on Facebook, but that had since been deleted from Facebook. Within this group, allegations had been made about the complainant that he/she considered defamatory and bullying. The complainant objected to the fact that the group persisted on PTL and that the PTL platform did not permit changes to public groups at the behest of single individuals, on the basis that it treated the group description “as part of the profile of every person who has joined that group, therefore modifying the group would be like modifying all of those people’s profiles and we cannot modify their profiles without their consent.” (at para 55)

It should be noted that although the data was initially obtained by PTL from Facebook under licence from Facebook, Facebook’s position was that PTL had used the data in violation of the licence terms. Facebook had commenced proceedings against PTL in 2013 which resulted in a settlement agreement. There was some back and forth over whether the terms of the agreement had been met, but no information was available regarding the ultimate resolution.

The Finding addresses a number of interesting issues. These include the jurisdiction of the OPC to consider this complaint about a New Zealand-based company, the sufficiency of consent, and data retention limits. This post focuses only on the issue of whether social media profiles are “publicly available information” within the meaning of PIPEDA.

PTL argued that it was entitled to benefit from the “publicly available information” exception to the requirement for consent for collection and use of personal information because the Facebook profiles of the complainants were “publicly available information”. The OPC disagreed. It noted that the exception for “publicly available information”, found in ss. 7(1)(d) and 7(2)(c.1) of PIPEDA, is defined by regulation. The applicable provision is s. 1(e) of the Regulations Specifying Publicly Available Information, which requires that “the personal information must appear in a publication, the publication must be available to the public, and the personal information has to have been provided by the individual.” (at para 87) The OPC rejected PTL’s argument that “publication” included public Facebook profiles. In its view, the interpretation of “publicly available information” must be “in light of the scheme of the Act, its objects, and the intention of the legislature.” (at para 89) It opined that neither a Facebook profile nor a ‘group’ was a publication. It noted that the regulation makes it clear that “publicly available information” must receive a restrictive interpretation, and reflects “a recognition that information that may be in the public domain is still worthy of privacy protection.” (at para 90) The narrow interpretation of this exception to consent is consistent with the fact that PIPEDA has been found to be quasi-constitutional legislation.

In finding that the Facebook profile information was not publicly available information, the OPC considered that the profiles at issue “were created at a time when Facebook was relatively new and its policies were in flux.” (at para 92) Thus it would be difficult to determine that the intention of the individuals who created profiles at that time was to share them broadly and publicly. Further, at the time the profiles were created, they were indexable by search engines by default. In an earlier Finding, the OPC had determined that this default setting “would not have been consistent with users’ reasonable expectations and was not fully explained to users” (at para 92). In addition, the OPC noted that Facebook profiles were dynamic, and that their ‘owners’ could update or change them at will. In such circumstances, “treating a Facebook profile as a publication would be counter to the intention of the Act, undermining the control users otherwise maintain over their information at the source.” (at para 93) This is an interesting point, as it suggests that the dynamic nature of a person’s online profile prevents it from being considered a publication – it is more like an extension of a user’s personality or self-expression.

The OPC also noted that even though the profile information was public, to qualify for the exception it had to be contributed by the individual. This is not always the case with profile information – in some cases, for example, profiles will include photographs that contain the personal information of third parties.

This Finding, which is not a decision and is not binding on anyone, shows how the OPC interprets the “publicly available information” exception in its home statute. A few things are interesting to note:

· The OPC finds that social media profiles (in this case from Facebook) are different from “publications” in the sense that they are dynamic and reflect an individual’s changing self-expression.

· Allowing the capture and re-use, without consent, of self-expression from a particular point in time robs the individual not only of control over their personal information but also of control over how they present themselves to the public. This too makes profile data different from other forms of “publicly available information” such as telephone or business directory information, or information published in newspapers or magazines.

· The OPC’s discussion of Facebook’s problematic privacy practices at the time the profiles were created muddies the discussion of “publicly available information”. A finding that Facebook had appropriate rules of consent should not change the fact that social media profiles should not be considered “publicly available information” for the purposes of the exception.

 

It is also worth noting that a complaint against PTL to the New Zealand Office of the Privacy Commissioner proceeded on the assumption that PTL did not require consent because the information was publicly available. In fact, the New Zealand Commissioner ruled that no breach had taken place.

Given the ETHI Report’s recommendation, it is important to keep in mind that the definition of “publicly available information” could be modified (although the government’s response to the ETHI report indicates some reservations about the recommendation to change the definition). Because the definition is found in a regulation, a modification would not require legislative amendment. As is clear from the ETHI report, there are a number of industries and organizations that would love to be able to harvest and use social media platform personal information without the need to obtain consent. Vigilance is required to ensure that these regulations are not altered in a way that dramatically undermines privacy protection.

 

Published in Privacy

The Supreme Court of Canada has issued its unanimous decision in The Queen v. Philip Morris International Inc. This appeal arose out of an ongoing lawsuit brought by the province of British Columbia against tobacco companies to recover the health care costs associated with tobacco-related illnesses in the province. Similar suits brought by other provincial governments are at different stages across the country. In most cases, the litigation is brought under provincial legislation passed specifically to enable and to structure this recourse.

The central issue in this case concerned the degree of access to be provided to Philip Morris International (PMI) to the databases relied upon by the province to calculate tobacco-related health care costs. PMI wanted access to the databases in order to develop its own experts’ opinions on the nature and extent of these costs, and to challenge the opinions to be provided by provincial experts who would have full access to the databases. Although the databases contained aggregate, de-identified data, the government refused access, citing the privacy interests of British Columbians in their health care data. As a compromise, it offered limited, supervised access to the databases at a Statistics Canada Data Centre. Although the other tobacco company defendants accepted this compromise, PMI did not, and sought a court order granting it full access. The court at first instance and later the Court of Appeal for British Columbia sided with PMI and ordered that access be provided. The SCC overturned this order.

This case had been watched with interest by many because of the broader issues onto which it might have shed some light. On one view, the case raised issues about how to achieve fairness in litigation where one party relies on its own vast stores of data – which might include confidential commercial data – and the other party seeks to test the validity or appropriateness of analytics based on this data. What level of access, if any, should be granted, and under what conditions? Another issue of broader interest was what measures, including the deemed undertaking rule, are appropriate to protect privacy where potentially re-identifiable personal information is sought in discovery. Others were interested in knowing what parameters the court might set for assessing the re-identification risk where anonymized data are disclosed.

Those who hoped for broader take-aways for big data, data analytics and privacy are bound to be disappointed in the decision. In deciding in favour of the BC government, the Supreme Court largely confined its decision to an interpretation of the specific language of the Tobacco Damages and Health Care Costs Recovery Act. The statute offered the government two ways to proceed against tobacco companies – it could seek damages related to the healthcare costs of specific individuals, in which case the health records of those individuals would be subject to discovery, or it could proceed in a manner that considered only aggregate health care data. The BC government chose the latter route. Section 2(5) sets out the rules regarding discovery in an aggregate action. The focus of the Supreme Court’s interpretation was s. 2(5)(b) of the Act, which reads:

2(5)(b) the health care records and documents of particular individual insured persons or the documents relating to the provision of health care benefits for particular individual insured persons are not compellable except as provided under a rule of law, practice or procedure that requires the production of documents relied on by an expert witness [My emphasis]

While it was generally accepted that this meant that the tobacco companies could not have access to individual health care records, PMI argued that the aggregate data was not a document “relating to the provision of health care benefits for particular individual insured persons”, and therefore its production could be compelled.

The Supreme Court disagreed. Writing for the unanimous court, Justice Brown defined both “records” and “documents” as “means of storing information” (at para 22). He therefore found that the relevant databases “are both “records” and “documents” within the meaning of the Act.” (at para 22) He stated:

Each database is a collection of health care information derived from original records or documents which relate to particular individual insured persons. That information is stored in the databases by being sorted into rows (each of which pertains to a particular individual) and columns (each of which contains information about the field or characteristic that is being recorded, such as the type of medical service provided). (at para 22)

He also observed that many of the fields in the database were filled with data from individual patient records, making the databases “at least in part, collections of health care information taken from individuals’ clinical records and stored in an aggregate form alongside the same information drawn from the records of others.” (at para 23) As a result, the Court found that the databases qualified under the legislation as “documents relating to the provision of health care benefits for particular individual insured persons”, whether or not those individuals were identified within the database.
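To make the row-and-column structure the Court describes concrete, here is a minimal sketch of a row-per-individual aggregate database. The field names and values are hypothetical illustrations only; nothing here is drawn from the actual BC databases:

```python
# Hypothetical sketch of the structure described at para 22:
# one row per (de-identified) insured person, one column per recorded field.
from dataclasses import dataclass

@dataclass
class HealthRecordRow:
    person_id: int       # de-identified key; no name or personal health number
    year_of_birth: int
    service_type: str    # e.g., the type of medical service provided
    diagnosis_code: str
    benefit_cost: float

# No row names an individual, yet every row is derived from a particular
# individual's clinical record: the Court's point that aggregation changes
# only how the information is stored, not its nature.
database = [
    HealthRecordRow(1001, 1957, "GP visit", "J44", 35.50),
    HealthRecordRow(1002, 1964, "prescription", "I10", 12.00),
]
```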

Perhaps the most interesting passage in the Court’s decision is the following:

The mere alteration of the method by which that health care information is stored — that is, by compiling it from individual clinical records into aggregate databases — does not change the nature of the information itself. Even in an aggregate form, the databases, to the extent that they contain information drawn from individuals’ clinical records, remain “health care records and documents of particular individual insured persons”. (at para 24)

A reader eager to draw lessons for use in other contexts might see the Court as saying that aggregate data derived from personal data are still personal data. This would certainly be important in the context of current debates about whether the de-identification of personal information removes it from the scope of private sector data protection laws such as the Personal Information Protection and Electronic Documents Act. But it would be a mistake to read that much into this decision. The latter part of the quoted passage grounds the Court’s conclusion on this point firmly in the language of the BC tobacco legislation. Later, the Court specifically rejects the idea that a “particular” individual under the BC statute is the same as an “identifiable individual”.

Because the case is decided on the basis of the interpretation of s. 2(5)(b), the Court neatly avoids a discussion of what degree of re-identification risk would turn aggregate or anonymized data into information about identifiable individuals. This topic is also of great interest in the big data context, particularly in relation to data protection law. Whether the deemed undertaking rule could sufficiently mitigate re-identification risk so as to permit discovery likewise remains unexplored territory; those looking for a discussion of the relationship between re-identification risk and the deemed undertaking rule will have to wait for a different case.

Published in Privacy

The pressure is on for Canada to amend its Personal Information Protection and Electronic Documents Act. The legislation, by any measure, is sorely out of date and not up to the task of protecting privacy in the big data era. We know this well enough – the House of Commons ETHI Committee recently issued a report calling for reform, and the government, in its response, has acknowledged the need for changes to the law. The current and past privacy Commissioners have also repeatedly called for reform, as have privacy experts. There are many deficiencies with the law – one very significant one is the lack of serious measures to enforce privacy obligations. In this regard, a recent private member’s bill proposes amendments that would give the Commissioner much more substantial powers of enforcement. Other deficiencies can be measured against the EU’s General Data Protection Regulation (GDPR). If Canada cannot meet the levels of protection offered by the GDPR, personal data flows from the EU to Canada could be substantially disrupted. Among other things, the GDPR addresses issues such as the right to be forgotten, the right to an explanation of how automated decisions are reached, data portability rights, and many other measures specifically designed to address the privacy challenges of the big data era.

There is no doubt that these issues will be the subject of much discussion and may well feature in any proposals to reform PIPEDA that will be tabled in Parliament, perhaps as early as this autumn. The goal of this post is not to engage with these specific issues of reform, as important as they are; rather, it is to tackle another very basic problem with PIPEDA and to argue that it too should be addressed in any legislative reform. Simply put, PIPEDA is a dog’s-breakfast statute that is difficult to read and understand. It needs a top-to-bottom rewriting according to the best principles of plain-language drafting.

PIPEDA’s drafting has been the subject of commentary by judges of the Federal Court who have the task of interpreting it. For example, in Miglialo v. Royal Bank of Canada, Justice Roy described PIPEDA as “a rather peculiar piece of legislation” and “not an easily accessible statute”. The Federal Court of Appeal in Telus v. Englander observed that PIPEDA was a “compromise as to form” and that “The Court is sometimes left with little, if any guidance at all”. In Johnson v. Bell Canada, Justice Zinn observed: “While Part I of the Act is drafted in the usual manner of legislation, Schedule 1, which was borrowed from the CSA Standard, is notably not drafted following any legislative convention.” In Fahmy v. Royal Bank of Canada, Justice Roy noted that it was “hardly surprising” “[t]hat a party would misunderstand the scope of the Act.”

To understand why PIPEDA is such a mess requires some history. PIPEDA was passed by Parliament in 2000. Its enactment followed closely on the heels of the EU’s Data Protection Directive, which, like the GDPR, threatened to disrupt data flows to countries that did not meet minimum standards of private sector data protection. Canada needed private sector data protection legislation and it needed it fast. It was not clear that the federal government really had jurisdiction over private sector data protection, but it was felt that the rapid action needed did not leave time to develop cooperative approaches with the provinces. The private sector did not want such legislation. As a compromise, the government decided to use the CSA Model Code – a voluntary privacy code developed with multi-stakeholder input – as the normative heart of the statute. There had been enough buy-in with the Model Code that the government felt it would avoid excessive pushback from the private sector. The Code, therefore, originally drafted to provide voluntary guidance, was turned into law. The prime minister at the time, the Hon. Jean Chretien, did not want Parliament’s agenda overburdened with new bills, so the data protection bill was grafted onto another bill addressing the completely different issue of electronic documents (hence the long, unwieldy name that gives rise to the PIPEDA acronym).

The result is a legislative Frankenstein. Keep in mind that this is a law aimed at protecting individual privacy. It is a kind of consumer-protection statute that should be user-friendly, but it is not. Most applicants to the Federal Court under PIPEDA are self-represented, and they clearly struggle with the legislation. The sad irony is that if a consumer wants to complain to the Privacy Commissioner about a company’s over-long, horribly convoluted, impossible to understand, non-transparent privacy policy, he or she will have to wade through a statute that is like a performance-art parody of that same privacy policy. Of course, the problem is not just one for ordinary consumers. Lawyers and even judges (as evidenced above) find PIPEDA to be impenetrable.

By way of illustration, if you are concerned about your privacy rights and want to know what they are, you will not find them in the statute itself. Instead, the normative provisions are in the CSA Model Code, which is appended as Schedule I of the Act. Part I of the Act contains some definitions, a few general provisions, and a whole raft of exceptions to the principle of consent. Section 6.1 tells you what consent means “for the purposes of clause 4.3 of Schedule 1”, but you will have to wait until you get to the schedule to get more details on consent. On your way to the Schedule you might get tangled up in Part II of the Act, which is about electronic documents and thus thoroughly irrelevant.

Because the Model Code was just that – a model code – it was drafted in a more conversational style, and includes notes that provide examples and illustrations. For the purposes of the statute, some of these notes were considered acceptable – others not. Hence, you will find the following statement in s. 2(2) of PIPEDA: “In this Part, a reference to clause 4.3 or 4.9 of Schedule 1 does not include a reference to the note that accompanies that clause.” So put a yellow sticky tab on clauses 4.3 and 4.9 to remind you not to consider those notes as part of the law (even though they are in the Schedule).

Then there is this: s. 5(2) of PIPEDA tells us: “The word should, when used in Schedule 1, indicates a recommendation and does not impose an obligation.” So use those sticky notes again. Or cross out “should” each of the fourteen times you find it in Schedule 1, and replace it with “may”.

PIPEDA also provides in ss. 7(4) and 7(5) that certain actions are permissible despite what is said in clause 4.5 of Schedule 1. Similar revisionism is found in s. 7.4. While clause 4.9 of Schedule 1 talks about requests for access to personal information made by individuals, section 8(1) in Part I of the Act tells us those requests have to be made in writing, and s. 8 goes on to provide further details on the right of access. Section 9 qualifies the right of access with “Despite clause 4.9 of Schedule 1….”. You can begin to see how PIPEDA may have contributed significantly to the sales of sticky notes.

If an individual files a complaint and is not satisfied with the Commissioner’s report of findings, he or she has a right to take the matter to Federal Court if their issue fits within s. 14, which reads:

 

14 (1) A complainant may, after receiving the Commissioner’s report or being notified under subsection 12.2(3) that the investigation of the complaint has been discontinued, apply to the Court for a hearing in respect of any matter in respect of which the complaint was made, or that is referred to in the Commissioner’s report, and that is referred to in clause 4.1.3, 4.2, 4.3.3, 4.4, 4.6, 4.7 or 4.8 of Schedule 1, in clause 4.3, 4.5 or 4.9 of that Schedule as modified or clarified by Division 1 or 1.1, in subsection 5(3) or 8(6) or (7), in section 10 or in Division 1.1. [My emphasis]

 

Enough said.

There are a number of very important substantive privacy issues brought about by the big data era. We are inevitably going to see PIPEDA reform in the relatively near future, as a means of not only addressing these issues but of keeping us on the right side of the GDPR. As we move towards major PIPEDA reform, however, the government should seriously consider a crisp rewrite of the legislation. The maturity of Canada’s data protection regime should be made manifest in a statute that no longer needs to lean on the crutch of a model code for its legitimacy. Quite apart from the substance of such a document, it should:

 

· Set out its basic data protection principles in the body of the statute, near the front, and in a manner that is clear, readable and accessible to a lay public.

· Be a free-standing statute that deals with data protection and that does not deal with unrelated extraneous matters (such as electronic documents).

 

It is not a big ask. British Columbia and Alberta managed to do it when they created their own substantially similar data protection statutes. Canadians deserve good privacy legislation, and they deserve to have it drafted in a manner that is clear and accessible. Rewriting PIPEDA (and hence renaming it) should be part of the coming legislative reform.

Published in Privacy

On June 13, 2018, the Supreme Court of Canada handed down a decision that may have implications for how issues of bias in algorithmic decision-making in Canada will be dealt with. Ewert v. Canada is the result of an eighteen-year struggle by Mr. Ewert, a federal inmate and Métis man, to challenge the use of certain actuarial risk-assessment tools to make decisions about his carceral needs and about his risk of recidivism. His concerns, raised in his initial grievance in 2000, have been that these tools were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12) After his grievances went nowhere, he eventually sought a declaration in Federal Court that the tests breached his rights to equality and to due process under the Canadian Charter of Rights and Freedoms, and that they were also a breach of the Corrections and Conditional Release Act (CCRA), which requires the Correctional Service of Canada (CSC) to “take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” (s. 24(1)). Although the Charter arguments were unsuccessful, the majority of the Supreme Court of Canada agreed with the trial judge that the CSC had breached its obligations under the CCRA. Two justices in dissent agreed with the Federal Court of Appeal that neither the Charter nor the CCRA had been breached.

Although this is not explicitly a decision about ‘algorithmic decision-making’ as the term is used in the big data and artificial intelligence (AI) contexts, the basic elements are present. An assessment tool developed and tested using a significant volume of data is used to generate predictive data to aid in decision-making in individual cases. The case also highlights a common concern in the algorithmic decision-making context: that either the data used to develop and train the algorithm, or the assumptions coded into the algorithm, create biases that can lead to inaccurate predictions about individuals who fall outside the dominant group that has influenced the data and the assumptions.
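The mechanics of that concern can be illustrated with a small numerical sketch. Everything below is hypothetical (synthetic data, a generic score, an arbitrary cut-off); it is meant only to show how a tool calibrated on one population can systematically over-flag another, not to model the actual instruments at issue in Ewert:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical development sample: the score tracks the true risk factor well.
dev_scores = rng.normal(0.0, 1.0, 10_000)
dev_reoffends = (dev_scores + rng.normal(0, 0.5, 10_000)) > 1.0

# Cut-off chosen so roughly 20% of the development sample is flagged high-risk.
cutoff = np.quantile(dev_scores, 0.80)

# Hypothetical second group: the instrument reads 0.6 points higher for reasons
# unrelated to actual risk (e.g., culturally loaded test items), so the same
# cut-off flags many more people despite identical underlying risk.
grp_scores = rng.normal(0.6, 1.0, 10_000)
grp_reoffends = ((grp_scores - 0.6) + rng.normal(0, 0.5, 10_000)) > 1.0

for name, s, y in [("development group", dev_scores, dev_reoffends),
                   ("second group", grp_scores, grp_reoffends)]:
    flagged = s > cutoff
    fpr = flagged[~y].mean()  # share of non-reoffenders wrongly flagged high-risk
    print(f"{name}: {flagged.mean():.0%} flagged; false-positive rate {fpr:.0%}")
```

Both groups have the same underlying risk, but the shifted instrument flags far more members of the second group. This is the kind of cross-cultural variance in predictive accuracy that Mr. Ewert alleged.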

As such, my analysis is not about the particular circumstances of Mr. Ewert, nor is it about the impact of the judgement within the correctional system in Canada. Instead, I parse the decision to see what it reveals about how courts might approach issues of bias in algorithmic decision-making, and what impact the decision may have in this emerging context.

1. ‘Information’ and ‘accuracy’

A central feature of the decision of the majority in Ewert is its interpretation of s. 24(1) of the CCRA. To repeat the wording of this section, it provides that “The Service shall take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” [My emphasis] In order to conclude that this provision was breached, it was necessary for the majority to find that Mr. Ewert’s test results were “information” within the meaning of this section, and that the CCRA had not taken all reasonable steps to ensure its accuracy.

The dissenting justices took the view that when s. 24(1) referred to “information” and to the requirement to ensure its accuracy, the statute included only the kind of personal information collected from inmates, information about the offence committed, and a range of other information specified in s. 23 of the Act. They preferred the view of the CSC that “information” meant ““primary facts” and not “inferences or assessments drawn by the Service”” (at para 107). The majority disagreed. It found that when Parliament intended to refer to specific information in the CCRA it did so. When it used the term “information” in an unqualified way, as it did in s. 24(1), it had a much broader meaning. Thus, according to the majority, “the knowledge the CSC might derive from the impugned tools – for example, that an offender has a personality disorder or that there is a high risk that an offender will violently reoffend – is “information” about that offender” (at para 33). This interpretation of “information” is an important one. According to the majority, profiles and predictions applied to a person are “information” about that individual.

In this case, the Crown had argued that s. 24(1) should not apply to the predictive results of the assessment tools because it imposed an obligation to ensure that “information” is “as accurate” as possible. It argued that the term “accurate” was not appropriate to the predictive data generated by the tools. Rather, the tools “may have “different levels of predictive validity, in the sense that they predict poorly, moderately well or strongly””. (at para 43) The dissenting justices were clearly influenced by this argument, finding that: “a psychological test can be more or less valid or reliable, but it cannot properly be described as being “accurate” or “inaccurate”.” (at para 115) According to the dissent, all that was required was that accurate records of an inmate’s test scores must be maintained – not that the tests themselves must be accurate. The majority disagreed. In its view, the concept of accuracy could be adapted to different types of information. When applied to psychological assessment tools, “the CSC must take steps to ensure that it relies on test scores that predict risks strongly rather than those that do so poorly.” (at para 43)

It is worth noting that the Crown also argued that the assessment tools were important in decision-making because “the information derived from them is objective and thus mitigates against bias in subjective clinical assessments” (at para 41). While the underlying point is that the tools might produce more objective assessments than individual psychologists who might bring their own biases to an assessment process, the use of the term “objective” to describe the output is troubling. If the tools incorporate biases, or are not appropriately sensitive to cultural differences, then the output is ‘objective’ in only a very narrow sense of the word, and the use of the word masks underlying issues of bias. Interestingly, the majority took the view that if the tools are considered useful “because the information derived from them can be scientifically validated. . . this is all the more reason to conclude that s. 24(1) imposes an obligation on the CSC to take reasonable steps to ensure that the information is accurate.” (at para 41)

It should be noted that while this discussion all revolves around the particular wording of the CCRA, Principle 4.6 of Schedule 1 of the Personal Information Protection and Electronic Documents Act (PIPEDA) contains the obligation that: “Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.” Further, s. 6(2) of the Privacy Act provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” A similar interpretation of “information” and “accuracy” in these statutes could be very helpful in addressing issues of bias in algorithmic decision-making more broadly.

2. Reasonable steps to ensure accuracy

According to the majority, “[t]he question is not whether the CSC relied on inaccurate information, but whether it took all reasonable steps to ensure that it did not.” (at para 47). This distinction is important – it means that Mr. Ewert did not have to show that his actual test scores were inaccurate, something that would be quite burdensome for him to do. According to the majority, “[s]howing that the CSC failed to take all reasonable steps in this respect may, as a practical matter, require showing that there was some reason for the CSC to doubt the accuracy of information in its possession about an offender.” (at para 47, my emphasis)

The majority noted that the trial judge had found that “the CSC had long been aware of concerns regarding the possibility of psychological and actuarial tools exhibiting cultural bias.” (at para 49) The concerns had led to research being carried out in other jurisdictions about the validity of the tools when used to assess certain other cultural minority groups. The majority also noted that the CSC had carried out research “into the validity of certain actuarial tools other than the impugned tools when applied to Indigenous offenders” (at para 49) and that this research had led to those tools no longer being used. However, in this case, in spite of these concerns, the CSC had taken no steps to assess the validity of the tools, and it continued to apply them to Indigenous offenders.

The majority noted that the CCRA, which sets out guiding principles in s. 4, specifically requires correctional policies and practices to respect cultural, linguistic and other differences and to take into account “the special needs of women, aboriginal peoples, persons requiring mental health care and other groups” (s. 4(g)). The majority found that this principle “represents an acknowledgement of the systemic discrimination faced by Indigenous persons in the Canadian correctional system.” (at para 53) As a result, it found it incumbent on the CSC to give “meaningful effect” to this principle “in performing all of its functions”. In particular, the majority found that “this provision requires the CSC to ensure that its practices, however neutral they may appear to be, do not discriminate against Indigenous persons.” (at para 54)

The majority observed that although it has been 25 years since this principle was added to the legislation, “there is nothing to suggest that the situation has improved in the realm of corrections” (at para 60). It expressed dismay that “the gap between Indigenous and non-Indigenous offenders has continued to widen on nearly every indicator of correctional performance”. (at para 60) It noted that “Although many factors contributing to the broader issue of Indigenous over-incarceration and alienation from the criminal justice system are beyond the CSC’s control, there are many matters within its control that could mitigate these pressing societal problems. . . Taking reasonable steps to ensure that the CSC uses assessment tools that are free of cultural bias would be one.” (at para 61) [my emphasis]

According to the majority of the Court, therefore, what is required by s. 24(1) of the CCRA is for the CSC to carry out research into whether and to what extent the assessment tools it uses “are subject to cross-cultural variance when applied to Indigenous offenders.” (at para 67) Any further action would depend on the results of the research.

What is interesting here is that the onus is placed on the CSC (influenced by the guiding principles in the CCRA) to take positive steps to verify the validity of the assessment tools on which it relies. The Court does not specify who is meant to carry out the research in question, what standards it should meet, or how extensive it should be. These are important issues. It should be noted that discussions of algorithmic bias often consider solutions involving independent third-party assessment of the algorithms or the data used to develop them.
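In statistical terms, the research the majority contemplates could be as simple as computing the tool’s predictive validity separately for each group and comparing the results. The sketch below is a hypothetical illustration of that idea using synthetic data; it is not the CSC’s actual methodology:

```python
import numpy as np
from sklearn.metrics import roc_auc_score  # AUC, a standard measure of predictive validity

def validity_by_group(scores, outcomes, groups):
    """Measure how well risk scores discriminate outcomes within each group.

    An AUC near 0.5 means the tool predicts no better than chance for that
    group; a large gap between groups signals cross-group variance in validity.
    """
    return {g: roc_auc_score(outcomes[groups == g], scores[groups == g])
            for g in np.unique(groups)}

# Synthetic example: the score is informative for group "A" but pure noise for "B".
rng = np.random.default_rng(1)
groups = np.array(["A"] * 5_000 + ["B"] * 5_000)
risk = rng.normal(size=10_000)
outcomes = (risk + rng.normal(0, 0.5, 10_000)) > 1.0
scores = np.where(groups == "A", risk, rng.normal(size=10_000))

print(validity_by_group(scores, outcomes, groups))
# prints roughly {'A': 0.95, 'B': 0.50}: valid for A, uninformative for B
```

A result like the second line is the statistical signature of a tool that “predicts poorly” for one group, which, on the majority’s reading of s. 24(1), the CSC could not reasonably continue to rely on without further investigation.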

3. The Charter arguments

Counsel for Mr. Ewert raised two Charter arguments. The first was a s. 7 due process argument: that reliance on the assessment tools violated his right to liberty and security of the person in a manner that was not in accordance with the principles of fundamental justice. The tools were argued to fall short of the principles of fundamental justice because of their arbitrariness (lacking any rational connection to the government objective) and overbreadth. The court was unanimous in finding that reliance on the tools was not arbitrary, stating that “The finding that there is uncertainty about the extent to which the tests are accurate when applied to Indigenous offenders is not sufficient to establish that there is no rational connection between reliance on the tests and the relevant government objective.” (at para 73) Without further research, the extent and impact of any cultural bias could not be known.

Mr. Ewert also argued that the results of the use of the tools infringed his right to equality under s. 15 of the Charter. The Court gave little time or attention to this argument, finding that there was not enough evidence to show that the tools had a disproportionate impact on Indigenous inmates when compared to non-Indigenous inmates.

The Charter is part of the Constitution and applies only to government action. There are many instances in which governments may come to rely upon algorithmic decision-making. While concerns might be raised about bias and discriminatory impacts from these processes, this case demonstrates the challenge faced by those who would raise such arguments. The decision in Ewert suggests that in order to establish discrimination, it will be necessary either to demonstrate discriminatory impacts or effects, or to show how the algorithm itself and/or the data used to develop it incorporate biases or discriminatory assumptions. Establishing any of these things will impose a significant evidentiary burden on the party raising the issue of discrimination. Even where the Charter does not apply and individuals must rely upon human rights legislation, establishing discrimination with complex (and likely inaccessible or non-transparent) algorithms and data will be highly burdensome.

Concluding thoughts

This case raises important and interesting issues that are relevant in algorithmic decision-making of all kinds. The result obtained in this case favoured Mr. Ewert, but it should be noted that it took him 18 years to achieve this result, and he required the assistance of a dedicated team of lawyers. There is clearly much work to do to ensure that fairness and transparency in algorithmic decision-making are accessible and realizable.

Mr. Ewert’s success was ultimately based, not upon human rights legislation or the Charter, but upon federal legislation which required the keeping of accurate information. As noted above, PIPEDA and the Privacy Act impose a similar requirement on organizations that collect, use or disclose personal information to ensure the accuracy of that information. Using the interpretive approach of the Supreme Court of Canada in Ewert v. Canada, this statutory language may provide a basis for supporting a broader right to fair and unbiased algorithmic decision-making. Yet, as this case also demonstrates, it may be challenging for those who feel they are adversely impacted to make their case, absent evidence of long-standing and widespread concerns about particular tests in specific contexts.

 

Published in Privacy

The issue of the application of privacy/data protection laws to political parties in Canada is not new – Colin Bennett and Robin Bayley wrote a report on this issue for the Office of the Privacy Commissioner of Canada in 2012. It gained new momentum in the wake of the Cambridge Analytica scandal, which brought home to the public in a fairly dramatic way the extent to which personal information might be used not just to profile and target individuals, but to sway their opinions in order to influence the outcome of elections.

In the fallout from Cambridge Analytica there have been a couple of recent developments in Canada around the application of privacy laws to political parties. First, the federal government included some remarkably tepid provisions in Bill C-76 on Elections Act reform. These provisions, which I critique here, require parties to adopt and post a privacy policy, but otherwise contain no normative requirements. In other words, they do not hold political parties to any particular rules or norms regarding their collection, use or disclosure of personal information. There is also no provision for independent oversight. The only complaint that can be made – to the Commissioner of Elections – is about the failure to adopt and post a privacy policy. The federal government has expressed surprise at the negative reaction these proposed amendments have received and has indicated a willingness to do something more, but that something has not yet materialized. Meanwhile, it is being reported that the Bill, even as it stands, is not likely to clear the Senate before the summer recess, putting in doubt the ability of any amendments to be in place and implemented in time for the next election.

On June 6, 2018, the Quebec government introduced Bill no 188 into the National Assembly. If passed, this Bill would give the Quebec Director General of Elections the duty to examine and evaluate provincial political parties’ practices with respect to the collection, use and disclosure of personal information. The Director General must also assess their information security practices. If the Bill is passed into law, he will be required to report his findings to the National Assembly no later than the first of October 2019. The Director General will make any recommendations in this report that he feels are appropriate in the circumstances. The Bill also modifies laws applicable to municipal and school board elections so that the Director General can be directed by the National Assembly to conduct a similar assessment and report back. While this Bill would not make any changes to current practices in the short term, it is clearly aimed at gathering data with a view to informing any future legislative reform that might be deemed necessary.

 

Published in Privacy

In the wake of the Cambridge Analytica scandal, Canada’s federal government has come under increased criticism for the fact that Canadian political parties are not subject to existing privacy legislation. This criticism is not new. For example, Prof. Colin Bennett and Robin Bayley wrote a report on the issue for the Office of the Privacy Commissioner of Canada in 2012.

The government’s response, if it can be called a response, has come in Bill C-76, the Act to amend the Canada Elections Act and other Acts and to make certain consequential amendments, which was introduced in the House of Commons on April 30, 2018. This Bill would require all federal political parties to have privacy policies in order to become or remain registered. It also sets out what must be included in the privacy policy.

By way of preamble to this critique of the legislative half-measures introduced by the government, it is important to note that Canada already has both a public sector Privacy Act and a private sector Personal Information Protection and Electronic Documents Act (PIPEDA). Each of these statutes sets out rules for collection, use and disclosure of personal information and each provides for an oversight regime and a complaints process. Both statutes have been the subject of substantial critique for not going far enough to address privacy concerns, particularly in the age of big data. In February 2018, the House of Commons Standing Committee on Access to Information, Privacy and Ethics issued a report on PIPEDA, and recommended some significant amendments to adapt the statute to protecting privacy in a big data environment. Thus, the context in which the provisions regarding political parties’ privacy obligations are introduced is one in which a) we already have privacy laws that set data protection standards; b) these laws are generally considered to be in need of significant amendment to better address privacy; and c) the Cambridge Analytica scandal has revealed just how complex, problematic and damaging the misuse of personal information in the context of elections can be.

Once this context is understood, the privacy ‘obligations’ that the government proposes to place on political parties in the proposed amendments can be seen for what they are: an almost contemptuous and entirely cosmetic quick fix designed to deflect attention from the very serious privacy issues raised by the use of personal information by political parties.

First, the basic requirement placed on political parties will be to have a privacy policy. The policy will also have to be published on the party’s internet site. That’s pretty much it. Are you feeling better about your privacy yet?

To be fair, the Bill also specifies what the policy must contain:

(k) the party’s policy for the protection of personal information [will include]:

(i) a statement indicating the types of personal information that the party collects and how it collects that information,

(ii) a statement indicating how the party protects personal information under its control,

(iii) a statement indicating how the party uses personal information under its control and under what circumstances that personal information may be sold to any person or entity,

(iv) a statement indicating the training concerning the collection and use of personal information to be given to any employee of the party who could have access to personal information under the party’s control,

(v) a statement indicating the party’s practices concerning

(A) the collection and use of personal information created from online activity, and

(B) its use of cookies, and

(vi) the name and contact information of a person to whom concerns regarding the party’s policy for the protection of personal information can be addressed; and

(l) the address of the page — accessible to the public — on the party’s Internet site where its policy for the protection of personal information is published under subsection (4).

It is particularly noteworthy that unlike PIPEDA (or any other data protection law, for that matter), there is no requirement to obtain consent to any collection, use or disclosure of personal information. A party’s policy simply has to tell you what information it collects and how. Political parties are also not subject to any of the other limitations found in PIPEDA. There is no requirement that the purposes for collection, use or disclosure meet a reasonableness standard; there is no requirement to limit collection only to what is necessary to achieve any stated purposes; there is nothing on data retention limits; and there is no right of access or correction. And, while there is a requirement to identify a contact person to whom any concerns or complaints may be addressed, there is no oversight of a party’s compliance with their policy. (Note that it would be impossible to oversee compliance with any actual norms, since none are imposed). There is also no external complaints mechanism available. If a party fails to comply with requirements to have a policy, post it, and provide notice of any changes, it can be deregistered. That’s about it.

This is clearly not good enough. It is not what Canadians need or deserve. It does not even come close to meeting the standards set in PIPEDA, which is itself badly in need of an overhaul. The data resources and data analytics tools available to political parties have created a context in which data protection has become important not just to personal privacy values but to important public values as well, such as the integrity and fairness of elections. Not only are these proposed amendments insufficient to meet the privacy needs of Canadians, they are shockingly cynical in their attempt to derail the calls for serious action on this issue.

Published in Privacy

What is the proper balance between privacy and the open courts principle when it comes to providing access to the decisions of administrative tribunals? This is the issue addressed by Justice Ed Morgan in a recent Ontario Superior Court decision. The case arose after the Toronto Star brought an application to have parts of Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA) declared unconstitutional. To understand this application, some background may be helpful.

Courts in Canada operate under the “open courts principle”. This principle has been described as “one of the hallmarks of a democratic society” and it is linked to the right of freedom of expression guaranteed by s. 2(b) of the Canadian Charter of Rights and Freedoms. The freedom of expression is implicated because in order for the press and the public to be able to debate and discuss what takes place in open court, they must have access to the proceedings and to the records of proceedings. As Justice Morgan notes in his decision, the open courts principle applies not just to courts, but also to administrative tribunals, since the legitimacy of the proceedings before such tribunals requires similar transparency.

Administrative bodies are established by legislation to carry out a number of different functions. This can include the adjudication of matters related to the subject matter of their enabling legislation. As the administrative arm of government has expanded, so too has the number and variety of administrative tribunals at both the federal and provincial levels. Examples of tribunals established under provincial legislation include landlord-tenant boards, human rights tribunals, securities commissions, environmental review tribunals, workers’ compensation tribunals, labour relations boards, and criminal injury compensation boards – to name just a very few. These administrative bodies are often charged with the adjudication of disputes over matters that are of fundamental importance to individuals, impacting their livelihood, their housing, their human rights, and their compensation and disability claims.

Because administrative tribunals are established by provincial legislation, they are public bodies, and as such, are subject to provincial (or, as the case may be, federal) legislation governing access to information and the protection of personal information in the hands of the public sector. The applicability of Ontario’s Freedom of Information and Protection of Privacy Act is at the heart of this case. The Toronto Star brought its application with respect to the 14 administrative tribunals found in the list of institutions to which FIPPA applies in a Schedule to that Act. It complained that because FIPPA applied to these tribunals, the public presumptively had to file access to information requests under that statute in order to access the adjudicative records of the tribunals. It is important to note that the challenge to the legislation was limited a) to administrative tribunals, and b) to their adjudicative records (as opposed to other records that might relate to their operations). Thus the focus was really on the presumptive right of the public, according to the open courts principle, to have access to the proceedings and adjudicative records of tribunals.

Justice Morgan noted that the process under FIPPA requires an applicant to make a formal request for particular records and to pay a fee. The head of the institution then considers the request and has 30 days in which to advise the applicant whether access will be granted. The institution may also notify the applicant that a longer period of time is required to respond to the request. It must give notice to anyone who might be affected by the request and must give that person time in which to make representations. The institution might refuse to disclose records or it might disclose records with redactions; a dissatisfied applicant has a right of appeal to the Information and Privacy Commissioner.

In addition to the time required for this process to unfold, FIPPA also sets out a number of grounds on which access can be denied. Section 42(1) provides that “An institution shall not disclose personal information in its custody or under its control”. While there are some exceptions to this general rule, none of them relates specifically to adjudicative bodies. Justice Morgan noted that the statute provides a broad definition of personal information. While the default rule is non-disclosure, the statute gives the head of an institution some discretion to disclose records containing personal information. Thus, for example, the head of an institution may disclose personal information if to do so “does not constitute an unjustified invasion of personal privacy” (s. 21(1)(f)). The statute sets out certain circumstances in which an unjustified invasion of personal privacy is presumed to occur (s. 21(3)), and these chiefly relate to the sensitivity of the personal information at issue. The list includes many matters that might arise in adjudication before an administrative tribunal, including employment or educational history, an individual’s finances, income, or assets, an individual’s eligibility for social service or welfare benefits, and the level of such benefits. The Toronto Star led evidence that “the personal information exemption is so widely invoked that it has become the rule rather than an exemption to the rule” (at para 27). Justice Morgan agreed, characterizing non-disclosure as having become the default rule.

FIPPA contains a “public interest override” in s. 23, which allows the head of an institution to release records notwithstanding an applicable exemption where “a compelling public interest in the disclosure of the record clearly outweighs the purpose of the exemption.” However, Justice Morgan noted that this provision has been interpreted so narrowly that the asserted public interest must be found to be more important than the broad objective of protecting personal information. In the case of adjudicative records, the Information and Privacy Commissioner’s approach has been to require the requester to demonstrate “that there is a public interest in the Adjudicative Record not simply to inform the public about the particular case, but for the larger societal purpose of aiding the public in making political choices” (at para 31). According to Justice Morgan, “this would eliminate all but the largest and most politically prominent of cases from media access to Adjudicative Records and the details contained therein” (at para 32).

The practice of the 14 adjudicative bodies at issue in this case showed wide variance in how they addressed access. Justice Morgan noted that 8 of the 14 did not require a FIPPA application to be made: requests for access to and copies of records could be directed to the tribunal itself. According to Justice Morgan, this is not a problem. He stated: “their ability to fashion their own mechanism for public access to Adjudicative Records, and to make their own fine-tuned determinations of the correct balance between openness and privacy, fall within the power of those adjudicative institutions to control their own processes” (at para 48). The focus of the court’s decision was therefore on the other 6 adjudicative bodies, which require those seeking access to adjudicative records to follow the process set out in the legislation. The Star emphasized the importance of timeliness in reporting on the decisions of adjudicative bodies. It led evidence that obtaining access to records from some tribunals took many weeks or months, and that when disclosure occurred, the documents were often heavily redacted.

Justice Morgan noted that the Supreme Court of Canada has already found that s. 2(b) protects “guaranteed access to the courts to gather information” (at para 53, citing Canadian Broadcasting Corp. v. New Brunswick (A.G.)), and that this right includes access to “exhibits entered into evidence, photocopies of all such records, and the ability to disseminate those records by means of broadcast or other publication” (at para 53). He found that FIPPA breaches s. 2(b) because it essentially creates a presumption of non-disclosure of personal information “and imposes an onus on the requesting party to justify the disclosure of the record” (at para 56). He also found that the delay created by the FIPPA system “burdens freedom of the press and amounts in the first instance to an infringement” of s. 2(b) of the Charter (at para 70). However, under s. 1 of the Charter, the state can still justify a presumptive breach of a Charter right by showing that it is a reasonable limit, demonstrably justified in a free and democratic society.

In this case, Justice Morgan found that the ‘reverse onus’ placed on the party requesting access to an adjudicative record to show why the record should be released could not be justified under s. 1 of the Charter. He noted that in contexts outside of FIPPA – for example, where courts consider whether to impose a publication ban on a hearing – the presumption is openness, and the party seeking to limit disclosure or dissemination of information must show how a limitation would serve the public interest. He stated that the case law makes it clear “that it is the openness of the system, and not the privacy or other concerns of law enforcement, regulators, or innocent parties, that takes primacy in this balance” (at para 90). Put another way, he stated that “The open court principle is the fundamental one and the personal information and privacy concerns are secondary to it” (at para 94).

On the delays created by the FIPPA system, Justice Morgan noted that “Untimely disclosure that loses the audience is akin to no disclosure at all” (at para 95). However, he was receptive to submissions made by the Ontario Judicial Council (OJC), which had “admonished the court to be cognizant of the complex task of fashioning a disclosure system for a very diverse body of administrative institutions” (at para 102). The OJC warned the court of the potential for “unintended consequences” if it were to completely remove tribunals from the FIPPA regime. The concern here was not so much for privacy as for the great diversity of administrative tribunals, many of which are under-resourced and under-staffed and might find themselves “overwhelmed in a suddenly FIPPA-free procedural environment” (at para 103). Justice Morgan also noted that while the Toronto Star was frustrated with the bureaucracy involved in making FIPPA applications, “bureaucracy in and of itself is not a Charter violation. It’s just annoying.” (at para 104) He noted that the timelines set out in FIPPA were designed to make the law operate fairly, and that “Where the evidence in the record shows that there have been inordinate delays, the source of the problems may lie more with the particular administrators or decision-makers who extend the FIPPA timelines than with the statutory system itself” (at para 105). He expressed hope that removing the ‘reverse onus’ approach might substantially reduce issues of delay.

As a result, Justice Morgan found the “presumption of non-disclosure for producing Adjudicative Records containing “personal information” as defined in s. 2(1)” to violate the Charter. Given the complexity of finding a solution to this problem, he gave the legislature one year in which to amend FIPPA. He made it clear that tribunals are not required to follow the FIPPA request process in providing access to their Adjudicative Records, but that it does not breach the Charter for them to do so.

This is an interesting decision that addresses what is clearly a problematic approach to providing access to the decisions of administrative tribunals. What the case does not address are the very real privacy issues raised by the broad publication of administrative tribunal decisions. Much ink has already been spilled on the problems with the publication of personal information in court and tribunal decisions. Indeed, the Globe24h.com case, considered by both the Office of the Privacy Commissioner of Canada and the Federal Court, reveals some of the consequences for individual privacy when such decisions are published in online repositories. Of course, nothing in Justice Morgan’s decision requires online publication, but openness must be presumed to include such risks. In crafting a new legislative solution for adjudicative records, the Ontario government would be well advised to look at the materials produced on strategies for protecting privacy in open tribunal decisions, and to consider more formal guidance for tribunals in this regard.

 

**********************

Interested in the issues raised by this case? Here is a sampling of other decisions that also touch on the open courts principle in the context of administrative tribunals:

Canadian Broadcasting Corp. v. Canada (Attorney General)

United Food & Commercial Workers Union Local 1518 v. Sunrise Poultry Processors Ltd.

These three cases deal with individuals seeking to have personal information redacted from tribunal decisions destined for online publication: Fowlie v. Canada; A.B. v. Brampton (City); Pakzad v. The Queen

Published in Privacy
Tuesday, 17 April 2018 08:50

New Study on Whistleblowing in Canada

Earlier this year, uOttawa’s Florian Martin-Bariteau and Véronique Newman released a study titled Whistleblowing in Canada. The study was funded by SSHRC as part of its Knowledge Synthesis program, which aims to provide an incisive overview of a particular area, synthesizing key research and identifying knowledge gaps. The report they have produced does just that. Given the timeliness of the topic (after all, the Cambridge Analytica scandal was disclosed by a whistleblower) and the relative paucity of legal research in the area, this report is particularly important.

The first part of the report provides an inventory of existing whistleblower frameworks across the public and private sectors in Canada, including those linked to administrative agencies. This on its own makes a significant contribution. The authors refer to the existing legislative and policy framework as a “patchwork”. They note that the public sector framework is characterized by fairly stringent criteria that must be met to justify disclosures to authorities, alongside near-universal restrictions against disclosure to the broader public. Whistleblower protection in the private sector is relatively thin, with a few exceptions in areas such as labour relations, health, and environmental standards.

The second part of the report identifies policy issues and knowledge gaps. Observing that Canada lags behind its international partners in providing whistleblower protection, the authors are critical of narrow statutory definitions of whistleblowing, legal uncertainties faced by whistleblowers, and an insufficient framework for the private sector. They also criticize the general lack of protection for public disclosures, noting that “internal mechanisms in government agencies are often unclear or inefficient and may fail to ensure the anonymity of the whistleblower” (at p. 5), and, more broadly, the ways in which existing regimes make anonymity difficult or impossible. The authors call for more research on the subject of whistleblowing, and highlight a number of important research gaps.

Among other things, the authors call for research to help draw the line between leaks, hacks and whistleblowing. This too is important given the different ways in which corporate or government wrongdoing has been exposed in recent years. There is no doubt that the issues raised in this study are important, and it is a terrific resource for those interested in the topic.

Published in Privacy

This post is the second in a series that looks at the recommendations contained in the report on the Personal Information Protection and Electronic Documents Act (PIPEDA) issued by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI). My first post considered ETHI’s recommendation to retain consent at the heart of PIPEDA, with some enhancements. At the same time, ETHI recommended some new exceptions to consent. This post looks at one of these – the exception relating to publicly available information.

Although individual consent is at the heart of the PIPEDA model – and ETHI would keep it there – the growing number of exceptions to consent in PIPEDA is cause for concern. In fact, the last round of amendments to PIPEDA, in the 2015 Digital Privacy Act, saw the addition of ten new exceptions to consent. While some of these were relatively uncontroversial (e.g. making it clear that consent was not needed to communicate with the next of kin of an injured, ill or deceased person), others were much more substantial. In its 2018 report, ETHI made several recommendations that continue this trend, creating new contexts in which individual consent will no longer be required for the collection, use or disclosure of personal information. In this post, I focus on one of these – the recommendation that the exception to consent for “publicly available information” be dramatically expanded to include content shared by individuals on social media. In light of the recent Facebook/Cambridge Analytica scandal, this recommended change deserves serious resistance.

PIPEDA already contains a carefully limited exception to consent for the collection, use or disclosure of personal information that is “publicly available” as defined in the Regulations Specifying Publicly Available Information. These regulations identify five narrowly construed categories of publicly available information. The first is telephone directory information (but only where the subscriber has the option to opt out of being included in the directory). The second is name and contact information included in a professional business directory listing that is available to the public; even then, such information can only be collected, used or disclosed without consent where this relates “directly to the purpose for which the information appears in the registry” (i.e. contacting the individual for business purposes). The third is information in a public registry established by law (for example, a land titles registry); this information can similarly only be collected, used or disclosed for purposes related to those for which it appears in the record or document. Thus, consent is not required to collect land registry information for the purposes of concluding a real estate transaction, but it is not permitted to extract personal information from such a registry, without consent, for use in marketing. The fourth category is information appearing in court or tribunal records or documents. This respects the open courts principle, but the exception is limited to collection, use or disclosure that relates directly to the purpose for which the information appears in the record or document. This means that online repositories of court and tribunal decisions cannot be mined for personal information; however, personal information can be used without consent to further the open courts principle (for example, by a reporter gathering information for a newspaper story).

This brings us to the fifth category of publicly available information – the one ETHI would explode to include vast quantities of personal information. Currently, this category reads:

e) personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.

ETHI’s recommendation is to make this provision “technologically neutral” by having it include content shared by individuals over social media. According to ETHI, a number of witnesses considered the provision to be “obsolete” (at p. 27). Perhaps not surprisingly, these witnesses represented organizations and associations whose members would love to have unrestricted access to the contents of Canadians’ social media feeds and pages. The Privacy Commissioner was less impressed with the arguments for change. He stated: “we caution against the common misconception that simply because personal information happens to be generally accessible online, there is no privacy interest attached to it.” (at p. 28) The Commissioner recommended careful study with a view to balancing “fundamental individual and societal rights.” This cautious approach seems to have been ignored. The scope of ETHI’s proposed change is particularly disturbing given the very carefully constrained exceptions that currently exist for publicly available information. A review of the Regulations should tell any reader that this was always intended to be a narrow exception with tightly drawn boundaries; it was never meant to declare open season on the personal information of Canadians.

The Cambridge Analytica scandal reveals the harms that can flow from unrestrained access to the sensitive and wide-ranging types and volumes of personal information that are found on social media sites. Yet even as that scandal unfolds, it is important to note that everyone (including Facebook) seems to agree that user consent was both required and abused. What ETHI recommends is an exception that would obviate the need for consent to the collection, use and disclosure of the personal information of Canadians shared on social media platforms. This could not be more unwelcome and inappropriate.

Counsel for the Canadian Life and Health Insurance Association, in addressing ETHI, indicated that the current exception “no longer reflects reality or the expectations of the individuals it is intended to protect.” (at p. 27) A number of industry representatives also spoke of the need to make the exception “technologically neutral”, a line that ETHI clearly bought when it repeated the catch phrase in its recommendation. The facile rhetoric of technological neutrality should always be approached with enormous caution. The ‘old tech’ of books and magazines involved: a) relatively little exposure of personal information; b) carefully mediated exposure (through editorial review, fact-checking, ethical policies, etc.); and c) time and space limitations that tended to focus publication on the public interest. Social media is something completely different. It is a means of peer-to-peer communication and interaction that is entirely different in character and purpose from a magazine or newspaper. To treat it as the digital equivalent is not technological neutrality; it is technological nonsensicality.

It is important to remember that while the exception to consent for publicly available information exists in PIPEDA, the definition of its parameters is found in a regulation. Amendments to legislation require a long and public process; changes to regulations can happen much more quickly and with less room for public input. This recommendation by ETHI is therefore doubly disturbing – it could have a dramatic impact on the privacy rights of Canadians, and it could take effect more quickly and quietly than through the regular legislative process. The Privacy Commissioner was entirely correct in stating that there should be no change to these regulations without careful consideration and a balancing of interests – and perhaps no change at all.

Published in Privacy