Teresa Scassa - Blog

The Alberta Court of King’s Bench has issued a decision on Clearview AI’s application for judicial review of an Order made by the province’s privacy commissioner. The Commissioner had ordered Clearview AI to take certain steps following a finding that the company had breached Alberta’s Personal Information Protection Act (PIPA) when it scraped billions of images – including those of Albertans – from the internet to create a massive facial recognition database marketed to police services around the world. The court’s decision is a partial victory for the Commissioner. It is interesting and important for several reasons – including for its relevance to generative AI systems and the ongoing joint privacy investigation into OpenAI. These issues are outlined below.

Brief Background

Clearview AI became notorious in 2020 after a New York Times article broke the story of the company’s activities. Data protection commissioners in Europe and elsewhere launched investigations, which overwhelmingly concluded that the company had violated applicable data protection laws. In Canada, the federal privacy commissioner joined forces with the Quebec, Alberta and British Columbia (BC) commissioners, each of whom has private sector jurisdiction. Their joint investigation report concluded that their respective laws applied to Clearview AI’s activities because there was a real and substantial connection to their jurisdictions. They found that Clearview collected, used and disclosed personal information without consent, and that no exceptions to consent applied. The key exception advanced by Clearview AI was the exception for “publicly available information”. The Commissioners found that this exception, which was similarly worded in the federal, Alberta and BC laws, required a narrow interpretation, and that the definition in the regulations enacted under each of these laws did not include information published on the internet. The commissioners also found that, contrary to shared legislative requirements, Clearview AI’s collection and use of the personal information was not for a purpose that a reasonable person would consider appropriate in the circumstances. The report of findings made a number of recommendations that Clearview ultimately did not accept. The Quebec, BC and Alberta commissioners all have order-making powers (which the federal commissioner does not). Each of these commissioners ordered Clearview to correct its practices, and Clearview sought judicial review of each of these orders. The decision of the BC Supreme Court (which upheld the Commissioner’s order) is discussed in an earlier post. The decision from Quebec has yet to be issued.

In Alberta, Clearview AI challenged the Commissioner’s jurisdiction on the basis that Alberta’s PIPA did not apply to its activities. It also argued that the Commissioner’s interpretation of “publicly available information” was unreasonable. In the alternative, Clearview AI argued that the exception for ‘publicly available information’, as interpreted by the Commissioner, was an unconstitutional violation of its freedom of expression. It also contested the Commissioner’s finding that Clearview did not have a reasonable purpose for collecting, using and disclosing the personal information.

The Jurisdictional Question

Courts have established that Canadian data protection laws will apply where there is a real and substantial connection to the relevant jurisdiction. Clearview AI argued that it was a US-based company that scraped most of its data from social media websites mainly hosted outside of Canada, and that therefore its activities took place outside of Canada and its provinces. Yet, as Justice Feasby noted, “[s]trict adherence to the traditional territorial conception of jurisdiction would make protecting privacy interests impossible when information may be located everywhere and nowhere at once” (at para 50). He noted that there was no evidence regarding the actual location of the servers of social media platforms, and that Clearview AI’s scraping activities went beyond social media platforms. Justice Feasby ruled that he was entitled to infer from available evidence that images of Albertans were collected from servers located in Canada and in Alberta. He observed that in any event, Clearview marketed its services to police in Alberta, and its voluntary decision to cease offering those services did not alter the fact that it had been doing business in Alberta and could do so again. Further, the information at issue in the order was personal information of Albertans. All of this gave rise to a real and substantial connection with Alberta.

Publicly Available Information

The federal Personal Information Protection and Electronic Documents Act (PIPEDA) contains an exception to the consent requirement for “publicly available information”. The meaning of this term is defined in the Regulations Specifying Publicly Available Information. The relevant category is found in s. 1(e) which specifies “personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.” Alberta’s PIPA contains a similar exception (as does BC’s law), although the wording is slightly different. Section 7(e) of the Alberta regulations creates an exception to consent where:

(e) the personal information is contained in a publication, including, but not limited to, a magazine, book or newspaper, whether in printed or electronic form, but only if

(i) the publication is available to the public, and

(ii) it is reasonable to assume that the individual that the information is about provided that information; [My emphasis]

In their joint report of findings, the Commissioners found that their respective “publicly available information” exceptions did not extend to information posted on social media platforms.

Clearview AI made much of the wording of Alberta’s exception, arguing that even if it could be said that the PIPEDA language excluded social media platforms, the use of the words “including but not limited to” in the Alberta regulation made it clear that the list was not closed, nor was it limited to the types of publications referenced.

In interpreting the exceptions for publicly available information, the Commissioners emphasized the quasi-constitutional nature of privacy legislation. They found that privacy rights should receive a broad and expansive interpretation and that the exceptions to those rights should be interpreted narrowly. The commissioners also found significant differences between social media platforms and the more conventional types of publications referenced in their respective regulations, making it inappropriate to broaden the exception. Justice Feasby, applying reasonableness as the appropriate standard of review, found that the Alberta Commissioner’s interpretation of the exception was reasonable.

Freedom of Expression

Had the court’s decision ended there, the outcome would have been much the same as the result in the BC Supreme Court. However, in this case, Clearview AI also challenged the constitutionality of the regulations. It sought a declaration that if the exception were interpreted as limited to books, magazines and comparable publications, then this violated its freedom of expression under s. 2(b) of the Canadian Charter of Rights and Freedoms.

Clearview AI argued that its commercial purpose of scraping the internet to provide information services to its clients was expressive and was therefore protected speech. Justice Feasby noted that Clearview’s collection of internet-based information was bot-driven and not carried out by humans. Nevertheless, he found that “scraping the internet with a bot to gather images and information may be protected by s. 2(b) when it is part of a process that leads to the conveyance of meaning” (at para 104).

Interestingly, Justice Feasby noted that since Clearview no longer offered its services in Canada, any expressive activities took place outside of Canada, and thus were arguably not protected by the Charter. However, he acknowledged that the services had at one point been offered in Canada and could be again. He observed that “until Clearview removes itself permanently from Alberta, I must find that its expression in Alberta is restricted by PIPA and the PIPA Regulation” (at para 106).

Having found a prima facie breach of s. 2(b), Justice Feasby considered whether this was a reasonable limit demonstrably justified in a free and democratic society, under s. 1 of the Charter. The Commissioner argued that the expression at issue in this case was commercial in nature and thus of lesser value. Justice Feasby was not persuaded by category-based assumptions of value; rather, he preferred an approach in which the regulation of commercial expression is consistent with and proportionate to its character.

Justice Feasby found that the Commissioner’s reasonable interpretation of the exception in s. 7 of the regulations meant that it would exclude social media platforms or “other kinds of internet websites where images and personal information may be found” (at para 118). He noted that this is a source-based exception – in other words, some publicly available information may be used without knowledge or consent, but not other similar information. The exclusion depends on the source of the personal information, not the purpose for which it is used. Justice Feasby expressed concern that the same interpretation that would deny the exception to the scraping of images from the internet for the creation of a facial recognition database would also deny it to search engines widely used by individuals to gain access to information on the internet. He thus found that the publicly available information exception was overbroad, stating: “Without a reasonable exception to the consent requirement for personal information made publicly available on the internet without use of privacy settings, internet search service providers are subject to a mandatory consent requirement when they collect, use and disclose such personal information by indexing and delivering search results” (at para 138). He stated: “I take judicial notice of the fact that search engines like Google are an important (and perhaps the most important) way individuals access information on the internet” (at para 144).

Justice Feasby also noted that while it was important to give individuals some level of control over their personal information, “it must also be recognized that some individuals make conscious choices to make their images and information discoverable by search engines and that they have the tools in the form of privacy settings to prevent the collection, use, and disclosure of their personal information” (at para 143). His constitutional remedy, striking the words “including, but not limited to magazines, books, and newspapers” from the regulation, was designed to allow “the word ‘publication’ to take its ordinary meaning which I characterize as ‘something that has been intentionally made public’” (at para 149).

The Belt and Suspenders Approach

Although excising part of the publicly available information definition seems like a major victory for Clearview AI, in practical terms it is not. This is because of what the court refers to as the law’s “belt and suspenders approach”. This metaphor suggests that there are two routes to keep up privacy’s pants – and loosening the belt does not remove the suspenders. In this case, the suspenders are located in the clause found in PIPA, as well as in its federal and BC counterparts, that limits the collection, use and disclosure of personal information to only that which “a reasonable person would consider appropriate in the circumstances”. The court ruled that the Commissioner’s conclusion that the scraping of personal information was not for purposes that a reasonable person would consider appropriate in the circumstances was reasonable and should not be overturned. This approach, set out in the joint report of findings, emphasized that the company’s mass data scraping involved over 3 billion images of individuals, including children. It was used to create biometric face prints that would remain in Clearview’s databases even if the source images were removed from the internet, and it was carried out for commercial purposes. The commissioners also found that the purposes were not related to the reasons why individuals might have shared their photographs online, could be used to the detriment of those individuals, and created the potential for a risk of significant harm. Continuing with his analogy to search engines, Justice Feasby noted that Clearview AI’s use of publicly available images was very different from the use of the same images by search engines. The different purposes are essential to the reasonableness determination. Justice Feasby states: “The “purposes that are reasonable” analysis is individualized such that a finding that Clearview’s use of personal information is not for reasonable purposes does not apply to other organizations and does not threaten the operations of the internet” (at para 159). He noted that the commercial dimensions of the use are not determinative of reasonableness. However, he observed that “where images and information are posted to social media for the purpose of sharing with family and friends (or prospective friends), the commercialization of such images and information by another party may be a relevant consideration in determining whether the use is reasonable” (at para 160).

The result is that Clearview AI’s scraping of images from the public internet violates Alberta’s PIPA. The court further ruled that the Commissioner’s order was clear and specific, and capable of being implemented. Justice Feasby required Clearview AI to report within 50 days on its good faith progress in taking steps to cease the collection, use and disclosure of images and biometric data collected from individuals in Alberta, and to delete images and biometric data in its database that are from individuals in Alberta.

Harmonized Approaches to Data Protection Law in Canada

This decision highlights some of the challenges to the growing collaboration and cooperation of privacy commissioners in Canada when it comes to interpreting key terms and concepts in substantially similar legislation. Increasingly, the commissioners engage in joint investigations where complaints involve organizations operating in multiple jurisdictions in Canada. While this occurs primarily in the private sector context, it is not exclusively the case, as a recent joint investigation between the BC and Ontario commissioners into a health data breach demonstrates. Joint investigations conserve regulator resources and save private sector organizations from having to respond to multiple similar and concurrent investigations. In addition, joint investigations can lead to harmonized approaches and interpretations of shared concepts in similar legislation. This is a good thing for creating certainty and consistency for those who do business across Canadian jurisdictions.

However, harmonized approaches are vulnerable to multiple judicial review applications, as was the case following the Clearview AI investigation. Although the BC Supreme Court found that the BC Commissioner’s order was reasonable, the Alberta Court of King’s Bench decision demonstrates that a common front can be fractured. Justice Feasby found that a slight difference in wording between Alberta’s regulations and those in BC and at the federal level was sufficient to justify finding the scope of Alberta’s publicly available information exception to be unconstitutional.

Harmonized approaches may also be vulnerable to unilateral legislative change. In this respect, it is worth noting that an Alberta report on the impending reform of PIPA recommends “that the Government take all necessary steps, including through proposing amendments to the Personal Information Protection Act, to improve alignment of all provincial privacy legislation, including in the private, public and health sectors” (at p. 13).

The Elephant in the Room: Generative AI and Data Protection Law in Canada

In his reasons, Justice Feasby made Google’s search functions a running comparison for Clearview AI’s data scraping practices. Perhaps a better example would have been the data scraping that takes place in order to train generative AI models. However, the court may have avoided that example because there is an ongoing investigation by the Alberta, Quebec, BC and federal commissioners into OpenAI’s practices. The findings in that investigation are overdue – perhaps the delay has, at least in part, been caused by anticipation of what might happen with the Alberta Clearview AI judicial review. The Alberta decision will likely present a conundrum for the commissioners.

Reading between the lines of Justice Feasby’s decision, it is entirely possible that he would find that the scraping of the public internet to gather training data for generative AI systems would both fall within the exception for publicly available information and be for a purpose that a reasonable person would consider appropriate in the circumstances. Generative AI tools are now widely used – more widely even than search engines, since these tools are now also embedded in search engines themselves. To find that personal information indiscriminately available on the internet cannot be used in this way because consent is required would be fundamentally impractical. In the EU, the legitimate interest exception in the GDPR provides latitude for such use without consent, and recent guidance from the European Data Protection Supervisor suggests that legitimate interests, combined where appropriate with Data Protection Impact Assessments, may address key data protection issues.

In this sense, the approach taken by Justice Feasby seems to carve a path for data protection in a GenAI era in Canada by allowing data scraping of publicly available sources on the Internet in principle, subject to the limit that any such collection or any ensuing use or disclosure of the personal information must be for purposes that a reasonable person would consider appropriate in the circumstances. However, this is not a perfect solution. In the first place, unlike the EU approach, which ensures that other privacy protective measures (such as privacy impact assessments) govern this kind of mass collection, Canadian law remains outdated and inadequate. Further, the publicly available information exceptions – including Alberta’s even after its constitutional nip and tuck – also emphasize that, to use the language of Alberta’s PIPA, it must be “reasonable to assume that the individual that the information is about provided the information”. In fact, there will be many circumstances in which individuals have not provided the information posted online about them. This is the case with photos from parties, family events and other social interactions. Further, social media – and the internet as a whole – is full of non-consensual images, gossip, anecdotes and accusations.

The solution crafted by the Alberta Court of King’s Bench is therefore only a partial solution. A legitimate interest exception would likely serve much better in these circumstances, particularly if it is combined with broader governance obligations to ensure that privacy is adequately considered and assessed. Of course, before this happens, the federal government’s privacy reform measures in Bill C-27 must be resuscitated in some form or another.

 

Published in Privacy

The Commission d’accès à l’information du Québec (CAI) has released a decision regarding a pilot project to use facial recognition technology (FRT) in Métro stores in Quebec. When this is paired with a 2023 investigation report of the BC Privacy Commissioner regarding the use of FRT in Canadian Tire Stores in that province, there seems to be an emerging consensus around how privacy law will apply to the use of FRT in the retail sector in Canada.

Métro had planned to establish a biometric database to enable the use of FRT at certain of its stores operating under the Métro, Jean Coutu and Super C brands, on a pilot basis. The objective of the system was to reduce shoplifting and fraud. The system would function in conjunction with video surveillance cameras installed at the entrances and exits to the stores. The reference database would consist of images of individuals over the age of majority who had been linked to security incidents involving fraud or shoplifting. Images of all shoppers entering the stores would be captured on the video surveillance cameras and then converted to biometric face prints for matching with the face prints in the reference database.
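To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of the kind of matching pipeline described above. It is not Métro’s (or any vendor’s) actual implementation: the function names, the 128-dimension embedding, and the similarity threshold are all hypothetical placeholders, and the embedding step is stubbed out where a real system would run a trained face-recognition model.

# Conceptual sketch only -- not Métro's actual system. All names, dimensions
# and thresholds are hypothetical placeholders for illustration.
import numpy as np

def extract_faceprint(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model that converts a face image
    into a fixed-length biometric vector (a "faceprint")."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def match_against_reference(probe: np.ndarray,
                            reference_db: dict[str, np.ndarray],
                            threshold: float = 0.8) -> list[str]:
    """Compare one probe faceprint against every entry in the
    'persons of interest' reference database; return IDs above threshold."""
    return [pid for pid, ref in reference_db.items()
            if float(np.dot(probe, ref)) >= threshold]

# A faceprint is generated for every shopper captured at the entrance and
# matched against the reference database of past incidents.
reference_db = {"incident-001": extract_faceprint(np.ones((8, 8), dtype=np.uint8))}
entrance_frame = np.ones((8, 8), dtype=np.uint8)
probe = extract_faceprint(entrance_frame)
print(match_against_reference(probe, reference_db))

The sketch is useful mainly because it isolates the step that matters later in the decision: a biometric faceprint must be generated for every person who walks through the door before any match can be attempted, which is the step the CAI treats as a collection of sensitive biometric data in its own right.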

The CAI initiated an investigation after receiving notice from Métro of the creation of the biometric database. The company agreed to put its launch of the project on hold pending the results of the investigation.

The Quebec case involved the application of Quebec’s Act respecting the protection of personal information in the private sector (PPIPS) as well as its Act to establish a legal framework for information technology (LFIT). The LFIT requires an organization that is planning to create a database of “biometric characteristics and measurements” to disclose this fact to the CAI no later than 60 days before the database is to be used. The CAI can impose requirements and can also order the use suspended or the database destroyed if it is not in compliance with any such requirements or if it “otherwise constitutes an invasion of privacy” (LFIT art. 45).

Métro argued that the LFIT required individual consent only for the use of a biometric database to ‘confirm or verify’ the identity of an individual (LFIT s. 44). It maintained that its proposed use was different – the goal was not to confirm or verify the identities of shoppers; rather, it was to identify ‘high risk’ shoppers based on matches with the reference database. The CAI rejected this approach, noting the sensitivity of biometric data. Given the quasi-constitutional status of Canadian data protection laws, the CAI found that a ‘large and liberal’ approach to interpretation of the law was required. The CAI found that Métro was conflating the separate concepts of “verification” and “confirmation” of identity. In this case, biometric faceprints generated from the probe images would be used to search for a match in the “persons of interest” database. Even if the goal of generating the probe images was not to determine the precise identity of all customers – or to add those faceprints to the database – the underlying goal was to verify one attribute of the identity of shoppers: whether there was a match with the persons of interest database. This brought the system within the scope of the LFIT. The additional information in the persons of interest database, which could include the police report number, a description of the past incident, and related personal information, would facilitate the further identification of any matches.

Métro also argued that the validation or confirmation of identity did not happen in one single process and that therefore s. 44 of the LFIT was not engaged. The CAI dismissed what it described as the compartmentalisation of the process. Instead, the law required a consideration of the combined effect of all the steps in the operation of the system.

The company also argued that it had obtained the consent required under art. 12 of the PPIPS. It maintained that the video cameras captured shoppers’ images with their consent, as there was notice of the use of the cameras and the shoppers continued into the stores. It argued that the purposes for which it used the biometric data were consistent with the purposes for which the security cameras were installed, making it a permissible secondary use under s. 12(1) of PPIPS. The CAI rejected this argument, noting that it was not a question of a single collection and a related secondary use. Rather, the generation of biometric faceprints from images captured on video is an independent collection of personal data. That collection must comply with data protection requirements and cannot be treated as a secondary use of already collected data.

The system proposed by Métro would be applied to any person entering the designated stores; participation was, in effect, an entry requirement. Individuals would have no ability to opt out and still shop, and there were no alternatives to participation in the FRT scheme. Not only was consent impossible for the general population entering the stores, but those whose images became part of the persons of interest database would also have no choice in the matter.

Métro argued that its obligation to protect its employees and the public outweighed the privacy interests of its customers. The CAI rejected this argument, noting that this was not the test set out in the LFIT, which asks instead whether the database of biometric characteristics “otherwise constitutes an invasion of privacy” (art. 45). The CAI was of the view that creating a database of biometric characteristics, and matching those characteristics against faceprints generated from data captured from the public without their consent in circumstances where the law required consent, amounted to a significant infringement of privacy rights. The Commission emphasized again the highly sensitive character of the personal data and issued an order prohibiting the implementation of the proposed system.

The December 2023 BC investigation report was based on that province’s Personal Information Protection Act. It followed a commissioner-initiated investigation into the use by several Canadian Tire stores in BC of FRT systems integrated with video surveillance cameras. As in the Métro pilot, biometric face prints were generated from the surveillance footage and matched against a persons-of-interest database. The stated goals of the systems were similar as well – to reduce shoplifting and enhance the security of the stores. As was the case in Quebec, the BC Commissioner found that the generation of biometric face prints was a new collection of personal information that required express consent. The Commissioner found that the stores had not provided adequate notice of collection, making the issue of consent moot. However, he went on to find that even if there had been proper notice, express consent had not been obtained, and consent could not be implied in the circumstances. The collection of biometric faceprint data from everyone entering the stores in question was not for a purpose that a reasonable person would consider appropriate, given the acute sensitivity of the data collected and the risks to individuals that might flow from its misuse, inaccuracy, or from data breaches. Interestingly, in BC, the four stores under investigation removed their FRT systems soon after receiving the notice of investigation. During the investigation, the Commissioner found little evidence to support the need for the systems, with store personnel admitting that the systems added little to their normal security functions. He chastised the retailers for failing both to conduct privacy impact assessments prior to adoption and to put in place measures to evaluate the effectiveness and performance of the systems.

An important difference between the two cases relates to the ability of the CAI to be proactive. In Quebec, the LFIT requires notice of the creation of a biometric database to be provided to the Commission in advance of its implementation. This enabled the CAI to rule on the appropriateness of the system before privacy was adversely affected on a significant scale. By contrast, the systems in BC were in operation for three years before sufficient awareness surfaced to prompt an investigation. Now that powerful biometric technologies are widely available for retail and other uses, governments should be thinking seriously about reforming private sector privacy laws to provide for advance notice requirements – at the very least for biometric systems.

Following both the Quebec and the BC cases, it is difficult to see how broad-based FRT systems integrated with store security cameras could be deployed in a manner consistent with data protection laws – at least under current shopping business models. This suggests that such uses may be emerging as a de facto no-go zone in Canada. Retailers may argue that this reflects a problem with the law, to the extent that it interferes with their business security needs. Yet if privacy is to mean anything, there must be reasonable limits on the collection of personal data – particularly sensitive data. Just because something can be done, does not mean it should be. Given the rapid advance of technology, we should be carefully attuned to this. Being FRT face-printed each time one goes to the grocery store for a carton of milk may simply be an unacceptably disproportionate response to an admittedly real problem. It is a use of technology that places burdens and risks on ordinary individuals who have not earned suspicion, and who may have few other choices for accessing basic necessities.

 

Published in Privacy

The Clearview AI saga has a new Canadian instalment. In December 2024, the British Columbia Supreme Court rendered a decision on Clearview AI’s application for judicial review of an order issued by the BC Privacy Commissioner. This post explores that decision and some of its implications. The first part sets the context, the next discusses the judicial review decision, and part three looks at the ramifications for Canadian privacy law of the larger (and ongoing) legal battle.

Context

Late in 2021, the Privacy Commissioners of BC, Alberta, Quebec and Canada issued a joint report on their investigation into Clearview AI (My post on this order is here). Clearview AI, a US-based company, had created a massive facial recognition (FRT) database from images scraped from the internet that it marketed to law enforcement agencies around the world. The investigation was launched after a story broke in the New York Times about Clearview’s activities. Although Canadian police services initially denied using Clearview AI, the RCMP later admitted that it had purchased two licences. Other Canadian police services made use of promotional free accounts.

The joint investigation found that Clearview AI had breached the private sector data protection laws of the four investigating jurisdictions by collecting and using sensitive personal information without consent and by doing so for purposes that a reasonable person would not consider appropriate in the circumstances. The practices also violated Quebec’s Act to establish a legal framework for information technology. Clearview AI disagreed with these conclusions. It indicated that it would temporarily cease its operations in Canada but maintained that it was entitled to scrape content from the public web. After the company failed to respond to the recommendations in the joint report, the Commissioners of Quebec, BC and Alberta issued orders against it. These orders required Clearview AI to cease offering its services in their jurisdictions, to make best efforts to stop collecting the personal information of those within their respective provincial boundaries, and to delete personal information in its databases that had been improperly collected from those within their boundaries. No order was issued by the federal Commissioner, who does not have order-making powers under the Personal Information Protection and Electronic Documents Act (PIPEDA). He could have applied to the Federal Court for an order but chose not to do so (more on that in Part 3 of this post).

Clearview AI declined to comply with the provincial orders, other than to note that it had already temporarily ceased operations in Canada. It then applied for judicial review of the orders in each of the three provinces.

To date, only the challenge to the BC Order has been heard and decided. In the BC application, Clearview argued that the Commissioner’s decision was unreasonable. Specifically, it argued that BC’s Personal Information Protection Act (PIPA) did not apply to Clearview AI, that the information it scraped was exempt from consent requirements because it was “publicly available information”, and that the Commissioner’s interpretation of purposes that a reasonable person would consider appropriate in the circumstances was unreasonable and failed to consider Charter values. In his December 2024 decision, Justice Shergill of the BC Supreme Court disagreed, upholding the Commissioner’s order.

The BC Supreme Court Decision on Judicial Review

Justice Shergill confirmed that BC’s PIPA applies to Clearview AI’s activities, notwithstanding the fact that Clearview AI is a US-based company. He noted that applying the ‘real and substantial connection’ test – which considers the nature and extent of connections between a party’s activities and the jurisdiction in which proceedings are initiated – leads to that conclusion. There was evidence that Clearview AI’s database had been marketed to and used by police services in BC, as well as by the RCMP which polices many parts of the province. Further, Justice Shergill noted that Clearview’s data scraping practices were carried out worldwide and captured data about BC individuals including, in all likelihood, data from websites hosted in BC. Interestingly, he also found that Clearview’s scraping of images from social media sites such as Facebook, YouTube and Instagram also created sufficient connection, as these sites “undoubtedly have hundreds of thousands if not millions of users in British Columbia” (at para 91). In reaching his conclusion, Justice Shergill emphasized “the important role that privacy plays in the preservation of our societal values, the ‘quasi-constitutional’ status afforded to privacy legislation, and the increasing significance of privacy laws as technology advances” (at para 95). He also found that there was nothing unfair about applying BC’s PIPA to Clearview AI, as the company “chose to enter British Columbia and market its product to local law enforcement agencies. It also chooses to scrape data from the Internet which involves personal information of people in British Columbia” (at para 107).

Sections 12(1)(e), 15(1)(e) and 18(1)(e) of PIPA provide exceptions to the requirement of knowledge and consent for the collection, use and disclosure of personal information where “the personal information is available to the public” as set out in regulations. The PIPA Regulations include “printed or electronic publications, including a magazine, book, or newspaper in printed or electronic form.” Similar exceptions are found in the federal PIPEDA and in Alberta’s Personal Information Protection Act. Clearview AI had argued that public internet websites, including social media sites, fell within the category of electronic publications and their scraping was thus exempt from consent requirements. The commissioners disagreed, and Clearview AI challenged this interpretation as unreasonable.

Justice Shergill found that the Commissioners’ conclusion that social media websites fell outside the exception for publicly available information was reasonable. The BC Commissioner was entitled to read the list in the PIPA Regulations as a “narrow set of sources” (at para 160). Justice Shergill reviewed the reasoning in the joint report for why social media sites should be treated differently from other types of publications mentioned in the exception. These include the fact that social media sites are dynamic and not static and that individuals exercise a different level of control over their personal information on social media platforms than on news or other such sites. Although the legislation may require a balancing of privacy rights with private sector interests, Justice Shergill found that it was reasonable for the Commissioner to conclude that privacy rights should be given precedence over commercial interests in the overall context of the legislation. Referencing the Supreme Court of Canada’s decision in Lavigne, Justice Shergill noted that “it is the protection of individual privacy that supports the quasi-constitutional status of privacy legislation, not the right of the organization to collect and use personal information” (at para 174). An individual’s ability to control what happens to their personal information is fundamental to the autonomy and dignity protected by privacy rights and “it is thus reasonable to conclude that any exception to these important rights should be interpreted narrowly” (at para 175).

Clearview AI argued that posting photos to social media sites reflected an individual’s autonomous choice to surrender the information to the public domain. Justice Shergill preferred the Commissioner’s interpretation, which considered the sensitivity of the biometric information and the impact its collection and use could have on individuals. He referenced the Supreme Court of Canada’s decision in R. v. Bykovets (my post on this case is here), which emphasized that individuals “may choose to divulge certain information for a limited purpose, or to a limited class of persons, and nonetheless retain a reasonable expectation of privacy” (at para 162, citing para 46 of Bykovets).

Clearview AI also argued that the Commissioner was unreasonable in not taking into account Charter values in his interpretation of PIPA. In particular, the company was of the view that the freedom of expression, which guarantees the right both to communicate and to receive information, extended to the ability to access and use publicly available information without restriction. Although Justice Shergill found that the Commissioner could have been more direct in his consideration of Charter values, his decision was still not unreasonable on this point. The Commissioner did not engage with the Charter values issues at length because he did not consider the law to be ambiguous – Charter values-based interpretation comes into play in helping to resolve ambiguities in the law. As Justice Shergill noted, “It is difficult to understand how Clearview’s s. 2(b) Charter rights are infringed through an interpretation of ‘publicly available’ which excludes it from collecting personal information from social media websites without consent” (at para 197).

Like its counterpart legislation in Alberta and at the federal level, BC’s PIPA contains a section that articulates the overarching principle that any collection, use or disclosure of personal information must be for purposes that a reasonable person would consider appropriate in the circumstances. This means, among other things, that even if the exception to consent had applied in this case, the collection and use of the scraped personal information would still have had to be for a reasonable purpose.

The Commissioners had found that overall, Clearview’s scraping of vast quantities of sensitive personal information from the internet to build a massive facial recognition database was not one that a reasonable person would find appropriate in the circumstances. Clearview AI preferred to characterize its purpose as providing a service to the benefit of law enforcement and national security. In their joint report, the Commissioners had rejected this characterization noting that it did not justify the massive, widespread scraping of personal information by a private sector company. Further, the Commissioners had noted that such an activity could have negative consequences for individuals, including cybersecurity risks and risks that errors could lead to reputational harm. They also observed that the activity contributed to “broad-based harm inflicted on all members of society, who find themselves under continual mass surveillance by Clearview based on its indiscriminate scraping and processing of their facial images” (at para 253). Justice Shergill found that the record supported these conclusions, and that the Commissioners’ interpretation of reasonable purposes was reasonable.

Clearview AI also argued that the Commissioner’s Order was “unnecessary, unenforceable or overbroad”, and should thus be quashed (at para 258). Justice Shergill accepted the Commissioner’s argument that the order was necessary because Clearview had only temporarily suspended its services in Canada, leaving open the possibility that it would offer its services to Canadian law enforcement agencies in the future. He also accepted the Commissioner’s argument that compliance with the order was possible, noting that Clearview had accepted certain steps for ceasing collection and removing images in its settlement of an Illinois class action lawsuit. The order required the company to use “best efforts”, in an implicit acknowledgement that a perfect solution was likely impossible. Clearview argued that a “best efforts” standard was too vague to be enforceable; Justice Shergill disagreed, noting that courts often use “best efforts” language. Further, and quite interestingly, Justice Shergill noted that “if it is indeed impossible for Clearview to sufficiently identify personal information sourced from people in British Columbia, then this is a situation of Clearview’s own making” (at para 279). He noted that “[i]t is not an answer for Clearview to say that because the data was indiscriminately collected, any order requiring it to cease collecting data of persons present in a particular jurisdiction is unenforceable” (at para 279).

Implications

This is a significant decision, as it upholds the Commissioner’s interpretations of important provisions of BC’s PIPA. These provisions are similar to ones in Alberta’s PIPA and in the federal PIPEDA. However, it is far from the end of the Clearview AI saga, and there is much to continue to watch.

In the first place, the BC Supreme Court decision is already under appeal to the BC Court of Appeal. If the Court of Appeal upholds this decision, it will be a major victory for the BC Commissioner. Yet, either way, there is likely to be a further application for leave to appeal to the Supreme Court of Canada. It may be years before the issue is finally resolved. In this time, data protection laws in BC, Alberta and at the federal level might well be reformed. It will therefore also be important to examine any new bills to see whether the provisions at issue in this case are addressed in any way or left as is.

In the meantime, Clearview AI has also filed for judicial review of the orders of the Quebec and Alberta commissioners, and these applications are moving forward. All three orders (BC, Alberta and Quebec) are based on the same joint findings. A decision by either or both of the Quebec or Alberta superior courts that the orders are unreasonable could deal a significant blow to the united front that Canada’s commissioners are increasingly showing on privacy issues that affect all Canadians. There is therefore a great deal riding on the outcomes of these applications. In any event, regardless of the outcomes, expect applications for leave to appeal to the Supreme Court of Canada. Leave to appeal is less likely to be granted if all three provincial courts of appeal take a similar approach to the issues. It is at this point impossible to predict how this litigation will play out.

It is notable that the Privacy Commissioner of Canada, who has no order-making powers under PIPEDA but who can apply to the Federal Court for an order, declined to do so. Under PIPEDA, such an application requires a hearing de novo before the Federal Court – this means that, unlike in the judicial review proceedings in the other provinces, the Federal Court need not show any deference to the federal Commissioner’s findings. Instead, the Court would proceed to a determination of the issues after hearing and considering the parties’ evidence and argument. One might wonder whether the rather bruising decision of the Federal Court in Privacy Commissioner v. Facebook (which was subsequently overturned by the Federal Court of Appeal) influenced the Commissioner not to roll the dice on seeking an order with so much at stake. That a hearing de novo before the Federal Court could upset the apple cart of the Commissioners’ attempts to co-ordinate efforts, reduce duplication and harmonize interpretation is sobering. Yet it also means that if this litigation saga ends with the conclusion that the orders are reasonable and enforceable, BC, Alberta and Quebec residents will have received results in the form of orders requiring Clearview to delete their images and to geo-fence any future collection of images to protect those within those provinces (orders which will still need to be made enforceable in the US) – while Canadians elsewhere in the country will not. Canadians will need the long promised but as yet undelivered reform of PIPEDA to address the ability of the federal Commissioner to issue orders – ones that would be subject to judicial review with appropriate deference, rather than second-guessed by the Personal Information and Data Protection Tribunal proposed in Bill C-27.

Concluding thoughts

Despite rulings from privacy and data protection commissioners around the world that Clearview AI is in breach of their respective laws, and notwithstanding two class action lawsuits in the US under the Illinois Biometric Information Privacy Act, the company has continued to grow its massive FRT database. At the time of the Canadian investigation, the database was said to hold 3 billion images. Current reports place this number at over 50 billion. Considering the resistance of the company to compliance with Canadian law, this raises the question of what it will take to motivate compliance by resistant organizations. As the proposed amendments to Canada’s federal private sector privacy laws wither on the vine after neglect and mismanagement in their journey through Parliament, this becomes a pressing and important question.

 

Published in Privacy

A battle over the protection of personal information in the hands of federal political parties (FPPs) has been ongoing for several years in British Columbia. The BC Supreme Court has just released a decision that marks a significant defeat for the FPPs in their quest to ensure that only minimal privacy obligations apply to their growing collection, use and disclosure of personal information. Although the outcome only green-lights the investigation by BC’s Office of the Information and Privacy Commissioner (OIPC) into the Liberal, New Democrat and Conservative parties’ compliance with the province’s Personal Information Protection Act (PIPA), it is still an important victory for the complainants. The decision affirms the constitutional applicability of PIPA to the FPPs. The tone of the decision also sends a message. It opens with: “The ability of an individual to control their personal information is intimately connected to their individual autonomy, dignity and privacy.” Justice Weatherill confirms that “These fundamental values lie at the heart of democracy” (at para 1).

The dispute originated with complaints brought in 2019 by three BC residents (the complainants) who sought access under PIPA to their personal information in the hands of each of the three main FPPs in their BC ridings. They wanted to know what information had been collected about them, how it was being used, and to whom it was being disclosed. This access right is guaranteed under PIPA. By contrast no federal law – whether relating to privacy or to elections – provides an equivalent right with respect to political parties. The Canada Elections Act (CEA) was amended in 2018 to include a very limited obligation for FPPs to have privacy policies approved by the Chief Electoral Officer (CEO), published, and kept up to date. These provisions did not include access rights, oversight, or a complaints mechanism. When the responses of the FPPs to the complainants’ PIPA requests proved inadequate, the complainants filed complaints with the OIPC, which initiated an investigation.

Disappointingly, the FPPs resisted this investigation from the outset. They challenged the constitutional basis for the investigation, arguing that the BC law could not apply to FPPs. This issue was referred to an outside adjudicator, who heard arguments and rendered a decision in March 2022. He found that the term “organization” in PIPA included FPPs that collected information about BC residents and that PIPA’s application to the FPPs was constitutional. In April 2022, the FPPs individually filed applications for judicial review of this decision. The adjudicator ruled that he would pause his investigation until the constitutional issues were resolved.

In June of 2023, while the judicial review proceedings were ongoing, the government tabled amendments to the CEA in Bill C-47. These amendments (now passed) permit FPPs to “collect, use, disclose, retain and dispose of personal information in accordance with the party’s privacy policy” (s. 385.1). Section 385.2(3) states: “The purpose of this section is to provide for a national, uniform, exclusive and complete regime applicable to registered parties and eligible parties respecting their collection, use, disclosure, retention and disposal of personal information”. The amendments were no doubt intended to reinforce the constitutional arguments being made in the BC litigation.

In his discussion of these rather cynical amendments, Justice Weatherill quoted extensively from statements of the Chief Electoral Officer of Canada before the Senate Standing Committee on Legal and Constitutional Affairs in which he discussed the limitations of the privacy provisions in the CEA, including the lack of substantive rights and the limited oversight/enforcement. The CEO is quoted as stating “Not a satisfactory regime, if I’m being perfectly honest” (at para 51).

Support for extending privacy obligations to political parties has been gaining momentum, particularly in light of increasingly data-driven strategies, the use of profiling and targeting by political parties, concerns over the security of such detailed information, and general frustration over politicians being able to set their own rules for conduct that would be considered unacceptable from any other actor in the public or private sector. Perhaps sensing this growing frustration, the federal government introduced Bill C-65 in March of 2024. Among other things, this bill would provide some enforcement powers to the CEO with respect to the privacy obligations in the CEA. Justice Weatherill declined to consider this Bill in his decision, noting that it might never become law and was thus irrelevant to the proceedings.

Justice Weatherill ruled that BC’s PIPA applies to organizations, and that FPPs active in the province fall within the definition of “organization”. The FPPs argued that PIPA should be found inoperative to the extent that it is incompatible with federal law under the constitutional doctrine of paramountcy. They maintained that the CEA addressed the privacy obligations of political parties and that the provincial legislation interfered with that regime. Justice Weatherill disagreed, citing the principle of cooperative federalism. Under this approach, the doctrine of paramountcy receives a narrow interpretation, and where possible “harmonious interpretations of federal and provincial legislation should be favoured over interpretations that result in incompatibility” (at para 121). He found that while PIPA set a higher standard for privacy protection, the two laws were not incompatible. PIPA did not require FPPs to do something that was prohibited under the federal law – all it did was provide additional obligations and oversight. There was no operational conflict between the laws – FPPs could comply with both. Further, there was nothing in PIPA that prevented the FPPs from collecting, using or disclosing personal information for political purposes. It simply provided additional protections.

Justice Weatherill also declined to find that the application of PIPA to FPPs frustrated a federal purpose. He found that there was no evidence to support the argument that Parliament intended “to establish a regime in respect of the collection and use of personal information by FPPs” (at para 146). He also found that the evidence did not show that it was a clear purpose of the CEA privacy provisions “to enhance, protect and foster the FPPs’ effective participation in the electoral process”. He found that the purpose of these provisions was simply to ensure that the parties had privacy policies in place. Nothing in PIPA frustrated that purpose; rather, Justice Weatherill found that even if there was a valid federal purpose with respect to the privacy policies, “PIPA is in complete alignment with that purpose” (at para 158).

Justice Weatherill also rejected arguments that the doctrine of interjurisdictional immunity meant that the federal government’s legislative authority over federal elections could not be allowed to be impaired by BC’s PIPA. According to this argument the Chief Electoral Officer was to have the final say over the handling of personal information by FPPs. The FPPs argued that elections could be disrupted by malefactors who might use access requests under PIPA in a way that could lead to “tying up resources that would otherwise be focused on the campaign and subverting the federal election process” (at para 176). Further, if other provincial privacy laws were extended to FPPs, it might mean that FPPs would have to deal with multiple privacy commissioners, bogging them down even further. Justice Weatherill rejected these arguments, stating:

Requiring FPPs to disclose to British Columbia citizens, on request, the personal information they have about the citizen, together with information as to how it has been used and to whom it has been disclosed has no impact on the core federal elections power. It does not “significantly trammel” the ability of Canadian citizens to seek by lawful means to influence fellow electors, as was found to have been the case in McKay. It does not destroy the right of British Columbians to engage in federal election activity. At most, it may have a minimal impact on the administration of FPPs. This impact is not enough to trigger interjurisdictional immunity. All legislation carries with it some burden of compliance. The petitioners have not shown that this burden is so onerous as to impair them from engaging with voters. (at para 182).

Ultimately, Justice Weatherill ruled that there was no constitutional barrier to the application of PIPA. The result is that the matter goes back to the OIPC for investigation and determination on the merits. It has been a long, drawn out and expensive process so far, but at least this decision is an unequivocal affirmation of the application of basic privacy principles (at least in BC) to the personal information handling practices of FPPs. It is time for Canada’s political parties to accept obligations similar to those imposed on private sector organizations. If they want to collect, use and disclose data in increasingly complex data-driven voter profiling and targeting activities they need to stop resisting the commensurate obligations to treat that information with care and to be accountable for their practices.

Published in Privacy

Ontario’s Information and Privacy Commissioner has released a report on an investigation into the use by McMaster University of artificial intelligence (AI)-enabled remote proctoring software. In it, Commissioner Kosseim makes findings and recommendations under the province’s Freedom of Information and Protection of Privacy Act (FIPPA) which applies to Ontario universities. Interestingly, noting the absence of provincial legislation or guidance regarding the use of AI, the Commissioner provides additional recommendations on the adoption of AI technologies by public sector bodies.

AI-enabled remote proctoring software saw a dramatic uptake during the pandemic as university classes migrated online. It was also widely used by professional societies and accreditation bodies. Such software monitors those writing online exams in real time, recording both audio and video, and using AI to detect anomalies that may indicate that cheating is taking place. Certain noises or movements generate ‘flags’ that lead to further analysis by AI and ultimately by the instructor. If the flags are not resolved, academic integrity proceedings may ensue. Although many universities, including the respondent McMaster, have since returned to in-person exam proctoring, AI-enabled remote exam surveillance remains an option where in-person invigilation is not possible, including for courses delivered online to students in diverse and remote locations.
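To illustrate what a ‘flag’ means in practice, the following is a minimal, hypothetical Python sketch of this kind of flagging logic. It is not Respondus Monitor’s actual algorithm: the event types and thresholds are assumptions made purely for illustration, and a real system would rely on trained audio and video models rather than fixed rules.

# Hypothetical sketch of exam-flagging logic -- not Respondus Monitor's
# actual algorithm. Event names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ExamEvent:
    timestamp: float       # seconds into the exam
    audio_level: float     # 0.0 (silence) to 1.0 (loud)
    faces_detected: int    # faces visible in the webcam frame
    gaze_off_screen: bool  # whether gaze estimation says the student looked away

def flag_events(events: list[ExamEvent],
                audio_threshold: float = 0.6) -> list[tuple[float, str]]:
    """Return (timestamp, reason) flags for later review.
    A flag is only a prompt for further analysis, not a finding of cheating."""
    flags = []
    for e in events:
        if e.audio_level > audio_threshold:
            flags.append((e.timestamp, "unexpected noise"))
        if e.faces_detected != 1:
            flags.append((e.timestamp, "zero or multiple faces in frame"))
        if e.gaze_off_screen:
            flags.append((e.timestamp, "gaze away from screen"))
    return flags

session = [
    ExamEvent(12.0, 0.1, 1, False),
    ExamEvent(340.5, 0.7, 1, False),   # a cough or background noise
    ExamEvent(1021.2, 0.2, 2, True),   # someone else enters the room
]
print(flag_events(session))

Even this toy version makes the privacy concern visible: flags fire on ordinary behaviour (a cough, a family member walking past), and it is the downstream human or institutional response to those flags that determines whether students face unwarranted academic integrity consequences.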

The Commissioner’s investigation related to the use by McMaster University of two services offered by the US-based company Respondus: Respondus Lockdown Browser and Respondus Monitor. Lockdown Browser consists of software downloaded by students onto their computers that blocks access to the internet and to other files on the computer during an exam. Respondus Monitor is the AI-enabled remote proctoring application. This post focuses on Respondus Monitor.

AI-enabled remote proctoring systems have raised concerns about both privacy and broader human rights issues. These include the intrusiveness of the constant audio and video monitoring, the capturing of data from private spaces, uncertainty over the treatment of personal data collected by such systems, adverse impacts on already marginalised students, and the enhanced stress and anxiety that comes from both constant surveillance and easily triggered flags. The broader human rights issues, however, are an uncomfortable fit with public sector data protection law.

Commissioner Kosseim begins with the privacy issues, finding that Respondus Monitor collects personal information that includes students’ names and course information, images of photo identification documents, and sensitive biometric data in audio and video recordings. Because the McMaster University Act empowers the university to conduct examinations and appoint examiners, the Commissioner found that the collection was carried out as part of a lawfully authorized activity. Although exam proctoring had chiefly been conducted in-person prior to the pandemic, she found that there was no “principle of statute or common law that would confine the method by which the proctoring of examinations may be conducted by McMaster to an in-person setting” (at para 48). Further, she noted that even post-pandemic, there might still be reasons to continue to use remote proctoring in some circumstances. She found that the university had a legitimate interest in attempting to curb cheating, noting that evidence suggested an upward trend in academic integrity cases, and a particular spike during the pandemic. She observed that “by incorporating online proctoring into its evaluation methods, McMaster was also attempting to address other new challenges that arise in an increasingly digital and remote learning context” (at para 50).

The collection of personal information must be necessary to a lawfully authorized activity carried out by a public body. Commissioner Kosseim found that the information captured by Respondus Monitor – including the audio and video recordings – was “technically necessary for the purpose of conducting and proctoring the exams” (at para 60). Nevertheless, she expressed concerns over the increased privacy risks that accompany this continual surveillance of examinees. She was also troubled by McMaster’s assertion that it “retains complete autonomy, authority, and discretion to employ proctored online exams, prioritizing administrative efficiency and commercial viability, irrespective of necessity” (at para 63). She found that the necessity requirement in s. 38(2) of FIPPA applied, and that efficiency or commercial advantage could not displace it. She noted that the kind of personal information collected by Respondus Monitor was particularly sensitive, creating “risks of unfair allegations or decisions being made about [students] based on inaccurate information” (at para 66). In her view, “[t]hese risks must be appropriately mitigated by effective guardrails that the university should have in place to govern its adoption and use of such technologies” (at para 66).

FIPPA obliges public bodies to provide adequate notice of the collection of personal information. Commissioner Kosseim reviewed the information made available to students by McMaster University. Although she found overall that it provided students with useful information, students had to locate different pieces of information on different university websites. The need to check multiple sites to get a clear picture of the operation of Respondus Monitor did not satisfy the notice requirement, and the Commissioner recommended that the university prepare a “clear and comprehensive statement either in a single source document, or with clear cross-references to other related documents” (at para 70).

Section 41(1) of FIPPA limits the use of personal information collected by a public body to the purpose for which it was obtained or compiled, or for a consistent purpose. Although the Commissioner found that the analysis of the audio and video recordings to generate flags was consistent with the collection of that information, the use by Respondus of samples of the recordings to improve its own systems – or to allow third party research – was not. On this point, there was an important difference in interpretation. Respondus appeared to define personal information as personal identifiers such as names and ID numbers; it treated audio and video clips that lacked such identifiers as “anonymized”. However, under FIPPA, audio and video recordings of individuals are personal information. No provision was made for students either to consent to or opt out of this secondary use of their personal information. Commissioner Kosseim noted that Respondus had made public statements that when operating in some jurisdictions (including California and EU member states) it did not use audio or video recordings for research or to improve its products or services. She recommended that McMaster obtain a similar undertaking from Respondus not to use its students’ information for these purposes. The Commissioner also noted that Respondus’ treatment of the audio and video recordings as anonymized data meant that it did not have adequate safeguards in place for this personal information.

Respondus’ Terms of Service provide that the company reserves the right to disclose personal information for law enforcement purposes. Commissioner Kosseim found that McMaster should require, in its contract with Respondus, that Respondus notify it promptly of any compelled disclosure of its students’ personal information to law enforcement or to government, and limit any such disclosure to the specific information it is legally required to disclose. She also set a retention limit for the audio and video recordings at one year, with Respondus to confirm deletion once this period ends.

One of the most interesting aspects of this report is the section titled “Other Recommendations”, in which the Commissioner addresses the adoption of an AI-enabled technology by a public institution in a context in which “there is no current law or binding policy specifically governing the use of artificial intelligence in Ontario’s public sector” (at para 134). The development and adoption of these technologies is outpacing the evolution of law and policy, leaving important governance gaps. In May 2023, Commissioner Kosseim and Commissioner DeGuire of the Ontario Human Rights Commission issued a joint statement urging the Ontario government to take action to put in place an accountability framework for public sector AI. Even as governments acknowledge that these technologies create risks of discriminatory bias and other potential harms, there remains little to govern AI systems outside the piecemeal coverage offered by existing laws such as, in this case, FIPPA. Although the Commissioner’s interpretation and application of FIPPA addressed issues relating to the collection, use and disclosure of personal information, there remain important issues that cannot be addressed through privacy legislation.

Commissioner Kosseim acknowledged that McMaster University had “already carried out a level of due diligence prior to adopting Respondus Monitor” (at para 138). Nevertheless, given the risks and potential harms of AI-enabled technologies, she made a number of further recommendations. The first was to conduct an Algorithmic Impact Assessment (AIA) in addition to a Privacy Impact Assessment. She suggested that the federal government’s AIA tool could be a useful guide while waiting for one to be developed for Ontario. An AIA could give the adopter of an AI system better insight into the data used to train the algorithms, and could assess impacts on students that go beyond privacy (including discrimination, increased stress, and harms from false positive flags). She also called for meaningful consultation and engagement with those affected by the adoption of the technology, both before the adoption of the system and on an ongoing basis thereafter. Although the university may have had to react very quickly given that the first COVID shutdown occurred shortly before an exam period, an iterative engagement process even now would be useful “for understanding the full scope of potential issue[s] that may arise, and how these may impact, be perceived, and be experienced by others” (at para 142). She noted that this type of engagement would allow adopters to be alert and responsive to problems both prior to adoption and as they arise during deployment. She also recommended that the consultations include experts in both privacy and human rights, as well as those with technological expertise.

Commissioner Kosseim also recommended that the university consider providing students with ways to opt out of the use of these technologies other than through requesting accommodations related to disabilities. She noted that “AI-powered technologies may potentially trigger other protected grounds under human rights that require similar accommodations, such as color, race or ethnic origin” (at para 147). On this point, it is worth noting that the use of remote proctoring software creates a context in which some students may need to be accommodated for disabilities or other circumstances that have nothing to do with their ability to write their exam, but that instead affect the way in which the proctoring systems read their faces, interpret their movements, or process the sounds in their homes. Commissioner Kosseim encouraged McMaster University “to make special arrangements not only for students requesting formal accommodation under a protected ground in human rights legislation, but also for any other students having serious apprehensions about the AI-enabled software and the significant impacts it can have on them and their personal information” (at para 148).

Commissioner Kosseim also recommended that there be an appropriate level of human oversight to address the flagging of incidents during proctoring. Although flags were to be reviewed by instructors before deciding whether to proceed to an academic integrity investigation, the Commissioner found it unclear whether there was a mechanism for students to challenge or explain flags prior to escalation to the investigation stage. She recommended that there be such a procedure and, if one already existed, that it be explained clearly to students. She further recommended that a public institution’s inquiry into the suitability of an AI-enabled technology for adoption should take into account more than just privacy considerations. For example, the public body’s inquiries should consider the nature and quality of training data. Further, the public body should remain accountable for its use of AI technologies “throughout their lifecycle and across the variety of circumstances in which they are used” (at para 165). Not only should the public body monitor the performance of the tool and alert the supplier to any issues, but the supplier should also be under a contractual obligation to inform the public body of any issues that arise with the system.

The outcome of this investigation offers important lessons and guidance for universities – and for other public bodies – regarding the adoption of third-party AI-enabled services. For the many Ontario universities that adopted remote proctoring during the pandemic, there are recommendations that should push those still using these technologies to revisit their contracts with vendors – and to consider putting in place processes to measure and assess the impact of these technologies. Although some of these recommendations fall outside the scope of FIPPA, the advice is still sage and likely anticipates what one can only hope is imminent guidance for Ontario’s public sector.

Published in Privacy

On October 26, 2023, I appeared as a witness before the INDU Committee of the House of Commons, which is holding hearings on Bill C-27. Although I would have preferred to address the Artificial Intelligence and Data Act, it was clear that the Committee was prioritizing study of the Consumer Privacy Protection Act, in part because the Minister of Industry had yet to produce the text of amendments to the AI and Data Act which he had previously outlined in a letter to the Committee Chair. It is my understanding that witnesses will not be called twice. As a result, I will be posting my comments on the AI and Data Act on my blog.

The other witnesses heard at the same time included Colin Bennett, Michael Geist, Vivek Krishnamurthy and Brenda McPhail. The recording of that session is available here.

__________

Thank you, Mr Chair, for the invitation to address this committee.

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I appear today in my personal capacity. I have concerns with both the CPPA and AIDA. Many of these have been communicated in my own writings and in the report submitted to this committee by the Centre for Digital Rights. My comments today focus on the Consumer Privacy Protection Act. I note, however, that I have very substantial concerns about the AI and Data Act and would be happy to answer questions on it as well.

Let me begin by stating that I am generally supportive of the recommendations of Commissioner Dufresne for the amendment of Bill C-27 set out in his letter of April 26, 2023, to the Chair of this Committee. I will also address 3 other points.

The Minister has chosen to retain consent as the backbone of the CPPA, with specific exceptions to consent. One of the most significant of these is the “legitimate interest” exception in s. 18(3). This allows organizations to collect or use personal information without knowledge or consent if it is for an activity in which an organization has a legitimate interest. There are guardrails: the interest must outweigh any adverse effects on the individual; it must be one which a reasonable person would expect; and the information must not be collected or used to influence the behaviour or decisions of the individual. There are also additional documentation and mitigation requirements.

The problem lies in the continuing presence of “implied consent” in section 15(5) of the CPPA. PIPEDA allowed for implied consent because there were circumstances where it made sense, and there was no “legitimate interest” exception. However, in the CPPA, the legitimate interest exception does the work of implied consent. Leaving implied consent in the legislation provides a way to get around the guardrails in s. 18(3) (an organization can opt for the ‘implied consent’ route instead of legitimate interest). It will create confusion for organizations that might struggle to understand which is the appropriate approach. The solution is simple: get rid of implied consent. I note that “implied consent” is not a basis for processing under the GDPR. Consent must be express or processing must fall under another permitted ground.

My second point relates to s. 39 of the CPPA, which is an exception to an individual’s knowledge and consent where information is disclosed to a potentially very broad range of entities for “socially beneficial purposes”. Such information need only be de-identified – not anonymized – making it more vulnerable to reidentification. I question whether there is social licence for sharing de-identified rather than anonymized data for these purposes. I note that s. 39 was carried over verbatim from C-11, when “de-identify” was defined to mean what we understand as “anonymize”.

Permitting disclosure for socially beneficial purposes is a useful idea, but s. 39, especially with the shift in meaning of “de-identify”, lacks necessary safeguards. First, there is no obvious transparency requirement. If we are to learn anything from the ETHI Committee inquiry into PHAC’s use of Canadians’ mobility data, it is that transparency is fundamentally important. At the very least, there should be a requirement that written notice of data sharing for socially beneficial purposes be given to the Privacy Commissioner of Canada; ideally there should also be a requirement for public notice. Further, s. 39 should provide that any such sharing be subject to a data sharing agreement, which should also be provided to the Privacy Commissioner. None of this is too much to ask where Canadians’ data are conscripted for public purposes. Failure to ensure transparency and some basic measure of oversight will undermine trust and legitimacy.

My third point relates to the exception to knowledge and consent for publicly available personal information. Bill C-27 reproduces PIPEDA’s provision on publicly available personal information, providing in s. 51 that “An organization may collect, use or disclose an individual’s personal information without their knowledge or consent if the personal information is publicly available and is specified by the regulations.” We have seen the consequences of data scraping from social media platforms in the case of Clearview AI, which used scraped photographs to build a massive facial recognition database. The Privacy Commissioner takes the position that personal information on social media platforms does not fall within the “publicly available personal information” exception. Yet not only could this approach be upended in the future by the new Personal Information and Data Protection Tribunal, it could also easily be modified by new regulations. Recognizing the importance of s. 51, former Commissioner Therrien had recommended amending it to add that the publicly available personal information be such “that the individual would have no reasonable expectation of privacy”. An alternative is to incorporate the text of the current Regulations Specifying Publicly Available Information into the CPPA, revising them to clarify scope and application in our current data environment. I would be happy to provide some sample language.

This issue should not be left to regulations. The amount of publicly available personal information online is staggering, and it is easily susceptible to scraping and misuse. It should be clear and explicit in the law that personal data cannot be harvested from the internet, except in limited circumstances set out in the statute.

Finally, I add my voice to those of so many others in saying that the data protection obligations set out in the CPPA should apply to political parties. It is unacceptable that they do not.

Published in Privacy

The following is a short excerpt from a new paper which looks at the public sector use of private sector personal data (Teresa Scassa, “Public Sector Use of Private Sector Personal Data: Towards Best Practices”, forthcoming in (2024) 47:2 Dalhousie Law Journal). The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Governments seeking to make data-driven decisions require the data to do so. Although they may already hold large stores of administrative data, their ability to collect new or different data is limited both by law and by practicality. In our networked, Internet of Things society, the private sector has become a source of abundant data about almost anything – but particularly about people and their activities. Private sector companies collect a wide variety of personal data, often in high volumes, rich in detail, and continuously over time. Location and mobility data, for example, are collected by many different actors, from cellular service providers to app developers. Financial sector organizations amass rich data about the spending and borrowing habits of consumers. Even genetic data is collected by private sector companies. The range of available data is constantly broadening as more and more is harvested, and as companies seek secondary markets for the data they collect.

Public sector use of private sector data is fraught with important legal and public policy considerations. Chief among these is privacy since access to such data raises concerns about undue government intrusion into private lives and habits. Data protection issues implicate both public and private sector actors in this context, and include notice and consent, as well as data security. And, where private sector data is used to shape government policies and actions, important questions about ethics, data quality, the potential for discrimination, and broader human rights questions also arise. Alongside these issues are interwoven concerns about transparency, as well as necessity and proportionality when it comes to the conscription by the public sector of data collected by private companies.

This paper explores issues raised by public sector access to and use of personal data held by the private sector. It considers how such data sharing is legally enabled and within what parameters. Given that laws governing data sharing may not always keep pace with data needs and public concerns, this paper also takes a normative approach which examines whether and in what circumstances such data sharing should take place. To provide a factual context for discussion of the issues, the analysis in this paper is framed around two recent examples from Canada that involved actual or attempted access by government agencies to private sector personal data for public purposes. The cases chosen are different in nature and scope. The first is the attempted acquisition and use by Canada’s national statistics organization, Statistics Canada (StatCan), of data held by credit monitoring companies and financial institutions to generate economic statistics. The second is the use, during the COVID-19 pandemic, of mobility data by the Public Health Agency of Canada (PHAC) to assess the effectiveness of public health policies in reducing the transmission of COVID-19 during lockdowns. The StatCan example involves the compelled sharing of personal data by private sector actors, while the PHAC example involves a government agency that contracted for the use of anonymized data and analytics supplied by private sector companies. Each of these instances generated significant public outcry. This negative publicity no doubt exceeded what either agency anticipated. Both believed that they had a legal basis to gather and/or use the data or analytics, and both believed that their actions served the public good. Yet the outcry is indicative of underlying concerns that had not properly been addressed.

Using these two quite different cases as illustrations, the paper examines the issues raised by the use of private sector data by government. Recognizing that such practices are likely to multiply, it also makes recommendations for best practices. Although the examples considered are Canadian and are shaped by the Canadian legal context, most of the issues they raise are of broader relevance. Part I of this paper sets out the two case studies that are used to tease out and illustrate the issues raised by public sector use of private sector data. Part II discusses the different issues and makes recommendations.

The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Published in Privacy

A recent decision of the Federal Court of Canada ends (subject to any appeal) the federal Privacy Commissioner’s attempt to obtain an order against Facebook in relation to personal information practices linked to the Cambridge Analytica scandal. Following a joint investigation with British Columbia’s Information and Privacy Commissioner, the Commissioners had issued a Report of Findings in 2019. The Report concluded that Facebook had breached Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and B.C.’s Personal Information Protection Act by failing to obtain appropriate consent, failing to adequately safeguard the data of its users and failing to be accountable for the data under its control. Under PIPEDA, the Privacy Commissioner has no order-making powers and can only make non-binding recommendations. For an order to be issued under PIPEDA, an application must be made to the Federal Court under s. 15, either by the complainant, or by the Privacy Commissioner with the complainant’s permission. The proceeding before the court is de novo, meaning that the court renders its own decision on whether there has been a breach of PIPEDA based upon the evidence presented to it.

The Cambridge Analytica scandal involved a researcher who developed a Facebook app. Through this app, the developer collected user data, ostensibly for research purposes. That data was later disclosed to third parties who used it to develop “psychographic” models for purposes of targeting political messages towards segments of Facebook users (at para 35). It is important to note here that the complaint was not against the app developer, but rather against Facebook. Essentially, the complainants were concerned that Facebook did not adequately protect its users’ privacy. Although it had put in place policies and requirements for third party app developers, the complainants were concerned that it did not adequately monitor third-party compliance with its policies.

The Federal Court dismissed the Privacy Commissioner’s application largely because of a lack of evidence to establish that Facebook had failed to meet its PIPEDA obligations to safeguard its users’ personal information. Referring to an “evidentiary vacuum” (para 71), Justice Manson found that there was a lack of expert evidence regarding what Facebook might have done differently. He also found that there was no evidence from users regarding their expectations of privacy on Facebook. The Court chastised the Commissioner, stating that “ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts” (at para 72). Justice Manson found the evidence presented by the Commissioner to be unpersuasive and speculative, and to require the court to draw “unsupported inferences”. He was unsympathetic to the Commissioner’s explanation that it did not use its statutory powers to compel evidence (under s. 12.1 of PIPEDA) because “Facebook would not have complied or would have had nothing to offer” (at para 72). Justice Manson noted that had Facebook failed to comply with requests under s. 12.1, the Commissioner could have challenged the refusal.

Yet there is more to this decision than just a dressing down of the Commissioner’s approach to the case. In discussing “meaningful consent” under PIPEDA, Justice Manson frames the question before the court as “whether Facebook made reasonable efforts to ensure users and users’ Facebook friends were advised of the purposes for which their information would be used by third-party applications” (at para 63). This argument is reflected in the Commissioner’s position that Facebook should have done more to ensure that third party app developers on its site complied with their contractual obligations, including those that required developers to obtain consent from app users to the collection of personal data. Facebook’s position was that PIPEDA only requires that it make reasonable efforts to protect the personal data of its users, and that it had done so through its “combination of network-wide policies, user controls and educational resources” (at para 68). It is here that Justice Manson emphasizes the lack of evidence before him, noting that it is not clear what else Facebook could have reasonably been expected to do. In making this point, he states:

There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and ever-evolving. (at para 71) [My emphasis].

This passage should be deeply troubling to those concerned about privacy. By referring to the reasonable expectation of privacy in terms of what users might expect in an ever-evolving technological context, Justice Manson appears to abandon the normative dimensions of the concept. His comments lead towards a conclusion that the reasonable expectation of privacy is an ever-diminishing benchmark as it becomes increasingly naïve to expect any sort of privacy in a data-hungry surveillance society. Yet this is not the case. The concept of the “reasonable expectation of privacy” has significant normative dimensions, as the Supreme Court of Canada reminds us in R. v. Tessling and in the case law that follows it. In Tessling, Justice Binnie noted that subjective expectations of privacy should not be used to undermine the privacy protections in s. 8 of the Charter, stating that “[e]xpectation of privacy is a normative rather than a descriptive standard.” Although this comment is made in relation to the Charter, a reasonable expectation of privacy that is based upon the constant and deliberate erosion of privacy would be equally meaningless in data protection law. Although Justice Manson’s comments about the expectation of privacy may not have affected the outcome of this case, they are troublesome in that they might be picked up by subsequent courts or by the Personal Information and Data Protection Tribunal proposed in Bill C-27.

The decision also contains at least two observations that should set off alarm bells with respect to Bill C-27, a bill to reform PIPEDA. Justice Manson engages in some discussion of the duty of an organization to safeguard information that it has disclosed to a third party. He finds that PIPEDA imposes obligations on organizations with respect to information in their possession and information transferred for processing. In the case of prospective business transactions, an organization sharing information with a potential purchaser must enter into an agreement to protect that information. However, Justice Manson interprets this specific reference to a requirement for such an agreement to mean that “[i]f an organization were required to protect information transferred to third parties more generally under the safeguarding principle, this provision would be unnecessary” (at para 88). In Bill C-27, for example, s. 39 permits organizations to share de-identified (not anonymized) personal information with certain third parties, without the knowledge or consent of individuals, for ‘socially beneficial’ purposes, yet it imposes no requirement to put in place contractual provisions to safeguard that information. Justice Manson’s comments clearly highlight the deficiencies of s. 39, which must be amended to include a requirement for such safeguards.

A second issue relates to the human-rights based approach to privacy which both the former Privacy Commissioner Daniel Therrien and the current Commissioner Philippe Dufresne have openly supported. Justice Manson acknowledges that the Supreme Court of Canada has recognized the quasi-constitutional nature of data protection laws such as PIPEDA, because “the ability of individuals to control their personal information is intimately connected to their individual autonomy, dignity, and privacy” (at para 51). However, neither PIPEDA nor Bill C-27 takes a human-rights based approach. Rather, they place personal and commercial interests in personal data on the same footing. Justice Manson states: “Ultimately, given the purpose of PIPEDA is to strike a balance between two competing interests, the Court must interpret it in a flexible, common sense and pragmatic manner” (at para 52). The government has made rather general references to privacy rights in the preamble of Bill C-27 (though not in any preamble to the proposed Consumer Privacy Protection Act) but has steadfastly refused to reference the broader human rights context of privacy in the text of the Bill itself. We are left with a purpose clause that acknowledges “the right of privacy of individuals with respect to their personal information” in a context in which “significant economic activity relies on the analysis, circulation and exchange of personal information”. The purpose clause finishes with a reference to the need of organizations to “collect, use or disclose personal information for purposes that a reasonable person would consider appropriate in the circumstances.” While this reference to the “reasonable person” should highlight the need for a normative approach to reasonable expectations, as discussed above, the interpretive approach adopted by Justice Manson also makes clear the consequences of not adopting an explicit human-rights based approach. Privacy is thrown into a balance with commercial interests without fundamental human rights to provide a firm backstop.

Justice Manson seems to suggest that the Commissioner’s approach in this case may flow from frustration with the limits of PIPEDA. He describes the Commissioner’s submissions as “thoughtful pleas for well-thought-out and balanced legislation from Parliament that tackles the challenges raised by social media companies and the digital sharing of personal information, not an unprincipled interpretation from this Court of existing legislation that applies equally to a social media giant as it may apply to the local bank or car dealership” (at para 90). They say that bad cases make bad law; but bad law might also make bad cases. The challenge is to ensure that Bill C-27 does not reproduce or amplify deficiencies in PIPEDA.

 

Published in Privacy

A recent decision of the Federal Court of Canada exposes the tensions between access to information and privacy in our data society. It also provides important insights into how reidentification risk should be assessed when government agencies or departments respond to requests for datasets with the potential to reveal personal information.

The case involved a challenge by two journalists to Health Canada’s refusal to disclose certain data elements in a dataset of persons permitted to grow medical marijuana for personal use under the licensing scheme that existed before the legalization of cannabis. [See journalist Molly Hayes’ report on the story here]. Health Canada had agreed to provide the first character of the Forward Sortation Area (FSA) of the postal codes of licensed premises but declined to provide the second and third characters or the names of the cities in which licensed production took place. At issue was whether these location data constituted “personal information” – which the government cannot disclose under s. 19(1) of the Access to Information Act (ATIA). A second issue was the degree of effort required of a government department or agency to maximize the release of information in a privacy-protective way. Essentially, this case is about “the appropriate analytical approach to measuring privacy risks in relation to the release of information from structured datasets that contain personal information” (at para 2).

The licensing scheme was available to those who wished to grow their own marijuana for medical purposes or to anyone seeking to be a “designated producer” for a person in need of medical marijuana. Part of the licence application required the disclosure of the medical condition that justified the use of medical marijuana. Where a personal supply of medical marijuana is grown at the user’s home, location information could easily be linked to that individual. Both parties agreed that the last three characters in a six-character postal code would make it too easy to identify individuals. The dispute concerned the first three characters – the FSA. The first character represents a postal district. For example, Ontario, Canada’s largest province, has five postal districts. The second character indicates whether an area within the district is urban or rural. The third character identifies either a “specific rural region, an entire medium-sized city, or a section of a major city” (at para 12). FSAs differ in size; StatCan data from 2016 indicated that populations in FSAs ranged from no inhabitants to over 130,000.

Information about medical marijuana and its production in a rapidly evolving public policy context is a subject in which there is a public interest. In fact, Health Canada proactively publishes some data on its own website regarding the production and use of medical marijuana. Yet, even where a government department or agency publishes data, members of the public can use the ATI system to request different or more specific data. This is what happened in this case.

In his decision, Justice Pentney emphasized that both access to information and the protection of privacy are fundamental rights. The right of access to government information, however, does not include a right to access the personal information of third parties. Personal information is defined in the ATIA as “information about an identifiable individual” (s. 3). This means that all that is required for information to be considered personal is that it can be used – alone or in combination with other information – to identify a specific individual. Justice Pentney reaffirmed that the test for personal information from Gordon v. Canada (Health) remains definitive. Information is personal information “where there is a serious possibility that an individual could be identified through the use of that information, alone or in combination with other available information.” (Gordon, at para 34, emphasis added). More recently, the Federal Court has defined a “serious possibility” as “a possibility that is greater than speculation or a ‘mere possibility', but does not need to reach the level of ‘more likely than not’” (Public Safety, at para 53).

Geographic information is strongly linked to reidentification. A street address is, in many cases, clearly personal information. However, city, town or even province of residence would only be personal information if it can be used in combination with other available data to link to a specific individual. In Gordon, the Federal Court upheld a decision to not release province of residence data for those who had suffered reported adverse drug reactions because these data could be combined with other available data (including obituary notices and even the observations of ‘nosy neighbors’) to identify specific individuals.

The Information Commissioner argued that to meet the ‘serious possibility’ test, Health Canada should be able to concretely demonstrate identifiability by connecting the dots between the data and specific individuals. Justice Pentney disagreed, noting that in the case before him, the expert opinion combined with evidence about other available data and the highly sensitive nature of the information at issue made proof of actual linkages unnecessary. However, he cautioned that “in future cases, the failure to engage in such an exercise might well tip the balance in favour of disclosure” (at para 133).

Justice Pentney also ruled that, because the proceeding before the Federal Court is a hearing de novo, he was not limited to considering the data that were available at the time of the ATIP request. A court can take into account data made available after the request and even after the decision of the Information Commissioner. This makes sense. The rapidly growing availability of new datasets as well as new tools for the analysis and dissemination of data demand a timelier assessment of identifiability. Nevertheless, any pending or possible future ATI requests would be irrelevant to assessing reidentification risk, since these would be hypothetical. Justice Pentney noted: “The fact that a more complete mosaic may be created by future releases is both true and irrelevant, because Health Canada has an ongoing obligation to assess the risks, and if at some future point it concludes that the accumulation of information released created a serious risk, it could refuse to disclose the information that tipped the balance” (at para 112).

The court ultimately agreed with Health Canada that disclosing anything beyond the first character of the FSA could lead to the identification of some individuals within the dataset, and thus would amount to personal information. Health Canada had identified three categories of other available data: data that it had proactively published on its own website; StatCan data about population counts and FSAs; and publicly available data that included data released in response to previous ATIP requests relating to medical marijuana. In this latter category the court noted that there had been a considerable number of prior requests that provided various categories of data, including “type of license, medical condition (with rare conditions removed), dosage, and the issue date of the licence” (at para 64). Other released data included the licensee’s “year of birth, dosage, sex, medical condition (rare conditions removed), and province (city removed)” (at para 64). Once released, these data are in the public domain, and can contribute to a “mosaic effect” which allows data to be combined in ways that might ultimately identify specific individuals. Health Canada had provided evidence of an interactive map of Canada published on the internet that showed the licensing of medical marijuana by FSA between 2001 and 2007. Justice Pentney noted that “[a]n Edmonton Journal article about the interactive map provided a link to a database that allowed users to search by medical condition, postal code, doctor’s speciality, daily dosage, and allowed storage of marijuana” (at para 66). He stated: “the existence of evidence demonstrating that connections among disparate pieces of relevant information have previously been made and that the results have been made available to the public is a relevant consideration in applying the serious possibility test” (at para 109). Justice Pentney observed that members of the public might already have knowledge (such as the age, gender or address) of persons they know who consume marijuana that they might combine with other released data to learn about the person’s underlying medical condition. Further, he notes that “the pattern of requests and the existence of the interactive map show a certain motivation to glean more information about the administration of the licensing regime” (at para 144).

Health Canada had commissioned Dr. Khaled El Emam to produce an expert report. Dr. El Emam determined that “there are a number of FSAs that are high risk if either three or two characters of the FSA are released, there are no high-risk FSAs if only the first character is released” (at para 80). Relying on this evidence, Justice Pentney concluded that “releasing more than the first character of an FSA creates a significantly greater risk of reidentification” (at para 157). This risk would meet the “serious possibility” threshold, and therefore the information amounts to “personal information” and cannot be disclosed under the legislation.
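To illustrate the intuition behind this finding, here is a minimal sketch of a prefix-based grouping check. The licence records, the k-threshold and the function names are all invented; this is not Dr. El Emam’s methodology, only a simplified way of seeing why longer FSA prefixes produce smaller, and therefore riskier, groups.

```python
from collections import Counter

# Hypothetical licence records, keyed by the three-character FSA of the
# licensed premises. Real figures would come from the licensing dataset.
licence_fsas = ["T6G", "T6G", "T6H", "T5K", "T5K",
                "K1A", "K1N", "K2P", "K2P", "K1S",
                "V0N", "V0N", "V6B", "V6B", "V5K"]

K = 5  # hypothetical minimum group size below which a prefix is treated as high risk

def high_risk_prefixes(fsas, chars):
    """Group records by the first `chars` characters and flag small groups."""
    counts = Counter(fsa[:chars] for fsa in fsas)
    return {prefix: n for prefix, n in counts.items() if n < K}

for chars in (3, 2, 1):
    risky = high_risk_prefixes(licence_fsas, chars)
    print(f"{chars} character(s): {len(risky)} high-risk group(s)")

# With this toy data, releasing three or two characters leaves several groups
# below the threshold, while grouping by the first character alone (the postal
# district) pools every record into a group of at least five -- a rough analogue
# of the conclusion that only the first character could safely be released.
```

A real assessment would, of course, also have to weigh the other publicly available datasets discussed above, which is what drives the “mosaic effect”.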

The Information Commissioner raised issues about the quality of other available data, suggesting that incomplete and outdated datasets would be less likely to create reidentification risk. For example, since cannabis laws had changed, there are now many more people cultivating marijuana for personal use. This would make it harder to connect the knowledge that a particular person was cultivating marijuana with other data that might lead to the disclosure of a medical condition. Justice Pentney was unconvinced since the quantities of marijuana required for ongoing medical use might exceed the general personal use amounts, and thus would still require a licence, creating continuity in the medical cannabis licensing data before and after the legalization of cannabis. He noted: “The key point is not that the data is statistically comparable for the purposes of scientific or social science research. Rather, the question is whether there is a significant possibility that this data can be combined to identify particular individuals.” (at para 118) Justice Pentney therefore distinguishes between the issue of data quality from a data science perspective and data quality from the perspective of someone seeking to identify specific individuals. He stated: “the fact that the datasets may not be exactly comparable might be a problem for a statistician or social scientist, but it is not an impediment to a motivated user seeking to identify a person who was licensed for personal production or a designated producer under the medical marijuana licensing regime” (at para 119).

Justice Pentney emphasized the relationship between sensitivity of information and reidentification risk, noting that “the type of personal information in question is a central concern for this type of analysis” (at para 107). This is because “the disclosure of some particularly sensitive types of personal information can be expected to have particularly devastating consequences” (at para 107). With highly sensitive information, it is important to reduce reidentification risk, which means limiting disclosure “as much as is feasible” (at para 108).

Justice Pentney also dealt with a further argument that Health Canada should not be able to apply the same risk assessment to all the FSA data; rather, it should assess reidentification risk based on the size of the area identified by the different FSA characters. The legislation allows for severance of information from disclosed records, and the journalists argued that Health Canada could have used severance to reduce the risk of reidentification while releasing more data where the risks were acceptably low. Health Canada responded that to do a more fine-grained analysis of the reidentification risk by FSA would impose an undue burden because of the complexity of the task. In its submissions as intervenor in the case, the Office of the Privacy Commissioner suggested that other techniques could be used to perturb the data so as to significantly lower the risk of reidentification. Such techniques are used, for example, where data are anonymized.

Justice Pentney noted that the effort required by a government department or agency was a matter of proportionality. Here, the data at issue were highly sensitive. The already-disclosed first character of the FSA provided general location information about the licences. Given these facts, “[t]he question is whether a further narrowing of the lens would bring significant benefits, given the effort that doing so would require” (at para 181). He concluded that it would not, noting the lack of in-house expertise at Health Canada to carry out such a complex task. Regarding the suggestion of the Privacy Commissioner that anonymization techniques should be applied, he found that while this is not precluded by the ATIA, it was a complex task that, on the facts before him, went beyond what the law requires in terms of severance.

This is an interesting and important decision. First, it reaffirms the test for ‘personal information’ in a more complex data society context than the earlier jurisprudence. Second, it makes clear that the sensitivity of the information at issue is a crucial factor that will influence an assessment not just of the reidentification risk, but of the tolerance for the level of risk involved. This is entirely appropriate. Not only is personal health information highly sensitive, but at the time these data were collected, licensing was an important means of gaining access to medical marijuana for people suffering from serious and ongoing medical issues. Their sharing of data with the government was driven by their need and vulnerability. Failure to robustly protect these data would enhance that vulnerability. The decision also clarifies the evidentiary burden on government to demonstrate reidentification risk – something that will vary according to the sensitivity of the data. It highlights the dynamic and iterative nature of reidentification risk assessment, as the risk will change as more data are made available.

Indirectly, the decision also casts light on the challenges of using the ATI system to access data and perhaps a need to overhaul that system to provide better access to high-quality public-sector information for research and other purposes. Although Health Canada has engaged in proactive disclosure (interestingly, such disclosures were a factor in assessing the ‘other available data’ that could lead to reidentification in this case), more should be done by governments (both federal and provincial) to support and ensure proactive disclosure that better meets the needs of data users while properly protecting privacy. Done properly, this would require an investment in capacity and infrastructure, as well as legislative reform.

Published in Privacy

This is the second in a series of posts on Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA). The first post looked at the scope of application of the AIDA. This post considers what activities and what data will be subject to governance.

Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA) governs two categories of “regulated activity” so long as they are carried out “in the course of international or interprovincial trade and commerce”. These are set out in s. 5(1):

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;

(b) designing, developing or making available for use an artificial intelligence system or managing its operations.

These activities are cast in broad terms, capturing activities related both to the general curating of the data that fuel AI and to the design, development, distribution and management of AI systems. The obligations in the statute do not apply universally to all engaged in the AI industry. Instead, different obligations apply to those performing different roles. The chart below identifies the actor in the left-hand column and the corresponding obligations in the right-hand column.

 

Actor: A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity (see the definition of “regulated activity” in s. 5(1))
Obligations: s. 6 (data anonymization, use and management); s. 10 (record keeping regarding measures taken under s. 6)

Actor: A person who is responsible for an artificial intelligence system (see the definition of “person responsible” in s. 5(2))
Obligations: s. 7 (assess whether a system is high-impact); s. 10 (record keeping regarding the reasons supporting the assessment of whether the system is high-impact under s. 7)

Actor: A person who is responsible for a high-impact system (see the definition of “person responsible” in s. 5(2) and of “high-impact system” in s. 5(1))
Obligations: s. 8 (measures to identify, assess and mitigate risk of harm or biased output); s. 9 (measures to monitor compliance with the mitigation measures established under s. 8 and the effectiveness of those measures); s. 10 (record keeping regarding measures taken under ss. 8 and 9); s. 12 (obligation to notify the Minister as soon as feasible if the use of the system results or is likely to result in material harm)

Actor: A person who makes available for use a high-impact system
Obligations: s. 11(1) (publish a plain language description of the system and other required information)

Actor: A person who manages the operation of a high-impact system
Obligations: s. 11(2) (publish a plain language description of how the system is used and other required information)

 

For most of these provisions, the details of what is actually required of the identified actor will depend upon regulations that have yet to be drafted.

A “person responsible” for an AI system is defined in s. 5(2) of the AIDA in these terms:

5(2) For the purposes of this Part, a person is responsible for an artificial intelligence system, including a high-impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation.

Thus, the obligations in ss. 7, 8, 9, 10 and 11 apply only to those engaged in the activities described in s. 5(1)(b) (designing, developing or making available an AI system or managing its operation). Further, it is important to note that, with the exception of sections 6 and 7, the obligations in the AIDA apply only to ‘high-impact’ systems. The definition of a high-impact system has been left to regulations and is as yet unknown.

Section 6 stands out somewhat as a distinct obligation relating to the governance of data used in AI systems. It applies to a person who carries out a regulated activity and who “processes or makes available for use anonymized data in the course of that activity”. Of course, the first part of the definition of a regulated activity includes someone who processes or makes available for use “any data relating to human activities for the purpose of designing, developing or using” an AI system. So, this obligation will apply to anyone “who processes or makes available for use anonymized data” (s. 6) in the course of “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” (s. 5(1)). Basically, then, for s. 6 to apply, the anonymized data must be processed for the purpose of designing, developing or using an AI system. All of this must also take place in the course of international or interprovincial trade and commerce.
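Restated schematically, the cumulative conditions described above can be captured in a simple predicate. This is my own shorthand reading of ss. 5(1)(a) and 6, not an official interpretation, and the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """Hypothetical description of what an actor is doing with data."""
    in_international_or_interprovincial_trade: bool     # s. 5(1) threshold
    data_relates_to_human_activities: bool               # s. 5(1)(a)
    purpose_is_ai_design_development_or_use: bool        # s. 5(1)(a)
    processes_or_makes_available_anonymized_data: bool   # s. 6

def s6_applies(a: Activity) -> bool:
    """All of the conditions must hold together for the s. 6 data-governance
    obligation (and the related s. 10 record keeping) to be triggered."""
    return (a.in_international_or_interprovincial_trade
            and a.data_relates_to_human_activities
            and a.purpose_is_ai_design_development_or_use
            and a.processes_or_makes_available_anonymized_data)

# Anonymized human-activity data processed to develop an AI system in the course
# of interprovincial commerce -> s. 6 applies; the same processing for a
# non-AI purpose -> it does not.
print(s6_applies(Activity(True, True, True, True)))   # True
print(s6_applies(Activity(True, True, False, True)))  # False
```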

Note that the first of these two purposes involves data “relating to human activities” that are used in AI. This is interesting. The new Consumer Privacy Protection Act (CPPA) that forms the first part of Bill C-27 will regulate the collection, use and disclosure of personal data in the course of commercial activity. However, it provides, in s. 6(5), that: “For greater certainty, this Act does not apply in respect of personal information that has been anonymized.” By using the phrase “data relating to human activities” instead of “personal data”, s. 5(1) of the AIDA clearly addresses human-derived data that fall outside the definition of personal information in the CPPA because of anonymization.

Superficially, at least, s. 6 of the AIDA appears to pick up the governance slack that arises where anonymized data are excluded from the scope of the CPPA. [See my post on this here]. However, for this to happen, the data have to be used in relation to an “AI system”, as defined in the legislation. Not all anonymized data will be used in this way, and much will depend on how the definition of an AI system is interpreted. Beyond that, the AIDA only applies to a ‘regulated activity’, which is one carried out in the course of international or interprovincial trade and commerce. It does not apply outside the trade and commerce context, nor does it apply to any excluded actors [as discussed in my previous post here]. As a result, there remain clear gaps in the governance of anonymized data. Some of those gaps might (eventually) be filled by provincial governments, and by the federal government with respect to public-sector data usage. Other gaps – e.g., with respect to anonymized data used for purposes other than AI in the private sector context – will remain. Further, governance and oversight under the proposed CPPA will be by the Privacy Commissioner of Canada, an independent agent of Parliament. Governance under the AIDA (as will be discussed in a forthcoming post) is by the Minister of Industry and his staff, who are also responsible for supporting the AI industry in Canada. Basically, the differing treatment of anonymized data under the CPPA and the AIDA creates significant governance gaps in terms of scope, substance and process.

On the issue of definitions, it is worth making a small side-trip into ‘personal information’. The definition of ‘personal information’ in the AIDA provides that the term “has the meaning assigned by subsections 2(1) and (3) of the Consumer Privacy Protection Act.” Section 2(1) is pretty straightforward – it defines “personal information” as “information about an identifiable individual”. However, s. 2(3) is more complicated. It provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

The default rule for ‘de-identified’ personal information is that it is still personal information. However, the CPPA distinguishes between ‘de-identified’ (pseudonymized) data and anonymized data. Nevertheless, for certain purposes under the CPPA – set out in s. 2(3) – de-identified personal information is not personal information. This excruciatingly-worded limit on the meaning of ‘personal information’ is ported into the AIDA, even though the statutory provisions referenced in s. 2(3) are neither part of the AIDA nor particularly relevant to it. Since the legislator is presumed not to be daft, this must mean that some of these circumstances are relevant to the AIDA. It is just not clear how. The term “personal information” is used most significantly in the AIDA in the s. 38 offence of possessing or making use of illegally obtained personal information. It is hard to see why it would be relevant to add the CPPA s. 2(3) limit on the meaning of ‘personal information’ to this offence. If de-identified (not anonymized) personal data (from which individuals can be re-identified) are illegally obtained and then used in AI, it is hard to see why that should not also be captured by the offence.

 

Published in Privacy