Teresa Scassa - Blog

The federal government has just launched an AI Strategy Task Force and public engagement on a new AI strategy for Canada. Consultation is a good thing – the government took a lot of flak for the lack of consultation leading up to the ill-fated AI and Data Act that was part of the now-defunct Bill C-27. That said, there are consultations and there are consultations. Here are some of my concerns about this one.

The consultation has two parts. First, the government has convened an AI Task Force consisting of some very talented and clearly public-spirited Canadians who have expertise in AI or AI-adjacent areas. Let me be clear that I appreciate the time and energy that these individuals are willing to contribute to this task. However, if you peruse the list, you will see that few of the Task Force members are specialists in the ethical or social science dimensions of AI. There are no experts in labour and employment issues (which are top of mind for many Canadians these days), nor is there representation from those with expertise in the environmental issues we already know are raised by AI innovation. Only three people from a list of twenty-six are tasked with addressing “Building safe AI systems and public trust in AI”. The composition of the Task Force seems clearly skewed towards rapid adoption and deployment of AI technologies. This is an indication that the government already has a new AI Strategy – they are just looking for “bold, pragmatic and actionable recommendations” to bolster it. It is a consultation to make the implicit strategy explicit.

The first part of the process will see the members of the Task Force "consult their networks to provide actionable insights and recommendations." That sounds a lot like insider networking, which should frankly raise concerns. This approach does not lend itself to ensuring fair and appropriate representation of diverse voices. It risks creating its own echo chambers. It is also very likely to lack other elements of transparency. It is hard to see how the conversations and interactions between the private citizens who are members of the Task Force and their networks will produce records that could be requested under the Access to Information Act.

The second part of the consultation is a more conventional one where Canadians who are not insiders are invited to make contributions. Although the press release announcing the consultation directs people to "Consulting Canadians", it does not provide a link. Consulting Canadians is actually a Statistics Canada site. What the government probably meant was "Consulting with Canadians", which is part of the Open Canada portal (and I have provided a link).

The whole process is described in the press release as a “national sprint” (which is much fancier than calling it “a mad rush to a largely predetermined conclusion”). In November, the AI Task Force members “will share the bold, practical ideas they gathered.” That’s asking a lot, but no doubt they will harness the power of Generative AI to transcribe and summarize the input they receive.

If, in the words of the press release, “This moment demands a renewal of thinking—a collective commitment to reimagining how we harness innovation, achieve our artificial intelligence (AI) ambition and secure our digital sovereignty”, perhaps it also demands a bit more time and reflection. That said, if you want to be heard, you now have less than a month to provide input – so get writing and look for the relevant materials in the Consulting with Canadians portal.

 

Canada's Privacy Commissioner has released a set of findings that recognize a right to be forgotten (RTBF) under the Personal Information Protection and Electronic Documents Act (PIPEDA). The complainant's long legal journey began in 2017 when they complained that a search of their name in Google's search engine returned news articles from many years earlier regarding an arrest and criminal charges relating to having had sexual activity without disclosing their HIV-positive status. Although these reports were accurate at the time they were published, the charges were stayed shortly afterwards because the complainant posed no danger to public health. Charging guidelines for the offence in question indicated that no charges should be laid where there is no realistic possibility that HIV could be transmitted. The search results contain none of that information. Instead, they publicly disclose the HIV status of the complainant, and they create the impression that their conduct was criminal in nature. As a result of the linking of their name to these search results, the complainant experienced – and continues to experience – negative consequences including social stigma, loss of career opportunities and even physical violence.

Google's initial response to the complaint was to challenge the jurisdiction of the Privacy Commissioner to investigate the matter under PIPEDA, arguing that PIPEDA did not apply to its search engine functions. The Commissioner referred this issue to the Federal Court, which found that PIPEDA applied. That decision was (unsuccessfully) appealed by Google to the Federal Court of Appeal. When the matter was not appealed further to the Supreme Court of Canada, the Commissioner began his investigation, which resulted in the current findings. Google has indicated that it will not comply with the Commissioner's recommendation to delist the articles so that they do not appear in a search using the complainant's name. This means that an application will likely be made to the Federal Court for a binding order. The matter is therefore not yet resolved.

This post considers three issues. The first relates to the nature and scope of the RTBF in PIPEDA, as found by the Commissioner. The second relates to the Commissioner’s woeful lack of authority when it comes to the enforcement of PIPEDA. Law reform is needed to address this, yet Bill C-27, which would have given greater enforcement powers to the Commissioner, died on the order paper. The government’s intentions with respect to future reform and its timing remain unclear. The third point also addresses PIPEDA reform. I consider the somewhat fragile footing for the Commissioner’s version of the RTBF given how Bill C-27 had proposed to rework PIPEDA’s normative core.

The Right to be Forgotten (RTBF) and PIPEDA

In his findings, the Commissioner grounds the RTBF in an interpretation of s. 5(3) of PIPEDA:

5(3) An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.

This is a core normative provision in PIPEDA. For example, although organizations may collect personal information with the consent of the individual, they cannot do so if the collection is for purposes that a reasonable person would not consider appropriate in the circumstances. This provision (or at least one very similar to it in Alberta's Personal Information Protection Act) was recently found to place important limits on the scraping of photographs from the public internet by Clearview AI to create a massive facial recognition (FRT) database. Essentially, even though the court found that photographs posted on the internet were publicly available and could be collected and used without consent, they could not be collected and used to create a FRT database because this was not a purpose a reasonable person would consider appropriate in the circumstances.

The RTBF would function in much the same way when it comes to the operations of platform search engines. Those search engines – such as Google's – collect, use and disclose information found on the public internet when they return search results to users in response to queries. When searches involve individuals, search results may direct users to personal information about that individual. That is acceptable – as long as the information is being collected, used and disclosed for purposes a reasonable person would consider appropriate in the circumstances. In the case of the RTBF, according to the Commissioner, the threshold will be crossed when the privacy harms caused by the disclosure of the personal information in the search results outweigh the public interest in having that information shared through the search function. In order to make that calculation, the Commissioner articulates a set of criteria that can be applied on a case-by-case basis. The criteria include:

a. Whether the individual is a public figure (e.g. a public office holder, a politician, a prominent business person, etc.);

b. Whether the information relates to an individual’s working or professional life as opposed to their private life;

c. Whether the information relates to an adult as opposed to a minor;

d. Whether the information relates to a criminal charge that has resulted in a conviction or where the charges were stayed due to delays in the criminal proceedings;

e. Whether the information is accurate and up to date;

f. Whether the ability to link the information with the individual is relevant and necessary to the public consideration of a matter under current controversy or debate;

g. The length of time that has elapsed between the publication of the information and the request for de-listing. (at para 109)

In this case, the facts were quite compelling, and the Commissioner had no difficulty finding that the information at issue caused great harm to the complainant while providing no real public benefit. This led to the de-listing recommendation – which would mean that a search for the complainant’s name would no longer turn up the harmful and misleading articles – although the content itself would remain on the web and could be arrived at using other search criteria.

The Privacy Commissioner’s ‘Powers’

Unlike his counterparts in other jurisdictions, including the UK, EU member countries, and Quebec, Canada's Privacy Commissioner lacks suitable enforcement powers. PIPEDA was Canada's first federal data protection law, and it was designed to gently nudge organizations into compliance. Many organizations do their best to comply proactively, and the vast majority of complaints are resolved prior to investigation. Investigations that result in a finding of a breach of PIPEDA produce recommendations to bring the organization into compliance, and in many cases, organizations voluntarily comply with those recommendations. The legislation works – up to a point.

The problem is that the data economy has dramatically evolved since PIPEDA’s enactment. There is a great deal of money to be made from business models that extract large volumes of data that are then monetized in ways that are beyond the comprehension of individuals who have little choice but to consent to obscure practices laid out in complex privacy policies in order to receive services. Where complaint investigations result in recommendations that run up against these extractive business models, the response is increasingly to disregard the recommendations. Although there is still the option for a complainant or the Commissioner to apply to Federal Court for an order, the statutory process set out in PIPEDA requires the Federal Court to hold a hearing de novo. In other words, notwithstanding the outcome of the investigation, the court hears both sides and draws its own conclusions. The Commissioner, despite his expertise, is owed no deference.

In the proposed Consumer Privacy Protection Act (CPPA) that was part of the now-defunct Bill C-27, the Commissioner was poised to receive some important new powers, including order-making powers and the ability to recommend the imposition of steep administrative monetary penalties. Admittedly, these new powers came with some clunky constraints that would have put the Commissioner on training wheels in the privacy peloton of his international counterparts. Still, it was a big step beyond the current process of having to ask the Federal Court to redo his work and reach its own conclusions.

Bill C-27, however, died on the order paper with the last federal election. The current government is likely in the process of pep-talking itself into reintroducing a PIPEDA reform bill, but as yet there is no clear timeline for action. Until a new bill is passed, the Commissioner is going to have to make do with his current woefully inadequate enforcement tools.

The Dangers of PIPEDA Reform

Assuming a PIPEDA reform bill will contain enforcement powers better adapted to a data-driven economy, one might be forgiven for thinking that PIPEDA reform will support the nascent RTBF in Canada (assuming that the Federal Court agrees with the Commissioner's approach). The problem, however, is that there could be some uncomfortable surprises in PIPEDA reform. Indeed, this RTBF case offers a good illustration of how tinkering with PIPEDA may unsettle current interpretations of the law – and might do so at the expense of privacy rights.

As noted above, the Commissioner grounded the RTBF on the strong and simple principle at the core of PIPEDA and expressed in s. 5(3), which I repeat here for convenience:

5(3) An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.

The Federal Court of Appeal has told us that this is a normative standard – in other words, the fact that millions of otherwise reasonable people may have consented to certain terms of service does not on its own make those terms something that a reasonable person would consider appropriate in the circumstances. The terms might be unduly exploitative but leave individuals with little or no choice. The reasonableness inquiry sets a standard for the level of privacy protection an individual should be entitled to in a given set of circumstances.

Notably, Bill C-27 sought to disrupt the simplicity of s. 5(3), replacing it with the following:

12 (1) An organization may collect, use or disclose personal information only in a manner and for purposes that a reasonable person would consider appropriate in the circumstances, whether or not consent is required under this Act.

(2) The following factors must be taken into account in determining whether the manner and purposes referred to in subsection (1) are appropriate:

(a) the sensitivity of the personal information;

(b) whether the purposes represent legitimate business needs of the organization;

(c) the effectiveness of the collection, use or disclosure in meeting the organization’s legitimate business needs;

(d) whether there are less intrusive means of achieving those purposes at a comparable cost and with comparable benefits; and

(e) whether the individual’s loss of privacy is proportionate to the benefits in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual.

Although s. 12(1) is not so different from s. 5(3), the government saw fit to add a set of criteria in s. 12(2) that would shape any analysis in a way that leans the decision-maker towards accommodating the business needs of the organization over the privacy rights of the individual. Paragraphs 12(2)(b) and (c) explicitly require the decision-maker to think about the legitimate business needs of the organization and the effectiveness of the particular collection, use or disclosure in meeting those needs. In an RTBF case, this might mean thinking about how indexing the web and returning search results meets the legitimate business needs of a search engine company and does so effectively. Paragraph 12(2)(d) then asks whether there are "less intrusive means of achieving those purposes at a comparable cost and with comparable benefits". This too focuses on the organization. Not only is this criterion heavily weighted in favour of business in terms of its substance – less intrusive means should be of comparable cost – but the issues it raises are ones about which an individual challenging the practice would have great difficulty producing evidence. While the Commissioner has greater resources, these are still limited. The fifth criterion returns us to the issue of privacy, but it asks whether "the individual's loss of privacy is proportionate to the benefits [to the organization] in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual".

The criteria in s. 12(2) fall over themselves to nudge a decision-maker towards finding privacy-invasive practices to be "for purposes that a reasonable person would consider appropriate in the circumstances" – not because a reasonable person would find them appropriate in light of the human right to privacy, but because an organization has a commercial need for the data and has fiddled about a bit to attempt to mitigate the worst of the impacts. Privacy essentially becomes what the business model will allow – the reasonable person is now an accountant.

It is also worth noting that by the time a reform bill is reintroduced (and, if we dare to imagine it, actually passed), the Federal Court may have weighed in on the RTBF under PIPEDA, putting us another step closer to clarifying whether there is a RTBF in Canada's private sector privacy law. Assuming that the Federal Court largely agrees with the Commissioner and his approach, if something like s. 12 of the CPPA becomes part of a new law, the criteria developed by the Commissioner for the reasonableness assessment in RTBF cases will be supplanted by the rather ugly list in s. 12(2). Not only will this cast doubt on the continued existence of a RTBF, it may well doom it. And this is not the only established interpretation or approach that would be unsettled by such a change.

The Commissioner's findings in the RTBF investigation demonstrate the flexibility and simplicity of s. 5(3). When a PIPEDA reform bill returns to Parliament, let us hope that s. 12(2) is no longer part of it.

 

The Alberta Court of King's Bench has issued a decision in Clearview AI's application for judicial review of an Order made by the province's privacy commissioner. The Commissioner had ordered Clearview AI to take certain steps following a finding that the company had breached Alberta's Personal Information Protection Act (PIPA) when it scraped billions of images – including those of Albertans – from the internet to create a massive facial recognition database marketed to police services around the world. The court's decision is a partial victory for the commissioner. It is interesting and important for several reasons – including for its relevance to generative AI systems and the ongoing joint privacy investigation into OpenAI. These issues are outlined below.

Brief Background

Clearview AI became notorious in 2020 following a New York Times article which broke the story on the company's activities. Data protection commissioners in Europe and elsewhere launched investigations, which overwhelmingly concluded that the company violated applicable data protection laws. In Canada, the federal privacy commissioner joined forces with the Quebec, Alberta and British Columbia (BC) commissioners, each of which has private sector jurisdiction. Their joint investigation report concluded that their respective laws applied to Clearview AI's activities as there was a real and substantial connection to their jurisdictions. They found that Clearview collected, used and disclosed personal information without consent, and that no exceptions to consent applied. The key exception advanced by Clearview AI was the exception for "publicly available information". The Commissioners found that the scope of this exception, which was similarly worded in the federal, Alberta and BC laws, required a narrow interpretation and that the definition in the regulations enacted under each of these laws did not include information published on the internet. The commissioners also found that, contrary to shared legislative requirements, the collection and use of the personal information by Clearview AI was not for a purpose that a reasonable person would consider appropriate in the circumstances. The report of findings made a number of recommendations that Clearview ultimately did not accept. The Quebec, BC and Alberta commissioners all have order-making powers (which the federal commissioner does not). Each of these commissioners ordered Clearview to correct its practices, and Clearview sought judicial review of each of these orders. The decision of the BC Supreme Court (which upheld the Commissioner's order) is discussed in an earlier post. The decision from Quebec has yet to be issued.

In Alberta, Clearview AI challenged the commissioner's jurisdiction on the basis that Alberta's PIPA did not apply to its activities. It also argued that the Commissioner's interpretation of "publicly available information" was unreasonable. In the alternative, Clearview AI argued that the "publicly available information" exception, as interpreted by the Commissioner, was an unconstitutional violation of its freedom of expression. It also contested the Commissioner's finding that Clearview did not have a reasonable purpose for collecting, using and disclosing the personal information.

The Jurisdictional Question

Courts have established that Canadian data protection laws will apply where there is a real and substantial connection to the relevant jurisdiction. Clearview AI argued that it was a US-based company that scraped most of its data from social media websites mainly hosted outside of Canada, and that therefore its activities took place outside of Canada and its provinces. Yet, as Justice Feasby noted, “[s]trict adherence to the traditional territorial conception of jurisdiction would make protecting privacy interests impossible when information may be located everywhere and nowhere at once” (at para 50). He noted that there was no evidence regarding the actual location of the servers of social media platforms, and that Clearview AI’s scraping activities went beyond social media platforms. Justice Feasby ruled that he was entitled to infer from available evidence that images of Albertans were collected from servers located in Canada and in Alberta. He observed that in any event, Clearview marketed its services to police in Alberta, and its voluntary decision to cease offering those services did not alter the fact that it had been doing business in Alberta and could do so again. Further, the information at issue in the order was personal information of Albertans. All of this gave rise to a real and substantial connection with Alberta.

Publicly Available Information

The federal Personal Information Protection and Electronic Documents Act (PIPEDA) contains an exception to the consent requirement for “publicly available information”. The meaning of this term is defined in the Regulations Specifying Publicly Available Information. The relevant category is found in s. 1(e) which specifies “personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.” Alberta’s PIPA contains a similar exception (as does BC’s law), although the wording is slightly different. Section 7(e) of the Alberta regulations creates an exception to consent where:

(e) the personal information is contained in a publication, including, but not limited to, a magazine, book or newspaper, whether in printed or electronic form, but only if

(i) the publication is available to the public, and

(ii) it is reasonable to assume that the individual that the information is about provided that information; [My emphasis]

In their joint report of findings, the Commissioners found that their respective “publicly available information” exceptions did not include social media platforms.

Clearview AI made much of the wording of Alberta’s exception, arguing that even if it could be said that the PIPEDA language excluded social media platforms, the use of the words “including but not limited to” in the Alberta regulation made it clear that the list was not closed, nor was it limited to the types of publications referenced.

In interpreting the exceptions for publicly available information, the Commissioners emphasized the quasi-constitutional nature of privacy legislation. They found that privacy rights should receive a broad and expansive interpretation and that the exceptions to those rights should be interpreted narrowly. The commissioners also found significant differences between social media platforms and the more conventional types of publications referenced in their respective regulations, making it inappropriate to broaden the exception. Justice Feasby, applying reasonableness as the appropriate standard of review, found that the Alberta Commissioner's interpretation of the exception was reasonable.

Freedom of Expression

Had the court’s decision ended there, the outcome would have been much the same as the result in the BC Supreme Court. However, in this case, Clearview AI also challenged the constitutionality of the regulations. It sought a declaration that if the exception were interpreted as limited to books, magazines and comparable publications, then this violated its freedom of expression under s. 2(b) of the Canadian Charter of Rights and Freedoms.

Clearview AI argued that its commercial purpose of scraping the internet to provide information services to its clients was expressive and was therefore protected speech. Justice Feasby noted that Clearview's collection of internet-based information was bot-driven rather than carried out by humans. Nevertheless, he found that "scraping the internet with a bot to gather images and information may be protected by s. 2(b) when it is part of a process that leads to the conveyance of meaning" (at para 104).

Interestingly, Justice Feasby noted that since Clearview no longer offered its services in Canada, any expressive activities took place outside of Canada, and thus were arguably not protected by the Charter. However, he acknowledged that the services had at one point been offered in Canada and could be again. He observed that “until Clearview removes itself permanently from Alberta, I must find that its expression in Alberta is restricted by PIPA and the PIPA Regulation” (at para 106).

Having found a prima facie breach of s. 2(b), Justice Feasby considered whether this was a reasonable limit demonstrably justified in a free and democratic society, under s. 1 of the Charter. The Commissioner argued that the expression at issue in this case was commercial in nature and thus of lesser value. Justice Feasby was not persuaded by category-based assumptions of value; rather, he preferred an approach in which the regulation of commercial expression is consistent with and proportionate to its character.

Justice Feasby found that the Commissioner's reasonable interpretation of the exception in s. 7 of the regulations meant that it would exclude social media platforms or "other kinds of internet websites where images and personal information may be found" (at para 118). He noted that this is a source-based exception – in other words, some publicly available information may be used without knowledge or consent, but not other similar information. Whether information falls within the exception depends on its source, not on the purpose for which it is used. Justice Feasby expressed concern that the same narrow exception that excludes the scraping of images from the internet for the creation of a facial recognition database would also exclude the activities of search engines widely used by individuals to gain access to information on the internet. He thus found that the resulting restriction on expression was overbroad, stating: "Without a reasonable exception to the consent requirement for personal information made publicly available on the internet without use of privacy settings, internet search service providers are subject to a mandatory consent requirement when they collect, use and disclose such personal information by indexing and delivering search results" (at para 138). He added: "I take judicial notice of the fact that search engines like Google are an important (and perhaps the most important) way individuals access information on the internet" (at para 144).

Justice Feasby also noted that while it was important to give individuals some level of control over their personal information, "it must also be recognized that some individuals make conscious choices to make their images and information discoverable by search engines and that they have the tools in the form of privacy settings to prevent the collection, use, and disclosure of their personal information" (at para 143). His constitutional remedy – to strike the words "including, but not limited to magazines, books, and newspapers" from the regulation – was designed to allow "the word 'publication' to take its ordinary meaning which I characterize as 'something that has been intentionally made public'" (at para 149).

The Belt and Suspenders Approach

Although excising part of the publicly available information definition seems like a major victory for Clearview AI, in practical terms it is not. This is because of what the court refers to as the law's "belt and suspenders approach". This metaphor suggests that there are two routes to keep up privacy's pants – and loosening the belt does not remove the suspenders. In this case, the suspenders are located in the clause found in PIPA, as well as in its federal and BC counterparts, that limits the collection, use and disclosure of personal information to only that which "a reasonable person would consider appropriate in the circumstances". The court ruled that the Commissioner's conclusion that the scraping of personal information was not for purposes that a reasonable person would consider appropriate in the circumstances was reasonable and should not be overturned. The analysis set out in the joint report of findings emphasized that the company's mass data scraping involved over 3 billion images of individuals, including children. The scraped images were used to create biometric face prints that would remain in Clearview's databases even if the source images were removed from the internet, and the scraping was carried out for commercial purposes. The commissioners also found that the purposes were not related to the reasons why individuals might have shared their photographs online, could be used to the detriment of those individuals, and created the potential for a risk of significant harm.

Continuing with his analogy to search engines, Justice Feasby noted that Clearview AI's use of publicly available images was very different from the use of the same images by search engines. The different purposes are essential to the reasonableness determination. Justice Feasby stated: "The 'purposes that are reasonable' analysis is individualized such that a finding that Clearview's use of personal information is not for reasonable purposes does not apply to other organizations and does not threaten the operations of the internet" (at para 159). He noted that the commercial dimensions of the use are not determinative of reasonableness. However, he observed that "where images and information are posted to social media for the purpose of sharing with family and friends (or prospective friends), the commercialization of such images and information by another party may be a relevant consideration in determining whether the use is reasonable" (at para 160).

The result is that Clearview AI’s scraping of images from the public internet violates Alberta’s PIPA. The court further ruled that the Commissioner’s order was clear and specific, and capable of being implemented. Justice Feasby required Clearview AI to report within 50 days on its good faith progress in taking steps to cease the collection, use and disclosure of images and biometric data collected from individuals in Alberta, and to delete images and biometric data in its database that are from individuals in Alberta.

Harmonized Approaches to Data Protection Law in Canada

This decision highlights some of the challenges to the growing collaboration and cooperation of privacy commissioners in Canada when it comes to interpreting key terms and concepts in substantially similar legislation. Increasingly, the commissioners engage in joint investigations where complaints involve organizations operating in multiple jurisdictions in Canada. While this occurs primarily in the private sector context, it is not exclusively the case, as a recent joint investigation between the BC and Ontario commissioners into a health data breach demonstrates. Joint investigations conserve regulator resources and save private sector organizations from having to respond to multiple similar and concurrent investigations. In addition, joint investigations can lead to harmonized approaches and interpretations of shared concepts in similar legislation. This is a good thing for creating certainty and consistency for those who do business across Canadian jurisdictions.

However, harmonized approaches are vulnerable to multiple judicial review applications, as was the case following the Clearview AI investigation. Although the BC Supreme Court found that the BC Commissioner's order was reasonable, the Alberta Court of King's Bench decision demonstrates that a common front can be fractured. Justice Feasby found that a slight difference in wording between Alberta's regulations and those in BC and at the federal level was sufficient to justify finding the scope of Alberta's publicly available information exception to be unconstitutional.

Harmonized approaches may also be vulnerable to unilateral legislative change. In this respect, it is worth noting that an Alberta report on the impending reform of PIPA recommends “that the Government take all necessary steps, including through proposing amendments to the Personal Information Protection Act, to improve alignment of all provincial privacy legislation, including in the private, public and health sectors” (at p. 13).

The Elephant in the Room: Generative AI and Data Protection Law in Canada

In his reasons, Justice Feasby made Google’s search functions a running comparison for Clearview AI’s data scraping practices. Perhaps a better example would have been the data scraping that takes place in order to train generative AI models. However, the court may have avoided that example because there is an ongoing investigation by the Alberta, Quebec, BC and federal commissioners into OpenAI’s practices. The findings in that investigation are overdue – perhaps the delay has, at least in part, been caused by anticipation of what might happen with the Alberta Clearview AI judicial review. The Alberta decision will likely present a conundrum for the commissioners.

Reading between the lines of Justice Feasby's decision, it is entirely possible that he would find that the scraping of the public internet to gather training data for generative AI systems would both fall within the exception for publicly available information and be for a purpose that a reasonable person would consider appropriate in the circumstances. Generative AI tools are now widely used – more widely even than search engines, since these tools are now also embedded in search engines themselves. To find that personal information that may be indiscriminately found on the internet cannot be collected and used in this way because consent is required is fundamentally impractical. In the EU, the legitimate interest exception in the GDPR provides latitude for use in this way without consent, and recent guidance from the European Data Protection Supervisor suggests that legitimate interests, combined where appropriate with Data Protection Impact Assessments, may address key data protection issues.

In this sense, the approach taken by Justice Feasby seems to carve a path for data protection in a GenAI era in Canada by allowing data scraping of publicly available sources on the internet in principle, subject to the limit that any such collection, or any ensuing use or disclosure of the personal information, must be for purposes that a reasonable person would consider appropriate in the circumstances. However, this is not a perfect solution. In the first place, unlike the EU approach, which ensures that other privacy protective measures (such as privacy impact assessments) govern this kind of mass collection, Canadian law remains outdated and inadequate. Further, the publicly available information exceptions – including Alberta's, even after its constitutional nip and tuck – also require that, to use the language of the Alberta regulation, it be "reasonable to assume that the individual that the information is about provided that information". In fact, there will be many circumstances in which individuals have not provided the information posted online about them. This is the case with photos from parties, family events and other social interactions. Further, social media – and the internet as a whole – is full of non-consensual images, gossip, anecdotes and accusations.

The solution crafted by the Alberta Court of King’s Bench is therefore only a partial solution. A legitimate interest exception would likely serve much better in these circumstances, particularly if it is combined with broader governance obligations to ensure that privacy is adequately considered and assessed. Of course, before this happens, the federal government’s privacy reform measures in Bill C-27 must be resuscitated in some form or another.

 

The Commission d’accès à l’information du Québec (CAI) has released a decision regarding a pilot project to use facial recognition technology (FRT) in Métro stores in Quebec. When this is paired with a 2023 investigation report of the BC Privacy Commissioner regarding the use of FRT in Canadian Tire Stores in that province, there seems to be an emerging consensus around how privacy law will apply to the use of FRT in the retail sector in Canada.

Métro had planned to establish a biometric database to enable the use of FRT at certain of its stores operating under the Métro, Jean Coutu and Super C brands, on a pilot basis. The objective of the system was to reduce shoplifting and fraud. The system would function in conjunction with video surveillance cameras installed at the entrances and exits to the stores. The reference database would consist of images of individuals over the age of majority who had been linked to security incidents involving fraud or shoplifting. Images of all shoppers entering the stores would be captured on the video surveillance cameras and then converted to biometric face prints for matching with the face prints in the reference database.

The CAI initiated an investigation after receiving notice from Métro of the creation of the biometric database. The company agreed to put its launch of the project on hold pending the results of the investigation.

The Quebec case involved the application of Quebec's Act respecting the protection of personal information in the private sector (PPIPS) as well as its Act to establish a legal framework for information technology (LFIT). The LFIT requires an organization that is planning to create a database of "biometric characteristics and measurements" to disclose this fact to the CAI no later than 60 days before it is to be used. The CAI can impose requirements and can also order the use suspended or the database destroyed if it is not in compliance with any such orders or if it "otherwise constitutes an invasion of privacy" (LFIT art. 45).

Métro argued that the LFIT required individual consent only for the use of a biometric database to 'confirm or verify' the identity of an individual (LFIT art. 44). It maintained that its proposed use was different – the goal was not to confirm or verify the identities of shoppers; rather, it was to identify 'high risk' shoppers based on matches with the reference database. The CAI rejected this approach, noting the sensitivity of biometric data. Given the quasi-constitutional status of Canadian data protection laws, the CAI found that a 'large and liberal' approach to interpretation of the law was required. The CAI found that Métro was conflating the separate concepts of "verification" and "confirmation" of identity. In this case, the biometric faceprints in the probe images would be used to search for a match in the "persons of interest" database. Even if the goal of generating the probe images was not to determine the precise identity of all customers – or to add those face prints to the database – the underlying goal was to verify one attribute of the identity of shoppers, namely whether there was a match with the persons of interest database. This brought the system within the scope of the LFIT. The additional information in the persons of interest database, which could include the police report number, a description of the past incident, and related personal information, would facilitate the further identification of any matches.

Métro also argued that the validation or confirmation of identity did not happen in one single process and that therefore art. 44 of the LFIT was not engaged. The CAI dismissed what it described as the compartmentalisation of the process. Instead, the law required a consideration of the combined effect of all the steps in the operation of the system.

The company also argued that it had obtained the consent required under s. 12 of the PPIPS. It maintained that the video cameras captured shoppers' images with their consent, as there was notice of use of the cameras and the shoppers continued into the stores. It argued that the purposes for which it used the biometric data were consistent with the purposes for which the security cameras were installed, making it a permissible secondary use under s. 12(1) of the PPIPS. The CAI rejected this argument, noting that this was not a question of a single collection and a related secondary use. Rather, the generation of biometric faceprints from images captured on video is an independent collection of personal data. That collection must comply with data protection requirements and cannot be treated as a secondary use of already collected data.

The system proposed by Métro would be used on every person entering the designated stores, effectively making participation a condition of entry. Individuals would have no ability to opt out and still shop, and there would be no alternative to participation in the FRT scheme. Not only would consent not be possible for the general population entering the stores, but those whose images became part of the persons of interest database would also have no choice in the matter.

Métro argued that its obligation to protect its employees and the public outweighed the privacy interests of its customers. The CAI rejected this argument, noting that this was not the test set out in the LFIT, which asked instead whether the database of biometric characteristics "otherwise constitutes an invasion of privacy" (art. 45). The CAI was of the view that creating a database of biometric characteristics, and matching those characteristics against face prints generated from data captured from the public without their consent in circumstances where the law required consent, amounted to a significant infringement of privacy rights. The Commission emphasized again the highly sensitive character of the personal data and issued an order prohibiting the implementation of the proposed system.

The December 2023 BC investigation report was based on that province's Personal Information Protection Act. It followed a commissioner-initiated investigation into the use by several Canadian Tire stores in BC of FRT systems integrated with video surveillance cameras. As in the Métro pilot, biometric face prints were generated from the surveillance footage and matched against a persons-of-interest database. The stated goals of the systems were similar as well – to reduce shoplifting and enhance the security of the stores. As was the case in Quebec, the BC Commissioner found that the generation of biometric face prints was a new collection of personal information that required express consent. The Commissioner found that the stores had not provided adequate notice of collection, making the issue of consent moot. However, he went on to find that even if there had been proper notice, express consent had not been obtained, and consent could not be implied in the circumstances. The collection of biometric faceprint data from everyone entering the stores in question was not for a purpose that a reasonable person would consider appropriate, given the acute sensitivity of the data collected and the risks to the individual that might flow from its misuse, inaccuracy, or from data breaches. Interestingly, in BC, the four stores under investigation removed their FRT systems soon after receiving the notice of investigation. During the investigation, the Commissioner found little evidence to support the need for the systems, with store personnel admitting that the systems added little to their normal security functions. He chastised the retailers for failing both to conduct privacy impact assessments prior to adoption and to put in place measures to evaluate the effectiveness and performance of the systems.

An important difference between the two cases relates to the ability of the CAI to be proactive. In Quebec, the LFIT requires notice to be provided to the Commissioner of the creation of a biometric database in advance of its implementation. This enabled the CAI to rule on the appropriateness of the system before privacy was adversely impacted on a significant scale. By contrast, the systems in BC were in operation for three years before sufficient awareness surfaced to prompt an investigation. Now that powerful biometric technologies are widely available for retail and other uses, governments should be thinking seriously about reforming private sector privacy laws to provide for advance notice requirements – at the very least, for biometric systems.

Following both the Quebec and the BC cases, it is difficult to see how broad-based FRT systems integrated with store security cameras could be deployed in a manner consistent with data protection laws – at least under current shopping business models. This suggests that such uses may be emerging as a de facto no-go zone in Canada. Retailers may argue that this reflects a problem with the law, to the extent that it interferes with their business security needs. Yet if privacy is to mean anything, there must be reasonable limits on the collection of personal data – particularly sensitive data. Just because something can be done, does not mean it should be. Given the rapid advance of technology, we should be carefully attuned to this. Being FRT face-printed each time one goes to the grocery store for a carton of milk may simply be an unacceptably disproportionate response to an admittedly real problem. It is a use of technology that places burdens and risks on ordinary individuals who have not earned suspicion, and who may have few other choices for accessing basic necessities.

 

The Clearview AI saga has a new Canadian instalment. In December 2024, the British Columbia Supreme Court rendered a decision on Clearview AI’s application for judicial review of an order issued by the BC Privacy Commissioner. This post explores that decision and some of its implications. The first part sets the context, the next discusses the judicial review decision, and part three looks at the ramifications for Canadian privacy law of the larger (and ongoing) legal battle.

Context

Late in 2021, the Privacy Commissioners of BC, Alberta, Quebec and Canada issued a joint report on their investigation into Clearview AI (My post on this order is here). Clearview AI, a US-based company, had created a massive facial recognition (FRT) database from images scraped from the internet that it marketed to law enforcement agencies around the world. The investigation was launched after a story broke in the New York Times about Clearview’s activities. Although Canadian police services initially denied using Clearview AI, the RCMP later admitted that it had purchased two licences. Other Canadian police services made use of promotional free accounts.

The joint investigation found that Clearview AI had breached the private sector data protection laws of the four investigating jurisdictions by collecting and using sensitive personal information without consent and by doing so for purposes that a reasonable person would not consider appropriate in the circumstances. The practices also violated Quebec's Act to establish a legal framework for information technology. Clearview AI disagreed with these conclusions. It indicated that it would temporarily cease its operations in Canada but maintained that it was entitled to scrape content from the public web. After Clearview failed to respond to the recommendations in the joint report, the Commissioners of Quebec, BC and Alberta issued orders against the company. These orders required Clearview AI to cease offering its services in their jurisdictions, to make best efforts to stop collecting the personal information of those within their respective provincial boundaries, and to delete personal information in its databases that had been improperly collected from those within their boundaries. No order was issued by the federal Commissioner, who does not have order-making powers under the Personal Information Protection and Electronic Documents Act (PIPEDA). He could have applied to the Federal Court for an order but chose not to do so (more on that in Part 3 of this post).

Clearview AI declined to comply with the provincial orders, other than to note that it had already temporarily ceased operations in Canada. It then applied for judicial review of the orders in each of the three provinces.

To date, only the challenge to the BC Order has been heard and decided. In the BC application, Clearview argued that the Commissioner’s decision was unreasonable. Specifically, it argued that BC’s Personal Information Protection Act (PIPA) did not apply to Clearview AI, that the information it scraped was exempt from consent requirements because it was “publicly available information”, and that the Commissioner’s interpretation of purposes that a reasonable person would consider appropriate in the circumstances was unreasonable and failed to consider Charter values. In his December 2024 decision, Justice Shergill of the BC Supreme Court disagreed, upholding the Commissioner’s order.

The BC Supreme Court Decision on Judicial Review

Justice Shergill confirmed that BC’s PIPA applies to Clearview AI’s activities, notwithstanding the fact that Clearview AI is a US-based company. He noted that applying the ‘real and substantial connection’ test – which considers the nature and extent of connections between a party’s activities and the jurisdiction in which proceedings are initiated – leads to that conclusion. There was evidence that Clearview AI’s database had been marketed to and used by police services in BC, as well as by the RCMP which polices many parts of the province. Further, Justice Shergill noted that Clearview’s data scraping practices were carried out worldwide and captured data about BC individuals including, in all likelihood, data from websites hosted in BC. Interestingly, he also found that Clearview’s scraping of images from social media sites such as Facebook, YouTube and Instagram also created sufficient connection, as these sites “undoubtedly have hundreds of thousands if not millions of users in British Columbia” (at para 91). In reaching his conclusion, Justice Shergill emphasized “the important role that privacy plays in the preservation of our societal values, the ‘quasi-constitutional’ status afforded to privacy legislation, and the increasing significance of privacy laws as technology advances” (at para 95). He also found that there was nothing unfair about applying BC’s PIPA to Clearview AI, as the company “chose to enter British Columbia and market its product to local law enforcement agencies. It also chooses to scrape data from the Internet which involves personal information of people in British Columbia” (at para 107).

Sections 12(1)(e), 15(1)(e) and 18(1)(e) of PIPA provide exceptions to the requirement of knowledge and consent for the collection, use and disclosure of personal information where “the personal information is available to the public” as set out in regulations. The PIPA Regulations include “printed or electronic publications, including a magazine, book, or newspaper in printed or electronic form.” Similar exceptions are found in the federal PIPEDA and in Alberta’s Personal Information Protection Act. Clearview AI had argued that public internet websites, including social media sites, fell within the category of electronic publications and their scraping was thus exempt from consent requirements. The commissioners disagreed, and Clearview AI challenged this interpretation as unreasonable.

Justice Shergill found that the Commissioners’ conclusion that social media websites fell outside the exception for publicly available information was reasonable. The BC Commissioner was entitled to read the list in the PIPA Regulations as a “narrow set of sources” (at para 160). Justice Shergill reviewed the reasoning in the joint report for why social media sites should be treated differently from other types of publications mentioned in the exception. These include the fact that social media sites are dynamic and not static and that individuals exercise a different level of control over their personal information on social media platforms than on news or other such sites. Although the legislation may require a balancing of privacy rights with private sector interests, Justice Shergill found that it was reasonable for the Commissioner to conclude that privacy rights should be given precedence over commercial interests in the overall context of the legislation. Referencing the Supreme Court of Canada’s decision in Lavigne, Justice Shergill noted that “it is the protection of individual privacy that supports the quasi-constitutional status of privacy legislation, not the right of the organization to collect and use personal information” (at para 174). An individual’s ability to control what happens to their personal information is fundamental to the autonomy and dignity protected by privacy rights and “it is thus reasonable to conclude that any exception to these important rights should be interpreted narrowly” (at para 175).

Clearview AI argued that posting photos to social media sites reflected an individual's autonomous choice to surrender the information to the public domain. Justice Shergill preferred the Commissioner's interpretation, which considered the sensitivity of the biometric information and the impact its collection and use could have on individuals. He referenced the Supreme Court of Canada's decision in R. v. Bykovets (my post on this case is here), which emphasized that individuals "may choose to divulge certain information for a limited purpose, or to a limited class of persons, and nonetheless retain a reasonable expectation of privacy" (at para 162, citing para 46 of Bykovets).

Clearview AI also argued that the Commissioner was unreasonable in not taking into account Charter values in his interpretation of PIPA. In particular, the company was of the view that the freedom of expression, which guarantees the right both to communicate and to receive information, extended to the ability to access and use publicly available information without restriction. Although Justice Shergill found that the Commissioner could have been more direct in his consideration of Charter values, his decision was still not unreasonable on this point. The Commissioner did not engage with the Charter values issues at length because he did not consider the law to be ambiguous – Charter values-based interpretation comes into play in helping to resolve ambiguities in the law. As Justice Shergill noted, “It is difficult to understand how Clearview’s s. 2(b) Charter rights are infringed through an interpretation of ‘publicly available’ which excludes it from collecting personal information from social media websites without consent” (at para 197).

Like its counterpart legislation in Alberta and at the federal level, BC's PIPA contains a section that articulates the overarching principle that any collection, use or disclosure of personal information must be for purposes that a reasonable person would consider appropriate in the circumstances. This means, among other things, that even if the exception to consent had applied in this case, the collection and use of the scraped personal information would still have had to be for a reasonable purpose.

The Commissioners had found that, overall, Clearview’s scraping of vast quantities of sensitive personal information from the internet to build a massive facial recognition database was not for a purpose that a reasonable person would consider appropriate in the circumstances. Clearview AI preferred to characterize its purpose as providing a service to the benefit of law enforcement and national security. In their joint report, the Commissioners had rejected this characterization, noting that it did not justify the massive, widespread scraping of personal information by a private sector company. Further, the Commissioners had noted that such an activity could have negative consequences for individuals, including cybersecurity risks and risks that errors could lead to reputational harm. They also observed that the activity contributed to “broad-based harm inflicted on all members of society, who find themselves under continual mass surveillance by Clearview based on its indiscriminate scraping and processing of their facial images” (at para 253). Justice Shergill found that the record supported these conclusions, and that the Commissioners’ interpretation of reasonable purposes was reasonable.

Clearview AI also argued that the Commissioner’s Order was “unnecessary, unenforceable or overbroad”, and should thus be quashed (at para 258). Justice Shergill accepted the Commissioner’s argument that the order was necessary because Clearview had only temporarily suspended its services in Canada, leaving open the possibility that it would offer its services to Canadian law enforcement agencies in the future. He also accepted the Commissioner’s argument that compliance with the order was possible, noting that Clearview had agreed to take certain steps to cease collection and remove images in its settlement of an Illinois class action lawsuit. The order required the company to use “best efforts”, in an implicit acknowledgement that a perfect solution was likely impossible. Clearview argued that a “best efforts” standard was too vague to be enforceable; Justice Shergill disagreed, noting that courts often use “best efforts” language. Further, and quite interestingly, Justice Shergill noted that “if it is indeed impossible for Clearview to sufficiently identify personal information sourced from people in British Columbia, then this is a situation of Clearview’s own making” (at para 279). He noted that “[i]t is not an answer for Clearview to say that because the data was indiscriminately collected, any order requiring it to cease collecting data of persons present in a particular jurisdiction is unenforceable” (at para 279).

Implications

This is a significant decision, as it upholds interpretations of important provisions of BC’s PIPA. These provisions are similar to ones in Alberta’s PIPA and in the federal PIPEDA. However, it is far from the end of the Clearview AI saga, and there is much still to watch.

In the first place, the BC Supreme Court decision is already under appeal to the BC Court of Appeal. If the Court of Appeal upholds this decision, it will be a major victory for the BC Commissioner. Yet, either way, there is likely to be a further application for leave to appeal to the Supreme Court of Canada. It may be years before the issue is finally resolved. In that time, data protection laws in BC, Alberta and at the federal level might well be reformed. It will therefore also be important to examine any new bills to see whether the provisions at issue in this case are addressed in any way or left as is.

In the meantime, Clearview AI has also filed for judicial review of the orders of the Quebec and Alberta commissioners, and these applications are moving forward. All three orders (BC, Alberta and Quebec) are based on the same joint findings. A decision by either or both of the Quebec and Alberta superior courts that the orders are unreasonable could strike a significant blow to the united front that Canada’s commissioners are increasingly showing on privacy issues that affect all Canadians. There is therefore a great deal riding on the outcomes of these applications. Regardless of the outcomes, expect applications for leave to appeal to the Supreme Court of Canada. Leave to appeal is less likely to be granted if all three provincial courts of appeal take a similar approach to the issues. It is at this point impossible to predict how this litigation will play out.

It is notable that the Privacy Commissioner of Canada, who has no order-making powers under PIPEDA but who can apply to the Federal Court for an order, declined to do so. Under PIPEDA, such an application requires a hearing de novo by the Federal Court – this means that, unlike the judicial review proceedings in the other provinces, the Federal Court need not show any deference to the federal Commissioner’s findings. Instead, the Court would proceed to a determination of the issues after hearing and considering the parties’ evidence and argument. One might wonder whether the rather bruising decision of the Federal Court in Privacy Commissioner v. Facebook (which was subsequently overturned by the Federal Court of Appeal) might have influenced the Commissioner not to roll the dice by seeking an order with so much at stake. That a hearing de novo before the Federal Court could upset the apple cart of the Commissioners’ attempts to co-ordinate efforts, reduce duplication and harmonize interpretation is sobering. Yet it also means that if this litigation saga ends with the conclusion that the orders are reasonable and enforceable, BC, Alberta and Quebec residents will have received results in the form of orders requiring Clearview to delete images and to geo-fence any future collection of images to protect those within those provinces (orders which will still need to be made enforceable in the US) – while Canadians elsewhere in the country will not. Canadians will need long-promised but as yet undelivered reform of PIPEDA to address the ability of the federal Commissioner to issue orders – ones that would be subject to judicial review with appropriate deference, rather than second-guessed by the Personal Information and Data Protection Tribunal proposed in Bill C-27.

Concluding thoughts

Despite rulings from privacy and data protection commissioners around the world that Clearview AI is in breach of their respective laws, and notwithstanding two class action lawsuits in the US under the Illinois Biometric Information Privacy Act, the company has continued to grow its massive FRT database. At the time of the Canadian investigation, the database was said to hold 3 billion images. Current reports place this number at over 50 billion. Considering the resistance of the company to compliance with Canadian law, this raises the question of what it will take to motivate compliance by resistant organizations. As the proposed amendments to Canada’s federal private sector privacy laws wither on the vine after neglect and mismanagement in their journey through Parliament, this becomes a pressing and important question.

 

Ontario plans to introduce digital identity services (Digital ID) to provide Ontarians with better access to their personal health information (PHI) in the provincial Electronic Health Record (EHR). This is being done through proposed amendments to the Personal Health Information Protection Act (PHIPA) introduced in Schedule 6 of Bill 231, currently before the legislature. Schedule 6 replaces proposed amendments to PHIPA regulations that were introduced in the summer of 2024 and that were substantively criticized by Ontario’s Privacy Commissioner. In introducing Bill 231, Health Minister Sylvia Jones stated that the goal is “to provide more people with the right publicly funded care in the right place by making it easier to access your health care records”.

Digital ID is an electronic means of verifying a person’s identity. Typically, such systems include some form of biometric data (for example, a face-print) to create a secure and verifiable ID system. We are becoming increasingly used to consuming products and services from both public and private sector sources in mobile and online contexts. Digital ID has the potential to improve secure access to these services.

Digital ID is already in place in many countries, but adoption has been slow in Canada. This may be in part because Digital ID raises concerns among some about the empowerment of a surveillance state. There are rumours that Ontario retreated from plans to introduce a more ambitious public sector Digital ID system over concerns about potential backlash, although it is quietly moving ahead in Bill 231 with the Digital Health ID. Digital ID is most advantageous where a single Digital ID can be used to access multiple sites and services, eliminating the need to manage numerous usernames and passwords (with the security risks such management can entail). Under Bill 231, however, the Digital Health ID will be single purpose, significantly reducing its advantages.

There is no doubt that Digital ID systems raise important privacy and security issues. They must be carefully implemented to ensure that the sensitive personal information they incorporate and the identities they represent are not misappropriated. They also raise equity issues. If Digital ID provides better and faster access to information and services, those who are not able to make use of Digital ID – because of age, disability, or the digital divide – will be at a disadvantage. Attention must be paid to ensuring that services and information are still available to those who must use other forms of identification – and that those other forms of identification remain accessible so long as they are needed.

Ontario’s Privacy Commissioner, in her comments on Bill 231, indicates that she fully supports the Ontario government’s goal in introducing Digital ID for the Electronic Health Record. She notes the importance of “enabling meaningful access to one’s health records” and agrees that “EHR access can help Ontarians better manage their health, and in turn, help create efficiencies in the health care system”. However, while she endorses the objectives, the Commissioner is highly critical of Bill 231. Her detailed comments note that the proposed amendments to PHIPA have the potential to reduce rights of access to personal health information in the EHR; that the bill contains no parameters on how, why and by whom the Digital ID scheme will be used; and that it includes broad regulation- and directive-making powers that could unravel rights and requirements already in place under PHIPA. She also observes that it conflates the roles of Ontario Health with respect to health data and Digital ID, and that it creates inconsistent and incomplete powers that will hinder enforcement and oversight. These are important concerns, articulately expressed by the head of perhaps the only independent body in the province capable of making sense of Bill 231’s Schedule 6.

Schedule 6 is brutally difficult to read and comprehend. This is largely because the introduction of Digital Health ID is being done as a series of amendments to an already (overly) complex piece of health privacy legislation. New legislation often has a narrative structure that – although not gripping reading – is at least relatively easy to understand and to follow. Bills that amend existing legislation can also generally be understood by those who work with them. You can cross-reference and see where new powers are added, and where the wording of clauses has been changed. But Schedule 6 of Bill 231 is an ugly hybrid. It introduces a complex new Digital Health ID scheme as an amendment to existing health privacy legislation, even though Digital Health ID is more than just a privacy issue. There is no doubt that such a system would have to be compliant with PHIPA and that some amendments might be required. However, Digital Health ID creates a new system for accessing health data in the EHR. It could have been introduced as a separate bill. Such an approach would have been clearer, more transparent and more accessible than the convoluted and incomplete scheme that has been shoe-horned into PHIPA by Bill 231.

It is not just the lack of transparency caused by such a contorted set of amendments that is a problem. In a 2019 presentation, Assistant Deputy Minister of Health Hein described the government’s “Digital First for Health” program, which promised to “[m]odernize PHIPA to make it easier for Ontarians to access their information, streamline information sharing processes, and support the use of data for analytics and planning.” One of the goals of PHIPA modernization was “[r]educing barriers to patient access by enabling patients to more easily access, use, and share their personal health information, empowering them to better manage their health.” This framing sets up Digital ID as part of the PHIPA modernization process. But Digital ID is not a “solution” to barriers caused by privacy laws. The real barriers to better access to health data are structural and infrastructural issues in health data management.

Let me be clear that I am not suggesting that the Ontario government’s health system reform goals are not important. They are. But Digital Health ID should not be framed as “PHIPA modernization”. The objectives of such a system are not about modernizing health privacy legislation; they are about modernizing the health care system. They will have privacy implications which will need to be attended to, but framing them as “PHIPA modernization” means that you end up where we are now: with changes to the health care system being implemented through complicated and problematic amendments to legislation that is first and foremost meant to protect the privacy of personal health information.

Australia and New Zealand have both introduced government-backed digital ID systems through specific digital identity legislation. Admittedly both statutes address digital identity more broadly than just in the health sector. Nevertheless, these laws are examples of how legislation can clearly and systematically set out a framework for digital identity that includes all the necessary elements – including how the law will protect privacy and how it dovetails with existing privacy laws and oversight. This kind of framework facilitates public debate and discussion. It makes it easier to understand, critique and propose improvements to the Bill. In her comments on Bill 231, for example, the Privacy Commissioner notes that “[c]larity and coherence of the many roles of Ontario Health would also assist my office’s oversight and enforcement role.” She observes that Schedule 6 “is inconsistent and incomplete in its approach to my office’s oversight and enforcement authority”. These are only two examples of places in her comments where it is evident that the lack of clarity regarding the proposed Digital Health ID scheme hampers its assessment.

Schedule 6 also leaves much of its substance to future regulations and directives. This is part of a disturbing trend in law-making in which key details of legislation are left to behind-the-scenes rulemaking. As the Privacy Commissioner notes in her comments, some of the matters left to these subordinate forms of regulation are matters of policy for which public consultation and engagement are required. As she so aptly puts it: “Directives are appropriate for guiding the implementation of legal requirements, not for establishing the very legal requirements to be implemented.”

Clearly, technology moves fast, and it is hard to keep laws relevant and applicable. There may be a need in some cases to resort to different tools or strategies to ensure that the laws remain flexible enough to adapt to evolving and emerging technologies. The challenge is, however, to determine which things belong in the law, and which things can be ‘flexed’. There is a difference between building flexibility into a law and enacting something that looks like a rough draft with sticky notes in places where further elaboration will be needed. Schedule 6 of Bill 231 is a rough draft of a set of amendments to an already overly-complex law. It should be its own statute, carefully coordinated with PHIPA and its independent oversight.

Digital Health ID may be important to improve access to health information for Ontarians. It will certainly carry with it risks that should be properly managed. As a starting point, Ontarians deserve a clear and transparent law that can be understood and debated. Further, privacy law should not be set up as a problem that stands in the way of reforming the health care system. Such an approach does not make good law, nor does it bode well for the privacy rights of Ontarians.

 

Regulatory sandboxes are a relatively recent innovation in regulation (with the first one being launched by the UK Financial Conduct Authority in 2015). Since that time, they have spread rapidly in the fintech sector. The EU’s new Artificial Intelligence Act has embraced this new tool, making AI regulatory sandboxes mandatory for member states. In its most recent budget, Canada’s federal government also revealed a growing interest in advancing the use of regulatory sandboxes, although sandboxes are not mentioned in the ill-fated Artificial Intelligence and Data Act in Bill C-27.

Regulatory sandboxes are seen as a tool that can support innovation in areas where complex technology evolves rapidly, creating significant regulatory hurdles for innovators to overcome. The goal is not to evade or dilute regulation; rather, it is to create a space where regulators and innovators can explore how regulations designed to protect the public should be applied to technologies that were unforeseen at the time the regulations were drafted. The sandbox is meant to be a learning experience for both regulators and innovators. Outcomes can include new guidance that can be shared with all innovators; recommendations for legislative or regulatory reform; or even decisions that a particular innovation is not yet capable of safe deployment.

Of course, sandboxes can raise issues about regulatory capture and the independence of regulators. They are also resource intensive, requiring regulators to make choices about how to meet their goals. They require careful design to minimize risks and maximize return. They also require the interest and engagement of regulated parties.

In the autumn of 2023, Elif Nur Kumru and I began an SSHRC-funded project to explore the potential for a privacy regulatory sandbox for Ontario. Working in partnership with the Office of Ontario’s Information and Privacy Commissioner, we examined the history and evolution of regulatory sandboxes. We met with representatives of data protection authorities in the United Kingdom, Norway and France to learn about the regulatory sandboxes they had developed to address privacy issues raised by emerging technologies, including artificial intelligence. We identified some of the challenges and issues, as well as key features of regulatory sandboxes. Our report is now publicly available in both English and French.

A recent decision of the Federal Court of Canada (Ali v. Minister of Public Safety and Emergency Preparedness) highlights the role of judicial review in addressing automated decision-making. It also prompts reflection on the limits of emerging codified rights to an explanation.

In July 2024, Justice Battista overturned a decision of the Refugee Protection Division (RPD) which had vacated the refugee status of the applicant, Mr. Ali. The decision of the RPD was based largely on a photo comparison that led the RPD to conclude that Mr. Ali was not a Somali refugee as he had claimed. Rather, it concluded that he was a Kenyan student who had entered Canada on a student visa in 2016, a few months prior to Mr. Ali’s refugee protection claim.

Throughout the proceedings the applicant had sought information about how photos of the Kenyan student had been found and matched with his own. He was concerned that facial recognition technology (FRT) – which has had notorious deficiencies when used to identify persons of colour – had been used. In response, the Minister denied the use of FRT, maintaining instead that the photographs had been found and analyzed through a ‘manual process’. A Canadian Border Services agent subsequently provided an affidavit to the effect that “a confidential manual investigative technique was used” (at para 15). The RPD was satisfied with this assurance. It considered that how the photographs had been gathered was irrelevant to its own capacity as a tribunal to decide based on the photographs before it. It concluded that Mr. Ali had misrepresented his identity.

On judicial review, Justice Battista found that the importance of the decision to Mr. Ali and the quasi-judicial nature of the proceedings meant that he was owed a high level of procedural fairness. Because a decision of the RPD cannot be appealed, and because the consequences of revocation of refugee status are very serious (including loss of permanent resident status and possible removal from the country), Justice Battista found that “it is difficult to find a process under [the Immigration and Refugee Protection Act] with a greater imbalance between severe consequences and limited recourse” (at para 23). He found that the RPD had breached Mr. Ali’s right to procedural fairness “when it denied his request for further information about the source and methodology used by the Minister in obtaining and comparing the photographs” (at para 28).

Justice Battista ruled that, given the potential consequences for the applicant, disclosure of the methods used to gather the evidence against him “had to be meaningful” (at para 33). He concluded that it was unfair for the RPD “to consider the photographic evidence probative enough for revoking the Applicant’s statuses and at the same time allow that evidence to be shielded from examination for reliability” (at para 37).

In addition to finding a breach of procedural fairness, Justice Battista also found that the RPD’s decision was unreasonable. He noted that there had been sufficiently credible evidence before the original RPD refugee determination panel to find that Mr. Ali was a Somali national entitled to refugee protection. None of this evidence had been assessed in the decision of the panel that vacated Mr. Ali’s refugee status. Justice Battista noted that “[t]he credibility of this evidence cannot co-exist with the validity of the RPD vacation panel’s decision” (at para 40). He also noted that the applicant had provided an affidavit describing differences between his photo and that of the Kenyan student; this evidence had not been considered in the RPD’s decision, contributing to its unreasonableness. The RPD also dismissed evidence from a Kenyan official that, based on biometric records analysis, there was no evidence that Mr. Ali was Kenyan. Justice Battista noted that this dismissal of the applicant’s evidence was in “stark contrast to its treatment of the Minister’s photographic evidence” (at para 44).

The Ali decision and the right to an explanation

Ali is interesting to consider in the context of the emerging right to an explanation of automated decision-making. Such a right is codified for the private sector context in the moribund Bill C-27, and Quebec has enacted a right to an explanation for both public and private sector contexts. Such rights would apply in cases where an automated decision system (ADS) has been used (and in the case of Quebec, the decision must be based “exclusively on an automated processing” of personal information). Yet in Ali there is no proof that the decision was made or assisted by an AI technology – in part because the Minister refused to explain their ‘confidential’ process. Further, the ultimate decision was made by humans. It is unclear how a codified right to an explanation would apply if the threshold for the exercise of the right is based on the obvious and/or exclusive use of an ADS.

It is also interesting to consider the outcome here in light of the federal Directive on Automated Decision Making (DADM). The DADM, which largely addresses the requirements for design and development of ADS in the federal public sector, incorporates principles of fairness. It applies to “any system, tool, or statistical model used to make an administrative decision or a related assessment about a client”. It defines an “automated decision system” as “[a]ny technology that either assists or replaces the judgment of human decision-makers […].” In theory, this would include the use of automated systems such as FRT that assist in human decision-making. Where an ADS is developed and used, the DADM imposes transparency obligations, which include an explanation in plain language of:

  • the role of the system in the decision-making process;
  • the training and client data, their source, and method of collection, as applicable;
  • the criteria used to evaluate client data and the operations applied to process it;
  • the output produced by the system and any relevant information needed to interpret it in the context of the administrative decision; and
  • a justification of the administrative decision, including the principal factors that led to it. (Appendix C)

The catch, of course, is that it might be impossible for an affected person to know whether a decision has been made with the assistance of an AI technology, as was the case here. Further, the DADM is not effective at capturing informal or ‘off-the-books’ uses of AI tools. The decision in Ali therefore does two important things in the administrative law context. First, it confirms that – in the case of a high-impact decision – the individual has a right, as a matter of procedural fairness, to an explanation of how the decision was reached. Judicial review thus provides recourse for affected individuals – something that the more prophylactic DADM does not. Second, this right includes an obligation to provide details that could either explain or rule out the use of an automated system in the decisional process. In other words, procedural fairness includes a right to know whether and how AI technologies were used in reaching the contested decision. Mere assertions that no algorithms were used in gathering evidence or in making the decision are insufficient – if an automated system might have played a role, the affected individual is entitled to know the details of the process by which the evidence was gathered and the decision reached. Ultimately, what Justice Battista crafts in Ali is not simply a right to an explanation of automated decision-making; rather, it is a right to an explanation of administrative decision-making processes that accounts for the realities of an AI era. In a context in which powerful computing tools are available for both general and personal use, and are not limited to purpose-specific, carefully governed and auditable in-house systems, the ability to demand an explanation of the decisional process in order to rule out the non-transparent use of AI systems seems increasingly important.

Note: The Directive on Automated Decision-Making is currently undergoing its fourth review. You may participate in consultations here.

A 2024 investigation report from the Office of the Ontario Information and Privacy Commissioner (OIPC) highlights the tension between the desire of researchers to access health data on the one hand, and the need to protect patient privacy on the other. The protection of personal health information (PHI) is of great practical importance as misuse of such information can have serious consequences for individuals. Yet there is also a significant autonomy and dignity dimension as well. As patients, we are required to share very personal health information with physicians in order to be treated. The understanding is that when we provide that information, it will be appropriately cared for, and that it will not be used for other purposes without our express consent unless it falls within carefully constrained legislative exceptions.

In Ontario, the Personal Health Information Protection Act (PHIPA) provides the basic framework for the protection of PHI. Under PHIPA, those who collect PHI from patients are custodians of that information and have significant legal duties. Custodians must obtain appropriate consent for the collection of PHI; they are obliged to use it only for consented purposes; and they must keep it secure. Because of the strong public interest in medical research – for which good data is essential – PHIPA provides several avenues to support medical research. The first is consent. For research studies that require identifiable individuals to participate and to share their data, researchers can recruit participants and seek their informed consent to the collection and use of their data. Consent is not required if researchers use de-identified data, but they must request access to such data, and must complete a research ethics protocol, which is evaluated by a hospital or university research ethics board (REB). The Ontario government has also created “prescribed entities” under PHIPA. A prescribed entity has authority under the legislation to collect health administrative data as well as other data, to secure and administer it, and to use it for analytic purposes. Pursuant to PHIPA, they have the lawful authority to disclose the PHI to researchers under conditions that also involve research protocols and ethics review. ICES is the leading example of a prescribed entity for analytics and research using and disclosing Ontarians’ PHI. Prescribed entities amass significant quantities of PHI but do so under strict regulatory control. Their privacy and security practices are reviewed every three years by the OIPC, and they must comply with any recommendations made by the OIPC. However, prescribed entities do not meet all needs for health data for research, in spite of their growing patient chart datasets. In addition, there have been concerns raised that access to data in the hands of prescribed entities is cumbersome, although this is in part due to the added requirement for a privacy impact assessment (in addition to a research ethics protocol) as mandated by the OIPC.

It is within this context that the complaint that fueled this investigation into the University of Toronto’s Practice-Based Research Network (UTOPIAN) must be understood. Created and overseen through the Department of Family & Community Medicine at the University of Toronto (the University), UTOPIAN was essentially framed as a research project. The “research” described by the University in its REB application involved the creation of a database of “anonymized patient data from EMRs of primary health care providers”, providing “accessible data options for research and public health surveillance”, and devising “algorithms or other processes to enable automated EMR data collection, data de-identification, and other data processes” (at para 150). The University sought and received ethics approval from its Research Ethics Board (REB). It then collected PHI on a regular and ongoing basis from clinicians in primary care practices in Ontario affiliated with the University to create a pool of health data. To obtain the data, UTOPIAN sought the agreement of individual physicians to provide regular downloads of their patient electronic medical record (EMR) data to UTOPIAN. It then provided access to this data for health research to members of the broader U of T health research community.

Not just the volume, but also the type of information collected by UTOPIAN increased over time. The investigation report notes that in 2020 the University “significantly increased the extent of the information it uploaded from physicians’ electronic medical records (EMR) systems” (at para 5). The information collected included the full patient chart and identifying information, although the identifying information was stored separately.

An initial complaint to the OIPC was filed by doctors who were aware of but uncomfortable with UTOPIAN. They raised several concerns with the OIPC but sought to remain anonymous out of fear of retribution within the university health network. The OIPC therefore proceeded with the investigation as if it were a commissioner-initiated complaint. The issues for investigation were whether UTOPIAN was properly “research” within the meaning of PHIPA; and if it was, whether it complied with the requirements for research under s. 44 of PHIPA.

The investigator began by considering whether, assuming UTOPIAN was research, it had complied with PHIPA requirements. Research projects that use patient data without consent must have a research plan approved by a Research Ethics Board (REB). They must also enter into a research agreement with the REB, and they must comply with all conditions set by the REB. A copy of the research plan and the REB decision approving the plan must be shared with the custodian who is asked to provide data for the project. PHI obtained under such a research plan must only be used for the specified purposes approved by the REB. Researchers must also notify the custodians who provide the data if there has been any breach of the research agreement.

The investigator found that the University was in breach of several of its obligations. First, it did not share its research plan with data custodians, nor did it provide copies of updated research plans as the project progressed. Instead, it provided a letter that summarized the project, and that the custodian could sign to indicate agreement. Although the University maintained that copies of the other documentation were available on request, the letter did not specify this. The investigator found that the letter lacked important details, including the end date of the project. While she found the idea of providing a high-level summary commendable, she also found that the other documents should have been appended to the letter, and it should have been clear to custodians that these documents contained additional information.

The investigator also found that the UTOPIAN project changed over time, and while new custodians were asked to sign an updated version of the Provider Letter, no new letter was sent to existing participant custodians. Instead, they received email notices about changes to the project. Some of these, such as the extraction of the full patient chart, were significant. The investigator found that email notices did not suffice – there had to be express agreement with the new changes. Further, she found that notice was only provided of what the University considered to be the most significant changes. The investigator found that it was not reasonable to treat the sending of emails, with consent assumed if no objections were raised, as sufficient to constitute agreement. She noted that emails can be overlooked by busy physicians or can even be lost in spam filters. She also disagreed with the University’s characterization of some of the changes as ‘minor’. She found that the University needed to ensure that custodians “clearly, unambiguously and unequivocally communicated their acceptance of the proposed amendment to the Provider Agreement rather than relying on silence” (at para 101).

REB approvals for research projects are time-limited and can be renewed. In this case, the REB approval expired in November 2022, but the University continued to collect PHI after that date (a date which had not been provided in the letter to custodians). This collection of PHI was therefore not authorized under PHIPA. The University sent a letter in January 2023 to custodians informing them that there had been an inadvertent uploading of patient data after the expiry of the agreement. Although it destroyed this data, the investigator nevertheless found that this was a significant breach of PHIPA. The investigator also found that there had been an earlier period where the REB approval had been allowed to expire and where data had been collected during the two-month period between its expiry and a new REB approval. That too was a breach of PHIPA. The investigator declined to characterize these breaches as administrative oversights, noting instead that they were “deeply concerning from both a legal and ethical perspective” (at para 80). She also found that although the University had provided notice of the breach caused by collection of data after the expiry of the agreement in 2022, it had failed to provide notice of the breach that occurred when the agreement lapsed for two months in 2018. This failure to provide notice violated s. 44(6) of PHIPA.

The REB had required the University to de-link collected data from identifying information. The investigator reviewed the University’s de-identification practices and found no evidence to suggest there were problems with them. However, she nonetheless recommended that, considering the volume and sensitivity of the data collected, the University should conduct a re-identification study of its UTOPIAN database.

The REB had also required the University to conduct site visits to custodians’ offices to ensure that notices were properly provided to patients of the custodians. The investigator found that although site visits had been constrained by the COVID-19 pandemic, the University had not resumed these visits post-pandemic. The REB had required “regular” site visits, and she found that this failure to resume visits did not meet this requirement. Further, she raised concerns about the adequacy of notices posted in physician waiting rooms in a context in which doctors used virtual technologies with many patients. This shift in practice should have prompted a variation to the research plan.

The complainants had also raised concerns that deidentified patient data was being sold. The investigator was satisfied that this was not the case. However, she found that these concerns – raised by doctors who had been invited to participate in the project – highlighted a lack of adequate transparency. She noted that the abbreviated form of notice provided by the University “may have contributed to the suspicion and distrust on the part of at least some of the custodians” (at para 135).

At the time of the investigation and report, the University had put on hold all its activities in relation to UTOPIAN. Although it had no plans to collect new data, it was developing an REB application in relation to the use of the existing data in the database. The investigator made a series of recommendations to the University to correct its practices in the event that it sought to use the archived UTOPIAN database for research purposes.

Up to this point, the investigator’s report raises serious concerns about a project that operated on a large scale. UTOPIAN was a substantial pool of data – the investigator noted that it contained the health data of almost 600,000 Ontarians. However, the most significant issue from a public policy point of view is whether this type of project – which essentially creates a “data safe haven”, to use the University’s own words – qualifies as “research” under PHIPA. In other words, the fundamental issue was whether the research provisions were an appropriate statutory basis for this type of data sharing.

In addition to its research exceptions, PHIPA contains provisions allowing for the creation of “prescribed entities” who are empowered under the legislation to pool data from different sources and to make it available for analytics. Prescribed entities are also permitted to disclose these data to researchers for research purposes. However, prescribed entities must meet the requirements of s. 45(3) of PHIPA, which mandate close supervision by the OIPC. The investigator noted that UTOPIAN performed functions similar to ICES, a prescribed entity for health data in Ontario, but did so without the same levels of oversight. She observed that using the research provisions of PHIPA “to authorize large-scale research platforms that operate as an ongoing concern, such as UTOPIAN, can lead to many practical difficulties given the awkward fit” (at para 155).

Since UTOPIAN was no longer operating at the time of the decision, the investigator ultimately reached no conclusion as to whether it constituted “research”, and she declined to send the matter to adjudication. This is unfortunate given the University’s plans, acknowledged in the report, to seek REB approval to use the data already collected for further research studies. The investigator also noted in a postscript to her report that Queen’s University had applied for and received REB approval to create a similar Ontario-wide project, the Primary Care Ontario Practice-based Learning and Research Network (POPLAR). She noted that she had forwarded her decision on the UTOPIAN file to Queen’s University, highlighting for them her reservations about whether this type of project qualified for PHIPA’s research exception. She noted that the OIPC was open to consultation by Queen’s on this issue.

The UTOPIAN investigation thus ends rather inconclusively. If UTOPIAN was ‘research’, it clearly breached several PHIPA requirements. What is less clear is whether it was ‘research’. If it was not, then there was no legal basis for the collection, hosting and sharing of this data. The investigator avoided making a call on the fundamental legitimacy of this data pooling project because it had ended at the time of the report, even though there appeared to be plans to make use of the already-collected data, and even though the concept had been embraced by another institution with plans for an even larger data pool. As a result, serious issues regarding the pooling of health data for research in Ontario remain unresolved.

Seen one way, the OIPC’s invitation to Queen’s University to consult with them regarding POPLAR signals the OIPC’s willingness to explore whether and how complex new proposals designed to enhance health research in Ontario can be reconciled with existing legislation. Seen another way, leveraging the research exception in this way seems to create a clearly inadequate framework for data sharing on this scale. This is evident when compared with the considerable safeguards for privacy and security protection in the case of prescribed entities. If prescribed entities are not meeting the needs of researchers, then perhaps the solution lies in law reform rather than privacy law hacks. What the decision lacks (and could not have been expected to provide) is an analysis of the landscape for health data research in Ontario, an assessment of the existing frameworks and any shortcomings they might have, and proposals to address any issues in a manner that both furthers research goals and protects privacy. This should be the role of government. The investigation report into UTOPIAN – situated within this public policy vacuum – leaves Ontarians with ongoing uncertainty and no clear path forward.

On May 13, 2024, the Ontario government introduced Bill 194. The bill addresses a catalogue of digital issues for the public sector. These include cybersecurity, artificial intelligence governance, the protection of the digital information of children and youth, and data breach notification requirements. Consultation on the Bill closes on June 11, 2024. Below is my submission to the consultation. The legislature has now risen for the summer, so debate on the bill will not move forward until the fall.

 

Submission to the Ministry of Public and Business Service Delivery on the Consultation on proposed legislation: Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

Teresa Scassa, Canada Research Chair in Information Law and Policy, University of Ottawa

June 4, 2024

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I research and write about legal issues relating to artificial intelligence and privacy. My comments on Bill 194 are made on my own behalf.

Bill 194 has two schedules. Schedule 1, which would enact the Enhancing Digital Security and Trust Act, 2024, has three parts. The first relates to cybersecurity, the second to the use of AI in the broader public service, and the third to the use of digital technology affecting individuals under 18 years of age in the context of Children’s Aid Societies and School Boards. Schedule 2 contains a series of amendments to the Freedom of Information and Protection of Privacy Act (FIPPA). My comments are addressed to each of the Schedules. Please note that all examples provided as illustrations are my own.

Summary

Overall, I consider this to be a timely Bill that addresses important digital technology issues facing Ontario’s public sector. My main concerns relate to the sections on artificial intelligence (AI) systems and on digital technologies affecting children and youth. I recommend the addition of key principles to the AI portion of the Bill in both a reworked preamble and a purpose section. In the portion dealing with digital technologies and children and youth, I note the overlap created with existing privacy laws, and recommend reworking certain provisions so that they enhance the powers and oversight of the Privacy Commissioner rather than creating a parallel and potentially conflicting regime. I also recommend shifting the authority to prohibit or limit the use of certain technologies in schools to the Minister of Education and to consider the role of public engagement in such decision-making. A summary of recommendations is found at the end of this document.

Schedule 1 – Cybersecurity

The first section of the Enhancing Digital Security and Trust Act (EDSTA) creates a framework for cybersecurity obligations that is largely left to be filled by regulations. Those regulations may also provide for the adoption of standards. The Minister will be empowered to issue mandatory Directives to one or more public sector entities. There is little detail provided as to what any specific obligations might be, although section 2(1)(a) refers to a requirement to develop and implement “programs for ensuring cybersecurity” and s. 2(1)(c) anticipates requirements on public sector entities to submit reports to the minister regarding cyber security incidents. Beyond this, details are left to regulations. These details may relate to roles and responsibilities, reporting requirements, education and awareness measures, response and recovery measures, and oversight.

The broad definition of a “public sector entity” to which these obligations apply includes hospitals, school boards, government ministries, and a wide range of agencies, boards and commissions at the provincial and municipal level. This scope is important, given the significance of cybersecurity concerns.

Although there is scant detail in Bill 194 regarding actual cybersecurity requirements, this manner of proceeding seems reasonable given the very dynamic cybersecurity landscape. A combination of regulations and standards will likely provide greater flexibility in a changeable context. Cybersecurity is clearly in the public interest and requires setting rules and requirements with appropriate training and oversight. This portion of Bill 194 would create a framework for doing so, although, of course, its effectiveness will depend upon the timeliness and the content of any regulations.

Schedule 1 – Use of Artificial Intelligence Systems

Schedule 1 of Bill 194 also contains a series of provisions that address the use of AI systems in the public sector. These will apply to AI systems that meet a definition that maps onto the Organization for Economic Co-operation and Development (OECD) definition. Since this definition is one to which many others are being harmonized (including a proposed amendment to the federal AI and Data Act, and the EU AI Act), this seems appropriate. The Bill goes on to indicate that the use of an AI system in the public sector includes the use of a system that is publicly available, that is developed or procured by the public sector, or that is developed by a third party on behalf of the public sector. This is an important clarification. It means, for example, that the obligations under the Act could apply to the use of general-purpose AI that is embedded within workplace software, as well as purpose-built systems.

Although the AI provisions in Bill 194 will apply to “public sector entities” – defined broadly in the Bill to include hospitals and school boards as well as provincial and municipal boards, agencies and commissions – they will apply only to a public sector entity that is “prescribed for the purposes of this section if they use or intend to use an artificial intelligence system in prescribed circumstances” (s. 5(1)). The regulations also might apply to some systems (e.g., general purpose AI) only when they are being used for a particular purpose (e.g., summarizing or preparing materials used to support decision-making). Thus, while potentially quite broad in scope, the actual impact will depend on which public sector entities – and which circumstances – are prescribed in the regulations.

Section 5(2) of Bill 194 will require a public sector entity to which the legislation applies to provide information to the public about the use of an AI system, but the details of that information are left to regulations. Similarly, there is a requirement in s. 5(3) to develop and implement an accountability framework, but the necessary elements of the framework are left to regulations. Under s. 5(4) a public sector entity to which the Act applies will have to take steps to manage risks in accordance with regulations. It may be that the regulations will be tailored to different types of systems posing different levels of risk, so some of this detail would be overwhelming and inflexible if included in the law itself. However, it is important to underline just how much of the normative weight of this law depends on regulations.

Bill 194 will also make it possible for the government, through regulations, to prohibit certain uses of AI systems (s. 5(6) and s. 7(f) and (g)). Interestingly, what is contemplated is not a ban on particular AI systems (e.g., facial recognition technologies (FRT)); rather, it is a potential ban on particular uses of those technologies (e.g., FRT in public spaces). Since the same technology can have uses that are beneficial in some contexts but rights-infringing in others, this flexibility is important. Further, the ability to ban certain uses of FRT on a province-wide basis, including at the municipal level, allows for consistency across the province when it comes to issues of fundamental rights.

Section 6 of the bill provides for human oversight of AI systems. Such a requirement would exist only when a public entity uses an AI system in circumstances set out in the regulations. The obligation will require oversight in accordance with the regulations and may include additional transparency obligations. Essentially, the regulations will be used to customize obligations relating to specific systems or uses of AI for particular purposes.

Like the cybersecurity measures, the AI provisions in Bill 194 leave almost all details to regulations. Although I have indicated that this is an appropriate way to address cybersecurity concerns, it may be less appropriate for AI systems. Cybersecurity is a highly technical area where measures must adapt to a rapidly evolving security landscape. In the cybersecurity context, the public interest is in the protection of personal information and government digital and data infrastructures. Risks are either internal (having to do with properly training and managing personnel) or adversarial (where the need is for good security measures to be in place). The goal is to put in place measures that will ensure that the government’s digital systems are robust and secure. This can be done via regulations and standards.

By contrast, the risks with AI systems will flow from decisions to deploy them, their choice and design, the data used to train the systems, and their ongoing assessment and monitoring. Flaws at any of these stages can lead to errors or poor functioning that can adversely impact a broad range of individuals and organizations who may interact with government via these systems. For example, an AI chatbot that provides information to the public about benefits or services, or an automated decision-making system for applications by individuals or businesses for benefits or services, interacts with and impacts the public in a very direct way. Some flaws may lead to discriminatory outcomes that violate human rights legislation or the Charter. Others may adversely impact privacy. Errors in output can lead to improperly denied (or allocated) benefits or services, or to confusion and frustration. There is therefore a much more direct impact on the public, with effects on both groups and individuals. There are also important issues of transparency and trust. This web of considerations makes it less appropriate to leave the governance of AI systems entirely to regulations. The legislation should, at the very least, set out the principles that will guide and shape those regulations. The Ministry of Public and Business Service Delivery has already put considerable work into developing a Trustworthy AI Framework and a set of (beta) principles. This work could be used to inform guiding principles in the statute.

Currently, the guiding principles for the whole of Bill 194 are found in the preamble. Only one of these directly relates to the AI portion of the bill, and it states that “artificial intelligence systems in the public sector should be used in a responsible, transparent, accountable and secure manner that benefits the people of Ontario while protecting privacy”. Interestingly, this statement only partly aligns with the province’s own beta Principles for Ethical Use of AI. Perhaps most importantly, the second of these principles, “good and fair”, refers to the need to develop systems that respect the “rule of law, human rights, civil liberties, and democratic values”. Currently, Bill 194 is entirely silent with respect to issues of bias and discrimination (which are widely recognized as profoundly important concerns with AI systems, and which have been identified as such by Ontario’s privacy and human rights commissioners). At the very least, the preamble to Bill 194 should address these specific concerns. Privacy is clearly not the only human rights consideration at play when it comes to AI systems. The preamble to the federal government’s Bill C-27, which contains the proposed Artificial Intelligence and Data Act, states: “that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. The preamble to Bill 194 should similarly address the importance of human rights values in the development and deployment of AI systems for the broader public sector.

In addition, the bill would benefit from a new provision setting out the purpose of the part dealing with public sector AI. Such a clause would shape the interpretation of the scope of delegated regulation-making power and would provide additional support for a principled approach. This is particularly important where legislation only provides the barest outline of a governance framework.

In this regard, this bill is similar to the original version of the federal AI and Data Act, which was roundly criticized for leaving the bulk of its normative content to the regulation-making process. The provincial government’s justification is likely to be similar to that of the federal government – it is necessary to remain “agile”, and not to bake too much detail into the law regarding such a rapidly evolving technology. Nevertheless, it is still possible to establish principle-based parameters for regulation-making. To do so, this bill should more clearly articulate the principles that guide the adoption and use of AI in the broader public service. A purpose provision could read:

The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians.

Unlike AIDA, the proposed federal statute that would apply to the private sector, Bill 194 is meant to apply to the operations of the broader public service. The flexibility in the framework is a recognition of both the diversity of AI systems and the diversity of services and activities carried out in this context. It should be noted, however, that this bill does not contemplate any bespoke oversight for public sector AI. There is no provision for a reporting or complaints mechanism for members of the public who have concerns with an AI system. Presumably they will have to complain to the department or agency that operates the AI system. Even then, there is no obvious requirement for the public sector entity to record complaints or to report them for oversight purposes. All of this may be provided for in s. 5(3)’s requirement for an accountability framework, but the details have been left to regulation. It is therefore entirely unclear from the text of Bill 194 what recourse – if any – the public will have when they have problematic encounters with AI systems in the broader public service. Section 5(3) could be amended to read:

5(3) A public sector entity to which this section applies shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include:

a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system;

b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.

Again, although a flexible framework for public sector AI governance may be an important goal, key elements of that framework should be articulated in the legislation.

Schedule 1 – Digital Technology Affecting Individuals Under Age 18

The third part of Schedule 1 addresses digital technology affecting individuals under age 18. This part of Bill 194 applies to children’s aid societies and school boards. Section 9 enables the Lieutenant Governor in Council to make regulations regarding “prescribed digital information relating to individuals under age 18 that is collected, used, retained or disclosed in a prescribed manner”. Significantly, “digital information” is not defined in the Bill.

The references to digital information are puzzling, since digital information seems to be nothing more than a subset of personal information – which is already governed under both the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) and FIPPA. Personal information is defined in both these statutes as “recorded information about an identifiable individual”. It is hard to see how “digital information relating to individuals under age 18” is not also personal information (a term that has received an expansive interpretation). If it is meant to be broader, it is not clear how. Further, the activities to which this part of Bill 194 will apply are the “collection, use, retention or disclosure” of such information. These are activities already governed by MFIPPA and FIPPA – which apply to school boards and children’s aid societies respectively. What Bill 194 seems to add is a requirement (in s. 9(b)) to submit reports to the Minister regarding the collection, use, retention and disclosure of such information, as well as a power (in s. 9(c)) to make regulations prohibiting the collection, use, retention or disclosure of prescribed digital information in prescribed circumstances, for prescribed purposes, or subject to certain conditions. Nonetheless, the overlap with FIPPA and MFIPPA is potentially substantial – so much so that s. 14 provides that, in case of conflict between this Act and any other, the other Act would prevail. What this seems to mean is that FIPPA and MFIPPA will trump the provisions of Bill 194 in case of conflict. Where there is no conflict, the bill seems to create an unnecessary parallel system for governing the personal information of children.

The need for more to be done to protect the personal information of children and youth in the public school system is clear. In fact, this is a strategic priority of the current Information and Privacy Commissioner (IPC), whose office has recently released a Digital Charter for public schools setting out voluntary commitments that would improve children’s privacy. The IPC is already engaged in this area. Not only does the IPC have the necessary expertise in privacy law, it is also able to provide guidance, accountability and independent oversight. In any event, since the IPC will still have oversight over the privacy practices of children’s aid societies and school boards notwithstanding Bill 194, the new system will mean that these entities will have to comply with regulations set by the Minister on the one hand, and the provisions of FIPPA and MFIPPA on the other. Because conflicts between the two regimes will be resolved in favour of the privacy legislation, it is even conceivable that the regulations could set requirements or standards lower than those under FIPPA or MFIPPA – creating an unnecessarily confusing and misleading system.

Another odd feature of the scheme is that Bill 194 will require “reports to be submitted to the Minister or a specified individual in respect of the collection, use, retention and disclosure” of digital information relating to children or youth (s. 9(b)). It is possible that the regulations will specify that the reports should be submitted to the Privacy Commissioner. If so, then it is once again difficult to see why a parallel regime is being created. If not, then the Commissioner will be continuing her oversight of privacy in schools and children’s aid societies without access to all the relevant data that might be available.

It seems as if Bill 194 contemplates two separate sets of measures. One addresses the proper governance of the digital personal information of children and youth in schools and children’s aid societies. This is a matter for the Privacy Commissioner, who should be given any additional powers she requires to fulfil the government’s objectives. Sections 9 and 10 of Bill 194 could be incorporated into FIPPA and MFIPPA, with modifications to require reporting to the Privacy Commissioner. This would automatically bring oversight and review under the authority of the Privacy Commissioner. The second objective of the bill seems to be to provide the government with the opportunity to issue directives regarding the use of certain technologies in the classroom or by school boards. This is not unreasonable, but it is something that should be under the authority of the Minister of Education (not the Minister of Public and Business Service Delivery). It is also something that might benefit from a more open and consultative process. I would recommend that the framework be reworked accordingly.

Schedule 2: FIPPA Amendments

Schedule 2 consists of amendments to the Freedom of Information and Protection of Privacy Act. These are important amendments that will introduce data breach notification and reporting requirements for public sector entities in Ontario that are governed by FIPPA (although, interestingly, not those covered by MFIPPA). For example, a new s. 34(2)(c.1) will require the head of an institution to include in their annual report to the Commissioner “the number of thefts, losses or unauthorized uses or disclosures of personal information recorded under subsection 40.1”. The new subsection 40.1(8) will require the head of an institution to keep a record of any such data breach. Where a data breach reaches the threshold of creating a “real risk that a significant harm to an individual would result” (or where any other circumstances prescribed in regulations exist), a separate report shall be made to the Commissioner under s. 40.1(1). This report must be made “as soon as feasible” after it has been determined that the breach has taken place (s. 40.1(2)). New regulations will specify the form and contents of the report. There is a separate requirement for the head of the institution to notify individuals affected by any breach that reaches the threshold of a real risk of significant harm (s. 40.1(3)). The notification to the individual will have to contain, along with any prescribed information, a statement that the individual is entitled to file a complaint with the Commissioner with respect to the breach, and the individual will have one year to do so (ss. 40.1(4) and (5)). The amendments also identify the factors relevant in determining if there is a real risk of significant harm (s. 40.1(7)).

The proposed amendments also provide for a review by the Commissioner of the information practices of an institution where a complaint has been filed under s. 40.1(4), or where the Commissioner “has other reason to believe that the requirements of this Part are not being complied with” (s. 49.0.1). The Commissioner can decide not to review an institution’s practices in circumstances set out in s. 49.0.1(3). Where the Commissioner determines that there has been a contravention of the statutory obligations, she has order-making powers (s. 49.0.1(7)).

Overall, this is a solid and comprehensive scheme for addressing data breaches in the public sector (although it does not extend to those institutions covered by MFIPPA). In addition to the data breach reporting requirements, the proposed amendments will provide for whistleblower protections. They will also specifically enable the Privacy Commissioner to consult with other privacy commissioners (new s. 59(2)), to coordinate activities, to enter into agreements, and to provide for the handling “of any complaint in which they are mutually interested” (s. 59(3)). These are important amendments given that data breaches may cross provincial lines, and Canada’s privacy commissioners have developed strong collaborative relationships to facilitate cooperation and coordination on joint investigations. These provisions make clear that such cooperation is legally sanctioned, which may avoid costly and time-consuming court challenges to the commissioners’ authority to engage in this way.

The amendments also broaden s. 61(1)(a) of FIPPA, which currently makes it an offence to wilfully disclose personal information in contravention of the Act. If passed, it will be an offence to wilfully collect, use or disclose personal information in the same circumstances.

Collectively, the proposed FIPPA amendments are timely and important.

Summary of Recommendations:

On artificial intelligence in the broader public sector:

1. Amend the Preamble to Bill 194 to address the importance of human rights values in the development and deployment of AI systems for the broader public sector.

2. Add a purpose section to the AI portion of Bill 194 that reads:

The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians.

3. Amend s. 5(3) to read:

5(3) A public sector entity to which this section applies shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include:

a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system;

b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.

On Digital Technology Affecting Individuals Under Age 18:

1. Incorporate the contents of ss. 9 and 10 into FIPPA and MFIPPA, with the necessary modification to require reporting to the Privacy Commissioner.

2. Give the authority to issue directives regarding the use of certain technologies in the classroom or by school boards to the Minister of Education and ensure that an open and consultative public engagement process is included.
