Teresa Scassa - Blog

Displaying items by tag: artificial intelligence
Monday, 09 February 2026 07:15

Canada's AI Strategy: Some Reflections

The Department of Innovation, Science and Economic Development (ISED) has released the results of the consultation it carried out in advance of developing the latest iteration of its AI Strategy. The consultation had two components. The first was a Task Force on AI – a group of experts tasked with consulting their peers to develop their views. The experts were assigned to specified themes (research and talent; adoption across industry and government; commercialization of AI; scaling our champions and attracting investment; building safe AI systems and public trust in AI; education and skills; infrastructure; and security). The second component was a broad public consultation inviting either answers to an online survey or emailed free-form submissions. This post offers some reflections on the process and its outcomes.

1. The controversy over the consultation

The consultation process generated controversy. One reason was the sudden launch and short timelines. Submissions from the public were sought within a month, and Task Force members were initially expected to consult their peers and report in the month following the launch of the consultation. In the end, the Task Force Reports were not published until early February – the timelines were simply unrealistic. However, there was no extension for the public consultation. The Summary of Inputs on the consultation refers to it as “the largest public consultation in the history of Innovation Science and Economic Development Canada, generating important ideas, questions and legitimate concerns to take into consideration in the drafting of the strategy” (at page 3). The scale of the response signals how important the issue is to Canadians and how much they want to be heard. One has to wonder how many submissions ISED might have received with longer timelines. Short deadlines favour those with time and resources. Civil society organizations, small businesses, and individuals with full workloads (domestic and professional) find short timelines particularly challenging. Running a “sprint” consultation favours participation from some groups over others.

Another point of controversy was the lack of diversity of the Task Force. The government was roundly criticized for putting together a Task Force with no representation from Canada’s Black communities, particularly given the risks of bias and discrimination posed by AI technologies. A letter to this effect was sent to the Minister of AI, the Prime Minister, and the leaders of Canada’s other political parties by a large group of Black academics and scholars. Following this, a Black representative – a law student – was hurriedly added to the Task Force.

An open letter to the Minister of Artificial Intelligence from civil society organizations and individuals also denounced the consultation, arguing that the deadline should be extended and that the Task Force should be more equitably representative. The letter noted that civil society groups, human rights experts, and others were absent from the Task Force panel. The group was also critical of the online survey for being biased towards particular outcomes. It indicated that it would be boycotting the consultation, and has now set up its own People’s Consultation on AI, which is accepting submissions until March 15, 2026.

These controversies highlight a major stumble in developing the AI Strategy. The lack of consultation around the failed Artificial Intelligence and Data Act in Bill C-27, and the criticism that this generated, should have taught ISED how important the issues raised by AI are to the public and how much people want to be heard. The Summary makes no mention of the controversy the consultation generated. Nevertheless, the criticism and pushback are surely an important part of the outcome of this process.

2. Some thoughts on Transparency

ISED has not only published a summary of the results of its consultation and of the Task Force Reports, it has also published the raw data from the consultation, as well as the individual task force reports, on its open government portal. This seems to be in line with a new commitment to greater transparency around AI – in the fall of 2025 ISED also published the beta version of a register of AI in use within the federal public service. These are positive developments, although it is worth watching to see whether tools like the register of AI are refined, improved, and updated over time.

ISED was also transparent about its use of generative AI to process the results of the consultation. Page 16 of the summary document explains how it used (unspecified) LLMs to create a “classification pipeline” to “clean survey responses and categorize them into a structured set of themes and subthemes”. The report also describes the use of human oversight to ensure that there was “at least a 90% success rate in categorizing responses into specific intents”. ISED explains that it consulted research experts about its methodology and that the methods used conform to the recent Treasury Board Guide on the use of generative artificial intelligence. The declaration on the use of AI indicates that the output was used to produce the final report, which is apparently a combination of human authorship and extracts from the AI-generated content.
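For readers unfamiliar with this kind of pipeline, the general pattern described above can be sketched in a few lines: clean free-text responses, assign each one to a theme, and spot-check agreement against human labels. ISED did not disclose its models, prompts, or theme taxonomy, so the theme labels and the stubbed `classify_with_llm()` function below are purely illustrative assumptions – in a real pipeline, that function would call an LLM.

```python
# Illustrative sketch of an LLM-based "classification pipeline": clean survey
# responses, categorize them into themes, and track a human-review success
# rate. The themes and the keyword stub standing in for the LLM call are
# assumptions for illustration, not ISED's actual method.

import re

def clean_response(text: str) -> str:
    """Normalize whitespace in a raw survey response."""
    return re.sub(r"\s+", " ", text).strip()

def classify_with_llm(response: str) -> str:
    """Stand-in for an LLM call; here, a trivial keyword heuristic."""
    keywords = {
        "regulat": "governance and regulation",
        "adopt": "adoption",
        "compute": "infrastructure",
    }
    for kw, theme in keywords.items():
        if kw in response.lower():
            return theme
    return "other"

def run_pipeline(raw_responses, human_labels):
    """Classify responses and report agreement with human spot-checks
    (the summary reports oversight targeting at least 90% success)."""
    results = [classify_with_llm(clean_response(r)) for r in raw_responses]
    agreed = sum(1 for got, want in zip(results, human_labels) if got == want)
    return results, agreed / len(results)

responses = [
    "  We need strict regulation of AI. ",
    "Help small firms adopt AI tools.",
    "Invest in sovereign compute infrastructure.",
]
labels = ["governance and regulation", "adoption", "infrastructure"]
themes, rate = run_pipeline(responses, labels)
print(themes, rate)
```

The substantive point stands regardless of implementation detail: the pipeline's outputs depend heavily on the chosen taxonomy and prompts, which is why disclosure of the methodology matters.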

It would frankly be astonishing if generative AI tools have not already been used in other contexts to process submissions to government consultations (but likely without having been disclosed). As a result, the level of transparency about the use here is important. This is illustrated by my colleague Michael Geist’s criticisms of the results of ISED’s use of AI. He ran the Task Force reports through two (identified) LLMs and noted differences in the results between his generated analysis and ISED’s. He argues that “the government had not provided the public with the full picture” and posits that the results were softened by ISED to suggest a consensus that is not actually present. Putting a particular spin on things is not exclusively the result of the use of AI tools – humans do this all the time. However, explaining how results were arrived at using a technological system can create an impression of objectivity and scientific rigor that can mislead, and this underscores the importance of Prof. Geist’s critique.

It is worth noting that it is the level of transparency provided by ISED that made this analysis and critique possible. The immediate publication of the data on which the report was based is important as well. No prolonged access-to-information process was needed here. This approach should become standard government practice.

3. AI Governance/Regulation

The consultation covered many themes, and the AI Strategy is clearly intended to be about more than just how to regulate or govern AI. In fact, one could be forgiven for thinking that the AI Strategy will be about everything except governance and regulation, given the limited expertise from these areas on the Task Force. The Task Force’s focus areas emphasized the adoption of, investment in, and scaling of AI innovation, as well as strengthening sovereign infrastructure. Among the focus areas, only “public trust, skills and safety” gives a rather offhand nod to governance and regulation.

That said, reading between the lines of the summary of inputs, Canadians are concerned about AI governance and regulation. This can be seen in statements such as “Respondents…urged Canada to prioritize responsible governance” (p. 7). Respondents also called for “meaningful regulation” (p. 8) and reminded the government of the need to “modernize regulations” (p. 8). There were also references to “accountable and robust governance” (p. 8) and to “strict regulation, penalties for non-compliance and frameworks that uphold Canadian values” (p. 8) when it comes to generative AI. There were also calls for “strict liability laws” (p. 9), and concerns expressed over a “lack of regulation and accountability” (p. 9).

One finds these snippets throughout the summary document, which suggests that meaningful regulation was a matter of real concern for respondents. However, the “Conclusions and next steps” section of the report mentions only the need for “regulatory clarity” and streamlined regulatory frameworks – neither of which is a bad thing, but neither of which is really about new regulation or governance. Instead, the report concludes that: “There was general consensus among participants that public trust depends on transparency, accountability, and robust governance, supported by certification standards, independent audits and AI literacy programs” (p. 15, my emphasis). While those tools are certainly part of a regulatory toolkit for AI, on their own and outside of a framework that builds in accountability and oversight, they are basically soft-law and self-regulation. This feels like a rather convenient consensus around where the government was likely heading in the first place.

 

Published in Privacy

The Ontario and British Columbia Information and Privacy Commissioners each released new guidance on AI medical scribes on Privacy Day (January 28, 2026). This means that, along with Alberta and Saskatchewan, a total of four provincial information and privacy commissioners have now issued similar guidance. BC’s guidance is aimed at health care practitioners running their own practices and governed by the province’s Personal Information Protection Act. It does not extend to health authorities and hospitals that fall under the province’s Freedom of Information and Protection of Privacy Act. Ontario’s guidance is for both public institutions and physicians in private practice who are governed by the Personal Health Information Protection Act.

This flurry of guidance on AI scribes shows how privacy regulators are responding to the very rapid adoption in the Canadian health sector of an AI tool that raises sometimes complicated privacy issues with a broad public impact.

At its most basic level, an AI medical scribe is a tool that records a doctor’s interaction with their patient. The recording is then transcribed by the scribe, and a summary is generated that the doctor can cut and paste into the patient’s electronic medical record (EMR). The development and adoption of AI scribes has been rapid, in part because physicians have been struggling with both significant administrative burdens and burnout. This is particularly acute in the primary care sector. AI scribes offer the promise of better patient care (doctors are more focused on the patient as they are freed from notetaking during appointments), as well as potentially significant reductions in time spent on administrative work.

AI medical scribes raise a number of different privacy issues. These can include issues relating to the scribe tool itself (for example, how good is the data security of the scribe company? What kind of personal health information (PHI) is stored, where, and for how long? Are secondary uses made of de-identified PHI? Is the scribe company’s definition of de-identification consistent with the relevant provincial health information legislation?). They may also include issues around how the technology is adopted and implemented by the physician (including, for example, whether the physician retains the full transcription as well as the chart summary and for how long; what data security measures are in place within the physician’s practice; and how consent is obtained from patients to the use of this tool). As the BC IPC’s guidance notes, “What distinguishes an AI scribe’s collection of personal information from traditional notetaking with a pen and notepad is that there are many processes taking place with an AI scribe that are more complex, potentially more privacy invasive, and less obvious to the average person” (at 5).

AI scribes raise issues other than privacy that touch on patient data. In its guidance, Ontario’s IPC notes the human rights considerations raised by AI scribes and refers to its recent AI Principles issued jointly with the Ontario Human Rights Commission (which I have written about here). The quality of AI technologies depends upon the quality of their training data. Where training data does not properly represent the populations impacted by the tool, there can be bias and discrimination. Concerns exist, for example, about how well AI scribes will function for people (or physicians) with accents, or for those with speech impaired by disease or disability. Certainly, the accuracy of personal health information that is recorded by the physician is a data protection issue; it is also a quality of health care issue. There are concerns that busy physicians may develop automation bias, increasingly trusting the scribe tool and reducing time spent on reviewing and correcting summaries – potentially leading to errors in the patient’s medical record.

AI scribes are being adopted by individual physicians, but they are also adopted and used within institutions – either with the engagement of the institution, or as a form of ‘shadow use’. A recent response to a breach by Ontario’s IPC relating to the use of a general-purpose AI scribe illustrates how complex the privacy issues may be in such a case (I have written about this incident here). In that case, the scribe tool ‘attended’ nephrology rounds at a hospital, transcribed the meeting, sent a summary to all 65 people on the mailing list for the meeting and provided a link to the full transcript. The summary and transcript contained the sensitive personal information of the patients seen on those rounds. Complicating the matter was the fact that the physician whose scribe attended the meeting was no longer even at the hospital.

Privacy commissioners are not the only ones who have stepped up to provide guidance and support to physicians in the choice of AI scribe tools. OntarioMD, for example, conducted an evaluation of AI medical scribes, and is assisting in assessing and recommending scribing tools that are considered safe and compliant with Ontario law.

Of course, scribe technologies are not standing still. It is anticipated that these tools will evolve to include suggestions for physicians for diagnosis or treatment plans, raising new and complex issues that will extend beyond privacy law. As the BC guidance notes, some of these tools are already being used to “generate referral letters, patient handouts, and physician reminders for ordering lab work and writing prescriptions for medication” (at 2). Further, this is a volatile area in which scribe tools are likely to be acquired by EMR companies and integrated with their offerings, reducing the number of companies and changing the profile of the tools. The mutable tools and volatile context might suggest that guidance is premature, but the AI era is presenting novel regulatory challenges. This is an example of guidance designed not to consolidate and structure rules and approaches that have emerged over time, but rather to reduce risk and harm in a rapidly evolving context. Regulator guidance may serve other goals here as well, as it signals to developers and to EMR companies the design features that will be important for legal compliance. Both the BC and Ontario guidance caution that function creep will require those who adopt and use these technologies to be alert to potential new issues that may arise as the adopted tools’ functionalities change over time.

Note: Daniel Kim and I have written a paper on the privacy and other risks related to AI medical scribes which is forthcoming in the TMU Law Review. A pre-print version can be found here: Scassa, Teresa and Kim, Daniel, AI Medical Scribes: Addressing Privacy and AI Risks with an Emergent Solution to Primary Care Challenges (January 07, 2025). (2025) 3 TMU Law Review, Available at SSRN: https://ssrn.com/abstract=5086289

 

Published in Privacy

Ontario’s Office of the Information and Privacy Commissioner (IPC) and Human Rights Commission (OHRC) have jointly released a document titled Principles for the Responsible Use of Artificial Intelligence.

Notably, this is the second collaboration of these two institutions on AI governance. Their first was a joint statement on the use of AI technologies in 2023, which urged the Ontario government to “develop and implement effective guardrails on the public sector’s use of AI technologies”. This new initiative, oriented towards “the Ontario public sector and the broader public sector” (at p. 1), is interesting because it deepens the cooperation between the IPC and the OHRC in relation to a rapidly evolving technology that is increasingly used in the public sector. It also fills a governance gap left by the province’s delay in developing its public sector AI regulatory framework.

In 2024, the Ontario government enacted the Enhancing Digital Security and Trust Act, 2024 (EDSTA), which contains a series of provisions addressing the use of AI in the broader public sector (which includes hospitals and universities). It also issued the Responsible Use of Artificial Intelligence Directive, which sets basic rules and principles for Ontario ministries and provincial agencies. The Directive is currently in force and is built around principles similar to those set out by the IPC and OHRC. It outlines a set of obligations for ministries and agencies that adopt and use AI systems. These include transparency, risk management, risk mitigation, and documentation requirements. The EDSTA, which would have a potentially broader application, creates a framework for transparency, accountability, and risk management obligations, but the actual requirements have been left to regulations. Those regulations will also determine to whom any obligations will apply. Although the EDSTA can apply to all actors within the public sector, broadly defined, its obligations can be tailored by regulations to specific departments or agencies, and can include or exclude universities and hospitals. There has been no obvious movement on the drafting of the regulations needed to breathe life into the EDSTA’s AI provisions.

It is clear that AI systems will have both privacy and human rights implications, and that both the IPC and the OHRC will have to deal with complaints about such systems in relation to matters within their respective jurisdictions. As the Commissioners put it, the principles “will ground our assessment of organizations’ adoption of AI systems consistent with privacy and human rights obligations.” (at p. 1) The document clarifies what the IPC and OHRC expect from institutions. For example, conforming to the “Valid and reliable” principle will require compliance with independent testing standards, and objective evidence will be required to demonstrate that systems “fulfil the intended requirements for a specified use or application” (at p. 3). The safety principle also requires demonstrable cybersecurity protection and safeguards for privacy and human rights. The Commissioners also expect institutions to provide opportunities for access to and correction of individuals’ personal data, both data used in and data generated by AI systems. The “Human rights affirming” principle includes a caution that public institutions “should avoid the uniform use of AI systems with diverse groups”, since such practices could lead to adverse effect discrimination. The Commissioners also caution against uses of systems that may “unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another.” (at p. 6)

The Commissioners’ “Transparency” principle requires that the use by the public sector of AI be visible. The IPC’s mandate covers both access to information and privacy. The Principles state that the documentation required for the “public account” of AI use “may include privacy impact assessments, algorithmic impact assessments, or other relevant materials.” (at p. 6) There must also be transparency regarding “the sources of any personal data collected and used to train or operate the system, the intended purposes of the system, how it is being used, and the ways in which its outputs may affect individuals or communities.” (at p. 6)

The Principles also require that systems used in the public sector be understandable and explainable. The accountability principle requires public sector institutions to document design and application choices and to be prepared to explain how the system works to an oversight body. They should also establish mechanisms to receive and respond to complaints and concerns. The Principles call for whistleblower protections to support reporting of non-compliant systems.

The joint nature of the Principles highlights how issues relating to AI do not easily fall within the sole jurisdiction of any one regulator. It also highlights that the dependence of AI systems on data – often personal data or de-identified personal data – carries with it implications both for privacy and human rights.

That the IPC and OHRC will have to deal with complaints and investigations that touch on AI issues is indisputable. In fact, the IPC has already conducted formal and informal investigations that touch on AI-enabled remote proctoring, AI scribes, and vending machines on university campuses that incorporate face-detection technologies. The Principles offer important insights into how these two oversight bodies see privacy and human rights intersecting with the adoption and use of AI technologies, and what organizations should be doing to ensure that the systems they procure, adopt and deploy are legally compliant.

 

 

Published in Privacy
Monday, 05 January 2026 08:32

Canada's New Regulatory Sandbox Policy

In November 2025, Canada’s federal government published a new Policy on Regulatory Sandboxes in anticipation of amendments to the Red Tape Reduction Act which had been announced in the 2024 budget. This development deserves some attention, particularly as the federal government embraces a pro-innovation agenda and shifts its approach to regulation of innovative technologies such as artificial intelligence (AI).

Regulatory sandboxes have received considerable attention since the first use of one by the Financial Conduct Authority in the UK in 2017. Although they first took hold in the financial services sector, they have since attracted interest in other sectors. For example, several European data protection authorities have created privacy regulatory sandboxes (see, e.g., the UK Information Commissioner and France’s CNIL). In Canada, the Ontario Energy Board and the Law Society of Ontario – to give just two examples – both have regulatory sandboxes. Alberta also created a fintech regulatory sandbox by legislation in 2022. Regulatory sandboxes are expected to be an important component in AI regulation in the European Union. Article 57 of the EU Artificial Intelligence Act requires all member states to establish an AI regulatory sandbox – or at the very least to partner with one or more member states to jointly create such a sandbox.

Regulatory sandboxes are seen as a regulatory tool that can be effectively deployed in rapidly evolving technological contexts where existing regulations may create barriers to innovation. In some cases, innovators may hesitate to develop novel products or services where they see no clear pathway to regulatory approval. In many instances, regulators struggle to understand rapidly evolving technologies and the novel business methods they may bring with them. A regulatory sandbox is a space created by a regulator that allows selected innovators to work with regulators to explore how these innovations can be brought to market in a safe and compliant way, and to learn whether and how existing regulations might need to be adapted to a changing technological environment. It is a form of experimental regulation with benefits both for the regulator and for regulated parties.

This is the context in which the federal Policy has been introduced. It defines a regulatory sandbox in these terms:

[A] regulatory sandbox, in the context of this policy, is the practice by which a temporary authorization is provided for innovation (for example, a new product, service, process, application, regulatory and non-regulatory approaches) and is for the purpose of evaluating the real-life impacts of innovation, in order to provide information to the regulator to support the development, management and/or review and assessment of the results of regulations. This can also include for the purposes of equipping the regulatory framework to support innovation, competitiveness or economic growth.

It is important to remember that the policy is anchored in the Red Tape Reduction Act and has a particular slant that sets it apart from other sandbox initiatives. An example of the type of sandbox likely contemplated by this policy can be found in a new regulatory sandbox proposed by Transport Canada to address a very specific regulatory issue arising with respect to the design of aircraft. This sandbox is described as being for “minor change approvals used in support of a major modification.” It is narrow in scope, using modifications to existing regulations to try out a new regulatory process for the certification of major modifications to aircraft design. The end goal is to reduce regulatory burden and to relieve uncertainties caused by existing regulations. Data will be collected from the sandbox experiment to assess the impact of regulatory changes before they might be made permanent.

This approach frames sandboxing as a means to enable innovation by improving existing regulations and streamlining processes. While this is a worthy objective, there is a risk that the policy may be cast too narrowly by focusing on a regulatory sandbox as a means to improve regulation, rather than more broadly as a means of understanding how novel technologies or processes can be brought safely to market – sometimes under existing regulatory frameworks. This is reflected in the policy document, which states that sandboxes proposed under this policy “must demonstrate how regulatory regimes could be modernized”.

The definition of a regulatory sandbox in the Policy, reproduced above, essentially describes a data gathering process by the regulator “to support the development, management and/or review and assessment of the results of regulations.” This can be contrasted with the more open-ended definition adopted in the relatively recent standard for regulatory sandboxes developed by the Digital Governance Standardization Initiative (DGSI):

A regulatory sandbox is a facility created and controlled by a regulator, designed to allow the conduct of testing or experiments with novel products or processes prior to their entry into a regulated marketplace.

Rather than focusing on the regulator conducting an assessment of its regulations, the DGSI definition focuses on innovative products and processes, and frames sandboxes in terms of their recognized mutual benefits for both regulators and innovators. The focus of the DGSI’s sandbox definition is on bringing novel products or processes to market. Although improving regulations and regulatory processes is a perfectly acceptable outcome of a regulatory sandbox, it is not the only possible outcome – nor is it even a necessary one. In this context, the new federal policy is rather narrow. It places the regulations themselves at the core of the sandbox experiments, rather than the ways in which innovative technologies challenge regulatory frameworks.

An example of this latter approach is found in the Law Society of Ontario’s regulatory sandbox for AI-enabled access to justice innovations (A2I). In some cases, innovations of this kind might be characterized as constituting the illegal practice of law, creating a barrier to market entry. In the A2I sandbox, the novel products or services are developed and live-tested under supervision to assess whether they can be deployed in a way that is sufficiently protective of the public. The issue is partly a regulatory one – but it is not that any particular regulations necessarily require changing. Rather, it is that innovators need a level of comfort that their innovation will not be blocked by existing regulations. At the same time, the regulator needs to understand the emerging technology and how it can fulfil its public protection mandate while supporting useful innovation. One outcome of a sandbox process might be to learn that a particular innovation cannot safely be brought to market.

A similar paradigm exists with privacy regulatory sandboxes, which might either explore ways in which a novel technology can be designed to comply with the legislation, or examine how existing rules should be understood and applied in novel circumstances.

In all cases, the regulator may learn something about how existing regulations might need to adapt to an evolving technological context, and this too is a useful outcome. However, it does not have to be the principal goal of the regulatory sandbox. The federal Policy signals a welcome openness to the concept of regulatory sandboxes, but it appears to be conceived primarily as a tool for streamlining and improving regulatory processes (still a worthy goal) rather than a more ambitious sandboxing initiative. It remains a rather narrow framing of the nature and potential of this regulatory tool.

 

Published in Privacy
Saturday, 29 November 2025 14:42

Canada launches its beta AI Register

Canada’s federal government has just released an early version of the AI Register it promised after its election earlier this year.

An AI Register is an important transparency tool – it helps researchers and the broader public understand what AI-enabled tools are in use in the federal public sector and provides basic information about them. The government also intends the register to be a resource for the public sector – allowing different departments and agencies to better see what others are doing, so as to avoid duplication and to learn from each other.

The information accompanying the Register (which is published on Canada’s open government portal) indicates that this is a “Minimum Viable Product”. This means that it is “an early version with only basic features and content that is used to gather feedback.” It will be interesting to see how it develops over time.

One interesting aspect of the register is that it states that it was “assembled from existing sources of information, including Algorithmic Impact Assessments, Access to Information requests, responses to Parliamentary Questions, Personal Information Banks, and the GC Service Inventory.” Since it contains 409 entries at the time of writing, and since there are only a few dozen published Algorithmic Impact Assessments (AIAs), this suggests that the database was compiled largely from sources other than AIAs. The reference to access to information requests suggests that some of the data may have been gathered using the TAG Register laboriously compiled by Joanna Redden and her team at Western University. The sources for the TAG Register also included access to information requests and responses to questions by Members of Parliament. Prior to the development of the federal AI Register, the TAG Register was probably the most important source of information about public sector AI in Canada. The TAG Register is not made redundant by the new AI Register – it contains additional information about the systems derived from the source materials.

The federal AI Register sets out the name of each system and provides a description. It indicates who the primary users are, and which government organization is responsible for it. Other fields provide data about whether the system is designed in-house or is furnished by a vendor (and if so, which one). It also indicates whether the system is in development, in production, or retired. There is a brief description of the system’s capabilities, some information about the data sources used, and an indication of whether it uses personal data. The register also indicates whether users are given notice of use. There is a brief description of the expected outcomes of the system use.
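To make the shape of a register entry concrete, the fields described above can be sketched as a simple record. The field names below are paraphrased from the description in this post, not the register's actual schema, and the sample entry is entirely hypothetical.

```python
# A rough sketch of the fields the federal AI Register reportedly captures
# for each system. Field names are paraphrases, not the official schema;
# the example entry is invented for illustration.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class RegisterEntry:
    name: str
    description: str
    primary_users: str
    responsible_organization: str
    in_house: bool                  # designed in-house vs. furnished by a vendor
    vendor: Optional[str]           # vendor name, if any
    status: str                     # "in development", "in production", or "retired"
    capabilities: str
    data_sources: List[str] = field(default_factory=list)
    uses_personal_data: bool = False
    notice_to_users: bool = False   # whether users are given notice of use
    expected_outcomes: str = ""

# Hypothetical example entry.
entry = RegisterEntry(
    name="Hypothetical inquiry triage assistant",
    description="Routes incoming public inquiries to the right unit.",
    primary_users="Program staff",
    responsible_organization="Example Department",
    in_house=False,
    vendor="Example Vendor Inc.",
    status="in production",
    capabilities="Text classification",
    data_sources=["inquiry emails"],
    uses_personal_data=True,
    notice_to_users=True,
    expected_outcomes="Faster response times",
)
print(entry.status)
```

A structured schema along these lines is what makes the register useful to researchers: entries can be filtered by vendor, status, or personal-data use rather than read one by one.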

All in all, it’s a good start, and clearly the developers of this database are open to feedback. (For example, I would like to see a link to the Algorithmic Impact Assessment under the Directive on Automated Decision-Making, if such an assessment has been carried out).

This is an important transparency initiative, and it will be a good source of data for researchers interested in public sector AI. It is also an interesting model that provincial governments might want to consider as they also roll out AI use across their public sectors.


Published in Privacy

Regulatory sandboxes are a relatively recent innovation in regulation, with the first having been launched by the UK Financial Conduct Authority in 2015. Since that time, they have spread rapidly in the fintech sector. The EU’s new Artificial Intelligence Act has embraced this new tool, making AI regulatory sandboxes mandatory for member states. In its most recent budget, Canada’s federal government also revealed a growing interest in advancing the use of regulatory sandboxes, although sandboxes were not mentioned in the ill-fated Artificial Intelligence and Data Act in Bill C-27.

Regulatory sandboxes are seen as a tool that can support innovation in areas where complex technology evolves rapidly, creating significant regulatory hurdles for innovators to overcome. The goal is not to evade or dilute regulation; rather, it is to create a space where regulators and innovators can explore how regulations designed to protect the public should be applied to technologies that were unforeseen at the time the regulations were drafted. The sandbox is meant to be a learning experience for both regulators and innovators. Outcomes can include new guidance that can be shared with all innovators; recommendations for legislative or regulatory reform; or even decisions that a particular innovation is not yet capable of safe deployment.

Of course, sandboxes can raise issues about regulatory capture and the independence of regulators. They are also resource intensive, requiring regulators to make choices about how to meet their goals. They require careful design to minimize risks and maximize return. They also require the interest and engagement of regulated parties.

In the autumn of 2023, Elif Nur Kumru and I began a SSHRC-funded project to explore the potential for a privacy regulatory sandbox for Ontario. Working in partnership with the Office of Ontario’s Information and Privacy Commissioner, we examined the history and evolution of regulatory sandboxes. We met with representatives of data protection authorities in the United Kingdom, Norway and France to learn about the regulatory sandboxes they had developed to address privacy issues raised by emerging technologies, including artificial intelligence. We identified some of the challenges and issues, as well as key features of regulatory sandboxes. Our report is now publicly available in both English and French.

Published in Privacy

A recent decision of the Federal Court of Canada (Ali v. Minister of Public Safety and Emergency Preparedness) highlights the role of judicial review in addressing automated decision-making. It also prompts reflection on the limits of emerging codified rights to an explanation.

In July 2024, Justice Battista overturned a decision of the Refugee Protection Division (RPD) which had vacated the refugee status of the applicant, Mr. Ali. The decision of the RPD was based largely on a photo comparison that led the RPD to conclude that Mr. Ali was not a Somali refugee as he had claimed. Rather, it concluded that he was a Kenyan student who had entered Canada on a student visa in 2016, a few months before Mr. Ali made his refugee protection claim.

Throughout the proceedings the applicant had sought information about how photos of the Kenyan student had been found and matched with his own. He was concerned that facial recognition technology (FRT) – which has had notorious deficiencies when used to identify persons of colour – had been used. In response, the Minister denied the use of FRT, maintaining instead that the photographs had been found and analyzed through a ‘manual process’. A Canadian Border Services agent subsequently provided an affidavit to the effect that “a confidential manual investigative technique was used” (at para 15). The RPD was satisfied with this assurance. It considered that how the photographs had been gathered was irrelevant to its own capacity as a tribunal to decide based on the photographs before it. It concluded that Mr. Ali had misrepresented his identity.

On judicial review, Justice Battista found that the importance of the decision to Mr. Ali and the quasi-judicial nature of the proceedings meant that he was owed a high level of procedural fairness. Because a decision of the RPD cannot be appealed, and because the consequences of revocation of refugee status are very serious (including loss of permanent resident status and possible removal from the country), Justice Battista found that “it is difficult to find a process under [the Immigration and Refugee Protection Act] with a greater imbalance between severe consequences and limited recourse” (at para 23). He found that the RPD had breached Mr. Ali’s right to procedural fairness “when it denied his request for further information about the source and methodology used by the Minister in obtaining and comparing the photographs” (at para 28).

Justice Battista ruled that, given the potential consequences for the applicant, disclosure of the methods used to gather the evidence against him “had to be meaningful” (at para 33). He concluded that it was unfair for the RPD “to consider the photographic evidence probative enough for revoking the Applicant’s statuses and at the same time allow that evidence to be shielded from examination for reliability” (at para 37).

In addition to finding a breach of procedural fairness, Justice Battista also found that the RPD’s decision was unreasonable. He noted that there had been sufficiently credible evidence before the original RPD refugee determination panel to find that Mr. Ali was a Somali national entitled to refugee protection. None of this evidence had been assessed in the decision of the panel that vacated Mr. Ali’s refugee status. Justice Battista noted that “[t]he credibility of this evidence cannot co-exist with the validity of the RPD vacation panel’s decision” (at para 40). He also noted that the applicant had provided an affidavit describing differences between his photo and that of the Kenyan student; this evidence had not been considered in the RPD’s decision, contributing to its unreasonableness. The RPD also dismissed evidence from a Kenyan official that, based on biometric records analysis, there was no evidence that Mr. Ali was Kenyan. Justice Battista noted that this dismissal of the applicant’s evidence was in “stark contrast to its treatment of the Minister’s photographic evidence” (at para 44).

The Ali decision and the right to an explanation

Ali is interesting to consider in the context of the emerging right to an explanation of automated decision-making. Such a right is codified for the private sector context in the moribund Bill C-27, and Quebec has enacted a right to an explanation for both public and private sector contexts. Such rights would apply in cases where an automated decision system (ADS) has been used (and, in the case of Quebec, the decision must be based “exclusively on an automated processing” of personal information). Yet in Ali there is no proof that the decision was made or assisted by an AI technology – in part because the Minister refused to explain their ‘confidential’ process. Further, the ultimate decision was made by humans. It is unclear how a codified right to an explanation would apply if the threshold for the exercise of the right is based on the obvious and/or exclusive use of an ADS.

It is also interesting to consider the outcome here in light of the federal Directive on Automated Decision Making (DADM). The DADM, which largely addresses the requirements for design and development of ADS in the federal public sector, incorporates principles of fairness. It applies to “any system, tool, or statistical model used to make an administrative decision or a related assessment about a client”. It defines an “automated decision system” as “[a]ny technology that either assists or replaces the judgment of human decision-makers […].” In theory, this would include the use of automated systems such as FRT that assist in human decision-making. Where an ADS is developed and used, the DADM imposes transparency obligations, which include an explanation in plain language of:

  • the role of the system in the decision-making process;
  • the training and client data, their source, and method of collection, as applicable;
  • the criteria used to evaluate client data and the operations applied to process it;
  • the output produced by the system and any relevant information needed to interpret it in the context of the administrative decision; and
  • a justification of the administrative decision, including the principal factors that led to it. (Appendix C)

The catch, of course, is that it might be impossible for an affected person to know whether a decision has been made with the assistance of an AI technology, as was the case here. Further, the DADM is not effective at capturing informal or ‘off-the-books’ uses of AI tools. The decision in Ali therefore does two important things in the administrative law context. First, it confirms that, in the case of a high-impact decision, the individual has a right, as a matter of procedural fairness, to an explanation of how the decision was reached. Judicial review thus provides recourse for affected individuals – something that the more prophylactic DADM does not. Second, this right includes an obligation to provide details that could either explain or rule out the use of an automated system in the decisional process. In other words, procedural fairness includes a right to know whether and how AI technologies were used in reaching the contested decision. Mere assertions that no algorithms were used in gathering evidence or in making the decision are insufficient – if an automated system might have played a role, the affected individual is entitled to know the details of the process by which the evidence was gathered and the decision reached. Ultimately, what Justice Battista crafts in Ali is not simply a right to an explanation of automated decision-making; rather, it is a right to an explanation of administrative decision-making processes that accounts for an AI era. In a context in which powerful computing tools are available for both general and personal use, and are not limited to purpose-specific, carefully governed and auditable in-house systems, the ability to demand an explanation of the decisional process in order to rule out the non-transparent use of AI systems seems increasingly important.

Note: The Directive on Automated Decision-Making is currently undergoing its fourth review. You may participate in consultations here.

Published in Privacy

On May 13, 2024, the Ontario government introduced Bill 194. The bill addresses a catalogue of digital issues for the public sector. These include: cybersecurity, artificial intelligence governance, the protection of the digital information of children and youth, and data breach notification requirements. Consultation on the Bill closes on June 11, 2024. Below is my submission to the consultation. The legislature has now risen for the summer, so debate on the bill will not be moving forward now until the fall.

 

Submission to the Ministry of Public and Business Service Delivery on the Consultation on proposed legislation: Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

Teresa Scassa, Canada Research Chair in Information Law and Policy, University of Ottawa

June 4, 2024

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I research and write about legal issues relating to artificial intelligence and privacy. My comments on Bill 194 are made on my own behalf.

The Enhancing Digital Security and Trust Act, 2024 has two schedules. Schedule 1 has three parts. The first relates to cybersecurity, the second to the use of AI in the broader public service, and the third to the use of digital technology affecting individuals under 18 years of age in the context of Children’s Aid Societies and School Boards. Schedule 2 contains a series of amendments to the Freedom of Information and Protection of Privacy Act (FIPPA). My comments are addressed to each of the Schedules. Please note that all examples provided as illustrations are my own.

Summary

Overall, I consider this to be a timely Bill that addresses important digital technology issues facing Ontario’s public sector. My main concerns relate to the sections on artificial intelligence (AI) systems and on digital technologies affecting children and youth. I recommend the addition of key principles to the AI portion of the Bill in both a reworked preamble and a purpose section. In the portion dealing with digital technologies and children and youth, I note the overlap created with existing privacy laws, and recommend reworking certain provisions so that they enhance the powers and oversight of the Privacy Commissioner rather than creating a parallel and potentially conflicting regime. I also recommend shifting the authority to prohibit or limit the use of certain technologies in schools to the Minister of Education and to consider the role of public engagement in such decision-making. A summary of recommendations is found at the end of this document.

Schedule 1 - Cybersecurity

The first section of the Enhancing Digital Security and Trust Act (EDSTA) creates a framework for cybersecurity obligations that is largely left to be filled by regulations. Those regulations may also provide for the adoption of standards. The Minister will be empowered to issue mandatory Directives to one or more public sector entities. There is little detail provided as to what any specific obligations might be, although section 2(1)(a) refers to a requirement to develop and implement “programs for ensuring cybersecurity” and s. 2(1)(c) anticipates requirements on public sector entities to submit reports to the minister regarding cyber security incidents. Beyond this, details are left to regulations. These details may relate to roles and responsibilities, reporting requirements, education and awareness measures, response and recovery measures, and oversight.

The broad definition of a “public sector entity” to which these obligations apply includes hospitals, school boards, government ministries, and a wide range of agencies, boards and commissions at the provincial and municipal level. This scope is important, given the significance of cybersecurity concerns.

Although there is scant detail in Bill 194 regarding actual cybersecurity requirements, this manner of proceeding seems reasonable given the very dynamic cybersecurity landscape. A combination of regulations and standards will likely provide greater flexibility in a changeable context. Cybersecurity is clearly in the public interest and requires setting rules and requirements with appropriate training and oversight. This portion of Bill 194 would create a framework for doing so, although, of course, its effectiveness will depend upon the timeliness and the content of any regulations.

Schedule 1 – Use of Artificial Intelligence Systems

Schedule 1 of Bill 194 also contains a series of provisions that address the use of AI systems in the public sector. These will apply to AI systems that meet a definition that maps onto the Organization for Economic Co-operation and Development (OECD) definition. Since this definition is one to which many others are being harmonized (including a proposed amendment to the federal AI and Data Act, and the EU AI Act), this seems appropriate. The Bill goes on to indicate that the use of an AI system in the public sector includes the use of a system that is publicly available, that is developed or procured by the public sector, or that is developed by a third party on behalf of the public sector. This is an important clarification. It means, for example, that the obligations under the Act could apply to the use of general-purpose AI that is embedded within workplace software, as well as purpose-built systems.

Although the AI provisions in Bill 194 will apply to “public sector entities” – defined broadly in the Bill to include hospitals and school boards as well as both provincial and municipal boards, agencies and commissions – the AI provisions will only apply to a public sector entity that is “prescribed for the purposes of this section if they use or intend to use an artificial intelligence system in prescribed circumstances” (s. 5(1)). The regulations might also apply to some systems (e.g., general-purpose AI) only when they are being used for a particular purpose (e.g., summarizing or preparing materials used to support decision-making). Thus, while potentially quite broad in scope, the actual impact will depend on which public sector entities – and which circumstances – are prescribed in the regulations.

Section 5(2) of Bill 194 will require a public sector entity to which the legislation applies to provide information to the public about the use of an AI system, but the details of that information are left to regulations. Similarly, there is a requirement in s. 5(3) to develop and implement an accountability framework, but the necessary elements of the framework are left to regulations. Under s. 5(4) a public sector entity to which the Act applies will have to take steps to manage risks in accordance with regulations. It may be that the regulations will be tailored to different types of systems posing different levels of risk, so some of this detail would be overwhelming and inflexible if included in the law itself. However, it is important to underline just how much of the normative weight of this law depends on regulations.

Bill 194 will also make it possible for the government, through regulations, to prohibit certain uses of AI systems (s. 5(6) and s. 7(f) and (g)). Interestingly, what is contemplated is not a ban on particular AI systems (e.g., facial recognition technologies (FRT)); rather, it is potential ban on particular uses of those technologies (e.g., FRT in public spaces). Since the same technology can have uses that are beneficial in some contexts but rights-infringing in others, this flexibility is important. Further, the ability to ban certain uses of FRT on a province-wide basis, including at the municipal level, allows for consistency across the province when it comes to issues of fundamental rights.

Section 6 of the bill provides for human oversight of AI systems. Such a requirement would exist only when a public entity uses an AI system in circumstances set out in the regulations. The obligation will require oversight in accordance with the regulations and may include additional transparency obligations. Essentially, the regulations will be used to customize obligations relating to specific systems or uses of AI for particular purposes.

Like the cybersecurity measures, the AI provisions in Bill 194 leave almost all details to regulations. Although I have indicated that this is an appropriate way to address cybersecurity concerns, it may be less appropriate for AI systems. Cybersecurity is a highly technical area where measures must adapt to a rapidly evolving security landscape. In the cybersecurity context, the public interest is in the protection of personal information and government digital and data infrastructures. Risks are either internal (having to do with properly training and managing personnel) or adversarial (where the need is for good security measures to be in place). The goal is to put in place measures that will ensure that the government’s digital systems are robust and secure. This can be done via regulations and standards.

By contrast, the risks with AI systems will flow from decisions to deploy them, their choice and design, the data used to train the systems, and their ongoing assessment and monitoring. Flaws at any of these stages can lead to errors or poor functioning that can adversely impact a broad range of individuals and organizations who may interact with government via these systems. For example, an AI chatbot that provides information to the public about benefits or services, or an automated decision-making system for applications by individuals or businesses for benefits or services, interacts with and impacts the public in a very direct way. Some flaws may lead to discriminatory outcomes that violate human rights legislation or the Charter. Others may adversely impact privacy. Errors in output can lead to improperly denied (or allocated) benefits or services, or to confusion and frustration. There is therefore a much more direct impact on the public, with effects on both groups and individuals. There are also important issues of transparency and trust. This web of considerations makes it less appropriate to leave the governance of AI systems entirely to regulations. The legislation should, at the very least, set out the principles that will guide and shape those regulations. The Ministry of Public and Business Service Delivery has already put considerable work into developing a Trustworthy AI Framework and a set of (beta) principles. This work could be used to inform guiding principles in the statute.

Currently, the guiding principles for the whole of Bill 194 are found in the preamble. Only one of these directly relates to the AI portion of the bill, and it states that “artificial intelligence systems in the public sector should be used in a responsible, transparent, accountable and secure manner that benefits the people of Ontario while protecting privacy”. Interestingly, this statement only partly aligns with the province’s own beta Principles for Ethical Use of AI. Perhaps most importantly, the second of these principles, “good and fair”, refers to the need to develop systems that respect the “rule of law, human rights, civil liberties, and democratic values”. Currently, Bill 194 is entirely silent with respect to issues of bias and discrimination (which are widely recognized as profoundly important concerns with AI systems, and which have been identified by Ontario’s privacy and human rights commissioners as a concern). At the very least, the preamble to Bill 194 should address these specific concerns. Privacy is clearly not the only human rights consideration at play when it comes to AI systems. The preamble to the federal government’s Bill C-27, which contains the proposed Artificial Intelligence and Data Act, states: “that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. The preamble to Bill 194 should similarly address the importance of human rights values in the development and deployment of AI systems for the broader public sector.

In addition, the bill would benefit from a new provision setting out the purpose of the part dealing with public sector AI. Such a clause would shape the interpretation of the scope of delegated regulation-making power and would provide additional support for a principled approach. This is particularly important where legislation only provides the barest outline of a governance framework.

In this regard, this bill is similar to the original version of the federal AI and Data Act, which was roundly criticized for leaving the bulk of its normative content to the regulation-making process. The provincial government’s justification is likely to be similar to that of the federal government – it is necessary to remain “agile”, and not to bake too much detail into the law regarding such a rapidly evolving technology. Nevertheless, it is still possible to establish principle-based parameters for regulation-making. To do so, this bill should more clearly articulate the principles that guide the adoption and use of AI in the broader public service. A purpose provision could read:

The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians.

Unlike AIDA, the federal statute which will apply to the private sector, Bill 194 is meant to apply to the operations of the broader public service. The flexibility in the framework is a recognition of both the diversity of AI systems, and the diversity of services and activities carried out in this context. It should be noted, however, that this bill does not contemplate any bespoke oversight for public sector AI. There is no provision for a reporting or complaints mechanism for members of the public who have concerns with an AI system. Presumably they will have to complain to the department or agency that operates the AI system. Even then, there is no obvious requirement for the public sector entity to record complaints or to report them for oversight purposes. All of this may be provided for in s. 5(3)’s requirement for an accountability framework, but the details of this have been left to regulation. It is therefore entirely unclear from the text of Bill 194 what recourse – if any – the public will have when they have problematic encounters with AI systems in the broader public service. Section 5(3) could be amended to read:

5(3) A public sector entity to which this section applies, shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include:

a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system;

b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.

Again, although a flexible framework for public sector AI governance may be an important goal, key elements of that framework should be articulated in the legislation.

Schedule 1 – Digital Technology Affecting Individuals Under Age 18

The third part of Schedule 1 addresses digital technology affecting individuals under age 18. This part of Bill 194 applies to children’s aid societies and school boards. Section 9 enables the Lieutenant Governor in Council to make regulations regarding “prescribed digital information relating to individuals under age 18 that is collected, used, retained or disclosed in a prescribed manner”. Significantly, “digital information” is not defined in the Bill.

The references to digital information are puzzling, as it seems to be nothing more than a subset of personal information – which is already governed under both the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) and FIPPA. Personal information is defined in both these statutes as “recorded information about an identifiable individual”. It is hard to see how “digital information relating to individuals under age 18” is not also personal information (which has received an expansive interpretation). If it is meant to be broader, it is not clear how. Further, the activities to which this part of Bill 194 will apply are the “collection, use, retention or disclosure” of such information. These are activities already governed by MFIPPA and FIPPA – which apply to school boards and children’s aid societies respectively. What Bill 194 seems to add is a requirement (in s. 9(b)) to submit reports to the Minister regarding the collection, use, retention and disclosure of such information, as well as the enablement of regulations in s. 9(c) to prohibit collection, use, retention or disclosure of prescribed digital information in prescribed circumstances, for prescribed purposes, or subject to certain conditions. Nonetheless, the overlap with FIPPA and MFIPPA is potentially substantial – so much so, that s. 14 provides that in case of conflict between this Act and any other, the other Act would prevail. What this seems to mean is that FIPPA and MFIPPA will trump the provisions of Bill 194 in case of conflict. Where there is no conflict, the bill seems to create an unnecessary parallel system for governing the personal information of children.

The need for more to be done to protect the personal information of children and youth in the public school system is clear. In fact, this is a strategic priority of the current Information and Privacy Commissioner (IPC), whose office has recently released a Digital Charter for public schools setting out voluntary commitments that would improve children’s privacy. The IPC is already engaged in this area. Not only does the IPC have the necessary expertise in privacy law, but it is also able to provide guidance, accountability and independent oversight. In any event, since the IPC will still have oversight over the privacy practices of children’s aid societies and school boards notwithstanding Bill 194, the new system will mean that these entities will have to comply with regulations set by the Minister on the one hand, and the provisions of FIPPA and MFIPPA on the other. The fact that conflicts between the two regimes will be resolved in favour of privacy legislation means that it is even conceivable that the regulations could set requirements or standards that are lower than what is required under FIPPA or MFIPPA – creating an unnecessarily confusing and misleading system.

Another odd feature of the scheme is that Bill 194 will require “reports to be submitted to the Minister or a specified individual in respect of the collection, use, retention and disclosure” of digital information relating to children or youth (s. 9(b)). It is possible that the regulations will specify that it is the Privacy Commissioner to whom the reports should be submitted. If it is, then it is once again difficult to see why a parallel regime is being created. If it is not, then the Commissioner will be continuing her oversight of privacy in schools and children’s aid societies without access to all the relevant data that might be available.

It seems as if Bill 194 contemplates two separate sets of measures. One addresses the proper governance of the digital personal information of children and youth in schools and children’s aid societies. This is a matter for the Privacy Commissioner, who should be given any additional powers she requires to fulfil the government’s objectives. Sections 9 and 10 of Bill 194 could be incorporated into FIPPA and MFIPPA, with modifications to require reporting to the Privacy Commissioner. This would automatically bring oversight and review under the authority of the Privacy Commissioner. The second objective of the bill seems to be to provide the government with the opportunity to issue directives regarding the use of certain technologies in the classroom or by school boards. This is not unreasonable, but it is something that should be under the authority of the Minister of Education (not the Minister of Public and Business Service Delivery). It is also something that might benefit from a more open and consultative process. I would recommend that the framework be reworked accordingly.

Schedule 2: FIPPA Amendments

Schedule 2 consists of amendments to the Freedom of Information and Protection of Privacy Act. These are important amendments that will introduce data breach notification and reporting requirements for public sector entities in Ontario that are governed by FIPPA (although, interestingly, not those covered by MFIPPA). For example, a new s. 34(2)(c.1) will require the head of an institution to include in their annual report to the Commissioner “the number of thefts, losses or unauthorized uses or disclosures of personal information recorded under subsection 40.1”. The new subsection 40.1(8) will require the head of an institution to keep a record of any such data breach. Where a data breach reaches the threshold of creating a “real risk that a significant harm to an individual would result” (or where any other circumstances prescribed in regulations exist), a separate report shall be made to the Commissioner under s. 40.1(1). This report must be made “as soon as feasible” after it has been determined that the breach has taken place (s. 40.1(2)). New regulations will specify the form and contents of the report. There is a separate requirement for the head of the institution to notify individuals affected by any breach that reaches the threshold of a real risk of significant harm (s. 40.1(3)). The notification to the individual will have to contain, along with any prescribed information, a statement that the individual is entitled to file a complaint with the Commissioner with respect to the breach, and the individual will have one year to do so (ss. 40.1(4) and (5)). The amendments also identify the factors relevant in determining if there is a real risk of significant harm (s. 40.1(7)).

The proposed amendments also provide for a review by the Commissioner of the information practices of an institution where a complaint has been filed under s. 40.1(4), or where the Commissioner “has other reason to believe that the requirements of this Part are not being complied with” (s. 49.0.1). The Commissioner can decide not to review an institution’s practices in circumstances set out in s. 49.0.1(3). Where the Commissioner determines that there has been a contravention of the statutory obligations, she has order-making powers (s. 49.0.1(7)).

Overall, this is a solid and comprehensive scheme for addressing data breaches in the public sector (although it does not extend to those institutions covered by MFIPPA). In addition to the data breach reporting requirements, the proposed amendments will provide for whistleblower protections. They will also specifically enable the Privacy Commissioner to consult with other privacy commissioners (new s. 59(2)), to coordinate activities, to enter into agreements, and to provide for the handling “of any complaint in which they are mutually interested” (s. 59(3)). These are important amendments given that data breaches may cross provincial lines, and Canada’s privacy commissioners have developed strong collaborative relationships to facilitate cooperation and coordination on joint investigations. These provisions make clear that such cooperation is legally sanctioned, which may avoid costly and time-consuming court challenges to the commissioners’ authority to engage in this way.

The amendments also broaden s. 61(1)(a) of FIPPA, which currently makes it an offence to wilfully disclose personal information in contravention of the Act. If passed, the amendments will make it an offence to wilfully collect, use or disclose personal information in the same circumstances.

Collectively the proposed FIPPA amendments are timely and important.

Summary of Recommendations:

On artificial intelligence in the broader public sector:

1. Amend the Preamble to Bill 194 to address the importance of human rights values in the development and deployment of AI systems for the broader public sector.

2. Add a purpose section to the AI portion of Bill 194 that reads:

The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians.

3. Amend s. 5(3) to read:

5(3) A public sector entity to which this section applies shall, in accordance with the regulations, develop and implement an accountability framework respecting its use of the artificial intelligence system. At a minimum, such a framework will include:

a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system;

b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.

On Digital Technology Affecting Individuals Under Age 18:

1. Incorporate the contents of ss. 9 and 10 into FIPPA and MFIPPA, with the necessary modification to require reporting to the Privacy Commissioner.

2. Give the authority to issue directives regarding the use of certain technologies in the classroom or by school boards to the Minister of Education and ensure that an open and consultative public engagement process is included.

Published in Privacy

Artificial intelligence technologies have significant potential to impact human rights. Because of this, emerging AI laws make explicit reference to human rights. Already-deployed AI systems are raising human rights concerns – including bias and discrimination in hiring, healthcare, and other contexts; disruptions of democracy; enhanced surveillance; and hateful deepfake attacks. Well-documented human rights impacts also flow from the use of AI technologies by law enforcement and the state, and from the use of AI in armed conflicts.

Governments are aware that human rights issues with AI technologies must be addressed. Internationally, this is evident in declarations by the G7, UNESCO, and the OECD. It is also clear in emerging national and supranational regulatory approaches. For example, human rights are tackled in the EU AI Act, which not only establishes certain human-rights-based no-go zones for AI technologies, but also addresses discriminatory bias. The US’s NIST AI Risk Management Framework (a standard, not a law – but influential nonetheless) also addresses the identification and mitigation of discriminatory bias.

Canada’s Artificial Intelligence and Data Act (AIDA), proposed by the Minister of Innovation, Science and Economic Development (ISED), is currently at the committee stage as part of Bill C-27. The Bill’s preamble states that “Parliament recognizes that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. In its substantive provisions, AIDA addresses “biased output”, which it defines in terms of the prohibited grounds of discrimination in the Canadian Human Rights Act. AIDA imposes obligations on certain actors to assess and mitigate the risks of biased output in AI systems. The inclusion of these human rights elements in AIDA is positive, but they are also worth a closer look.

Risk Regulation and Human Rights

Requiring developers to take human rights into account in the design and development of AI systems is important, and certainly many private sector organizations already take seriously the problem of bias and the need to identify and mitigate it. After all, biased AI systems will be unable to perform properly, and may expose their developers to reputational harm and possibly legal action. However, such attention has not been universal, and it has been pursued with differing degrees of commitment. Legislated requirements are thus necessary, and AIDA will provide these. AIDA creates obligations to identify and mitigate potential harms at the design and development stage, along with additional documentation and some transparency requirements. The enforcement of AIDA obligations can come through audits conducted or ordered by the new AI and Data Commissioner, and there is also the potential to use administrative monetary penalties to punish non-compliance, although what this scheme will look like will depend very much on regulations that have yet to be developed. AIDA, however, has some important limitations when it comes to human rights.

Selective Approach to Human Rights

Although AIDA creates obligations around biased output, it does not address human rights beyond the right to be free from discrimination. Unlike the EU AI Act, for example, there are no prohibited practices related to the use of AI in certain forms of surveillance. A revised Article 5 of the EU AI Act will prohibit real-time biometric surveillance by law enforcement agencies in publicly accessible spaces, subject to carefully limited exceptions. The untargeted scraping of facial images for the building or expansion of facial recognition databases (as occurred with Clearview AI) is also prohibited. Emotion recognition technologies are banned in some contexts, as are some forms of predictive policing. Some applications that are not outright prohibited are categorized as high risk and have limits imposed on the scope of their use. These “no-go zones” reflect concerns over a much broader range of human rights and civil liberties than what we see reflected in Canada’s AIDA. It is small comfort to say that the Canadian Charter of Rights and Freedoms remains as a backstop against government excess in the use of AI tools for surveillance or policing; ex ante AI regulation is meant to head off problems before they become manifest. No-go zones reflect limits on what society is prepared to tolerate; AIDA sets no such limits. Constitutional litigation is expensive, time-consuming and uncertain in outcome (just look at the 5-4 split in the recent R. v. Bykovets decision of the Supreme Court of Canada). Further, the military and intelligence services are expressly excluded from AIDA’s scope (as is the federal public service).

Privacy is an important human right, yet privacy rights are not part of the scope of AIDA. An initial response might be that such rights are already dealt with under privacy legislation for the public and private sectors at the federal, provincial and territorial levels. However, this answer falls short in two respects. First, such privacy statutes deal principally with data protection (in other words, they govern the collection, use and disclosure of personal information); AIDA could have addressed surveillance more directly. After all, the EU has best-in-class data protection laws, but it still places limits on the use of AI systems for certain types of surveillance activities. Second, privacy laws in Canada (and there are many of them) are, apart from Quebec’s, largely in a state of neglect and disrepair. Privacy commissioners at the federal, provincial and territorial levels have been issuing guidance as to how they see their laws applying in the AI context, and findings and rulings in privacy complaints involving AI systems are starting to emerge. The commissioners are thoughtfully adapting existing laws to new circumstances, but there is no question that there is a need for legislative reform. In issuing its recent guidance on Facial Recognition and Mugshot Databases, the Office of the Information and Privacy Commissioner of Ontario specifically identified the need to issue the guidance in the face of legislative gaps and inaction that “if left unaddressed, risk serious harms to individuals’ right to privacy and other fundamental human rights.”

Along with AIDA, Bill C-27 contains the Consumer Privacy Protection Act (CPPA), which will reform Canada’s private sector data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA). However, the CPPA has only one AI-specific amendment: a somewhat tepid right to an explanation of automated decision-making. It does not address, for example, the data scraping issue at the heart of the Clearview AI investigation (where the core findings of the Commissioner remain disputed by the investigated company), which prompted the articulation of a no-go zone for data scraping for certain purposes in the EU AI Act.

High Impact AI and Human Rights

AIDA will apply only to “high impact” AI systems. Among other things, such systems can adversely impact human rights. While the original version of AIDA in Bill C-27 left the definition of “high impact” entirely to regulations (generating considerable and deserved criticism), the Minister of ISED has since proposed amendments to C-27 that set out a list of categories of “high impact” AI systems. While this list at least provides some insight into what the government is thinking, it creates new problems as well. The list identifies several areas in which AI systems could have significant impacts on individuals, including in healthcare and in some court or tribunal proceedings. Also included on the list are the use of AI at all stages of the employment context, and the use of AI in making decisions about who is eligible for services and at what price. Left off the list, however, is the use of AI systems (already deployed in practice) to determine who is selected as a tenant for rental accommodation. Such tools have extremely high impact. Yet, since residential tenancies are interests in land, and not services, they are simply not captured by the current “high impact” categories. This is surely an oversight – yet it is one that highlights the rather slap-dash construction of AIDA and its proposed amendments. As a further example, a high-impact category addressing the use of biometrics to assess an individual’s behaviour or state of mind could be interpreted to capture affect recognition systems or the analysis of social media communications, but this is less clear than it should be. It also raises the question of whether the best approach, from a human rights perspective, is to regulate such systems as high impact or whether limits need to be placed on their use and deployment.

Of course, a key problem is that this bill is housed within ISED. It is not a centrally developed bill that takes a broader, whole-of-government approach to federal powers. Under AIDA, medical devices are excluded from the category of “high impact” uses of AI in the healthcare context because it is Health Canada that will regulate AI-enabled medical devices, and ISED must avoid treading on its toes. Perhaps ISED also seeks to avoid encroaching on the mandates of the Minister of Justice or the Minister of Public Safety. This may help explain some of the crabbed and clunky framing of AIDA compared to the EU AI Act. It does, however, raise the question of why Canada chose this route – adopting a purportedly comprehensive risk-management framework housed under the constrained authority of the Minister of ISED.

Such an approach is inherently flawed. As discussed above, AIDA is limited in the human rights it is prepared to address, and it raises concerns about how human rights will be both interpreted and framed. On the interpretation side, incorporating the Canadian Human Rights Act’s definition of discrimination into AIDA, combined with ISED’s power to interpret and apply the proposed law, will give ISED interpretive authority over the definition of discrimination without the accompanying expertise of the Canadian Human Rights Commission. Further, it is not clear that ISED is the natural place for expansive interpretations of human rights; human rights are not a core part of its mandate – although fostering innovation is.

All of this should leave Canadians with some legitimate concerns. AIDA may well be passed into law – and it may prove to be useful in the better governance of AI. But when it comes to human rights, it has very real limitations. AIDA cannot be allowed to end the conversation around human rights and AI at the federal level – nor at the provincial level either. Much work remains to be done.

Published in Privacy

Ontario’s Information and Privacy Commissioner has released a report on an investigation into the use by McMaster University of artificial intelligence (AI)-enabled remote proctoring software. In it, Commissioner Kosseim makes findings and recommendations under the province’s Freedom of Information and Protection of Privacy Act (FIPPA) which applies to Ontario universities. Interestingly, noting the absence of provincial legislation or guidance regarding the use of AI, the Commissioner provides additional recommendations on the adoption of AI technologies by public sector bodies.

AI-enabled remote proctoring software saw a dramatic uptake in use during the pandemic as university classes migrated online. It was also widely used by professional societies and accreditation bodies. Such software monitors those writing online exams in real time, recording both audio and video, and using AI to detect anomalies that may indicate that cheating is taking place. Certain noises or movements generate ‘flags’ that lead to further analysis by AI and ultimately by the instructor. If the flags are not resolved, academic integrity proceedings may ensue. Although many universities, including the respondent McMaster, have since returned to in-person exam proctoring, AI-enabled remote exam surveillance remains an option where in-person invigilation is not possible, including for courses delivered online to students in diverse and remote locations.

The Commissioner’s investigation related to the use by McMaster University of two services offered by the US-based company Respondus: Respondus Lockdown Browser and Respondus Monitor. Lockdown Browser consists of software downloaded by students onto their computers that blocks access to the internet and to other files on the computer during an exam. Respondus Monitor is the AI-enabled remote proctoring application. This post focuses on Respondus Monitor.

AI-enabled remote proctoring systems have raised concerns about both privacy and broader human rights issues. These include the intrusiveness of the constant audio and video monitoring, the capturing of data from private spaces, uncertainty over the treatment of personal data collected by such systems, adverse impacts on already marginalised students, and the enhanced stress and anxiety that comes from both constant surveillance and easily triggered flags. The broader human rights issues, however, are an uncomfortable fit with public sector data protection law.

Commissioner Kosseim begins with the privacy issues, finding that Respondus Monitor collects personal information that includes students’ names and course information, images of photo identification documents, and sensitive biometric data in audio and video recordings. Because the McMaster University Act empowers the university to conduct examinations and appoint examiners, the Commissioner found that the collection was carried out as part of a lawfully authorized activity. Although exam proctoring had chiefly been conducted in-person prior to the pandemic, she found that there was no “principle of statute or common law that would confine the method by which the proctoring of examinations may be conducted by McMaster to an in-person setting” (at para 48). Further, she noted that even post-pandemic, there might still be reasons to continue to use remote proctoring in some circumstances. She found that the university had a legitimate interest in attempting to curb cheating, noting that evidence suggested an upward trend in academic integrity cases, and a particular spike during the pandemic. She observed that “by incorporating online proctoring into its evaluation methods, McMaster was also attempting to address other new challenges that arise in an increasingly digital and remote learning context” (at para 50).

The collection of personal information must be necessary to a lawfully authorized activity carried out by a public body. Commissioner Kosseim found that the information captured by Respondus Monitor – including the audio and video recordings – was “technically necessary for the purpose of conducting and proctoring the exams” (at para 60). Nevertheless, she expressed concerns over the increased privacy risks that accompany this continual surveillance of examinees. She was also troubled by McMaster’s assertion that it “retains complete autonomy, authority, and discretion to employ proctored online exams, prioritizing administrative efficiency and commercial viability, irrespective of necessity” (at para 63). She found that the necessity requirement in s. 38(2) of FIPPA applied, and that efficiency or commercial advantage could not displace it. She noted that the kind of personal information collected by Respondus Monitor was particularly sensitive, creating “risks of unfair allegations or decisions being made about [students] based on inaccurate information” (at para 66). In her view, “[t]hese risks must be appropriately mitigated by effective guardrails that the university should have in place to govern its adoption and use of such technologies” (at para 66).

FIPPA obliges public bodies to provide adequate notice of the collection of personal information. Commissioner Kosseim reviewed the information made available to students by McMaster University. Although she found overall that it provided students with useful information, students had to locate different pieces of information on different university websites. The need to check multiple sites to get a clear picture of the operation of Respondus Monitor did not satisfy the notice requirement, and the Commissioner recommended that the university prepare a “clear and comprehensive statement either in a single source document, or with clear cross-references to other related documents” (at para 70).

Section 41(1) of FIPPA limits the use of personal information collected by a public body to the purpose for which it was obtained or compiled, or for a consistent purpose. Although the Commissioner found that the analysis of the audio and video recordings to generate flags was consistent with the collection of that information, the use by Respondus of samples of the recordings to improve its own systems – or to allow third party research – was not. On this point, there was an important difference in interpretation. Respondus appeared to define personal information as personal identifiers such as names and ID numbers; it treated audio and video clips that lacked such identifiers as “anonymized”. However, under FIPPA, audio and video recordings of individuals are personal information. No provision was made for students either to consent to or opt out of this secondary use of their personal information. Commissioner Kosseim noted that Respondus had made public statements that when operating in some jurisdictions (including California and EU member states) it did not use audio or video recordings for research or to improve its products or services. She recommended that McMaster obtain a similar undertaking from Respondus to not use its students’ information for these purposes. The Commissioner also noted that Respondus’ treating the audio and video recordings as anonymized data meant that it did not have adequate safeguards in place for this personal information.

Respondus’ Terms of Service provide that the company reserves the right to disclose personal information for law enforcement purposes. Commissioner Kosseim found that McMaster should require, in its contract with Respondus, that Respondus notify it promptly of any compelled disclosure of its students’ personal information to law enforcement or to government, and limit any such disclosure to the specific information it is legally required to disclose. She also set a retention limit for the audio and video recordings at one year, with confirmation to be provided by Respondus of deletion after the end of this period.

One of the most interesting aspects of this report is the section titled “Other Recommendations”, in which the Commissioner addresses the adoption of an AI-enabled technology by a public institution in a context in which “there is no current law or binding policy specifically governing the use of artificial intelligence in Ontario’s public sector” (at para 134). The development and adoption of these technologies is outpacing the evolution of law and policy, leaving important governance gaps. In May 2023, Commissioner Kosseim and Commissioner DeGuire of the Ontario Human Rights Commission issued a joint statement urging the Ontario government to take action to put in place an accountability framework for public sector AI. Even as governments acknowledge that these technologies create risks of discriminatory bias and other potential harms, there remains little to govern AI systems outside the piecemeal coverage offered by existing laws such as, in this case, FIPPA. Although the Commissioner’s interpretation and application of FIPPA addressed issues relating to the collection, use and disclosure of personal information, there remain important issues that cannot be addressed through privacy legislation.

Commissioner Kosseim acknowledged that McMaster University had “already carried out a level of due diligence prior to adopting Respondus Monitor” (at para 138). Nevertheless, given the risks and potential harms of AI-enabled technologies, she made a number of further recommendations. The first was to conduct an Algorithmic Impact Assessment (AIA) in addition to a Privacy Impact Assessment. She suggested that the federal government’s AIA tool could be a useful guide while waiting for one to be developed for Ontario. An AIA could allow the adopter of an AI system to have better insight into the data used to train the algorithms, and could assess impacts on students going beyond privacy (which might include discrimination, increased stress, and harms from false positive flags). She also called for meaningful consultation and engagement with those affected by the adoption of the technology, taking place both before the adoption of the system and on an ongoing basis thereafter. Although the university may have had to react very quickly given that the first COVID shutdown occurred shortly before an exam period, an iterative engagement process even now would be useful “for understanding the full scope of potential issues that may arise, and how these may impact, be perceived, and be experienced by others” (at para 142). She noted that this type of engagement would allow adopters to be alert and responsive to problems both prior to adoption and as they arise during deployment. She also recommended that the consultations include experts in both privacy and human rights, as well as those with technological expertise.

Commissioner Kosseim also recommended that the university consider providing students with ways to opt out of the use of these technologies other than through requesting accommodations related to disabilities. She noted “AI-powered technologies may potentially trigger other protected grounds under human rights that require similar accommodations, such as color, race or ethnic origin” (at para 147). On this point, it is worth noting that the use of remote proctoring software creates a context in which some students may need to be accommodated for disabilities or other circumstances that have nothing to do with their ability to write their exam, but rather that impact the way in which the proctoring systems read their faces, interpret their movements, or process the sounds in their homes. Commissioner Kosseim encouraged McMaster University “to make special arrangements not only for students requesting formal accommodation under a protected ground in human rights legislation, but also for any other students having serious apprehensions about the AI-enabled software and the significant impacts it can have on them and their personal information” (at para 148).

Commissioner Kosseim also recommended that there be an appropriate level of human oversight to address the flagging of incidents during proctoring. Although flags were to be reviewed by instructors before deciding whether to proceed to an academic integrity investigation, the Commissioner found it unclear whether there was a mechanism for students to challenge or explain flags prior to escalation to the investigation stage. She recommended that there be such a procedure, and, if there already was one, that it be explained clearly to students. She further recommended that a public institution’s inquiry into the suitability for adoption of an AI-enabled technology should take into account more than just privacy considerations. For example, the public body’s inquiries should consider the nature and quality of training data. Further, the public body should remain accountable for its use of AI technologies “throughout their lifecycle and across the variety of circumstances in which they are used” (at para 165). Not only should the public body monitor the performance of the tool and alert the supplier to any issues; the supplier should also be under a contractual obligation to inform the public body of any issues that arise with the system.

The outcome of this investigation offers important lessons and guidance for universities – and for other public bodies – regarding the adoption of third-party AI-enabled services. For the many Ontario universities that adopted remote proctoring during the pandemic, there are recommendations that should push those still using these technologies to revisit their contracts with vendors – and to consider putting in place processes to measure and assess the impact of these technologies. Although some of these recommendations fall outside the scope of FIPPA, the advice is still sage and likely anticipates what one can only hope is imminent guidance for Ontario’s public sector.

Published in Privacy