Teresa Scassa - Blog


Ontario’s Office of the Information and Privacy Commissioner (IPC) and Human Rights Commission (OHRC) have jointly released a document titled Principles for the Responsible Use of Artificial Intelligence.

Notably, this is the second collaboration of these two institutions on AI governance. Their first was a joint statement on the use of AI technologies in 2023, which urged the Ontario government to “develop and implement effective guardrails on the public sector’s use of AI technologies”. This new initiative, oriented towards “the Ontario public sector and the broader public sector” (at p. 1), is interesting because it deepens the cooperation between the IPC and the OHRC in relation to a rapidly evolving technology that is increasingly used in the public sector. It also fills a governance gap left by the province’s delay in developing its public sector AI regulatory framework.

In 2024, the Ontario government enacted the Enhancing Digital Security and Trust Act, 2024 (EDSTA), which contains a series of provisions addressing the use of AI in the broader public sector (which includes hospitals and universities). It also issued the Responsible Use of Artificial Intelligence Directive, which sets basic rules and principles for Ontario ministries and provincial agencies. The Directive is currently in force and is built around principles similar to those set out by the IPC and OHRC. It outlines a set of obligations for ministries and agencies that adopt and use AI systems. These include transparency, risk management, risk mitigation, and documentation requirements. The EDSTA, which would have a potentially broader application, creates a framework for transparency, accountability, and risk management obligations, but the actual requirements have been left to regulations. Those regulations will also determine to whom any obligations will apply. Although the EDSTA can apply to all actors within the public sector, broadly defined, its obligations can be tailored by regulations to specific departments or agencies, and can include or exclude universities and hospitals. There has been no obvious movement on the drafting of the regulations needed to breathe life into the EDSTA’s AI provisions.

It is clear that AI systems will have both privacy and human rights implications, and that both the IPC and the OHRC will have to deal with complaints about such systems in relation to matters within their respective jurisdictions. As the Commissioners put it, the principles “will ground our assessment of organizations’ adoption of AI systems consistent with privacy and human rights obligations.” (at p. 1) The document clarifies what the IPC and OHRC expect from institutions. For example, conforming to the “Valid and reliable” principle will require compliance with independent testing standards, and objective evidence will be required to demonstrate that systems “fulfil the intended requirements for a specified use or application”. (at p. 3) The safety principle also requires demonstrable cybersecurity protection and safeguards for privacy and human rights. The Commissioners also expect institutions to give individuals opportunities to access and correct their personal data, whether used in or generated by AI systems. The “Human rights affirming” principle includes a caution that public institutions “should avoid the uniform use of AI systems with diverse groups”, since such practices could lead to adverse effect discrimination. The Commissioners also caution against uses of systems that may “unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another.” (at p. 6)

The Commissioners’ “Transparency” principle requires that the public sector’s use of AI be visible. The IPC’s mandate covers both access to information and privacy. The Principles state that the documentation required for the “public account” of AI use “may include privacy impact assessments, algorithmic impact assessments, or other relevant materials.” (at p. 6) There must also be transparency regarding “the sources of any personal data collected and used to train or operate the system, the intended purposes of the system, how it is being used, and the ways in which its outputs may affect individuals or communities.” (at p. 6)

The Principles also require that systems used in the public sector be understandable and explainable. The accountability principle requires public sector institutions to document design and application choices and to be prepared to explain how the system works to an oversight body. They should also establish mechanisms to receive and respond to complaints and concerns. The Principles call for whistleblower protections to support reporting of non-compliant systems.

The joint nature of the Principles highlights how issues relating to AI do not easily fall within the sole jurisdiction of any one regulator. It also highlights that the dependence of AI systems on data – often personal data or de-identified personal data – carries with it implications both for privacy and human rights.

That the IPC and OHRC will have to deal with complaints and investigations that touch on AI issues is indisputable. In fact, the IPC has already conducted formal and informal investigations that touch on AI-enabled remote proctoring, AI scribes, and vending machines on university campuses that incorporate face-detection technologies. The Principles offer important insights into how these two oversight bodies see privacy and human rights intersecting with the adoption and use of AI technologies, and what organizations should be doing to ensure that the systems they procure, adopt and deploy are legally compliant.


A recent communication from the Office of the Information and Privacy Commissioner of Ontario (IPC) highlights how rapidly evolving and widely available artificial intelligence-enabled tools can pose significant privacy risks for organizations.

The communication in question was a letter to an unnamed hospital (“the hospital”), which had reported a data breach to the IPC. The letter reviewed the breach, set out a series of recommendations for the hospital, and requested an update on the hospital’s response to the recommendations by late January 2026. Although the breach occurred in the health sector, with its strict privacy laws, the lessons extend to other sectors as well.

The breach involved a transcription tool of a kind now regularly used by many physicians to document physician-patient interactions. AI scribe tools record and transcribe physician-patient interactions and generate summaries suitable for inclusion in electronic medical records. These functions are designed to relieve physicians of significant note-taking and administrative burdens. Although many task-specific AI scribe tools are now commercially available, the tool used in this case was the general-purpose Otter.ai transcription tool, designed for use in a broad range of contexts.

This breach was complicated by the fact that the Otter.ai tool acted as an AI agent of the physician who had downloaded it. AI agents can perform a series of tasks with a certain level of autonomy. In this case, the tool can be integrated with different communications platforms, as well as with the user’s digital calendar (such as Outlook). Essentially, Otter.ai can scan a user’s digital calendar and join scheduled meetings. The tool then transcribes and summarizes the meeting. It can also share both the summary and the transcription with other meeting participants – all without direct user intervention.
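
To make the degree of autonomy concrete, here is a minimal sketch of the kind of control loop such an agent might run. This is not Otter.ai’s actual implementation; every name, data structure, and behaviour below is an illustrative assumption based on the description above.

```python
# Hypothetical sketch of an agentic meeting assistant's control loop.
# NOT Otter.ai's implementation: all names and behaviours are assumptions.
from dataclasses import dataclass


@dataclass
class CalendarEvent:
    title: str
    meeting_url: str
    invitees: list[str]  # every address on the invitation list


def transcribe(meeting_url: str) -> str:
    """Stub: join the meeting's audio stream and return a transcript."""
    return f"[transcript of meeting at {meeting_url}]"


def summarize(transcript: str) -> str:
    """Stub: condense a transcript into a short meeting summary."""
    return f"Summary: {transcript[:50]}..."


def email_all(recipients: list[str], body: str) -> None:
    """Stub: send the summary (with a transcript link) to every recipient."""
    print(f"Emailing {len(recipients)} recipients: {body}")


def run_agent(calendar: list[CalendarEvent]) -> None:
    # The privacy-critical point: once granted calendar access, the agent
    # acts on every scheduled meeting and distributes its output to every
    # invitee, with no per-meeting human check.
    for event in calendar:
        transcript = transcribe(event.meeting_url)
        summary = summarize(transcript)
        email_all(event.invitees, summary)


# A single stale invitation (e.g., a departed physician's recurring
# rounds) is enough to trigger the whole pipeline:
run_agent([CalendarEvent(
    title="Hepatology rounds",
    meeting_url="https://example.org/rounds",
    invitees=[f"attendee{i}@example.org" for i in range(65)],
)])
```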

The physician had downloaded Otter.ai and provided it with access to his calendar over a year after he left the hospital that reported the breach. Because he had used his personal email, rather than his hospital email, for internal communications while at that hospital, his departure in 2023 and the deactivation of his hospital email account had not led to the removal of his personal email from meeting invitation lists. When he downloaded Otter.ai in September 2024 and gave it access to his digital calendar, he was still receiving invitations from the hospital to hepatology rounds. Although the physician did not attend these rounds following his departure, his AI agent did. It attended a September 2024 meeting, produced a transcript and meeting summary, and emailed the summary, with a link to the full transcript, to all 65 individuals on the meeting invitation list. The breach was presumably reported to the hospital by one or more of the email recipients. Seven patients had been seen during the hepatology rounds, and the transcript and summary contained their sensitive personal health information.

The hospital took immediate action to address the breach. It cancelled the digital invitation to the physician and contacted all recipients of the summary and transcript, asking them to promptly delete all copies of the rogue email and attachments. It also sent a notice to all staff reminding them that they are not permitted to use non-approved tools in association with their hospital credentials and/or devices. It contacted the physician who had used Otter.ai and ensured that he removed all digital connections with the hospital. It also requested that he contact Otter.ai to ask that all information related to the meeting be deleted from its systems. Patients affected by the breach were also notified by the hospital. To prevent future breaches, the hospital created firewalls to block on-site access to non-approved scribing tools, updated its training materials to address the use of unapproved tools, and revised its Appropriate Use of Information and Information Technology policy. The revised policy emphasizes the importance of using only hospital-approved IT resources. It also advises regular review of participant lists for meetings to ensure that AI tools or automated agents are not included.

In addition to these steps, the IPC made further recommendations, including that the hospital itself contact Otter.ai to request the deletion of any patient information that it may have retained. Twelve of the 65 email recipients had not confirmed that they had deleted the emails, and the IPC recommended that the hospital follow up to ensure this had been done. Updates to the hospital’s breach protocol were also recommended, as were changes to offboarding procedures to ensure that access to hospital information systems is “immediately revoked” when personnel leave the hospital. The IPC also recommended the use of mandatory meeting lobbies for all virtual meetings so that unauthorized AI agents are not permitted access to meetings.

This incident highlights some of the important challenges faced by hospitals – as well as by many other organizations – with the development of widely available generative and agentic AI tools. Where sophisticated and powerful tools in the workplace were once more easily controlled by the employer, it is increasingly the case that employees have independent access to such tools. Shadow AI usage is a growing concern for organizations, as it may pose unexpected – and even undetected – risks to the privacy and confidentiality of information. Rapidly evolving agentic AI tools – with their capacity to act independently – may also create challenges, particularly where employees are not familiar with their full range of functions or default settings.

Medical associations and privacy commissioners’ offices have begun developing guidance for the use of AI scribes in medical practice (see, e.g., guidance from the Saskatchewan and Alberta OIPCs). OntarioMD has even gone so far as to develop a list of approved AI scribe vendors – ones that it considers to meet privacy and security standards. However, the tool adopted in this case was designed for use in all contexts and is available in both free and paid versions, which only serves to highlight the risks and challenges in this area. The widespread availability of such tools poses important governance issues for privacy- and security-conscious organizations. Even where an organization subscribes to a particular tool that has been customized to its own privacy and security standards, employees still have access to many other tools that they may already use in other contexts. The risk that an employee will simply decide to use a tool with which they are already familiar and comfortable must be considered.

More generic transcription tools may also pose other risks in the medical context, since they are not specifically trained or designed for a particular context such as health care. For example, they may be less adept at dealing with medical terminology, prescription drug names, or other terms of art. This could increase the incidence of errors in any transcriptions or summaries.

The risk that data collected through unauthorized tools may be used to train AI systems also underscores the potential consequences for privacy and confidentiality. Under Ontario’s Personal Health Information Protection Act (PHIPA), a health information custodian is not authorized to share personal health information with third parties without the patient’s express consent to do so. Using health-care-related transcriptions or voice recordings to train third-party AI systems without this express consent is not permitted. Although some services indicate that they only use “de-identified” information for system training, the term “de-identified” may not be defined in the same way as in PHIPA. For example, stripping information of all direct identifiers (names, ID numbers, etc.) does not amount to de-identification under PHIPA: in addition to removing all direct identifiers, it is also necessary to remove information “for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual”.
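
To make the distinction concrete, the following minimal sketch (with hypothetical record fields of my own invention) shows why removing direct identifiers alone does not meet PHIPA’s standard:

```python
# Illustrative only: a hypothetical patient record with invented fields.
record = {
    "name": "Jane Doe",             # direct identifier
    "health_card_no": "1234-567",   # direct identifier
    "postal_code": "K1A 0B1",       # quasi-identifier
    "birth_date": "1953-07-02",     # quasi-identifier
    "diagnosis": "rare condition",  # quasi-identifier in combination
}

DIRECT_IDENTIFIERS = {"name", "health_card_no"}

# Naive "de-identification": strips direct identifiers only.
stripped = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
print(stripped)

# Under PHIPA this is not enough: a postal code, birth date, and rare
# diagnosis together may still make the individual reasonably
# identifiable when combined with other available information, so such
# fields must also be removed or generalized before the record could
# count as de-identified.
```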

This incident highlights the vulnerability of sensitive personal information in a context marked by the rapid proliferation of novel (and evolving) technological tools for personal and professional use. Organizations must act quickly to assess and mitigate risks, and this will require regular engagement with and training of personnel.

Note: A pre-print version of my research paper with Daniel Kim on AI Scribes can be found here.



In November 2025, Canada’s federal government published a new Policy on Regulatory Sandboxes in anticipation of the amendments to the Red Tape Reduction Act announced in the 2024 budget. This development deserves some attention, particularly as the federal government embraces a pro-innovation agenda and shifts its approach to the regulation of innovative technologies such as artificial intelligence (AI).

Regulatory sandboxes have received considerable attention since the Financial Conduct Authority in the UK launched the first one in 2016. Although they first took hold in the financial services sector, they have since attracted interest in other sectors. For example, several European data protection authorities have created privacy regulatory sandboxes (see, e.g., the UK Information Commissioner and France’s CNIL). In Canada, the Ontario Energy Board and the Law Society of Ontario – to give just two examples – both have regulatory sandboxes. Alberta also created a fintech regulatory sandbox by legislation in 2022. Regulatory sandboxes are expected to be an important component of AI regulation in the European Union. Article 57 of the EU Artificial Intelligence Act requires all member states to establish an AI regulatory sandbox – or at the very least to partner with one or more member states to jointly create such a sandbox.

Regulatory sandboxes are seen as a regulatory tool that can be effectively deployed in rapidly evolving technological contexts where existing regulations may create barriers to innovation. In some cases, innovators may hesitate to develop novel products or services where they see no clear pathway to regulatory approval. In many instances, regulators struggle to understand rapidly evolving technologies and the novel business methods they may bring with them. A regulatory sandbox is a space created by a regulator that allows selected innovators to work with regulators to explore how these innovations can be brought to market in a safe and compliant way, and to learn whether and how existing regulations might need to be adapted to a changing technological environment. It is a form of experimental regulation with benefits both for the regulator and for regulated parties.

This is the context in which the federal Policy has been introduced. It defines a regulatory sandbox in these terms:

[A] regulatory sandbox, in the context of this policy, is the practice by which a temporary authorization is provided for innovation (for example, a new product, service, process, application, regulatory and non-regulatory approaches) and is for the purpose of evaluating the real-life impacts of innovation, in order to provide information to the regulator to support the development, management and/or review and assessment of the results of regulations. This can also include for the purposes of equipping the regulatory framework to support innovation, competitiveness or economic growth.

It is important to remember that the policy is anchored in the Red Tape Reduction Act and has a particular slant that sets it apart from other sandbox initiatives. An example of the type of sandbox likely contemplated by this policy can be found in a new regulatory sandbox proposed by Transport Canada to address a very specific regulatory issue arising with respect to the design of aircraft. This sandbox is described as being for “minor change approvals used in support of a major modification.” It is narrow in scope, using modifications to existing regulations to try out a new regulatory process for the certification of major modifications to aircraft design. The end goal is to reduce regulatory burden and to relieve uncertainties caused by existing regulations. Data will be collected from the sandbox experiment to assess the impact of regulatory changes before they might be made permanent.

This approach frames sandboxing as a means to enable innovation by improving existing regulations and streamlining processes. While this is a worthy objective, there is a risk that the policy may be cast too narrowly by focusing on a regulatory sandbox as a means to improve regulation, rather than more broadly as a means of understanding how novel technologies or processes can be brought safely to market – sometimes under existing regulatory frameworks. This is reflected in the policy document, which states that sandboxes proposed under this policy “must demonstrate how regulatory regimes could be modernized”.

The definition of a regulatory sandbox in the Policy, reproduced above, essentially describes a data gathering process by the regulator “to support the development, management and/or review and assessment of the results of regulations.” This can be contrasted with the more open-ended definition adopted in the relatively recent standard for regulatory sandboxes developed by the Digital Governance Standardization Initiative (DGSI):

A regulatory sandbox is a facility created and controlled by a regulator, designed to allow the conduct of testing or experiments with novel products or processes prior to their entry into a regulated marketplace.

Rather than focus on the regulator conducting an assessment of its own regulations, the DGSI definition centres on bringing novel products or processes to market, and frames sandboxes in terms of their recognized mutual benefits for both regulators and innovators. Although improving regulations and regulatory processes is a perfectly acceptable outcome of a regulatory sandbox, it is not the only possible outcome – nor is it even a necessary one. In this context, the new federal policy is rather narrow: it places the regulations themselves at the core of the sandbox experiment, rather than the ways in which innovative technologies challenge regulatory frameworks.

An example of this latter approach is found in the Law Society of Ontario’s regulatory sandbox for AI-enabled access to justice innovations (A2I). In some cases, innovations of this kind might be characterized as constituting the illegal practice of law, creating a barrier to market entry. In the A2I sandbox, the novel products or services are developed and live-tested under supervision to assess whether they can be deployed in a way that is sufficiently protective of the public. The issue is partly a regulatory one – but it is not that any particular regulations necessarily require changing. Rather, it is that innovators need a level of comfort that their innovation will not be blocked by existing regulations. At the same time, the regulator needs to understand the emerging technology and how it can fulfil its public protection mandate while supporting useful innovation. One outcome of a sandbox process might be to learn that a particular innovation cannot safely be brought to market.

A similar paradigm exists with privacy regulatory sandboxes, which might either explore ways in which a novel technology can be designed to comply with the legislation, or examine how existing rules should be understood and applied in novel circumstances.

In all cases, the regulator may learn something about how existing regulations might need to adapt to an evolving technological context, and this too is a useful outcome. However, it does not have to be the principal goal of the regulatory sandbox. The federal Policy signals a welcome openness to the concept of regulatory sandboxes, but it is conceived primarily as a tool to help streamline and improve regulatory processes (still a worthy goal) rather than as a more ambitious sandboxing initiative. It remains a rather narrow framing of the nature and potential of this regulatory tool.



Canada’s federal government has just released an early version of the AI Register it promised after its election earlier this year.

An AI Register is an important transparency tool – it will help researchers and the broader public understand what AI-enabled tools are in use in the federal public sector, and it provides basic information about them. The government also intends the register to be a resource for the public sector itself – allowing different departments and agencies to better see what others are doing, so as to avoid duplication and to learn from each other.

The information accompanying the Register (which is published on Canada’s open government portal) indicates that this is a “Minimum Viable Product”. This means that it is “an early version with only basic features and content that is used to gather feedback.” It will be interesting to see how it develops over time.

One interesting aspect of the register is that it states that it was “assembled from existing sources of information, including Algorithmic Impact Assessments, Access to Information requests, responses to Parliamentary Questions, Personal Information Banks, and the GC Service Inventory.” Since it contains 409 entries at the time of writing, and since there are only a few dozen published Algorithmic Impact Assessments (AIAs), this suggests that the database was compiled largely from sources other than AIAs. The reference to access to information requests suggests that some of the data may have been gathered using the TAG Register Canada, laboriously compiled by Joanna Redden and her team at Western University. The sources for the TAG Register also included access to information requests and responses to questions by Members of Parliament. Prior to the development of the federal AI Register, the TAG Register was probably the most important source of information about public sector AI in Canada. The TAG Register is not made redundant by the new AI Register – it contains additional information about the systems, derived from its source materials.

The federal AI Register sets out the name of each system and provides a description. It indicates who the primary users are and which government organization is responsible for it. Other fields indicate whether the system was developed in-house or furnished by a vendor (and, if so, which one). The register also indicates whether the system is in development, in production, or retired. There is a brief description of the system’s capabilities, some information about the data sources used, and an indication of whether it uses personal data. The register also notes whether users are given notice of use, and includes a brief description of the expected outcomes of the system’s use.
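
As a rough mental model of an entry, the fields described above might be sketched as the following data structure. The field names and types are my own assumptions, not the register’s actual schema.

```python
# Hypothetical sketch of one register entry; not the register's schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Status(Enum):
    IN_DEVELOPMENT = "in development"
    IN_PRODUCTION = "in production"
    RETIRED = "retired"


@dataclass
class RegisterEntry:
    name: str
    description: str
    primary_users: str
    responsible_organization: str
    developed_in_house: bool
    vendor: Optional[str]             # set when furnished by a vendor
    status: Status
    capabilities: str
    data_sources: list[str] = field(default_factory=list)
    uses_personal_data: bool = False
    users_notified: bool = False      # whether users are given notice of use
    expected_outcomes: str = ""
```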

All in all, it’s a good start, and clearly the developers of this database are open to feedback. (For example, I would like to see a link to the Algorithmic Impact Assessment under the Directive on Automated Decision-Making, if such an assessment has been carried out).

This is an important transparency initiative, and it will be a good source of data for researchers interested in public sector AI. It is also an interesting model that provincial governments might want to consider as they also roll out AI use across their public sectors.



