Monday, 12 January 2026 08:45
Agentic AI transcription tool triggers health information data breach

A recent communication from the Office of the Information and Privacy Commissioner of Ontario (IPC) highlights how rapidly evolving and widely available artificial intelligence-enabled tools can pose significant privacy risks for organizations. The communication in question was a letter to an unnamed hospital ("the hospital") which had reported a data breach to the IPC. The letter reviewed the breach, set out a series of recommendations for the hospital, and requested an update on the hospital's response to those recommendations by late January 2026. Although the breach occurred in the health sector, with its strict privacy laws, the lessons extend more broadly to other sectors as well.

The breach involved the use of a transcription tool of a kind now regularly used by many physicians to document physician-patient interactions. AI Scribe tools record and transcribe physician-patient interactions and generate summaries suitable for inclusion in electronic medical records. These functions are designed to relieve physicians of significant note-taking and administrative burdens. Although many task-specific AI Scribe tools are now commercially available, in this case the tool used was the commonly available Otter.ai transcription tool, which is designed for use in a broad range of contexts.

This breach was complicated by the fact that the Otter.ai tool acted as an AI agent of the physician who had downloaded it. AI agents can perform a series of tasks with a certain level of autonomy. In this case, the tool can be integrated with different communications platforms, as well as with the user's digital calendar (such as Outlook). Essentially, Otter.ai can scan a user's digital calendar and join scheduled meetings. The tool then transcribes and summarizes the meeting. It can also share both the summary and the transcription with other meeting participants – all without direct user intervention.

The physician had downloaded Otter.ai and provided it with access to his calendar over a year after he left the hospital that reported the breach. Because he had used his personal email, rather than his hospital email, for internal communications while at that hospital, his departure in 2023 and the deactivation of his hospital email account had not led to the removal of his personal email from meeting invitation lists. When he downloaded Otter.ai in September 2024 and gave it access to his digital calendar, he was still receiving invitations from the hospital to hepatology rounds. Although the physician did not attend these rounds following his departure, his AI agent did. It attended a September 2024 meeting, produced a transcript and meeting summary, and emailed the summary, with a link to the full transcript, to all 65 individuals on the meeting invitation. The breach was presumably reported to the hospital by one or more of the email recipients. Seven patients had been seen during the hepatology rounds, and the transcript and summary contained their sensitive personal health information.

The hospital took immediate action to address the breach. It cancelled the digital invitation to the physician and contacted all recipients of the summary and transcript, asking them to promptly delete all copies of the rogue email and attachments. It also sent a notice to all staff reminding them that they are not permitted to use non-approved tools in association with their hospital credentials and/or devices.
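To make the failure mode concrete, here is a minimal, hypothetical sketch of how a calendar-connected meeting agent can behave. The classes and method names are invented for illustration – this is not Otter.ai's actual code or API – but the point holds generally: nothing in the default loop checks whether the agent's owner still belongs to the organization behind an invitation.

```python
# Hypothetical sketch of a calendar-driven meeting agent.
# All names are illustrative; this is not any vendor's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Meeting:
    title: str
    join_url: str
    invitees: List[str]

@dataclass
class MeetingAgent:
    owner_email: str
    messages_sent: list = field(default_factory=list)

    def scan_calendar(self, calendar: List[Meeting]) -> List[Meeting]:
        # The agent treats *every* invitation on the connected calendar
        # as a meeting to join. Nothing here asks whether the owner
        # still works for the organization that sent the invite.
        return calendar

    def attend_and_share(self, meeting: Meeting) -> None:
        transcript = f"[transcript of '{meeting.title}']"
        summary = f"[summary of '{meeting.title}']"
        # By default, the summary and a transcript link go to every
        # invitee -- the step that propagated the breach described above.
        for recipient in meeting.invitees:
            self.messages_sent.append((recipient, summary, transcript))

# A stale recurring invitation is enough: the physician left in 2023,
# but the rounds invite still reached his connected personal calendar.
calendar = [Meeting("Hepatology rounds", "https://example.invalid/join",
                    invitees=[f"staff{i}@hospital.example" for i in range(65)])]
agent = MeetingAgent("physician@personal.example")
for m in agent.scan_calendar(calendar):
    agent.attend_and_share(m)
print(len(agent.messages_sent), "messages sent without user intervention")
```

The only "authorization" in this loop is the presence of an invitation on the connected calendar, which is exactly why one stale recurring invite was enough to place the agent in the meeting and to distribute its output.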
The hospital also contacted the physician who had used Otter.ai and ensured that he removed all digital connections with the hospital. It further requested that he ask Otter.ai to delete all information related to the meeting from its systems. Patients affected by the breach were also notified by the hospital.

To prevent future breaches, the hospital created firewalls to block on-site access to non-approved scribing tools, updated its training materials to address the use of unapproved tools, and revised its Appropriate Use of Information and Information Technology policy. The revised policy emphasizes the importance of using only hospital-approved IT resources. It also advises regular review of participant lists for meetings to ensure that AI tools or automated agents are not included.

In addition to these steps, the IPC made further recommendations, including that the hospital itself contact Otter.ai to request the deletion of any patient information that it may have retained. Twelve of the sixty-five email recipients had not confirmed that they had deleted the emails, and the IPC recommended that the hospital follow up to ensure this had been done. Updates to the hospital's breach protocol were also recommended, as well as changes to offboarding procedures to ensure that access to hospital information systems is "immediately revoked" when personnel leave the hospital. The IPC also recommended the use of mandatory meeting lobbies for all virtual meetings so that unauthorized AI agents are not permitted access to meetings.

This incident highlights some of the important challenges faced by hospitals – as well as by many other organizations – with the development of widely available generative and agentic AI tools. Where sophisticated and powerful workplace tools were once more easily controlled by the employer, it is increasingly the case that employees have independent access to such tools. Shadow AI usage is a growing concern for organizations, as it may pose unexpected – and even undetected – risks to the privacy and confidentiality of information. Rapidly evolving agentic AI tools – with their capacity to act independently – may also create challenges, particularly where employees are not fully familiar with their full range of functions or default settings.

Medical associations and privacy commissioners' offices have begun developing guidance for the use of AI Scribes in medical practice (see, e.g., guidance from the Saskatchewan and Alberta OIPCs). OntarioMD has even gone so far as to develop a list of approved AI scribe vendors – ones that it considers meet privacy and security standards. However, the tool adopted in this case was designed for all contexts and is available in both free and paid versions, which only serves to highlight the risks and challenges in this area.

The widespread availability of such tools poses important governance issues for privacy- and security-conscious organizations. Even where an organization subscribes to a particular tool that has been customized to its own privacy and security standards, employees still have access to many other tools that they may already use in other contexts. The risk that an employee will simply decide to use a tool with which they are already familiar and comfortable must be considered. More generic transcription tools may also pose other risks in the medical context, since they are not specifically trained or designed for a particular setting such as health care. For example, they may be less adept at dealing with medical terminology, prescription drug names, or other terms of art. This could increase the incidence of errors in any transcriptions or summaries.

Risks that data collected through unauthorized tools may be used to train AI systems also underscore the potential consequences for privacy and confidentiality. Under Ontario's Personal Health Information Protection Act (PHIPA), a health care custodian is not authorized to share personal health information with third parties without the patient's express consent to do so. Using health-care-related transcriptions or voice recordings to train third-party AI systems without this express consent is not permitted. Although some services indicate that they only use "de-identified" information for system training, the term "de-identified" may not be defined in the same way as it is in PHIPA. For example, stripping information of all direct identifiers (names, ID numbers, etc.) does not amount to de-identification under PHIPA, which also requires the removal of information "for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual".
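A small, hypothetical illustration of the gap (field names and values are invented for the example, and are not drawn from any vendor's actual data handling): the record below has had all direct identifiers removed, yet the remaining quasi-identifiers could still make a patient reasonably identifiable, which is the additional test PHIPA imposes.

```python
# Hypothetical illustration of why stripping direct identifiers alone
# is not de-identification under PHIPA's standard.
record = {
    "name": "Jane Doe",                 # direct identifier
    "health_card_no": "1234-567-890",   # direct identifier
    "postal_code": "K1A 0B1",           # quasi-identifier
    "visit_date": "2024-09-12",         # quasi-identifier
    "diagnosis": "rare autoimmune hepatitis variant",  # quasi-identifier
}

DIRECT_IDENTIFIERS = {"name", "health_card_no"}

# Naive "de-identification": drop only the direct identifiers.
stripped = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
print(stripped)
# {'postal_code': 'K1A 0B1', 'visit_date': '2024-09-12',
#  'diagnosis': 'rare autoimmune hepatitis variant'}
#
# A rare diagnosis combined with a postal code and a visit date may be
# enough, alone or with other information, to re-identify the patient --
# so under PHIPA this record can still constitute identifying information.
```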
This incident highlights the vulnerability of sensitive personal information in a context in which novel (and evolving) technological tools for personal and professional use are proliferating rapidly. Organizations must act quickly to assess and mitigate the risks, and this will require regular engagement with and training of personnel.

Note: A pre-print version of my research paper with Daniel Kim on AI Scribes can be found here.