Monday, 20 April 2026 06:47
Privacy Act Reform: Enhancing accountability and transparency
This is the third in a series of posts discussing the federal government’s new consultation document on reform of the federal Privacy Act. The previous posts are here and here. This post addresses the second theme in the document: Enhancing accountability and transparency.

Accountability and transparency are important privacy principles, and it is no surprise that the TBS consultation document on reform of the federal Privacy Act addresses these issues in four proposals set out in its second theme. The first of these (Proposal #3 overall in the document) would create a “legal requirement to conduct a privacy impact assessment when a program or activity uses personal data to make a decision about someone”. Privacy impact assessments are currently required under the Directive on Privacy Practices when “personal information is to be used for an administrative purpose”. The consultation paper suggests that the proposal to reform the Privacy Act would “make PIAs a legal requirement instead of a policy requirement.” Under the proposal, PIAs would be shared with the Privacy Commissioner of Canada – who would assess whether they comply with the Privacy Act – and also with TBS. The consultation document notes that the incorporation of these existing policy requirements into the law would “not create an additional approval process or delay program implementation”. (See my discussion of pragmatic privacy in the second post in this series.)

Although this is framed as a proposal to make an existing obligation more concrete and enforceable, according to the consultation document the PIA requirement would be activated where there is a new program, or a substantial modification to an existing program, that uses personal data “to make decisions about people”. This is narrower than what the current policy on PIAs requires, and the difference is significant. I will return to this issue in the discussion of transparency, below.

TBS also proposes to leave the contents of the PIA to policy to allow “the rules to be updated more easily as technologies, risks, and best practices change over time.” This tendency to leave details to policy or regulation is becoming increasingly common in Canadian laws addressing rapidly evolving technologies. Nonetheless, although the law could simply require PIAs to be completed according to a prescribed set of requirements (for example, there is currently a PIA template document for the federal public service), basic elements should still be set out in the law. For example, Alberta’s new public sector Protection of Privacy Act sets out four statutory requirements for PIAs. They must:

26. [. . .]
(a) identify and review risks associated with the public body’s collection, use and disclosure of personal information,
(b) develop mitigation strategies and safeguards respecting those risks,
(c) address how the public body will comply with its duties under this Act, and
(d) comply with the prescribed requirements.

Section 38(3) of Ontario’s Freedom of Information and Protection of Privacy Act also provides a list of essential elements of a PIA, along with “any other prescribed elements”. A reformed federal Privacy Act should take the same approach, articulating essential requirements in the law and leaving other, more variable elements to be prescribed. The consultation paper also proposes requiring the publication of plain language summaries of PIAs, suggesting that these would exclude information that might adversely impact “law enforcement, investigations, or national security”.
The publication of plain language PIA summaries would offer an important level of transparency in an accessible format to a broader public. However, the level of detail in a full PIA could still be valuable to researchers and journalists. Both the detailed and plain language versions could be proactively published. After all, algorithmic impact assessments carried out under the Directive on Automated Decision-Making (DADM) are meant to be shared via the open government portal. In the US, PIAs under the E-Government Act of 2002 must be proactively published unless certain exceptions apply.

The second proposal under this theme (Proposal #4 overall) is to create a central registry of personal data holdings and to publish key information on personal data management practices. This system would replace the current Personal Information Banks system along with its classifications of personal data. Instead, there would be “a centralized registry of personal data holdings” (not a centralized data storage repository). The registry would include “privacy notices explaining why data is collected and how it will be used, general descriptions of how personal data is shared between programs, and summaries of PIAs.” (A purely illustrative sketch of what a registry entry might contain appears at the end of this post.) Exceptions to disclosure would likely be created for law enforcement or national security, although the consultation document emphasizes that any exceptions should be “limited, specific, and clearly set out in the Act” and would require justification. This recommendation is aimed at modernizing how transparency is provided about government management of its personal data holdings. In the case of horizontal data sharing, it would ensure that the “flow of data between programs would be more clearly articulated”.

The third proposal under this theme (Proposal #5) would establish “transparency requirements for the use of artificial intelligence and automated decision systems that support the right to the correction of personal data”. What is contemplated is an amendment to the Privacy Act to require – at the request of an individual – an explanation of “how an ADS [automated decision system] supported a decision and what personal data was used.” An automated decision system is currently defined in the DADM as “[a]ny technology that either assists or replaces the judgment of human decision makers.” A right to verify the accuracy of the data and to ask for corrections would also be provided. Where an individual believes that an error has been made, they could request a human review of the decision.

The final proposal under this theme (Proposal #6) also deals with automated decision systems and would require notices that explain why data is being collected, for what purposes, and with whom it might be shared. The proposal would add a plain language requirement for such notices and would require them to be posted in the central registry. Additional notices would be required for ADS, and these would “provide a general explanation so the person can understand how the ADS handled their personal data and how the decision was made.” It is not entirely clear whether the ADS notice would be sent directly to affected individuals or placed in the centralized registry, but it seems that it might be the latter. The recommendations in this part of the proposal are clearly oriented towards automated decision-making. Although the federal DADM sets out certain transparency requirements, it does not apply to all of the institutions that fall under the Privacy Act.
The proposed reform would not only elevate these transparency requirements to law, but it would also ensure that they extend further across the public sector. While this would be a positive development, it is important to note that the DADM was developed as a form of AI governance, not as a privacy measure. The scope of the DADM is therefore shaped by its focus on automated decision-making. Indeed, TBS states that the transparency/correction requirement “would only apply to ADS that use personal data to make or support decisions that directly affect individuals”, language that echoes that used in the DADM.

This is where the PIA requirement in Proposal #3 and the transparency requirement in Proposal #5 run into potential problems. As noted earlier, the PIA requirement in the consultation document would apply only where a new or modified program uses personal data “to make decisions about people”. (Compare this with the right to an explanation that featured in Bill C-27’s Consumer Privacy Protection Act, which would have applied to systems used to “make a prediction, recommendation or decision about an individual that could have a significant impact on them.”) The scope of this obligation will therefore be determined by how making “decisions about people” is defined. The DADM defines an administrative decision as one that “affects legal rights, privileges or interests”, which appears to be a relatively high threshold. The Guide on the scope of the DADM identifies a list of activities that are both in and out of scope of the Directive. In-scope activities include:

· Triaging client applications based on their complexity as determined through machine-defined criteria
· Examining a financial transaction to estimate the probability of fraud
· Generating an assessment, score or classification about the client
· Generating a summary of relevant client information for officers to determine eligibility to a program
· Presenting information from multiple sources to an officer (such as by data matching and fuzzy matching)
· Using facial recognition or other biometric technology to target subjects for additional scrutiny
· Recommending one or multiple options to the decision maker
· Using an AI resumé-screening tool or skills-based assessment tool to filter top-performing candidates to the interview stage in a recruitment process
· Reviewing client applications for benefits and recommending approval or denial to an officer
· Chatbots that officers use to recommend a course of action

These offer some examples of the fairly wide net cast by the DADM and clearly go beyond some of the most obvious forms of automated decision-making. They help clarify what “decisions about people” means, but any change to the legislation to add transparency and accountability in relation to automated decision-making will need to make crystal clear that the scope of language such as “decisions about people”, and of decisions that affect “legal rights, privileges or interests”, is as inclusive as this list. The risk is that, without clear parameters, the interpretation of these rights could be too narrow.
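To make the registry in Proposal #4 more concrete, here is a minimal, purely illustrative sketch of what a single registry entry might contain, based only on the elements listed in the consultation document (privacy notices, descriptions of data sharing between programs, and PIA summaries). The field names and structure are my own assumptions; TBS has not specified any format.

```python
# Purely illustrative; field names are assumptions, not anything TBS has specified.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RegistryEntry:
    institution: str              # department or agency holding the data
    program: str                  # program or activity using personal data
    privacy_notice: str           # plain language: why collected, how used
    sharing_descriptions: list = field(default_factory=list)
    # general descriptions of how the data flows to other programs or governments
    pia_summary: Optional[str] = None  # plain language PIA summary, if one exists
    disclosure_exception: Optional[str] = None
    # e.g. a law enforcement or national security exception, which the consultation
    # document says must be "limited, specific, and clearly set out in the Act"

entry = RegistryEntry(
    institution="Example Department",
    program="Example Benefits Program",
    privacy_notice="Collected to determine eligibility for benefits.",
    sharing_descriptions=["Shared with Example Agency to verify reported income."],
)
print(entry)
```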
Monday, 13 April 2026 07:08
Pragmatic Privacy: Reform of the Federal Privacy Act

This post is the second in a series on the consultation paper published by Treasury Board Secretariat on proposed reform of the federal Privacy Act. The first can be found here. This post focuses on the first of six themes in the document: Enabling integrated services.

If I had to sum up the new consultation paper on reform of the Privacy Act, I would describe it as a document about pragmatic privacy. It is about how government will protect privacy while enabling the uses that it needs and wants to make of data. It is not about the ideal of privacy, nor is it really about where the line should be drawn between government and citizen when it comes to the use of personal data. I am not suggesting that the document ignores the importance of privacy as a value; but I am proposing that the overall approach is pragmatic.

The pragmatism is evident in the first of the six themes chosen to lead the consultation paper on reform of the federal Privacy Act: “Enabling integrated services”. This set of reforms is aimed at facilitating horizontal information sharing across the federal government. Horizontal data sharing has, to date, been limited by the Privacy Act, since the vertical siloing of personal data within departments and agencies was initially seen as a way to protect privacy. Only those departments or agencies that had collected information directly from individuals had access to that data.

Horizontal sharing reflects two broad modernization goals. The first is to make it simpler for Canadians to access government services without having to provide or update the same information multiple times when dealing with programs housed in different departments. The second is less overt in the discussion paper, which describes:

[…] a new, purpose-based approach that allows government institutions to reuse and securely share personal data with each other and with their provincial, territorial, or municipal partners without asking for consent, if it clearly serves a public interest or directly benefits individuals, such as improving service delivery or program activities.

This is broad language that will surely include using data in analytics and AI systems to develop and deliver services. The consultation paper makes it clear that horizontal data sharing will be subject to strict conditions, which will include sharing only the information that is necessary for the stated purpose, sharing in the “least privacy-invasive way possible”, and having in place strong safeguards to protect privacy. (Note: Some of these issues are part of subsequent themes and proposals in the discussion document, and I will dig into them in later posts in this series.)

The document also promises that individuals will be informed of any reuse or sharing of their personal data, although it seems that this will be through plain language notices “published in a central registry before the data is shared or reused.” This transparency is important, but note how the technological infrastructure to ensure transparency seems already determined. It will not be done through individual notice, nor will it be through an Estonian-style citizen portal (called Data Tracker) which allows individuals to see who within government has accessed their personal data and when.

The general move towards horizontal data sharing is evident in the reforms of some provincial public sector data protection laws.
For example, Alberta’s new Protection of Privacy Act contains, in Part 3, a framework governing “data matching”, which is defined in s. 1(f) as “linking personal information between 2 or more databases or other electronic sources of information”. Nova Scotia’s revised Freedom of Information and Protection of Privacy Act allows for personal information to be shared horizontally if it is “necessary for the delivery of a common or integrated program or activity” (s. 70, s. 71(g)). Data linking is also permitted for research or statistical purposes in s. 72.

It is unsurprising, then, that a reform of the federal Privacy Act would seek to better enable horizontal data sharing. However, this objective is buried in the first theme in language about enabling better services and requiring individuals only to provide information once instead of multiple times. The broader goals of horizontal data sharing should be more explicit. It is important to note that the data sharing envisaged is not just horizontal within the federal government, since the discussion paper refers to the potential to share information with provincial, territorial or even municipal governments. There is nothing inherently wrong with sharing information across governments. In Canada we sometimes create unnecessary barriers to getting things done, especially across layers of government. Yet there are also substantial risks with horizontal data sharing. These can include unwarranted surveillance, and problematic uses of data in AI systems that drive decision-making. Safeguards, transparency and accountability will be crucial.

As part of the infrastructure to support horizontal data sharing, the consultation paper puts forward a model which would designate “certain programs or institutions as the official sources for specific types of personal data”. TBS admits that there would be set-up time required for this infrastructure, but that it will ultimately “reduce the need for repeated data collection, lower storage costs, and simplify updates to personal data for individuals by allowing them to maintain their data in fewer trusted locations.”

The combination of discussion of privacy rules and infrastructure in the same document is part of the ‘pragmatic privacy’ approach. It highlights one of the differences between Privacy Act reform housed at TBS rather than in the Department of Justice. Past consultation papers from Justice have focused on privacy principles and reform of specific statutory provisions, with little discussion of the infrastructure required. On this model, principle precedes design. By contrast, the TBS consultation paper has one eye on privacy principles and another on how the new data infrastructures that will be required might be built. Another difference is that past discussion papers have been very specific about what provisions of the Privacy Act are targeted for change and how they might be changed. This consultation document discusses legislative changes in more general terms.

One thing is clear: in this first theme, the discussion of reform of the Privacy Act is closely tied to new data infrastructure. Public sector data protection laws have an odd relationship to infrastructure. What the law allows and does not allow can dictate how data infrastructure is designed and built. Conversely, how data infrastructure is built can establish a reality to which privacy laws must adapt.
We seem to be at a transition point, where new data infrastructure is clearly contemplated (some of it is sketched out in this document). At the same time, Privacy Act reform is underway to enable the new ways of collecting and handling data that this infrastructure will support. Privacy reform is therefore in part about how privacy will be protected within this new infrastructure – but the new infrastructure, which will enable new uses of personal data across the federal government, will also transform long-held expectations about privacy that stem in part from what was and was not previously possible.

There is a fundamental paradigm shift here. This is a Privacy Act being rewritten for a government that has access to more data than ever before and has tools to do more with that data than could have been imagined in 1983. The nature and scale of data use has changed. It is a vision of a Privacy Act that is about enabling the use and reuse of data.
The next post in this series will consider the second theme in the document: Enhancing Accountability and Transparency.
Monday, 06 April 2026 08:59
Consultation on long-overdue Privacy Act reform promises a significant overhaul

Treasury Board Secretariat has published a discussion paper and launched a consultation into the long-overdue reform of the federal Privacy Act. The consultation is open until July 10, 2026. The Privacy Act, which came into force in 1983, has not had a significant overhaul since that time, although we have seen dramatic changes in how personal data are collected and used. The Privacy Act’s woeful state of disrepair is no secret. The statute has been the subject of multiple reports and recommendations for reform from the Standing Committee on Access to Information, Privacy and Ethics, the Office of the Privacy Commissioner of Canada, the Information Commissioner, and from several public consultations.

One thing that is different this time around is that responsibility for Privacy Act reform has shifted from the Department of Justice to Treasury Board Secretariat (TBS). Since Justice has failed to move privacy law reform forward over decades, this move offers some hope. Among other things, TBS is responsible for establishing and maintaining internal federal government policies on information management, privacy, automated decision-making, and cybersecurity. Taking responsibility for the legal framework that shapes these policies makes sense.

Reform of the Privacy Act is sorely needed. Both the nature and volume of information collected by government have dramatically changed since the early 1980s. So too have the uses to which such data are put. Another change is the desire of government (signaled in its strategy on the use of AI in the public service) to make greater use of data analytics and technology to derive value from data, to increase efficiency, and to improve service delivery. A 1980s-era privacy statute that relies on the strict vertical siloing of data to enhance privacy is not well adapted to an environment in which greater access to more complex data is seen as desirable. At the same time, the cybersecurity landscape has also dramatically changed, increasing the impact of privacy breaches and leaving Canadians more vulnerable as greater and greater volumes of data are collected. The Privacy Act must provide Canadians with modernized rules fit for our contemporary context. Although additional safeguards have been added over the years through directives and policies, these lack both the enforceability and independent oversight that privacy legislation can provide. Their scope of application across the public sector is also more limited. It is clear from the discussion document that TBS sees the reform process as a way to consolidate some of the approaches currently found in directives and policies and to extend them more broadly across the federal public sector.

In framing its approach to privacy reform, TBS has identified three overarching policy approaches:

· Enabling better services to Canadians
· Strengthening privacy protections for the digital age
· Updating foundations and oversight of the federal public sector privacy regime

By setting enabling better services to Canadians as a priority, TBS signals that its reforms will seek to remove some of the friction experienced by Canadians when accessing government services (notably the need to provide the same personal information to multiple different departments or agencies). In this sense, one of the goals of Privacy Act reform is to make personal data more reusable by government – with appropriate safeguards in place.
The safeguards and oversight of privacy measures are part of the second and third policy approaches.

The recommendations in the discussion paper are organized around six broad themes: enabling integrated services; enhancing accountability and transparency; advancing safeguards across the spectrum of data sensitivity; modernizing the foundation for privacy and trust; Indigenous Peoples’ access to, and protection of, their data; and updating the compliance framework. The themes and the discussion that accompanies them are not considered exhaustive or definitive, and feedback is invited.

There are a number of interesting features in this proposal for reform. Notably, it seeks to integrate Indigenous data sovereignty within a reformed Privacy Act. This builds upon considerable work done by First Nations, Métis and Inuit on data sovereignty issues over the years, as well as government efforts towards truth and reconciliation. The document also includes proposals to create new legal safeguards for public sector automated decision-making and to include (long overdue) privacy breach notification requirements. There is a proposal to formally recognize privacy as a fundamental right in the statute. New transparency measures are also proposed, both with respect to automated decision-making and the use of personal data by departments and agencies. There is also a recommendation to shift requests for access to one’s personal data to the Access to Information Act. Proposed changes would also add new compliance features, including order-making powers for the OPC; a new offence for the deliberate re-identification of anonymized data; expanded judicial remedies; and a mandatory five-year review of the Privacy Act.

Taken together, there is much that is new and interesting in this document. There is also still room for criticism, comment and discussion. I will be diving into the TBS recommendations for reform over the next few weeks. My comments will be structured around each of the themes in the document. Stay tuned!
Wednesday, 04 March 2026 09:20
BC Court of Appeal decision in Clearview AI saga is a win for privacy

The British Columbia Court of Appeal has ruled that the BC Privacy Commissioner’s enforcement order against Clearview AI is both reasonable and enforceable. Clearview AI is a US-based company that scrapes photographs from the internet, including from social media websites, to build a massive facial recognition database which it offers as a service to law enforcement (very broadly defined). At the time complaints were first lodged with Canadian privacy commissioners, the database held over 3 billion images. Today the number is estimated at around 70 billion. The order against the company followed a joint investigation report (from the federal Privacy Commissioner and the Commissioners of British Columbia, Alberta and Quebec).

The laws of BC, Alberta, and Canada all contain exceptions to the requirements of knowledge and consent for the collection, use and disclosure of personal information where that information is “publicly available”. Clearview AI sought to rely on that exception, arguing that it needed no consent to collect and use personal information such as photographs that were available on the internet. The term “publicly available” is defined in narrow terms in the regulations, and the BC Court of Appeal found that the Commissioner’s interpretation of this exception to exclude information posted on social media sites was reasonable. In another judicial review application that challenged a similar order against Clearview AI from the Alberta Privacy Commissioner, the Alberta Court of King’s Bench also found the interpretation to be reasonable. However, that court struck down part of the exception in the regulations, finding that it breached Clearview AI’s right to freedom of expression under the Canadian Charter of Rights and Freedoms. Charter arguments were not raised before the BC courts, and so the reasonable interpretation of the BC regulation stands in BC. (You can find my discussion of the Alberta court decision and its implications here.)

The Court also found reasonable the BC Commissioner’s ruling that the scraping of photographs from the internet to create a massive facial recognition database was not a purpose that “a reasonable person would consider appropriate in the circumstances.” This baseline privacy norm is shared by the laws of Canada, Alberta and BC. The result of the BC Court of Appeal decision is therefore a clear win for the BC Privacy Commissioner – and frankly, for BC residents. Although the window of time is still open for Clearview AI to seek leave to appeal to the Supreme Court of Canada, without a constitutional angle to this case it is hard to see why the Supreme Court would consider it necessary to review the BC Court of Appeal’s ruling on this interpretation of BC law.

What is perhaps most interesting about this decision is the strong signal it sends about privacy in a digital age. Clearview had argued (as it did in Alberta) that the province’s laws do not apply to its activities. The Court of Appeal disagreed, noting that the test for a “real and substantial connection” to the jurisdiction is necessarily contextual. It framed that context as “the internet as it exists today.” (at para 51) Writing for the unanimous court, Justice Iyer noted that “Clearview’s success as a business depends on its ability to acquire facial data on a global scale to build the databank on which its search engine runs” (at para 52).
She observed that the scale of the company’s activities and its inability to exclude BC from its data scraping “supports a conclusion that BC’s relationship to Clearview is substantial, not incidental” (at para 52). She also noted that BC’s private sector data protection law is quasi-constitutional in nature, making transnational enforcement in a global digital age important. She rejected Clearview AI’s argument that just because PIPA is important within BC, its reach should not extend beyond the province’s borders, stating that: “PIPA is simply one of many legislative and common law mechanisms through which the protection of personal privacy is achieved. The importance of the public interest in protecting that fundamental right is highly relevant in the sufficient connection analysis.” (at para 54)

Clearview AI’s business model and the scale of its activities were clearly relevant to the conclusion on jurisdiction. Justice Iyer stated that:

[T]his case is not about the ‘incidental touching’ of a person’s publicly available data. It is about a systematic acquisition of facial data regardless of jurisdiction that enables an enterprise to commercially exploit that information by disclosing it to law enforcement and other entities who are interested in connecting with an individual. (at para 61)

In these circumstances, the Court concluded that BC’s Personal Information Protection Act applies, giving the Commissioner jurisdiction. These findings on jurisdiction clearly reinforce both the importance of privacy protection and the significant impact of contemporary technology on privacy. Other statements in the decision also highlight this reality. In comments that are relevant to the anticipated reform (in the way that the arrival of the Easter Bunny is anticipated – with childlike faith that becomes cynical over the years) of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), Justice Iyer reminds us of the Federal Court of Appeal’s admonition that PIPEDA (and its substantially similar counterparts) “does not aim to balance competing rights, it balances a need [of organizations to use personal data] with a right” (at para 82). The BC Court of Appeal decision joins the growing list of decisions in Canada that highlight the importance of privacy rights – particularly in the face of invasive transnational technologies and business models.
Monday, 02 February 2026 08:36
New AI Medical Scribe Guidance from Ontario and BC Privacy Commissioners

The Ontario and British Columbia Information and Privacy Commissioners each released new AI medical scribe guidance on Privacy Day (January 28, 2026). This means that, along with Alberta and Saskatchewan, a total of four provincial information and privacy commissioners have now issued similar guidance. BC’s guidance is aimed at health care practitioners running their own practices and governed by the province’s Personal Information Protection Act. It does not extend to health authorities and hospitals that fall under the province’s Freedom of Information and Protection of Privacy Act. Ontario’s guidance is for both public institutions and physicians in private practice who are governed by the Personal Health Information Protection Act. This flurry of guidance on AI scribes shows how privacy regulators are responding to the very rapid adoption in the Canadian health sector of an AI tool that raises sometimes complicated privacy issues with a broad public impact.

At its most basic level, an AI medical scribe is a tool that records a doctor’s interaction with their patient. The recording is then transcribed by the scribe, and a summary is generated that can be cut and pasted by the doctor into the patient’s electronic medical record (EMR). (A minimal sketch of this pipeline appears at the end of this post.) The development and adoption of AI scribes has been rapid, in part because physicians have been struggling with both significant administrative burdens and burnout. This is particularly acute in the primary care sector. AI scribes offer the promise of better patient care (doctors are more focused on the patient as they are freed from notetaking during appointments), as well as potentially significantly reduced time spent on administrative work.

AI medical scribes raise a number of different privacy issues. These can include issues relating to the scribe tool itself (for example: how good is the data security of the scribe company? What kind of personal health information (PHI) is stored, where, and for how long? Are secondary uses made of de-identified PHI? Is the scribe company’s definition of de-identification consistent with the relevant provincial health information legislation?). They may also include issues around how the technology is adopted and implemented by the physician (including, for example, whether the physician retains the full transcription as well as the chart summary and for how long; what data security measures are in place within the physician’s practice; and how consent is obtained from patients to the use of this tool). As the BC IPC’s guidance notes, “What distinguishes an AI scribe’s collection of personal information from traditional notetaking with a pen and notepad is that there are many processes taking place with an AI scribe that are more complex, potentially more privacy invasive, and less obvious to the average person” (at 5).

AI scribes raise issues other than privacy that touch on patient data. In its guidance, Ontario’s IPC notes the human rights considerations raised by AI scribes and refers to its recent AI Principles issued jointly with the Ontario Human Rights Commission (which I have written about here). The quality of AI technologies depends upon the quality of their training data. Where training data does not properly represent the populations impacted by the tool, there can be bias and discrimination.
Concerns exist, for example, about how well AI scribes will function for people (or physicians) with accents, or for those with speech impaired by disease or disability. Certainly, the accuracy of personal health information that is recorded by the physician is a data protection issue; it is also a quality of health care issue. There are concerns that busy physicians may develop automation bias, increasingly trusting the scribe tool and reducing time spent on reviewing and correcting summaries – potentially leading to errors in the patient’s medical record.

AI scribes are being adopted by individual physicians, but they are also adopted and used within institutions – either with the engagement of the institution, or as a form of ‘shadow use’. A recent response to a breach by Ontario’s IPC relating to the use of a general-purpose AI scribe illustrates how complex the privacy issues may be in such a case (I have written about this incident here). In that case, the scribe tool ‘attended’ nephrology rounds at a hospital, transcribed the meeting, sent a summary to all 65 people on the mailing list for the meeting, and provided a link to the full transcript. The summary and transcript contained the sensitive personal information of the patients seen on those rounds. Complicating the matter was the fact that the physician whose scribe attended the meeting was no longer even at the hospital.

Privacy commissioners are not the only ones who have stepped up to provide guidance and support to physicians in the choice of AI scribe tools. Ontario MD, for example, conducted an evaluation of AI medical scribes, and is assisting in assessing and recommending scribing tools that are considered safe and compliant with Ontario law.

Of course, scribe technologies are not standing still. It is anticipated that these tools will evolve to include suggestions for physicians for diagnosis or treatment plans, raising new and complex issues that will extend beyond privacy law. As the BC guidance notes, some of these tools are already being used to “generate referral letters, patient handouts, and physician reminders for ordering lab work and writing prescriptions for medication” (at 2). Further, this is a volatile area where scribe tools are likely to be acquired by EMR companies to integrate with their offerings, reducing the number of companies and changing the profile of the tools. The mutable tools and volatile context might suggest that guidance is premature; but the AI era is presenting novel regulatory challenges, and this is an example of guidance designed not to consolidate and structure rules and approaches that have emerged over time, but rather to reduce risk and harm in a rapidly evolving context. Regulator guidance may serve other goals here as well, as it signals to developers and to EMR companies those design features which will be important for legal compliance. Both the BC and Ontario guidance caution that function creep will require those who adopt and use these technologies to be alert to potential new issues that may arise as the adopted tools’ functionalities change over time.
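As promised above, here is a minimal, purely illustrative sketch of the basic record, transcribe, and summarize flow of an AI scribe. All function names are hypothetical (no real vendor API is shown), and the comments flag where the privacy questions raised in the BC and Ontario guidance attach.

```python
# Purely illustrative sketch of an AI scribe pipeline; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class EncounterRecord:
    audio: bytes      # Where is the recording stored, and for how long?
    transcript: str   # Does the vendor retain the full transcript? Any secondary use?
    summary: str      # Typically the only artifact pasted into the EMR

def transcribe(audio: bytes) -> str:
    # In most real tools this step sends the recording to a vendor's cloud
    # service -- the point at which data security and retention questions arise.
    return "placeholder transcript"

def summarize(transcript: str) -> str:
    # Generative step: accuracy concerns attach here, which is why physician
    # review matters (see the automation-bias discussion above).
    return "placeholder chart summary"

def run_scribe(audio: bytes) -> EncounterRecord:
    # Consent should be obtained from the patient before recording begins.
    transcript = transcribe(audio)
    summary = summarize(transcript)
    return EncounterRecord(audio, transcript, summary)

record = run_scribe(b"simulated audio")
print(record.summary)
```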
Note: Daniel Kim and I have written a paper on the privacy and other risks related to AI medical scribes which is forthcoming in the TMU Law Review. A pre-print version can be found here: Scassa, Teresa and Kim, Daniel, AI Medical Scribes: Addressing Privacy and AI Risks with an Emergent Solution to Primary Care Challenges (January 07, 2025). (2025) 3 TMU Law Review, Available at SSRN: https://ssrn.com/abstract=5086289
Thursday, 22 January 2026 08:15
Ontario's Information & Privacy and Human Rights Commissioners issue joint Principles for the Responsible Use of Artificial Intelligence

Ontario’s Office of the Information and Privacy Commissioner (IPC) and the Ontario Human Rights Commission (OHRC) have jointly released a document titled Principles for the Responsible Use of Artificial Intelligence. Notably, this is the second collaboration of these two institutions on AI governance. Their first was a joint statement on the use of AI technologies in 2023, which urged the Ontario government to “develop and implement effective guardrails on the public sector’s use of AI technologies”. This new initiative, oriented towards “the Ontario public sector and the broader public sector” (at p. 1), is interesting because it deepens the cooperation between the IPC and the OHRC in relation to a rapidly evolving technology that is increasingly used in the public sector. It also fills a governance gap left by the province’s delay in developing its public sector AI regulatory framework.

In 2024, the Ontario government enacted the Enhancing Digital Security and Trust Act, 2024 (EDSTA), which contains a series of provisions addressing the use of AI in the broader public sector (which includes hospitals and universities). It also issued the Responsible Use of Artificial Intelligence Directive, which sets basic rules and principles for Ontario ministries and provincial agencies. The Directive is currently in force and is built around principles similar to those set out by the IPC and OHRC. It outlines a set of obligations for ministries and agencies that adopt and use AI systems. These include transparency, risk management, risk mitigation, and documentation requirements. The EDSTA, which would have a potentially broader application, creates a framework for transparency, accountability, and risk management obligations, but the actual requirements have been left to regulations. Those regulations will also determine to whom any obligations will apply. Although the EDSTA can apply to all actors within the public sector, broadly defined, its obligations can be tailored by regulations to specific departments or agencies, and can include or exclude universities and hospitals. There has been no obvious movement on the drafting of the regulations needed to breathe life into EDSTA’s AI provisions.

It is clear that AI systems will have both privacy and human rights implications, and that both the IPC and the OHRC will have to deal with complaints about such systems in relation to matters within their respective jurisdictions. As the Commissioners put it, the principles “will ground our assessment of organizations’ adoption of AI systems consistent with privacy and human rights obligations.” (at p. 1) The document clarifies what the IPC and OHRC expect from institutions. For example, conforming to the “Valid and reliable” principle will require compliance with independent testing standards, and objective evidence will be required to demonstrate that systems “fulfil the intended requirements for a specified use or application”. (at p. 3) The safety principle also requires demonstrable cybersecurity protection and safeguards for privacy and human rights. The Commissioners also expect institutions to provide opportunities for access and correction of individuals’ personal data both used in and generated by AI systems.
The “Human rights affirming” principle includes a caution that public institutions “should avoid the uniform use of AI systems with diverse groups”, since such practices could lead to adverse effects discrimination. The Commissioners also caution against uses of systems that may “unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another.” (at p. 6)

The Commissioners’ “Transparency” principle requires that public sector use of AI be visible. The IPC’s mandate covers both access to information and privacy. The Principles state that the documentation required for the “public account” of AI use “may include privacy impact assessments, algorithmic impact assessments, or other relevant materials.” (at p. 6) There must also be transparency regarding “the sources of any personal data collected and used to train or operate the system, the intended purposes of the system, how it is being used, and the ways in which its outputs may affect individuals or communities.” (at p. 6) The Principles also require that systems used in the public sector be understandable and explainable.

The accountability principle requires public sector institutions to document design and application choices and to be prepared to explain how the system works to an oversight body. They should also establish mechanisms to receive and respond to complaints and concerns. The Principles call for whistleblower protections to support the reporting of non-compliant systems.

The joint nature of the Principles highlights how issues relating to AI do not easily fall within the sole jurisdiction of any one regulator. It also highlights that the dependence of AI systems on data – often personal data or de-identified personal data – carries with it implications both for privacy and human rights. That the IPC and OHRC will have to deal with complaints and investigations that touch on AI issues is indisputable. In fact, the IPC has already conducted formal and informal investigations that touch on AI-enabled remote proctoring, AI scribes, and vending machines on university campuses that incorporate face-detection technologies. The Principles offer important insights into how these two oversight bodies see privacy and human rights intersecting with the adoption and use of AI technologies, and what organizations should be doing to ensure that the systems they procure, adopt and deploy are legally compliant.
Monday, 12 January 2026 08:45
Agentic AI transcription tool triggers health information data breach

A recent communication from the Office of the Information and Privacy Commissioner of Ontario (IPC) highlights how rapidly evolving and widely available artificial intelligence-enabled tools can pose significant privacy risks for organizations. The communication in question was a letter to an unnamed hospital (“the hospital”) which had reported a data breach to the IPC. The letter reviewed the breach, set out a series of recommendations for the hospital, and requested an update on the hospital’s response to the recommendations by late January 2026. Although the breach occurred in the health sector, with its strict privacy laws, the lessons extend more broadly to other sectors as well.

The breach involved the use of a transcription tool of a kind now regularly in use by many physicians to document physician-patient interactions. AI scribe tools record and transcribe physician-patient interactions and generate summaries suitable for inclusion in electronic medical records. These functions are designed to relieve physicians of significant note-taking and administrative burdens. Although there are many task-specific AI scribe tools now commercially available, in this case the tool used was the commonly available Otter.ai transcription tool, designed for use in a broad range of contexts.

This breach was complicated by the fact that the Otter.ai tool acted as an AI agent of the physician who had downloaded it. AI agents can perform a series of tasks with a certain level of autonomy. In this case, the tool can be integrated with different communications platforms, as well as with the user’s digital calendar (such as Outlook). Essentially, Otter.ai can scan a user’s digital calendar and join scheduled meetings. The tool then transcribes and summarizes the meeting. It can also share both the summary and the transcription with other meeting participants – all without direct user intervention.

The physician had downloaded Otter.ai and provided it with access to his calendar over a year after he left the hospital that reported the breach. Because he had used his personal email, rather than his hospital email, for internal communications while at that hospital, his departure in 2023 and the deactivation of his hospital email account had not led to the removal of his personal email from meeting invitation lists. When he downloaded Otter.ai in September 2024 and gave it access to his digital calendar, he was still receiving invitations from the hospital to hepatology rounds. Although the physician did not attend these rounds following his departure, his AI agent did. It attended a September 2024 meeting, produced a transcript and meeting summary, and emailed the summary with a link to the full transcript to all 65 individuals on the meeting invitation. The breach was presumably reported to the hospital by one or more of the email recipients. Seven patients had been seen during the hepatology rounds, and the transcript and summary contained their sensitive personal health information.

The hospital took immediate action to address the breach. It cancelled the digital invitation to the physician and contacted all recipients of the summary and transcript asking them to promptly delete all copies of the rogue email and attachments. It also sent a notice to all staff reminding them that they are not permitted to use non-approved tools in association with their hospital credentials and/or devices.
It contacted the physician who had used Otter.ai and ensured that he removed all digital connections with the hospital. It also requested that he contact Otter.ai to ask that all information related to the meeting be deleted from its systems. Patients affected by the breach were also notified by the hospital. To prevent future breaches, the hospital created firewalls to block on-site access to non-approved scribing tools, updated its training materials to address the use of unapproved tools, and revised its Appropriate Use of Information and Information Technology policy. The revised policy emphasizes the importance of using only hospital-approved IT resources. It also advises regular review of participant lists for meetings to ensure that AI tools or automated agents are not included.

In addition to these steps, the IPC made further recommendations, including that the hospital itself contact Otter.ai to request the deletion of any patient information that it may have retained. Twelve of the sixty-five email recipients had not confirmed that they had deleted the emails, and the IPC recommended that the hospital follow up to ensure this had been done. Updates to the hospital’s breach protocol were also recommended, as well as changes to offboarding procedures to ensure that access to hospital information systems is “immediately revoked” when personnel leave the hospital. The IPC also recommended the use of mandatory meeting lobbies for all virtual meetings so that unauthorized AI agents are not permitted access to meetings.

This incident highlights some of the important challenges faced by hospitals – as well as by many other organizations – with the development of widely available generative and agentic AI tools. Where sophisticated and powerful tools in the workplace were once more easily controlled by the employer, it is increasingly the case that employees have independent access to such tools. Shadow AI usage is a growing concern for organizations, as it may pose unexpected – and even undetected – risks for the privacy and confidentiality of information. Rapidly evolving agentic AI tools – with their capacity to act independently – may also create challenges, particularly where employees are not fully familiar with their full range of functions or default settings.

Medical associations and privacy commissioners’ offices have begun developing guidance for the use of AI scribes in medical practice (see, e.g., guidance from the Saskatchewan and Alberta OIPCs). Ontario MD has even gone so far as to develop a list of approved AI scribe vendors – ones that they consider to meet privacy and security standards. However, the tool adopted in this case was designed for all contexts and is available in both free and paid versions, which only serves to highlight the risks and challenges in this area. The widespread availability of such tools poses important governance issues for privacy- and security-conscious organizations. Even where an organization subscribes to a particular tool that has been customized to its own privacy and security standards, employees still have access to many other tools that they might already use in other contexts. The risk that an employee will simply decide to use a tool with which they are already familiar and comfortable must be considered. More generic transcription tools may also pose other risks in the medical context, since they are not specifically trained or designed for a particular context such as health care.
For example, they may be less adept at dealing with medical terminology, prescription drug names, or other terms of art. This could increase the incidence of errors in any transcriptions or summaries. The risk that data collected through unauthorized tools may be used to train AI systems also underscores the potential consequences for privacy and confidentiality.

Under Ontario’s Personal Health Information Protection Act (PHIPA), a health care custodian is not authorized to share personal health information with third parties without the patient’s express consent to do so. Using health-care related transcriptions or voice recordings to train third party AI systems without this express consent is not permitted. Although some services indicate that they only use “de-identified” information for system training, the term “de-identified” may not be defined in the same way as in PHIPA. For example, stripping information of all direct identifiers (names, ID numbers, etc.) does not count as de-identification under PHIPA, which requires that, in addition to the removal of all direct identifiers, it is also necessary to remove information “for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual”. (A purely illustrative sketch of this distinction appears at the end of this post.)

This incident highlights the vulnerability of sensitive personal information in a context in which a proliferation of novel (and evolving) technological tools for personal and professional use is rampant. Organizations must act quickly to assess and mitigate risks, and this will require regular engagement with and training of personnel.

Note: A pre-print version of my research paper with Daniel Kim on AI Scribes can be found here.
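As flagged above, here is a minimal, purely illustrative sketch of the distinction between merely stripping direct identifiers and PHIPA’s de-identification standard. The record and field names are hypothetical; the point is simply that quasi-identifiers survive the naive pass and may still foreseeably identify the patient.

```python
# Purely illustrative; the record and field names are hypothetical.
DIRECT_IDENTIFIERS = {"name", "health_card_number", "email"}

def strip_direct_identifiers(record: dict) -> dict:
    # The naive notion of "de-identification": remove direct identifiers only.
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "health_card_number": "1234-567-890",
    "email": "jdoe@example.com",
    "postal_code": "K1A 0A6",    # quasi-identifiers survive the naive pass;
    "birth_date": "1980-01-01",  # in combination they may still foreseeably
    "diagnosis": "hepatitis C",  # identify the individual, so PHIPA's
}                                # standard is not met by this step alone

print(strip_direct_identifiers(record))
```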
Tuesday, 02 September 2025 06:48
Right to Be Forgotten Findings Raise Issues About Privacy Commissioner's Powers and Canadian Privacy Law Reform
Canada’s Privacy Commissioner has released a set of findings that recognize a right to be forgotten (RTBF) under the Personal Information Protection and Electronic Documents Act (PIPEDA). The complainant’s long legal journey began in 2017, when they complained that a search of their name in Google’s search engine returned news articles from many years earlier regarding an arrest and criminal charges relating to engaging in sexual activity without disclosing their HIV-positive status. Although these reports were accurate at the time they were published, the charges were stayed shortly afterwards because the complainant posed no danger to public health. Charging guidelines for the offence in question indicated that no charges should be laid where there is no realistic possibility that HIV could be transmitted. The search results contain none of that information. Instead, they publicly disclose the HIV status of the complainant, and they create the impression that their conduct was criminal in nature. As a result of the linking of their name to these search results, the complainant experienced – and continues to experience – negative consequences including social stigma, loss of career opportunities and even physical violence.

Google’s initial response to the complaint was to challenge the jurisdiction of the Privacy Commissioner to investigate the matter under PIPEDA, arguing that PIPEDA did not apply to its search engine functions. The Commissioner referred this issue to the Federal Court, which found that PIPEDA applied. That decision was (unsuccessfully) appealed by Google to the Federal Court of Appeal. When the matter was not appealed further to the Supreme Court of Canada, the Commissioner began his investigation, which resulted in the current findings. Google has indicated that it will not comply with the Commissioner’s recommendation to delist the articles so that they do not appear in a search using the complainant’s name. This means that it is likely that an application will be made to the Federal Court for a binding order. The matter is therefore not yet resolved.

This post considers three issues. The first relates to the nature and scope of the RTBF in PIPEDA, as found by the Commissioner. The second relates to the Commissioner’s woeful lack of authority when it comes to the enforcement of PIPEDA. Law reform is needed to address this, yet Bill C-27, which would have given greater enforcement powers to the Commissioner, died on the order paper. The government’s intentions with respect to future reform and its timing remain unclear. The third point also addresses PIPEDA reform. I consider the somewhat fragile footing for the Commissioner’s version of the RTBF given how Bill C-27 had proposed to rework PIPEDA’s normative core.

The Right to be Forgotten (RTBF) and PIPEDA

In his findings, the Commissioner grounds the RTBF in an interpretation of s. 5(3) of PIPEDA:

5(3) An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.

This is a core normative provision in PIPEDA. For example, although organizations may collect personal information with the consent of the individual, they cannot do so if the collection is for purposes that a reasonable person would not consider appropriate in the circumstances.
This provision (or at least one very similar to it in Alberta’s Personal Information Protection Act) was recently found to place important limits on the scraping of photographs from the public internet by Clearview AI to create a massive facial recognition (FRT) database. Essentially, even though the court found that photographs posted on the internet were publicly available and could be collected and used without consent, they could not be collected and used to create an FRT database because this was not a purpose a reasonable person would consider appropriate in the circumstances.

The RTBF would function in much the same way when it comes to the operations of platform search engines. Those search engines – such as Google’s – collect, use and disclose information found on the public internet when they return search results to users in response to queries. When searches involve individuals, search results may direct users to personal information about that individual. That is acceptable – as long as the information is being collected, used and disclosed for purposes a reasonable person would consider appropriate in the circumstances. In the case of the RTBF, according to the Commissioner, the threshold will be crossed when the privacy harms caused by the disclosure of the personal information in the search results outweigh the public interest in having that information shared through the search function. In order to make that calculation, the Commissioner articulates a set of criteria that can be applied on a case-by-case basis. The criteria include:

a. Whether the individual is a public figure (e.g. a public office holder, a politician, a prominent business person, etc.);
b. Whether the information relates to an individual’s working or professional life as opposed to their private life;
c. Whether the information relates to an adult as opposed to a minor;
d. Whether the information relates to a criminal charge that has resulted in a conviction or where the charges were stayed due to delays in the criminal proceedings;
e. Whether the information is accurate and up to date;
f. Whether the ability to link the information with the individual is relevant and necessary to the public consideration of a matter under current controversy or debate;
g. The length of time that has elapsed since the publication of the information and the request for de-listing. (at para 109)

In this case, the facts were quite compelling, and the Commissioner had no difficulty finding that the information at issue caused great harm to the complainant while providing no real public benefit. This led to the de-listing recommendation – which would mean that a search for the complainant’s name would no longer turn up the harmful and misleading articles – although the content itself would remain on the web and could be arrived at using other search criteria.

The Privacy Commissioner’s ‘Powers’

Unlike his counterparts in other jurisdictions, including the UK, EU member countries, and Quebec, Canada’s Privacy Commissioner lacks suitable enforcement powers. PIPEDA was Canada’s first federal data protection law, and it was designed to gently nudge organizations into compliance. It has been effective up to a point. Many organizations do their best to comply proactively, and the vast majority of complaints are resolved prior to investigation.
The Privacy Commissioner’s ‘Powers’

Unlike his counterparts in other jurisdictions, including the UK, EU member countries, and Quebec, Canada’s Privacy Commissioner lacks suitable enforcement powers. PIPEDA was Canada’s first federal data protection law, and it was designed to gently nudge organizations into compliance. Many organizations do their best to comply proactively, and the vast majority of complaints are resolved prior to investigation. Those that result in a finding of a breach of PIPEDA contain recommendations to bring the organization into compliance, and in many cases organizations voluntarily comply with the recommendations. The legislation works – up to a point.

The problem is that the data economy has dramatically evolved since PIPEDA’s enactment. There is a great deal of money to be made from business models that extract large volumes of data that are then monetized in ways that are beyond the comprehension of individuals, who have little choice but to consent to obscure practices laid out in complex privacy policies in order to receive services. Where complaint investigations result in recommendations that run up against these extractive business models, the response is increasingly to disregard the recommendations. Although there is still the option for a complainant or the Commissioner to apply to the Federal Court for an order, the statutory process set out in PIPEDA requires the Federal Court to hold a hearing de novo. In other words, notwithstanding the outcome of the investigation, the court hears both sides and draws its own conclusions. The Commissioner, despite his expertise, is owed no deference.

In the proposed Consumer Privacy Protection Act (CPPA) that was part of the now-defunct Bill C-27, the Commissioner was poised to receive some important new powers, including order-making powers and the ability to recommend the imposition of steep administrative monetary penalties. Admittedly, these new powers came with some clunky constraints that would have put the Commissioner on training wheels in the privacy peloton of his international counterparts. Still, it was a big step beyond the current process of having to ask the Federal Court to redo his work and reach its own conclusions. Bill C-27, however, died on the order paper with the last federal election. The current government is likely in the process of pep-talking itself into reintroducing a PIPEDA reform bill, but as yet there is no clear timeline for action. Until a new bill is passed, the Commissioner is going to have to make do with his current woefully inadequate enforcement tools.

The Dangers of PIPEDA Reform

Assuming a PIPEDA reform bill will contain enforcement powers better adapted to a data-driven economy, one might be forgiven for thinking that PIPEDA reform will support the nascent RTBF in Canada (assuming that the Federal Court agrees with the Commissioner’s approach). The problem, however, is that there could be some uncomfortable surprises in PIPEDA reform. Indeed, this RTBF case offers a good illustration of how tinkering with PIPEDA may unsettle current interpretations of the law – and might do so at the expense of privacy rights. As noted above, the Commissioner grounded the RTBF on the strong and simple principle at the core of PIPEDA, expressed in s. 5(3), which I repeat here for convenience:

5(3) An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.

The Federal Court of Appeal has told us that this is a normative standard – in other words, the fact that millions of otherwise reasonable people may have consented to certain terms of service does not on its own make those terms something that a reasonable person would consider appropriate in the circumstances. The terms might be unduly exploitative but leave individuals with little or no choice.
The reasonableness inquiry sets a standard for the level of privacy protection an individual should be entitled to in a given set of circumstances.

Notably, Bill C-27 sought to disrupt the simplicity of s. 5(3), replacing it with the following:

12 (1) An organization may collect, use or disclose personal information only in a manner and for purposes that a reasonable person would consider appropriate in the circumstances, whether or not consent is required under this Act.
(2) The following factors must be taken into account in determining whether the manner and purposes referred to in subsection (1) are appropriate:

(a) the sensitivity of the personal information;

(b) whether the purposes represent legitimate business needs of the organization;

(c) the effectiveness of the collection, use or disclosure in meeting the organization’s legitimate business needs;

(d) whether there are less intrusive means of achieving those purposes at a comparable cost and with comparable benefits; and

(e) whether the individual’s loss of privacy is proportionate to the benefits in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual.

Although s. 12(1) is not so different from s. 5(3), the government saw fit to add a set of criteria in s. 12(2) that would shape any analysis in a way that leans the decision-maker towards accommodating the business needs of the organization over the privacy rights of the individual. Paragraphs 12(2)(b) and (c) explicitly require the decision-maker to think about the legitimate business needs of the organization and the effectiveness of the particular collection, use or disclosure in meeting those needs. In an RTBF case, this might mean thinking about how indexing the web and returning search results meets the legitimate business needs of a search engine company, and does so effectively. Paragraph 12(2)(d) then asks whether there are “less intrusive means of achieving those purposes at a comparable cost and with comparable benefits”. This too focuses on the organization. Not only is this criterion heavily weighted in favour of business in terms of its substance – less intrusive means must be of comparable cost – the issues it raises are ones about which an individual challenging the practice would have great difficulty producing evidence. While the Commissioner has greater resources, these are still limited. The fifth criterion returns us to the issue of privacy, but it asks whether “the individual’s loss of privacy is proportionate to the benefits [to the organization] in light of the measures, technical or otherwise, implemented by the organization to mitigate the impacts of the loss of privacy on the individual”.

The criteria in s. 12(2) fall over themselves to nudge a decision-maker towards finding privacy-invasive practices to be “for purposes that a reasonable person would consider appropriate in the circumstances” – not because a reasonable person would find them appropriate in light of the human right to privacy, but because an organization has a commercial need for the data and has fiddled about a bit to attempt to mitigate the worst of the impacts. Privacy essentially becomes what the business model will allow – the reasonable person is now an accountant.

It is also worth noting that by the time a reform bill is reintroduced (and, if we dare to imagine it, actually passed), the Federal Court may have weighed in on the RTBF under PIPEDA, putting us another step closer to clarifying whether there is a RTBF in Canada’s private sector privacy law. Assuming that the Federal Court largely agrees with the Commissioner and his approach, if something like s. 12 of the CPPA becomes part of a new law, the criteria developed by the Commissioner for the reasonableness assessment in RTBF cases will be supplanted by the rather ugly list in s. 12(2). Not only will this cast doubt on the continuing existence of a RTBF, it may well doom it.
And this is not the only established interpretation of the law that would be unsettled by such a change. The Commissioner’s findings in the RTBF investigation demonstrate the flexibility and simplicity of s. 5(3). When a PIPEDA reform bill returns to Parliament, let us hope that s. 12(2) is no longer part of it.
Tuesday, 27 May 2025 05:18
New Clearview AI Decision Has Implications for OpenAI Investigation

The Alberta Court of King’s Bench has issued a decision in Clearview AI’s application for judicial review of an Order made by the province’s privacy commissioner. The Commissioner had ordered Clearview AI to take certain steps following a finding that the company had breached Alberta’s Personal Information Protection Act (PIPA) when it scraped billions of images – including those of Albertans – from the internet to create a massive facial recognition database marketed to police services around the world. The court’s decision is a partial victory for the Commissioner. It is interesting and important for several reasons – including for its relevance to generative AI systems and the ongoing joint privacy investigation into OpenAI. These issues are outlined below.

Brief Background

Clearview AI became notorious in 2020 following a New York Times article which broke the story on the company’s activities. Data protection commissioners in Europe and elsewhere launched investigations, which overwhelmingly concluded that the company violated applicable data protection laws. In Canada, the federal privacy commissioner joined forces with the Quebec, Alberta and British Columbia (BC) commissioners, each of which has private sector jurisdiction. Their joint investigation report concluded that their respective laws applied to Clearview AI’s activities, as there was a real and substantial connection to their jurisdictions. They found that Clearview collected, used and disclosed personal information without consent, and that no exceptions to consent applied. The key exception advanced by Clearview AI was the exception for “publicly available information”. The Commissioners found that the scope of this exception, which was similarly worded in the federal, Alberta and BC laws, required a narrow interpretation, and that the definition in the regulations enacted under each of these laws did not include information published on the internet. The commissioners also found that, contrary to shared legislative requirements, the collection and use of the personal information by Clearview AI was not for a purpose that a reasonable person would consider appropriate in the circumstances.

The report of findings made a number of recommendations that Clearview ultimately did not accept. The Quebec, BC and Alberta commissioners all have order-making powers (which the federal commissioner does not). Each of these commissioners ordered Clearview to correct its practices, and Clearview sought judicial review of each of these orders. The decision of the BC Supreme Court (which upheld the Commissioner’s order) is discussed in an earlier post. The decision from Quebec has yet to be issued.

In Alberta, Clearview AI challenged the commissioner’s jurisdiction on the basis that Alberta’s PIPA did not apply to its activities. It also argued that the Commissioner’s interpretation of “publicly available information” was unreasonable. In the alternative, Clearview AI argued that ‘publicly available information’, as interpreted by the Commissioner, was an unconstitutional violation of its freedom of expression. It also contested the Commissioner’s finding that Clearview did not have a reasonable purpose for collecting, using and disclosing the personal information.

The Jurisdictional Question

Courts have established that Canadian data protection laws will apply where there is a real and substantial connection to the relevant jurisdiction.
Clearview AI argued that it was a US-based company that scraped most of its data from social media websites mainly hosted outside of Canada, and that its activities therefore took place outside of Canada and its provinces. Yet, as Justice Feasby noted, “[s]trict adherence to the traditional territorial conception of jurisdiction would make protecting privacy interests impossible when information may be located everywhere and nowhere at once” (at para 50). He noted that there was no evidence regarding the actual location of the servers of social media platforms, and that Clearview AI’s scraping activities went beyond social media platforms. Justice Feasby ruled that he was entitled to infer from available evidence that images of Albertans were collected from servers located in Canada and in Alberta. He observed that, in any event, Clearview marketed its services to police in Alberta, and its voluntary decision to cease offering those services did not alter the fact that it had been doing business in Alberta and could do so again. Further, the information at issue in the order was personal information of Albertans. All of this gave rise to a real and substantial connection with Alberta.

Publicly Available Information

The federal Personal Information Protection and Electronic Documents Act (PIPEDA) contains an exception to the consent requirement for “publicly available information”. The meaning of this term is defined in the Regulations Specifying Publicly Available Information. The relevant category is found in s. 1(e), which specifies “personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.” Alberta’s PIPA contains a similar exception (as does BC’s law), although the wording is slightly different. Section 7(e) of the Alberta regulations creates an exception to consent where:

(e) the personal information is contained in a publication, including, but not limited to, a magazine, book or newspaper, whether in printed or electronic form, but only if

(i) the publication is available to the public, and

(ii) it is reasonable to assume that the individual that the information is about provided that information; [My emphasis]
In their joint report of findings, the Commissioners found that their respective “publicly available information” exceptions did not include social media platforms. Clearview AI made much of the wording of Alberta’s exception, arguing that even if it could be said that the PIPEDA language excluded social media platforms, the use of the words “including but not limited to” in the Alberta regulation made it clear that the list was not closed, nor was it limited to the types of publications referenced.

In interpreting the exceptions for publicly available information, the Commissioners emphasized the quasi-constitutional nature of privacy legislation. They found that privacy rights should receive a broad and expansive interpretation, and that the exceptions to those rights should be interpreted narrowly. The commissioners also found significant differences between social media platforms and the more conventional types of publications referenced in their respective regulations, making it inappropriate to broaden the exception. Justice Feasby, applying reasonableness as the appropriate standard of review, found that the Alberta Commissioner’s interpretation of the exception was reasonable.

Freedom of Expression

Had the court’s decision ended there, the outcome would have been much the same as the result in the BC Supreme Court. However, in this case, Clearview AI also challenged the constitutionality of the regulations. It sought a declaration that if the exception were interpreted as limited to books, magazines and comparable publications, then it violated Clearview’s freedom of expression under s. 2(b) of the Canadian Charter of Rights and Freedoms. Clearview AI argued that its commercial purpose of scraping the internet to provide information services to its clients was expressive, and therefore protected speech. Justice Feasby noted that Clearview’s collection of internet-based information was bot-driven and not engaged in by humans. Nevertheless, he found that “scraping the internet with a bot to gather images and information may be protected by s. 2(b) when it is part of a process that leads to the conveyance of meaning” (at para 104).

Interestingly, Justice Feasby noted that since Clearview no longer offered its services in Canada, any expressive activities took place outside of Canada, and were thus arguably not protected by the Charter. However, he acknowledged that the services had at one point been offered in Canada and could be again. He observed that “until Clearview removes itself permanently from Alberta, I must find that its expression in Alberta is restricted by PIPA and the PIPA Regulation” (at para 106).

Having found a prima facie breach of s. 2(b), Justice Feasby considered whether this was a reasonable limit, demonstrably justified in a free and democratic society, under s. 1 of the Charter. The Commissioner argued that the expression at issue in this case was commercial in nature and thus of lesser value. Justice Feasby was not persuaded by category-based assumptions of value; rather, he preferred an approach in which the regulation of commercial expression is consistent with and proportionate to its character. Justice Feasby found that the Commissioner’s reasonable interpretation of the exception in s. 7 of the regulations meant that it would exclude social media platforms or “other kinds of internet websites where images and personal information may be found” (at para 118).
He noted that this is a source-based exception – in other words, some publicly available information may be used without knowledge or consent, but not other similar information. The exclusion depends on the source, and not the purpose of use, of the personal information. Justice Feasby expressed concern that the same exception that would exclude the scraping of images from the internet for the creation of a facial recognition database would also apply to the activities of search engines widely used by individuals to gain access to information on the internet. He thus found that the publicly available information exception was overbroad, stating: “Without a reasonable exception to the consent requirement for personal information made publicly available on the internet without use of privacy settings, internet search service providers are subject to a mandatory consent requirement when they collect, use and disclose such personal information by indexing and delivering search results” (at para 138). He stated: “I take judicial notice of the fact that search engines like Google are an important (and perhaps the most important) way individuals access information on the internet” (at para 144).

Justice Feasby also noted that while it was important to give individuals some level of control over their personal information, “it must also be recognized that some individuals make conscious choices to make their images and information discoverable by search engines and that they have the tools in the form of privacy settings to prevent the collection, use, and disclosure of their personal information” (at para 143). His constitutional remedy – to strike the words “including, but not limited to magazines, books, and newspapers” from the regulation – was designed to allow “the word ‘publication’ to take its ordinary meaning which I characterize as ‘something that has been intentionally made public’” (at para 149).

The Belt and Suspenders Approach

Although excising part of the publicly available information definition seems like a major victory for Clearview AI, in practical terms it is not. This is because of what the court refers to as the law’s “belt and suspenders approach”. This metaphor suggests that there are two routes to keep up privacy’s pants – and loosening the belt does not remove the suspenders. In this case, the suspenders are located in the clause, found in PIPA as well as in its federal and BC counterparts, that limits the collection, use and disclosure of personal information to only that which “a reasonable person would consider appropriate in the circumstances”. The court ruled that the Commissioner’s conclusion that the scraping of personal information was not for purposes that a reasonable person would consider appropriate in the circumstances was reasonable and should not be overturned. This approach, set out in the joint report of findings, emphasized that the company’s mass data scraping involved over 3 billion images of individuals, including children. It was used to create biometric face prints that would remain in Clearview’s databases even if the source images were removed from the internet, and it was carried out for commercial purposes. The commissioners also found that the purposes were not related to the reasons why individuals might have shared their photographs online, could be used to the detriment of those individuals, and created the potential for a risk of significant harm.
Continuing with his analogy to search engines, Justice Feasby noted that Clearview AI’s use of publicly available images was very different from the use of the same images by search engines. The different purposes are essential to the reasonableness determination. Justice Feasby states: “The ‘purposes that are reasonable’ analysis is individualized such that a finding that Clearview’s use of personal information is not for reasonable purposes does not apply to other organizations and does not threaten the operations of the internet” (at para 159). He noted that the commercial dimensions of the use are not determinative of reasonableness. However, he observed that “where images and information are posted to social media for the purpose of sharing with family and friends (or prospective friends), the commercialization of such images and information by another party may be a relevant consideration in determining whether the use is reasonable” (at para 160).

The result is that Clearview AI’s scraping of images from the public internet violates Alberta’s PIPA. The court further ruled that the Commissioner’s order was clear and specific, and capable of being implemented. Justice Feasby required Clearview AI to report within 50 days on its good faith progress in taking steps to cease the collection, use and disclosure of images and biometric data collected from individuals in Alberta, and to delete images and biometric data in its database that are from individuals in Alberta.

Harmonized Approaches to Data Protection Law in Canada

This decision highlights some of the challenges to the growing collaboration and cooperation of privacy commissioners in Canada when it comes to interpreting key terms and concepts in substantially similar legislation. Increasingly, the commissioners engage in joint investigations where complaints involve organizations operating in multiple jurisdictions in Canada. While this occurs primarily in the private sector context, it is not exclusively the case, as a recent joint investigation between the BC and Ontario commissioners into a health data breach demonstrates. Joint investigations conserve regulator resources and save private sector organizations from having to respond to multiple similar and concurrent investigations. In addition, joint investigations can lead to harmonized approaches and interpretations of shared concepts in similar legislation. This is a good thing for creating certainty and consistency for those who do business across Canadian jurisdictions.

However, harmonized approaches are vulnerable to multiple judicial review applications, as was the case following the Clearview AI investigation. Although the BC Supreme Court found that the BC Commissioner’s order was reasonable, the Alberta Court of King’s Bench decision demonstrates that a common front can be fractured. Justice Feasby found that a slight difference in wording between Alberta’s regulations and those in BC and at the federal level was sufficient to justify finding the scope of Alberta’s publicly available information exception to be unconstitutional. Harmonized approaches may also be vulnerable to unilateral legislative change. In this respect, it is worth noting that an Alberta report on the impending reform of PIPA recommends “that the Government take all necessary steps, including through proposing amendments to the Personal Information Protection Act, to improve alignment of all provincial privacy legislation, including in the private, public and health sectors” (at p. 13).
The Elephant in the Room: Generative AI and Data Protection Law in Canada

In his reasons, Justice Feasby made Google’s search functions a running comparison for Clearview AI’s data scraping practices. Perhaps a better example would have been the data scraping that takes place in order to train generative AI models. However, the court may have avoided that example because there is an ongoing investigation by the Alberta, Quebec, BC and federal commissioners into OpenAI’s practices. The findings in that investigation are overdue – perhaps the delay has, at least in part, been caused by anticipation of what might happen with the Alberta Clearview AI judicial review.

The Alberta decision will likely present a conundrum for the commissioners. Reading between the lines of Justice Feasby’s decision, it is entirely possible that he would find that the scraping of the public internet to gather training data for generative AI systems would both fall within the exception for publicly available information and be for a purpose that a reasonable person would consider appropriate in the circumstances. Generative AI tools are now widely used – more widely even than search engines, since these tools are now also embedded in search engines themselves. To find that personal information that may be indiscriminately found on the internet cannot be collected and used in this way because consent is required is fundamentally impractical. In the EU, the legitimate interest exception in the GDPR provides latitude for use in this way without consent, and recent guidance from the European Data Protection Supervisor suggests that legitimate interests, combined where appropriate with Data Protection Impact Assessments, may address key data protection issues. In this sense, the approach taken by Justice Feasby seems to carve a path for data protection in a GenAI era in Canada by allowing data scraping of publicly available sources on the internet in principle, subject to the limit that any such collection, or any ensuing use or disclosure of the personal information, must be for purposes that a reasonable person would consider appropriate in the circumstances.

However, this is not a perfect solution. In the first place, unlike the EU approach, which ensures that other privacy protective measures (such as privacy impact assessments) govern this kind of mass collection, Canadian law remains outdated and inadequate. Further, the publicly available information exceptions – including Alberta’s, even after its constitutional nip and tuck – also emphasize that, to use the language of Alberta’s PIPA, it must be “reasonable to assume that the individual that the information is about provided the information”. In fact, there will be many circumstances in which individuals have not provided the information posted online about them. This is the case with photos from parties, family events and other social interactions. Further, social media – and the internet as a whole – is full of non-consensual images, gossip, anecdotes and accusations. The solution crafted by the Alberta Court of King’s Bench is therefore only a partial solution. A legitimate interest exception would likely serve much better in these circumstances, particularly if it is combined with broader governance obligations to ensure that privacy is adequately considered and assessed. Of course, before this happens, the federal government’s privacy reform measures in Bill C-27 must be resuscitated in some form or another.
Monday, 24 March 2025 06:50
Routine Retail Facial Recognition Systems an Emerging Privacy No-Go Zone in Canada?

The Commission d’accès à l’information du Québec (CAI) has released a decision regarding a pilot project to use facial recognition technology (FRT) in Métro stores in Quebec. When this is paired with a 2023 investigation report of the BC Privacy Commissioner regarding the use of FRT in Canadian Tire stores in that province, there seems to be an emerging consensus around how privacy law will apply to the use of FRT in the retail sector in Canada.

Métro had planned to establish a biometric database to enable the use of FRT at certain of its stores operating under the Métro, Jean Coutu and Super C brands, on a pilot basis. The objective of the system was to reduce shoplifting and fraud. The system would function in conjunction with video surveillance cameras installed at the entrances and exits to the stores. The reference database would consist of images of individuals over the age of majority who had been linked to security incidents involving fraud or shoplifting. Images of all shoppers entering the stores would be captured on the video surveillance cameras and then converted to biometric face prints for matching with the face prints in the reference database. The CAI initiated an investigation after receiving notice from Métro of the creation of the biometric database. The company agreed to put its launch of the project on hold pending the results of the investigation.

The Quebec case involved the application of Quebec’s Act respecting the protection of personal information in the private sector (PPIPS) as well as its Act to establish a legal framework for information technology (LFIT). The LFIT requires an organization that is planning to create a database of “biometric characteristics and measurements” to disclose this fact to the CAI no later than 60 days before it is to be used. The CAI can impose requirements, and can also order the use suspended or the database destroyed if it is not in compliance with any such orders or if it “otherwise constitutes an invasion of privacy” (LFIT art. 45).

Métro argued that the LFIT required individual consent only for the use of a biometric database to ‘confirm or verify’ the identity of an individual (LFIT art. 44). It maintained that its proposed use was different – the goal was not to confirm or verify the identities of shoppers; rather, it was to identify ‘high risk’ shoppers based on matches with the reference database. The CAI rejected this approach, noting the sensitivity of biometric data. Given the quasi-constitutional status of Canadian data protection laws, the CAI found that a ‘large and liberal’ approach to interpretation of the law was required. The CAI found that Métro was conflating the separate concepts of “verification” and “confirmation” of identity. In this case, the biometric faceprints in the probe images would be used to search for a match in the “persons of interest” database. Even if the goal of generating the probe images was not to determine the precise identity of all customers – or to add those face prints to the database – the underlying goal was to verify one attribute of the identity of shoppers: whether there was a match with the persons of interest database. This brought the system within the scope of the LFIT.
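The mechanics at issue can be sketched in a few lines of code. The following Python fragment (using numpy) is purely illustrative – the helper names (extract_faceprint, similarity, screen_shopper) are hypothetical stand-ins, not a description of Métro’s actual system – but it shows the pipeline the regulators were analyzing: a biometric template is computed for every shopper who walks in, whether or not a match is ever found.

# Illustrative sketch only: hypothetical helper names, not any vendor's API.
import numpy as np

def extract_faceprint(image: np.ndarray) -> np.ndarray:
    # Stand-in for a face-embedding model; real systems use trained networks.
    vec = image.astype(float).flatten()
    return vec / (np.linalg.norm(vec) or 1.0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity of unit-length faceprints.
    return float(np.dot(a, b))

# Reference database: faceprints only of "persons of interest".
reference_db = {"incident-042": extract_faceprint(np.random.rand(32, 32))}

def screen_shopper(frame: np.ndarray, threshold: float = 0.9) -> list[str]:
    # Step 1: a frame is captured by the entrance camera (video surveillance).
    # Step 2: a biometric faceprint is generated for EVERY shopper. This is
    # the step the CAI treated as a distinct collection of personal information.
    probe = extract_faceprint(frame)
    # Step 3: the probe is compared against the reference database.
    return [pid for pid, ref in reference_db.items()
            if similarity(probe, ref) >= threshold]

matches = screen_shopper(np.random.rand(32, 32))
print(matches or "no match, but a faceprint was still generated")

Even in this toy version, a faceprint is computed for every shopper before any match is attempted – the step that both the Quebec and BC regulators treated as a collection of sensitive personal information in its own right.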
The additional information in the persons of interest database, which could include the police report number, a description of the past incident, and related personal information, would facilitate the further identification of any matches. Métro also argued that the validation or confirmation of identity did not happen in one single process, and that therefore art. 44 of the LFIT was not engaged. The CAI dismissed what it described as the compartmentalisation of the process. Instead, the law required a consideration of the combined effect of all the steps in the operation of the system.

The company also argued that it had obtained the consent required under s. 12 of the PPIPS. It maintained that the video cameras captured shoppers’ images with their consent, as there was notice of the use of the cameras and the shoppers continued into the stores. It argued that the purposes for which it used the biometric data were consistent with the purposes for which the security cameras were installed, making it a permissible secondary use under s. 12(1) of the PPIPS. The CAI rejected this argument, noting that it was not a question of a single collection and a related secondary use. Rather, the generation of biometric faceprints from images captured on video is an independent collection of personal data. That collection must comply with data protection requirements and cannot be treated as a secondary use of already collected data.

The system proposed by Métro would be used on any person entering the designated stores, and as such it was an entry requirement. Individuals would have no ability to opt out and still shop, and there were no alternatives to participation in the FRT scheme. Not only is consent not possible for the general population entering the stores, those whose images become part of the persons of interest database would also have no choice in the matter.

Métro argued that its obligation to protect its employees and the public outweighed the privacy interests of its customers. The CAI rejected this argument, noting that this was not the test set out in the LFIT, which asked instead whether the database of biometric characteristics “otherwise constitutes an invasion of privacy” (art. 45). The CAI was of the view that to create a database of biometric characteristics, and to match these characteristics against face prints generated from data captured from the public without their consent in circumstances where the law required consent, amounted to a significant infringement of privacy rights. The Commission emphasized again the highly sensitive character of the personal data and issued an order prohibiting the implementation of the proposed system.

The December 2023 BC investigation report was based on that province’s Personal Information Protection Act. It followed a commissioner-initiated investigation into the use by several Canadian Tire stores in BC of FRT systems integrated with video surveillance cameras. Like the Métro pilot, biometric face prints were generated from the surveillance footage and matched against a persons-of-interest database. The stated goals of the systems were similar as well – to reduce shoplifting and enhance the security of the stores. As was the case in Quebec, the BC Commissioner found that the generation of biometric face prints was a new collection of personal information that required express consent. The Commissioner found that the stores had not provided adequate notice of collection, making the issue of consent moot.
However, he went on to find that even if there had been proper notice, express consent had not been obtained, and consent could not be implied in the circumstances. The collection of biometric faceprint data from everyone entering the stores in question was not for a purpose that a reasonable person would consider appropriate, given the acute sensitivity of the data collected and the risks to the individual that might flow from its misuse, inaccuracy, or from data breaches.

Interestingly, in BC, the four stores under investigation removed their FRT systems soon after receiving the notice of investigation. During the investigation, the Commissioner found little evidence to support the need for the systems, with store personnel admitting that the systems added little to their normal security functions. He chastised the retailers for failing both to conduct privacy impact assessments prior to adoption and to put in place measures to evaluate the effectiveness and performance of the systems.

An important difference between the two cases relates to the ability of the CAI to be proactive. In Quebec, the LFIT requires notice of the creation of a biometric database to be provided to the CAI in advance of its implementation. This enabled the CAI to rule on the appropriateness of the system before privacy was adversely impacted on a significant scale. By contrast, the systems in BC were in operation for three years before sufficient awareness surfaced to prompt an investigation. Now that powerful biometric technologies are widely available for retail and other uses, governments should be thinking seriously about reforming private sector privacy laws to provide for advance notice requirements – at the very least, for biometric systems.

Following both the Quebec and the BC cases, it is difficult to see how broad-based FRT systems integrated with store security cameras could be deployed in a manner consistent with data protection laws – at least under current shopping business models. This suggests that such uses may be emerging as a de facto no-go zone in Canada. Retailers may argue that this reflects a problem with the law, to the extent that it interferes with their business security needs. Yet if privacy is to mean anything, there must be reasonable limits on the collection of personal data – particularly sensitive data. Just because something can be done does not mean it should be. Given the rapid advance of technology, we should be carefully attuned to this. Being FRT face-printed each time one goes to the grocery store for a carton of milk may simply be an unacceptably disproportionate response to an admittedly real problem. It is a use of technology that places burdens and risks on ordinary individuals who have not earned suspicion, and who may have few other choices for accessing basic necessities.