Teresa Scassa - Blog

Ontario’s Information and Privacy Commissioner has released a report on an investigation into the use by McMaster University of artificial intelligence (AI)-enabled remote proctoring software. In it, Commissioner Kosseim makes findings and recommendations under the province’s Freedom of Information and Protection of Privacy Act (FIPPA), which applies to Ontario universities. Interestingly, noting the absence of provincial legislation or guidance regarding the use of AI, the Commissioner provides additional recommendations on the adoption of AI technologies by public sector bodies.

AI-enabled remote proctoring software saw a dramatic uptake in use during the pandemic as university classes migrated online. It was also widely used by professional societies and accreditation bodies. Such software monitors those writing online exams in real-time, recording both audio and video, and using AI to detect anomalies that may indicate that cheating is taking place. Certain noises or movements generate ‘flags’ that lead to further analysis by AI and ultimately by the instructor. If the flags are not resolved, academic integrity proceedings may ensue. Although many universities, including the respondent McMaster, have since returned to in-person exam proctoring, AI-enabled remote exam surveillance remains an option where in-person invigilation is not possible. This includes courses delivered online to students in diverse and remote locations.
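To make the flagging mechanism described above more concrete, here is a minimal, purely hypothetical sketch in Python of that kind of pipeline. The event types, confidence threshold and escalation rule are illustrative assumptions for the sake of the example, not a description of how Respondus Monitor actually works.

from dataclasses import dataclass

@dataclass
class ExamEvent:
    timestamp: float   # seconds into the exam
    kind: str          # e.g. "face_not_visible", "second_voice", "gaze_away"
    confidence: float  # detector confidence between 0.0 and 1.0

def flag_events(events, threshold=0.8):
    # Only events the detector is sufficiently confident about become 'flags'.
    return [e for e in events if e.confidence >= threshold]

def needs_instructor_review(flags):
    # Any unresolved flag is escalated for human review by the instructor.
    return len(flags) > 0

session = [
    ExamEvent(312.0, "gaze_away", 0.65),
    ExamEvent(947.5, "second_voice", 0.91),
]

flags = flag_events(session)
if needs_instructor_review(flags):
    print(f"{len(flags)} flag(s) escalated for instructor review")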

The Commissioner’s investigation related to the use by McMaster University of two services offered by the US-based company Respondus: Respondus Lockdown Browser and Respondus Monitor. Lockdown Browser consists of software downloaded by students onto their computers that blocks access to the internet and to other files on the computer during an exam. Respondus Monitor is the AI-enabled remote proctoring application. This post focuses on Respondus Monitor.

AI-enabled remote proctoring systems have raised concerns about both privacy and broader human rights issues. These include the intrusiveness of the constant audio and video monitoring, the capturing of data from private spaces, uncertainty over the treatment of personal data collected by such systems, adverse impacts on already marginalised students, and the enhanced stress and anxiety that comes from both constant surveillance and easily triggered flags. The broader human rights issues, however, are an uncomfortable fit with public sector data protection law.

Commissioner Kosseim begins with the privacy issues, finding that Respondus Monitor collects personal information that includes students’ names and course information, images of photo identification documents, and sensitive biometric data in audio and video recordings. Because the McMaster University Act empowers the university to conduct examinations and appoint examiners, the Commissioner found that the collection was carried out as part of a lawfully authorized activity. Although exam proctoring had chiefly been conducted in-person prior to the pandemic, she found that there was no “principle of statute or common law that would confine the method by which the proctoring of examinations may be conducted by McMaster to an in-person setting” (at para 48). Further, she noted that even post-pandemic, there might still be reasons to continue to use remote proctoring in some circumstances. She found that the university had a legitimate interest in attempting to curb cheating, noting that evidence suggested an upward trend in academic integrity cases, and a particular spike during the pandemic. She observed that “by incorporating online proctoring into its evaluation methods, McMaster was also attempting to address other new challenges that arise in an increasingly digital and remote learning context” (at para 50).

The collection of personal information must be necessary to a lawfully authorized activity carried out by a public body. Commissioner Kosseim found that the information captured by Respondus Monitor – including the audio and video recordings – was “technically necessary for the purpose of conducting and proctoring the exams” (at para 60). Nevertheless, she expressed concerns over the increased privacy risks that accompany this continual surveillance of examinees. She was also troubled by McMaster’s assertion that it “retains complete autonomy, authority, and discretion to employ proctored online exams, prioritizing administrative efficiency and commercial viability, irrespective of necessity” (at para 63). She found that the necessity requirement in s. 38(2) of FIPPA applied, and that efficiency or commercial advantage could not displace it. She noted that the kind of personal information collected by Respondus Monitor was particularly sensitive, creating “risks of unfair allegations or decisions being made about [students] based on inaccurate information” (at para 66). In her view, “[t]hese risks must be appropriately mitigated by effective guardrails that the university should have in place to govern its adoption and use of such technologies” (at para 66).

FIPPA obliges public bodies to provide adequate notice of the collection of personal information. Commissioner Kosseim reviewed the information made available to students by McMaster University. Although she found overall that it provided students with useful information, students had to locate different pieces of information on different university websites. The need to check multiple sites to get a clear picture of the operation of Respondus Monitor did not satisfy the notice requirement, and the Commissioner recommended that the university prepare a “clear and comprehensive statement either in a single source document, or with clear cross-references to other related documents” (at para 70).

Section 41(1) of FIPPA limits the use of personal information collected by a public body to the purpose for which it was obtained or compiled, or for a consistent purpose. Although the Commissioner found that the analysis of the audio and video recordings to generate flags was consistent with the collection of that information, the use by Respondus of samples of the recordings to improve its own systems – or to allow third party research – was not. On this point, there was an important difference in interpretation. Respondus appeared to define personal information as personal identifiers such as names and ID numbers; it treated audio and video clips that lacked such identifiers as “anonymized”. However, under FIPPA, audio and video recordings of individuals are personal information. No provision was made for students either to consent to or opt out of this secondary use of their personal information. Commissioner Kosseim noted that Respondus had made public statements that when operating in some jurisdictions (including California and EU member states) it did not use audio or video recordings for research or to improve its products or services. She recommended that McMaster obtain a similar undertaking from Respondus to not use its students’ information for these purposes. The Commissioner also noted that Respondus’ treating the audio and video recordings as anonymized data meant that it did not have adequate safeguards in place for this personal information.

Respondus’ Terms of Service provide that the company reserves the right to disclose personal information for law enforcement purposes. Commissioner Kosseim found that McMaster should require, in its contract with Respondus, that Respondus notify it promptly of any compelled disclosure of its students’ personal information to law enforcement or to government, and limit any such disclosure to the specific information it is legally required to disclose. She also set a retention limit for the audio and video recordings at one year, with confirmation to be provided by Respondus of deletions after the end of this period.

One of the most interesting aspects of this report is the section titled “Other Recommendations”, in which the Commissioner addresses the adoption of an AI-enabled technology by a public institution in a context in which “there is no current law or binding policy specifically governing the use of artificial intelligence in Ontario’s public sector” (at para 134). The development and adoption of these technologies is outpacing the evolution of law and policy, leaving important governance gaps. In May 2023, Commissioner Kosseim and Commissioner DeGuire of the Ontario Human Rights Commission issued a joint statement urging the Ontario government to take action to put in place an accountability framework for public sector AI. Even as governments acknowledge that these technologies create risks of discriminatory bias and other potential harms, there remains little to govern AI systems outside the piecemeal coverage offered by existing laws such as, in this case, FIPPA. Although the Commissioner’s interpretation and application of FIPPA addressed issues relating to the collection, use and disclosure of personal information, there remain important issues that cannot be addressed through privacy legislation.

Commissioner Kosseim acknowledged that McMaster University had “already carried out a level of due diligence prior to adopting Respondus Monitor” (at para 138). Nevertheless, given the risks and potential harms of AI-enabled technologies, she made a number of further recommendations. The first was to conduct an Algorithmic Impact Assessment (AIA) in addition to a Privacy Impact Assessment. She suggested that the federal government’s AIA tool could be a useful guide while waiting for one to be developed for Ontario. An AIA could allow the adopter of an AI system to have better insight into the data used to train the algorithms, and could assess impacts on students going beyond privacy (which might include discrimination, increased stress, and harms from false positive flags). She also called for meaningful consultation and engagement with those affected by the adoption of the technology taking place both before the adoption of the system and on an ongoing basis thereafter. Although the university may have had to react very quickly given that the first COVID shutdown occurred shortly before an exam period, an iterative engagement process even now would be useful “for understanding the full scope of potential issue that may arise, and how these may impact, be perceived, and be experienced by others” (at para 142). She noted that this type of engagement would allow adopters to be alert and responsive to problems both prior to adoption and as they arise during deployment. She also recommended that the consultations include experts in both privacy and human rights, as well as those with technological expertise.

Commissioner Kosseim also recommended that the university consider providing students with ways to opt out of the use of these technologies other than through requesting accommodations related to disabilities. She noted “AI-powered technologies may potentially trigger other protected grounds under human rights that require similar accommodations, such as color, race or ethnic origin” (at para 147). On this point, it is worth noting that the use of remote proctoring software creates a context in which some students may need to be accommodated for disabilities or other circumstances that have nothing to do with their ability to write their exam, but rather that impact the way in which the proctoring systems read their faces, interpret their movements, or process the sounds in their homes. Commissioner Kosseim encouraged McMaster University “to make special arrangements not only for students requesting formal accommodation under a protected ground in human rights legislation, but also for any other students having serious apprehensions about the AI-enabled software and the significant impacts it can have on them and their personal information” (at para 148).

Commissioner Kosseim also recommended that there be an appropriate level of human oversight to address the flagging of incidents during proctoring. Although flags were to be reviewed by instructors before deciding whether to proceed to an academic integrity investigation, the Commissioner found it unclear whether there was a mechanism for students to challenge or explain flags prior to escalation to the investigation stage. She recommended that there be such a procedure, and, if there already was one, that it be explained clearly to students. She further recommended that a public institution’s inquiry into the suitability for adoption of an AI-enabled technology should take into account more than just privacy considerations. For example, the public body’s inquiries should consider the nature and quality of training data. Further, the public body should remain accountable for its use of AI technologies “throughout their lifecycle and across the variety of circumstances in which they are used” (at para 165). Not only should the public body monitor the performance of the tool and alert the supplier of any issues, the supplier should be under a contractual obligation to inform the public body of any issues that arise with the system.

The outcome of this investigation offers important lessons and guidance for universities – and for other public bodies – regarding the adoption of third-party AI-enabled services. For the many Ontario universities that adopted remote proctoring during the pandemic, there are recommendations that should push those still using these technologies to revisit their contracts with vendors – and to consider putting in place processes to measure and assess the impact of these technologies. Although some of these recommendations fall outside the scope of FIPPA, the advice is still sage and likely anticipates what one can only hope is imminent guidance for Ontario’s public sector.

Ontario is currently holding public hearings on a new bill which, among other things, introduces a provision regarding the use of AI in hiring in Ontario. Submissions can be made until February 13, 2024. Below is a copy of my submission addressing this provision.

 

The following is my written submission on section 8.4 of Bill 149, titled the Working for Workers Four Act, introduced in the last quarter of 2023. I am a law professor at the University of Ottawa. I am making this submission in my individual capacity.

Artificial intelligence (AI) tools are increasingly common in the employment context. Such tools are used in recruitment and hiring, as well as in performance monitoring and assessment. Section 8.4 would amend the Employment Standards Act to include a requirement for employers to provide notice of the use of artificial intelligence in the screening, assessment, or selection of applicants for a publicly advertised job position. It does not address the use of AI in other employment contexts. This brief identifies several weaknesses in the proposal and makes recommendations to strengthen it. In essence, notice of the use of AI in the hiring process will not offer much to job applicants without a right to an explanation and ideally a right to bring any concerns to the attention of a designated person. Employees should also have similar rights when AI is used in performance assessment and evaluation.

1. Definitions and Exclusions

If passed, Bill 149 would (among other things) enact the first provision in Ontario to directly address AI. The proposed section 8.4 states:

8.4 (1) Every employer who advertises a publicly advertised job posting and who uses artificial intelligence to screen, assess or select applicants for the position shall include in the posting a statement disclosing the use of the artificial intelligence.

(2) Subsection (1) does not apply to a publicly advertised job posting that meets such criteria as may be prescribed.

The term “artificial intelligence” is not defined in the bill. Rather, s. 8.1 of Bill 149 leaves the definition to be articulated in regulations. This likely reflects concerns that the definition of AI will continue to evolve along with the rapidly changing technology and that it is best to leave its definition to more adaptable regulations. The definition is not the only thing left to regulations. Section 8.4(2) requires regulations to specify the criteria that will allow publicly advertised job postings to be exempted from the disclosure requirement in s. 8.4(1). The true scope and impact of s. 8.4(1) will therefore not be clear until these criteria are prescribed in regulations. Further, s. 8.4 will not take effect until the regulations are in place.

2. The Notice Requirement

The details of the nature and content of the notice that an employer must provide are not set out in s. 8.4, nor are they left to regulations. Since there are no statutory or regulatory requirements, presumably notice can be as simple as “we use artificial intelligence in our screening and selection process”. It would be preferable if notice had to at least specify the stage of the process and the nature of the technique used.

Section 8.4 is reminiscent of the 2022 amendments to the Employment Standards Act, which required employers with more than 25 employees to provide their employees with notification of any electronic monitoring taking place in the workplace. As with s. 8.4(1), above, the main contribution of this provision was (at least in theory) enhanced transparency. However, the law did not provide for any oversight or complaints mechanism. Section 8.4(1) is similarly weak. If a job posting contains no statement about the use of AI, it may be because the employer is not using AI in recruitment and hiring, or because they are using it and failing to disclose it. Who will know and how? A company found to be non-compliant with the notice requirement, once that requirement is part of the Employment Standards Act, could face a fine under s. 132. However, proceedings by way of an offence are a rather blunt regulatory tool.

3. A Right to an Explanation?

Section 8.4(1) does not provide job applicants with any specific recourse if they apply for a job for which AI is used in the selection process and they have concerns about the fairness or appropriateness of the tool used. One such recourse could be a right to demand an explanation.

The Consumer Privacy Protection Act (CPPA), which is part of the federal government’s Bill C-27, currently before Parliament, provides a right to an explanation to those about whom an automated decision, prediction or recommendation is made. Sections 63(3) and (4) provide:

(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual that could have a significant impact on them, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision.

(4) The explanation must indicate the type of personal information that was used to make the prediction, recommendation or decision, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.

Subsections 63(3) and (4) are fairly basic. For example, they do not include a right of review of the decision by a human. But something like this would still be a starting point for a person seeking information about the process by which their employment application was screened or evaluated. The right to an explanation in the CPPA will extend to decisions, recommendations and predictions made with respect to employees of federal works, undertakings, and businesses. However, it will not apply to the use of AI systems in provincially regulated employment sectors. Without a private sector data protection law of its own – or without a right to an explanation to accompany the proposed s. 8.4 – provincially regulated employees in Ontario will be out of luck.

In contrast, Quebec’s recent amendments to its private sector data protection law provide for a more extensive right to an explanation in the case of automated decision-making – and one that applies to the employment and hiring context. Section 12.1 provides:

12.1. Any person carrying on an enterprise who uses personal information to render a decision based exclusively on an automated processing of such information must inform the person concerned accordingly not later than at the time it informs the person of the decision.

He must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

The person concerned must be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.

Section 12.1 thus combines a notice requirement with, at the request of the individual, a right to an explanation. In addition, the affected individual can “submit observations” to an appropriate person within the organization who “is in a position to review the decision”. This right to an explanation is triggered only by decisions that are based exclusively on automated processing of personal information – and the scope of the right to an explanation is relatively narrow. However, it still goes well beyond Ontario’s Bill 149, which creates a transparency requirement with nothing further.

4. Scope

Bill 149 applies to the use of “artificial intelligence to screen, assess or select applicants”. Bill C-27 and Quebec’s law, both referenced above, are focused on “automated decision-making”. Although automated decision-making is generally considered a form of AI (it is defined in C-27 as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”), it is possible that in an era of generative AI technologies, the wording chosen for Bill 149 is more inclusive. In other words, there may be uses of AI that are not decision-making, predicting or recommending, but that can still be used in screening, assessing or hiring processes. However, it should be noted that Ontario’s Bill 149 is also less inclusive than Bill C-27 or Quebec’s law because it focuses only on the screening, assessment or selection of applicants for a position. It does not apply to the use of AI tools to monitor, evaluate or assess the performance of existing employees or to make decisions regarding promotion, compensation, retention, or other employment issues – something which would be covered by Quebec’s law (and by Bill C-27 for employees in federally regulated employment). Although arguably the requirements regarding electronic workplace monitoring added to the Employment Standards Act in 2022 might provide transparency about the existence of electronic forms of surveillance (which could include those used to feed data to AI systems), these transparency obligations apply only in workplaces with more than 25 employees, and there are no employee rights linked to the use of these data in automated or AI-enabled decision-making systems.

5. Discriminatory Bias

A very significant concern with the use of AI systems for decision-making about humans is the potential for discriminatory bias in the output of these systems. This is largely because systems are trained on existing and historical data. Where such data are affected by past discriminatory practices (for example, a tendency to hire men rather than women, or white, able-bodied, heterosexual people over those from equity-deserving communities) then there is a risk that automated processes will replicate and exacerbate these biases. Transparency about the use of an AI tool alone in such a context is not much help – particularly if there is no accompanying right to an explanation. Of course, human rights legislation applies to the employment context, and it will still be open to an employee who believes they have been discriminated against to bring a complaint to the Ontario Human Rights Commission. However, without a right to an explanation, and in the face of proprietary and closed systems, proving discrimination may be challenging and may require considerable resources and expertise. It may also require changes to human rights legislation to specifically address algorithmic discrimination. Without these changes in place, and without adequate resourcing to support the OHRC’s work to address algorithmic bias, recourse under human rights legislation may be extremely challenging.
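As a purely hypothetical illustration of that mechanism (the groups and hiring data below are invented, and real screening systems are far more complex than this sketch), a model that does nothing more than learn historical hiring rates will carry any disparity in those rates forward into its recommendations:

# Invented historical hiring data: (group, hired)
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_hire_rate(data, group):
    # A naive "model" that simply learns the historical hire rate per group.
    outcomes = [hired for g, hired in data if g == group]
    return sum(outcomes) / len(outcomes)

# Recommending candidates in proportion to these learned rates reproduces the
# historical disparity exactly: group A at 0.75, group B at 0.25.
for group in ("A", "B"):
    print(group, learned_hire_rate(historical, group))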

 

6. Conclusion and Recommendations

This exploration of Bill 149’s transparency requirements regarding the use of AI in the hiring process in Ontario reveals the limited scope of the proposal. Its need for regulations in order to take effect has the potential to considerably delay its implementation. It provides for notice but not for a right to an explanation or for human review of AI decisions. There is also a need to make better use of existing regulators (particularly privacy and human rights commissions). The issue of the use of AI in recruitment (or in the workplace more generally in Ontario) may require more than just tweaks to the Employment Standards Act; it may also demand amendments to Ontario’s Human Rights Code and perhaps even specific privacy legislation aimed, at the very least, at the employment sector in Ontario.

Recommendations:

1. Redraft the provision so that the core obligations take effect without need for regulations or ensure that the necessary regulations to give effect to this provision are put in place promptly.

2. Amend s. 8.4 (1) to either include the elements that are required in any notice of the use of an AI system or provide for the inclusion of such criteria in regulations (so long as doing so does not further delay the coming into effect of the provision).

3. Provide for a right to an explanation to accompany s. 8.4(1). An alternative to this would be a broader right to an explanation in provincial private sector legislation or in privacy legislation for employees in provincially regulated sectors in Ontario, but this would be much slower than the inclusion of a basic right to an explanation in s. 8.4. The right to an explanation could also include a right to submit observations to a person in a position to review any decision or outcome.

4. Extend the notice requirement to other uses of AI to assess, evaluate and monitor the performance of employees in provincially regulated workplaces in Ontario. Ideally, a right to an explanation should also be provided in this context.

5. Ensure that individuals who are concerned that they have been discriminated against by the use of AI systems in recruitment (as well as employees who have similar concerns regarding the use of AI in performance evaluation and assessment) have adequate and appropriate recourse under Ontario’s Human Rights Code, and that the Ontario Human Rights Commission is adequately resourced to address these concerns.

The federal government’s proposed Artificial Intelligence and Data Act (AIDA) (Part III of Bill C-27) contained some data governance requirements for anonymized data used in AI in its original version. These were meant to dovetail with changes to PIPEDA reflected in the Consumer Privacy Protection Act (CPPA) (Part I of Bill C-27). The CPPA provides in s. 6(5) that “this Act does not apply in respect of personal information that has been anonymized.” Although no such provision is found in PIPEDA, this is, for all practical purposes, the state of the law under PIPEDA. PIPEDA applies to “personal information”, which is defined as “information about an identifiable individual”. If an individual is not identifiable, then the information is not personal information, and the law does not apply. This was the conclusion reached, for example, in the 2020 Cadillac Fairview joint finding of the federal Privacy Commissioner and his counterparts from BC and Alberta. PIPEDA does apply to pseudonymized information because such information ultimately permits reidentification.

The standard for identifiability under PIPEDA had been set by the courts as a “serious possibility that an individual could be identified through the use of that information, alone or in combination with other available information” (Cadillac Fairview, at para 143). It is not an absolute standard (although the proposed definition for anonymized data in C-27 currently seems closer to absolute). In any event, the original version of AIDA was meant to offer comfort to those concerned with the flat-out exclusion of anonymized data from the scope of the CPPA. Section 6 of AIDA provided that:

6. A person who carries out any regulated activity and who processes or makes available anonymized data in the course of that activity must, in accordance with the regulations, establish measures with respect to

(a) the manner in which data is anonymized; and

(b) the use or management of anonymized data.

Problematically, however, AIDA only provided for data governance with respect to this particular subset of data. It contained no governance requirements for personal, pseudonymized, or non-personal data. Artificial intelligence systems will be only as good as the data on which they are trained. Data governance is a fundamental element of proper AI regulation – and it must address more than anonymized personal data.

This is an area where the amendments to AIDA proposed by the Minister of Industry demonstrate clear improvements over the original version. To begin with, the old s. 6 is removed from AIDA. Instead of specific governance obligations for anonymized data, we see some new obligations introduced regarding data more generally. For example, as part of the set of obligations relating to general-purpose AI systems, there is a requirement to ensure that “measures respecting the data used in developing the system have been established in accordance with the regulations” (s. 7(1)(a)). There is also an obligation to maintain records “relating to the data and processes used in developing the general-purpose system and in assessing the system’s capabilities and limitations” (s. 7(2)(b)). There are similar obligations in the case of machine learning models that are intended to be incorporated into high-impact systems (s. 9(1)(a) and 9(2)(a)). Of course, whether this is an actual improvement will depend on the content of the regulations. But at least there is a clear signal that data governance obligations are expanded under the proposed amendments to AIDA.

Broader data governance requirements in AIDA are a good thing. They will apply to data generally, including personal and anonymized data. Personal data used in AI will also continue to be governed under privacy legislation, and privacy commissioners will still have a say about whether data have been properly anonymized. In the case of PIPEDA (or the CPPA if and when it is eventually enacted), the set of principles for the development and use of generative AI issued by federal, provincial, and territorial privacy commissioners on December 8, 2023 makes it clear that the commissioners understand their enabling legislation to provide them with the authority to govern a considerable number of issues relating to the use of personal data in AI, whether in the public or private sector. This set of principles sends a strong signal to federal and provincial governments alike that privacy laws and privacy regulators have a clear role to play in relation to emerging and evolving AI technologies and that the commissioners are fully engaged. It is also an encouraging example of federal, provincial and territorial co-operation among regulators to provide a coherent common position on key issues in relation to AI governance.

 

This is Part III of a series of posts that look at the proposed amendments to Canada’s Artificial Intelligence and Data Act (which itself is still a Bill, currently before the INDU Committee for study). Part I provided a bit of context and a consideration of some of the new definitions in the Bill. Part II looked at the categories of ‘high-impact’ AI that the Bill now proposes to govern. This post looks at the changed role of the AI and Data Commissioner.

The original version of the Artificial Intelligence and Data Act (Part III of Bill C-27) received considerable criticism for its oversight mechanisms. Legal obligations for the ethical and transparent governance of AI, after all, depend upon appropriate oversight and enforcement for their effectiveness. Although AIDA proposed the creation of an AI and Data Commissioner (Commissioner), this was never meant to be an independent regulator. Ultimately, AIDA placed most of the oversight obligations in the hands of the Minister of Industry – the same Minister responsible for supporting the growth of Canada’s AI sector. Critics considered this to be a conflict of interest. A series of proposed amendments to AIDA are meant to address these concerns by reworking the role of the Commissioner.

Section 33(1) of AIDA makes it clear that the AI and Data Commissioner will be a “senior official of the department over which the Minister presides”, and their appointment involves being designated by the Minister. This has not changed, although the amendments would delete from this provision language stating that the Commissioner’s role is “to assist the Minister in the administration and enforcement” of AIDA.

The proposed amendments elevate the Commissioner somewhat, giving them a series of powers and duties, to which the Minister can add through delegation (s. 33(3)). So, for example, it will be the newly empowered Commissioner (Commissioner 2.0) who receives reports from those managing a general-purpose or high-impact system where there are reasonable grounds to suspect that the use of the system has caused serious harm (s. 8.2(1)(e), s. 11(1)(g)). Commissioner 2.0 can also order someone managing or making available a general-purpose system to provide them with the accountability framework they are required to create under s. 12 (s. 13(1)) and can provide guidance or recommend corrections to that framework (s. 13(2)). Commissioner 2.0 can compel those making available or managing an AI system to provide the Commissioner with an assessment of whether the system is high-impact and, if so, which class of high-impact systems set out in the schedule it falls within. Commissioner 2.0 can agree or disagree with the assessment, although if they disagree, their authority seems limited to informing the entity in writing of their reasons for disagreement.

More significant are Commissioner 2.0’s audit powers. Under the original version of AIDA, these were to be exercised by the Minister – the powers are now those of the Commissioner (s. 15(1)). Further, Commissioner 2.0 may order (previously this was framed as “require”) that the person either conduct an audit themselves or engage the services of an independent auditor. The proposed amendments also empower the Commissioner to conduct an audit to determine if there is a possible contravention of AIDA. This strengthens the audit powers by ensuring that there is at least one option that is not under the control of the party being audited. The proposed amendments give Commissioner 2.0 additional powers necessary to conduct an audit and to carry out testing of an AI system (s. 15(2.1)). Where Commissioner 2.0 conducts an audit, they must provide the audited party with a copy of the report (s. 15(3.1)), and where the audit is conducted by the person responsible or someone retained by them, they must provide a copy to the Commissioner (s. 15(4)).

The Minister still retains some role with respect to audits. He or she may request that the Commissioner conduct an audit. In an attempt to preserve some independence of Commissioner 2.0, the Commissioner, when receiving such a request, may either carry out the audit or decline to do so on the basis that there are no reasonable grounds for an audit, so long as they provide the Minister with their reasons (s. 15.1(1)(b)). The Minister may also order a person to take actions to bring themselves into compliance with the law (s. 16) or to cease making available or terminate the operation of a system if the Minister considers compliance to be impossible (s. 16(b)) or has reasonable grounds to believe that the use of the system “gives rise to a risk of imminent and serious harm” (s. 17(1)).

As noted above, Commissioner 2.0 (a mere employee in the Minister’s department) will have order-making powers under the amendments. This is something the Privacy Commissioner of Canada, an independent agent of Parliament appointed by the Governor in Council, is hoping to get in Bill C-27. If the Privacy Commissioner does get such powers, it will be for the first time since the enactment of PIPEDA in 2000. Orders of Commissioner 2.0 or the Minister can become enforceable as orders of the Federal Court under s. 20.

Commissioner 2.0 is also empowered to share information with a list of federal or provincial government regulators where they have “reasonable grounds to believe that the information may be relevant to the administration or enforcement by the recipient of another Act of Parliament or of a provincial legislature.” (s. 26(1)). Reciprocally, under a new provision, federal regulators may also share information with the Commissioner (s. 26.1). Additionally, Commissioner 2.0 may “enter into arrangements” with different federal regulators and/or the Ministers of Health and Transport in order to assist those actors with the “exercise of their powers or the performance of their functions and duties” in relation to AI (s. 33.1). These new provisions strengthen a more horizontal, multi-regulator approach to governing AI which is an improvement in the Bill, although this might eventually need to be supplemented by corresponding legislative amendments – and additional funding – to better enable the other commissioners to address AI-related issues that fit within their areas of competence.

The amendments also impose upon Commissioner 2.0 a new duty to report on the administration and enforcement of AIDA – such a report is to be “published on a publicly available website” (s. 35.1). The annual reporting requirement is important as it will increase transparency regarding the oversight and enforcement of AIDA. For his or her part, the Minister is empowered to publish information, where it is in the public interest, regarding any contravention of AIDA or where the use of a system gives rise to a serious risk of imminent harm (ss. 27 and 28).

Interestingly, AIDA, which provides for the potential imposition of administrative monetary penalties for contraventions of the Act, does not indicate who is responsible for setting and imposing these penalties. Section 29(1)(g) makes it clear that “the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the [AMP] scheme” is left to be articulated in regulations.

The AIDA also makes it an offence under s. 30 for anyone to obstruct or provide false or misleading information to “the Minister, anyone acting on behalf of the Minister or an independent auditor in the exercise of their powers or performance of their duties or functions under this Part.” This remains unchanged from the original version of AIDA. Presumably, since Commissioner 2.0 would exercise a great many of the oversight functions, this is meant to apply to the obstruction or misleading of the Commissioner – but it will only do so if the Commissioner is characterized as someone “acting on behalf of the Minister”. This is not language of independence, but then there are other features of AIDA that also counter any view that even Commissioner 2.0 is truly independent (and I mean others besides the fact that they are an employee under the authority of the Minister and handpicked by the Minister). Most notable of these is that should the Commissioner become incapacitated or absent, or should they simply never be designated by the Minister, it is the Minister who will exercise their powers and duties (s. 33(4)).

In sum, then, the proposed amendments to AIDA attempt to give some separation between the Minister and Commissioner 2.0 in terms of oversight and enforcement. At the end of the day, however, Commissioner 2.0 is still the Minister’s hand-picked subordinate. Commissioner 2.0 does not serve for a specified term and has no security of tenure. In their absence, the Minister exercises their powers. It falls far short of independence.

My previous post looked at some of the new definitions in the proposed amendments to the Artificial Intelligence and Data Act (AIDA) which is Part III of Bill C-27. These include a definition of “high impact” AI, and a schedule of classes of high-impact AI (the Schedule is reproduced at the end of this post). The addition of the schedule changes AIDA considerably, and that is the focus of this post.

The first two classes in the Schedule capture contexts that can clearly affect individuals. Class 1 addresses AI used in most aspects of employment, and Class 2 relates to the provision of services. On the provision of services (which could include things like banking and insurance), the wording signals that it will apply to decision-making about the provision of services, their cost, or the prioritization of recipients. To be clear, AIDA does not prohibit systems with these functions. They are simply characterized as “high impact” so that they will be subject to governance obligations. A system to determine creditworthiness can still reject individuals; and companies can still prioritize preferred customers – as long as the systems are sufficiently transparent, free from bias and do not cause harm.

There is, however, one area which seems to fall through the cracks of Classes 1 & 2: rental accommodation. A lease is an interest in land – it is not a service. Human rights legislation in Canada typically refers to accommodation separately from services for this reason. AI applications are already being used to screen and select tenants for rental accommodation. In the midst of a housing crisis, this is surely an area that is high-impact and where the risks of harm from flawed AI to individuals and families searching for a place to live are significant. This gap needs to be addressed – perhaps simply by adding “or accommodation” after each use of the term “service” in Class 2.

Class 3 rightly identifies biometric systems as high risk. It also includes systems that use biometrics in “the assessment of an individual’s behaviour or state of mind.” Key to the scope of this section will be the definition of “biometric”. Some consider biometric data to be exclusively physiological data (fingerprints, iris scans, measurements of facial features, etc.). Yet others include behavioral data in this class if it is used for the second identified purpose – the assessment of behaviour or state of mind. Behavioural data, though, is potentially a very broad category. It can include data about a person’s gait, or their speech or keystroke patterns. Cast even more broadly, it could include things such as “geo-location and IP addresses”, “purchasing habits”, “patterns of device use” or even “browser history and cookies”. If that is the intention behind Class 3, then conventional biometric AI should be Part One of this class; Part Two should be the use of an AI system to assess an individual’s behaviour or state of mind (without referring specifically to biometrics in order to avoid confusion). This would also, importantly, capture the highly controversial area of AI for affect recognition. It would be unfortunate if the framing of the class as ‘biometrics’ led to an unduly narrow interpretation of the kind of systems or data involved. The explanatory note in the Minister’s cover letter for this provision seems to suggest (although it is not clear) that it is purely physiological biometric data that is intended for inclusion and not a broader category. If this is so, then Class 3 seems unduly narrow.

Class 4 is likely to be controversial. It addresses content moderation and the prioritization and presentation of content online and identifies these as high-impact algorithmic activities. Such systems are in widespread use in the online context. The explanatory note from the Minister observes that such systems “have important potential impacts on Canadians’ ability to express themselves, as well as pervasive effects at societal scale” (at p. 4). This is certainly true although the impact is less direct and obvious than the impact of a hiring algorithm, for example. Further, although an algorithm that presents a viewer of online streaming services with suggestions for content could have the effect of channeling a viewer’s attention in certain directions, it is hard to see this as “high impact” in many contexts, especially since there are multiple sources of suggestions for online viewing (including word of mouth). That does not mean that feedback loops and filter bubbles (especially in social media) do not contribute to significant social harms – but it does make this high impact class feel large and unwieldy. The Minister’s cover letter indicates that each of the high-impact classes presents “distinct risk profiles and consequently will require distinct risk management strategies.” (at p. 2). Further, he notes that the obligations that will be imposed “are intended to scale in proportion to the risks they present. A low risk use within a class would require correspondingly minimal mitigation effort.” (at p. 2). Much will clearly depend on regulations.

Class 5 relates to the use of AI in health care or emergency services, although it explicitly excludes medical devices because these are already addressed by Health Canada (which recently consulted on the regulation of AI-enabled medical devices). This category also demonstrates some of the complexity of regulating AI in Canada’s federal system. Many hospital-based AI technologies are being developed by researchers affiliated with the hospitals who are not engaged in the interprovincial or international trade and commerce that is necessary for AIDA to apply. AIDA will only apply to those systems developed externally and in the context of international or interprovincial trade and commerce. While this will still capture many applications, it will not capture all – creating different levels of governance within the same health care context.

It is also not clear what is meant, in Class 5, by “use of AI in matters relating to health care”. This could be interpreted to mean health care that is provided within what is understood as the health care system. Understood more broadly, it could extend to health-related apps – for example, one of the many available AI-enabled sleep trackers, or an AI-enabled weight loss tool (to give just two examples). I suspect that what is intended is the former, even though, with health care in crisis and more people turning to alternate means to address their health issues, health-related AI technologies might well deserve to be categorized as high-impact.

Class 6 involves the use of an AI system by a court or administrative body “in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.” In the first place, this is clearly not meant to apply to automated decision-making generally – it seems to be limited to judicial or quasi-judicial contexts. Class 6 must also be reconciled with s. 3 of AIDA, which provides that AIDA does not apply “with respect to a government institution as defined in s. 3 of the Privacy Act.” This includes the Immigration and Refugee Board, for example, as well as the Canadian Human Rights Commission, the Parole Board, and the Veterans Review and Appeal Board. Making sense of this, then, it would be the tools used by courts or tribunals and developed or deployed in the course of interprovincial or international trade and commerce that would be considered high impact. The example given in the Minister’s letter seems to support this – it is of an AI system that provides an assessment of “risk of recidivism based on historical data” (at p. 5).

However, Class 6 is confusing because it identifies the context rather than the tools as high impact. Note that the previous classes address the use of AI “in matters relating to” the subject matter of the class, whereas class 6 identifies actors – the use of AI by a court or tribunal. There is a different focus. Yet the same tools used by courts and tribunals might also be used by administrative bodies or agencies that do not hold hearings or that are otherwise excluded from the application of AIDA. For example, in Ewert v. Canada, the Supreme Court of Canada considered an appeal by a Métis man who challenged the use of recidivism-risk assessment tools by Correctional Services of Canada (to which AIDA would not apply according to s. 3). If this type of tool is high-risk, it is so whether it is used by Correctional Services or a court. This suggests that the framing of Class 6 needs some work. It should perhaps be reworded to identify tools or systems as high impact if they are used to determine the rights, entitlements or status of individuals.

Class 7 addresses the use of an AI system to assist a peace officer “in the exercise and performance of their law enforcement powers, duties and functions”. Although “peace officer” receives the very broad interpretation found in the Criminal Code, that definition is modified in the AIDA by language that refers to the exercise of specific law enforcement powers. This should still capture the use of a broad range of AI-enabled tools and technologies. It is an interesting question whether AIDA might apply more fully to this class of AI systems (not just those developed in the course of interprovincial or international trade) as it might be considered to be rooted in the federal criminal law power.

These, then, are the different classes that are proposed initially to populate the Schedule if AIDA and its amendments are passed. The list is likely to spark debate, and there is certainly some wording that could be improved. And, while it provides much greater clarity as to what is proposed to be regulated, it is also evident that the extent to which obligations will apply will likely be further tailored in regulations to create sliding scales of obligation depending on the degree of risk posed by any given system.

AIDA Schedule:

High-Impact Systems — Uses

1. The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.

2. The use of an artificial intelligence system in matters relating to

(a) the determination of whether to provide services to an individual;

(b) the determination of the type or cost of services to be provided to an individual; or

(c) the prioritization of the services to be provided to individuals.

3. The use of an artificial intelligence system to process biometric information in matters relating to

(a) the identification of an individual, other than in cases in which the biometric information is processed with the individual’s consent to authenticate their identity; or

(b) the assessment of an individual’s behaviour or state of mind.

4. The use of an artificial intelligence system in matters relating to

(a) the moderation of content that is found on an online communications platform, including a search engine or social media service; or

(b) the prioritization of the presentation of such content.

5. The use of an artificial intelligence system in matters relating to health care or emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition device in section 2 of the Food and Drugs Act that is in relation to humans.

6. The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.

7. The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.

Note: This is the first in a series of posts that will look at the proposed amendments to Canada's Artificial Intelligence and Data Act, which is Part III of Bill C-27, currently before Parliament. The amendments are extensive and have only just been introduced. Please consider these assessments to be preliminary.

 

Canada’s Artificial Intelligence and Data Act (AIDA) (Part III of Bill C-27) has passed second reading and is currently before the INDU Committee for study. Early in this committee process, the Minister of Industry, François-Philippe Champagne, announced that his department was working on amendments to AIDA in response to considerable criticism. Those amendments have now been tabled for consideration by the committee.

One of the criticisms of the Bill was that it left almost all of its substance to be developed in regulations. It is unsurprising, then, that the amendments are almost as long as the original bill. While it is certainly the case that the amendments contain more detail than the original text, some of the additional length is attributable to new provisions intended to address generative AI systems. This highlights just how quickly things are moving in the AI space, as generative AI was not on anyone’s legislative radar when Bill C-27 was introduced in June 2022.

One of the criticisms of AIDA was the absence of any specific prior consultation before its appearance in Bill C-27. This, combined with its lack of substance on many issues, raised basic concerns about how it would apply and to what. For example, AIDA was to govern “high-impact” AI systems, but the definition of such systems was left to regulations. Concerns were also raised about oversight being largely in the hands of the Minister of Industry who is also responsible for supporting Canada’s AI sector.

The proposed amendments demonstrate that ISED has been listening to the feedback it has received since June 2022, just as it has been adapting to the challenges of generative AI, and engaging with its international partners on AI governance issues. The amendments, which include new definitions, more explicit obligations, and governance principles for generative AI, will make AIDA a better bill. They may be enough to garner sufficient support to pass it into law, something which the Minister describes as “pivotal”.

This is the first in a series of posts that will explore some of the changes proposed to AIDA – as well as some of the remaining issues. This post addresses some of the new definitions.

The proposed amendments introduce a new definition of “artificial intelligence system”, which would read: “a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions” (s. 2). This provides greater alignment with the OECD definition of an AI system (“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”). It is an improvement over the previous definition, which was criticized for being too specific about the types of techniques used in AI. It is unclear, though, why the new AIDA definition does not include “content” as an output, as is the case with the OECD definition. The AIDA definition is also supplemented by a separate definition for a “general-purpose system”, which is “an artificial intelligence system that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system’s development” (s. 5(1)). There is a further definition for a “machine learning model”, which is “a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns”. A new s. 5(2) makes it clear that the definition of AI system includes general-purpose systems, and that general-purpose systems can also be high-impact. These new definitions reflect the major changes in both the technology and in the evolving regulatory context in the short time since AIDA was introduced. They also shape a new framework for obligations under the legislation.

The proposed amendments also contain a definition of “high-impact system”: “an artificial intelligence system of which at least one of the intended uses may reasonably be concluded to fall within a class of uses set out in the schedule” (s. 5(1)). The previous version of AIDA left the articulation of “high impact” entirely to future regulations. The schedule sets out a list of classes describing certain uses. These are:

High-Impact Systems — Uses

1. The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.

2. The use of an artificial intelligence system in matters relating to

(a) the determination of whether to provide services to an individual;

(b) the determination of the type or cost of services to be provided to an individual; or

(c) the prioritization of the services to be provided to individuals.

3. The use of an artificial intelligence system to process biometric information in matters relating to

(a) the identification of an individual, other than in cases in which the biometric information is processed with the individual’s consent to authenticate their identity; or

(b) the assessment of an individual’s behaviour or state of mind.

4. The use of an artificial intelligence system in matters relating to

(a) the moderation of content that is found on an online communications platform, including a search engine or social media service; or

(b) the prioritization of the presentation of such content.

5. The use of an artificial intelligence system in matters relating to health care or emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition device in section 2 of the Food and Drugs Act that is in relation to humans.

6. The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.

7. The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.

(Note: the classes in this schedule will be the subject of the next blog post)

The list is not intended to be either closed or permanent. Under a proposed s. 36.1, the Governor in Council (GinC) can enact regulations amending the schedule by adding, modifying, or deleting a category of use. Any such decision by the GinC is to be guided by criteria set out in s. 36.1. These include the risks of adverse impact on “the economy or any other aspect of Canadian society and on individuals, including on individual’s health and safety and on their rights recognized in international human rights treaties to which Canada is a party”. The GinC must also consider the “severity and extent” of any adverse impacts, as well as the “social and economic circumstances of any individuals who may experience” such impacts. A final consideration is whether the uses in the category are adequately addressed under another Act of Parliament or of a provincial legislature.

The AIDA only applies to “high impact” systems, and since there is no screening or registration process, it is up to those who manage or make such systems available to identify them as such and to meet the obligations set out in the law. A proposed s. 14 would empower the AI and Data Commissioner to order a person who makes available or who manages an AI system to provide the Commissioner with their assessment of whether the system is a high impact system, a general purpose system (which can also be high impact), or a machine learning model intended to be incorporated into a high impact system.
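The logic of this self-assessment can be sketched in code, although nothing in AIDA prescribes any particular format for it. The following is purely hypothetical: the class labels loosely paraphrase the schedule, and the helper structure is invented for illustration. The point is simply that a system would be high-impact if at least one of its intended uses falls within a scheduled class, and that a general-purpose system can also be high-impact.

```python
# Hypothetical sketch only; AIDA does not prescribe this (or any) format.
from dataclasses import dataclass, field

# Loose paraphrases of the classes of uses in the proposed schedule.
SCHEDULED_CLASSES = {
    "employment determinations",
    "service provision determinations",
    "biometric identification or assessment",
    "online content moderation or prioritization",
    "health care or emergency services",
    "court or administrative determinations",
    "law enforcement assistance",
}

@dataclass
class SystemAssessment:
    name: str
    general_purpose: bool                  # adaptable to many fields and purposes
    intended_uses: set = field(default_factory=set)

    def is_high_impact(self) -> bool:
        # High-impact if at least one intended use falls within a scheduled
        # class; this holds whether or not the system is also general-purpose.
        return bool(self.intended_uses & SCHEDULED_CLASSES)

screening_tool = SystemAssessment(
    name="resume screening tool",
    general_purpose=False,
    intended_uses={"employment determinations"},
)
print(screening_tool.is_high_impact())  # True under this toy mapping
```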

My next post will look at the classes of “high-impact” AI as set out in the Schedule.

On October 26, 2023, I appeared as a witness before the INDU Committee of the House of Commons, which is holding hearings on Bill C-27. Although I would have preferred to address the Artificial Intelligence and Data Act, it was clear that the Committee was prioritizing study of the Consumer Privacy Protection Act, in part because the Minister of Industry had yet to produce the text of the amendments to the AI and Data Act that he had previously outlined in a letter to the Committee Chair. It is my understanding that witnesses will not be called twice. As a result, I will be posting my comments on the AI and Data Act on my blog.

The other witnesses heard at the same time included Colin Bennett, Michael Geist, Vivek Krishnamurthy and Brenda McPhail. The recording of that session is available here.

__________

Thank you, Mr Chair, for the invitation to address this committee.

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I appear today in my personal capacity. I have concerns with both the CPPA and AIDA. Many of these have been communicated in my own writings and in the report submitted to this committee by the Centre for Digital Rights. My comments today focus on the Consumer Privacy Protection Act. I note, however, that I have very substantial concerns about the AI and Data Act and would be happy to answer questions on it as well.

Let me begin by stating that I am generally supportive of the recommendations of Commissioner Dufresne for the amendment of Bill C-27 set out in his letter of April 26, 2023, to the Chair of this Committee. I will also address three other points.

The Minister has chosen to retain consent as the backbone of the CPPA, with specific exceptions to consent. One of the most significant of these is the “legitimate interest” exception in s. 18(3). This allows organizations to collect or use personal information without knowledge or consent if it is for an activity in which an organization has a legitimate interest. There are guardrails: the interest must outweigh any adverse effects on the individual; it must be one which a reasonable person would expect; and the information must not be collected or used to influence the behaviour or decisions of the individual. There are also additional documentation and mitigation requirements.

The problem lies in the continuing presence of “implied consent” in section 15(5) of the CPPA. PIPEDA allowed for implied consent because there were circumstances where it made sense, and there was no “legitimate interest” exception. However, in the CPPA, the legitimate interest exception does the work of implied consent. Leaving implied consent in the legislation provides a way to get around the guardrails in s. 18(3) (an organization can opt for the ‘implied consent’ route instead of legitimate interest). It will create confusion for organizations that might struggle to understand which is the appropriate approach. The solution is simple: get rid of implied consent. I note that “implied consent” is not a basis for processing under the GDPR. Consent must be express or processing must fall under another permitted ground.

My second point relates to s. 39 of the CPPA, which provides an exception to the requirement for an individual’s knowledge and consent where information is disclosed to a potentially very broad range of entities for “socially beneficial purposes”. Such information need only be de-identified – not anonymized – making it more vulnerable to reidentification. I question whether there is social licence for sharing de-identified, rather than anonymized, data for these purposes. I note that s. 39 was carried over verbatim from Bill C-11, when “de-identify” was defined to mean what we now understand as “anonymize”.

Permitting disclosure for socially beneficial purposes is a useful idea, but s. 39, especially with the shift in meaning of “de-identify”, lacks necessary safeguards. First, there is no obvious transparency requirement. If we are to learn anything from the ETHI Committee inquiry into PHAC’s use of Canadians’ mobility data, transparency is fundamentally important. At the very least, there should be a requirement that written notice of data sharing for socially beneficial purposes be given to the Privacy Commissioner of Canada; ideally there should also be a requirement for public notice. Further, s. 39 should provide that any such sharing be subject to a data sharing agreement, which should also be provided to the Privacy Commissioner. None of this is too much to ask where Canadians’ data are conscripted for public purposes. Failure to ensure transparency and some basic measure of oversight will undermine trust and legitimacy.

My third point relates to the exception to knowledge and consent for publicly available personal information. Bill C-27 reproduces PIPEDA’s provision on publicly available personal information, providing in s. 51 that “An organization may collect, use or disclose an individual’s personal information without their knowledge or consent if the personal information is publicly available and is specified by the regulations.” We have seen the consequences of data scraping from social media platforms in the case of Clearview AI, which used scraped photographs to build a massive facial recognition database. The Privacy Commissioner takes the position that personal information on social media platforms does not fall within the “publicly available personal information” exception. Yet not only could this approach be upended in the future by the new Personal Information and Data Protection Tribunal, it could also easily be modified by new regulations. Recognizing the importance of s. 51, former Commissioner Therrien had recommended amending it to add that the publicly available personal information be such “that the individual would have no reasonable expectation of privacy”. An alternative is to incorporate the text of the current Regulations Specifying Publicly Available Information into the CPPA, revising them to clarify scope and application in our current data environment. I would be happy to provide some sample language.

This issue should not be left to regulations. The amount of publicly available personal information online is staggering, and it is easily susceptible to scraping and misuse. It should be clear and explicit in the law that personal data cannot be harvested from the internet, except in limited circumstances set out in the statute.

Finally, I add my voice to those of so many others in saying that the data protection obligations set out in the CPPA should apply to political parties. It is unacceptable that they do not.

The following is a short excerpt from a new paper that looks at public sector use of private sector personal data (Teresa Scassa, “Public Sector Use of Private Sector Personal Data: Towards Best Practices”, forthcoming in (2024) 47:2 Dalhousie Law Journal). The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Governments seeking to make data-driven decisions require the data to do so. Although they may already hold large stores of administrative data, their ability to collect new or different data is limited both by law and by practicality. In our networked, Internet of Things society, the private sector has become a source of abundant data about almost anything – but particularly about people and their activities. Private sector companies collect a wide variety of personal data, often in high volumes, rich in detail, and continuously over time. Location and mobility data, for example, are collected by many different actors, from cellular service providers to app developers. Financial sector organizations amass rich data about the spending and borrowing habits of consumers. Even genetic data is collected by private sector companies. The range of available data is constantly broadening as more and more is harvested, and as companies seek secondary markets for the data they collect.

Public sector use of private sector data is fraught with important legal and public policy considerations. Chief among these is privacy since access to such data raises concerns about undue government intrusion into private lives and habits. Data protection issues implicate both public and private sector actors in this context, and include notice and consent, as well as data security. And, where private sector data is used to shape government policies and actions, important questions about ethics, data quality, the potential for discrimination, and broader human rights questions also arise. Alongside these issues are interwoven concerns about transparency, as well as necessity and proportionality when it comes to the conscription by the public sector of data collected by private companies.

This paper explores issues raised by public sector access to and use of personal data held by the private sector. It considers how such data sharing is legally enabled and within what parameters. Given that laws governing data sharing may not always keep pace with data needs and public concerns, this paper also takes a normative approach which examines whether and in what circumstances such data sharing should take place. To provide a factual context for discussion of the issues, the analysis in this paper is framed around two recent examples from Canada that involved actual or attempted access by government agencies to private sector personal data for public purposes. The cases chosen are different in nature and scope. The first is the attempted acquisition and use by Canada’s national statistics organization, Statistics Canada (StatCan), of data held by credit monitoring companies and financial institutions to generate economic statistics. The second is the use, during the COVID-19 pandemic, of mobility data by the Public Health Agency of Canada (PHAC) to assess the effectiveness of public health policies in reducing the transmission of COVID-19 during lockdowns. The StatCan example involves the compelled sharing of personal data by private sector actors, while the PHAC example involves a government agency that contracted for the use of anonymized data and analytics supplied by private sector companies. Each of these instances generated significant public outcry. This negative publicity no doubt exceeded what either agency anticipated. Both believed that they had a legal basis to gather and/or use the data or analytics, and both believed that their actions served the public good. Yet the outcry is indicative of underlying concerns that had not properly been addressed.

Using these two quite different cases as illustrations, the paper examines the issues raised by the use of private sector data by government. Recognizing that such practices are likely to multiply, it also makes recommendations for best practices. Although the examples considered are Canadian and are shaped by the Canadian legal context, most of the issues they raise are of broader relevance. Part I of this paper sets out the two case studies that are used to tease out and illustrate the issues raised by public sector use of private sector data. Part II discusses the different issues and makes recommendations.

The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

A recent decision of the Federal Court of Canada ends (subject to any appeal) the federal Privacy Commissioner’s attempt to obtain an order against Facebook in relation to personal information practices linked to the Cambridge Analytica scandal. Following a joint investigation with British Columbia’s Information and Privacy Commissioner, the Commissioners had issued a Report of Findings in 2019. The Report concluded that Facebook had breached Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and B.C.’s Personal Information Protection Act by failing to obtain appropriate consent, failing to adequately safeguard the data of its users and failing to be accountable for the data under its control. Under PIPEDA, the Privacy Commissioner has no order-making powers and can only make non-binding recommendations. For an order to be issued under PIPEDA, an application must be made to the Federal Court under s. 15, either by the complainant, or by the Privacy Commissioner with the complainant’s permission. The proceeding before the court is de novo, meaning that the court renders its own decision on whether there has been a breach of PIPEDA based upon the evidence presented to it.

The Cambridge Analytica scandal involved a researcher who developed a Facebook app. Through this app, the developer collected user data, ostensibly for research purposes. That data was later disclosed to third parties who used it to develop “psychographic” models for the purpose of targeting political messages towards segments of Facebook users (at para 35). It is important to note here that the complaint was not against the app developer, but rather against Facebook. Essentially, the complainants were concerned that Facebook did not adequately protect its users’ privacy. Although Facebook had put in place policies and requirements for third-party app developers, the complainants’ concern was that it did not adequately monitor third-party compliance with those policies.

The Federal Court dismissed the Privacy Commissioner’s application largely because of a lack of evidence to establish that Facebook had failed to meet its PIPEDA obligations to safeguard its users’ personal information. Referring to an “evidentiary vacuum” (at para 71), Justice Manson found that there was a lack of expert evidence regarding what Facebook might have done differently. He also found that there was no evidence from users regarding their expectations of privacy on Facebook. The Court chastised the Commissioner, stating that “ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts” (at para 72). Justice Manson found that the evidence presented by the Commissioner was unpersuasive and speculative, and that it required the court to draw “unsupported inferences”. He was unsympathetic to the Commissioner’s explanation that it did not use its statutory powers to compel evidence (under s. 12.1 of PIPEDA) because “Facebook would not have complied or would have had nothing to offer” (at para 72). Justice Manson noted that had Facebook failed to comply with requests under s. 12.1, the Commissioner could have challenged the refusal.

Yet there is more to this decision than just a dressing down of the Commissioner’s approach to the case. In discussing “meaningful consent” under PIPEDA, Justice Manson frames the question before the court as “whether Facebook made reasonable efforts to ensure users and users’ Facebook friends were advised of the purposes for which their information would be used by third-party applications” (at para 63). This argument is reflected in the Commissioner’s position that Facebook should have done more to ensure that third party app developers on its site complied with their contractual obligations, including those that required developers to obtain consent from app users to the collection of personal data. Facebook’s position was that PIPEDA only requires that it make reasonable efforts to protect the personal data of its users, and that it had done so through its “combination of network-wide policies, user controls and educational resources” (at para 68). It is here that Justice Manson emphasizes the lack of evidence before him, noting that it is not clear what else Facebook could have reasonably been expected to do. In making this point, he states:

There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and ever-evolving. (at para 71) [My emphasis].

This passage should be deeply troubling to those concerned about privacy. By referring to the reasonable expectation of privacy in terms of what users might expect in an ever-evolving technological context, Justice Manson appears to abandon the normative dimensions of the concept. His comments lead towards a conclusion that the reasonable expectation of privacy is an ever-diminishing benchmark as it becomes increasingly naïve to expect any sort of privacy in a data-hungry surveillance society. Yet this is not the case. The concept of the “reasonable expectation of privacy” has significant normative dimensions, as the Supreme Court of Canada reminds us in R. v. Tessling and in the case law that follows it. In Tessling, Justice Binnie noted that subjective expectations of privacy should not be used to undermine the privacy protections in s. 8 of the Charter, stating that “[e]xpectation of privacy is a normative rather than a descriptive standard.” Although this comment is made in relation to the Charter, a reasonable expectation of privacy that is based upon the constant and deliberate erosion of privacy would be equally meaningless in data protection law. Although Justice Manson’s comments about the expectation of privacy may not have affected the outcome of this case, they are troublesome in that they might be picked up by subsequent courts or by the Personal Information and Data Protection Tribunal proposed in Bill C-27.

The decision also contains at least two observations that should set off alarm bells with respect to Bill C-27, a bill to reform PIPEDA. Justice Manson engages in some discussion of the duty of an organization to safeguard information that it has disclosed to a third party. He finds that PIPEDA imposes obligations on organizations with respect to information in their possession and information transferred for processing. In the case of prospective business transactions, an organization sharing information with a potential purchaser must enter into an agreement to protect that information. However, Justice Manson interprets this specific reference to a requirement for such an agreement to mean that “[i]f an organization were required to protect information transferred to third parties more generally under the safeguarding principle, this provision would be unnecessary” (at para 88). In Bill C-27, for example, s. 39 permits organizations to share de-identified (not anonymized) personal information with certain third parties, without the knowledge or consent of individuals, for ‘socially beneficial’ purposes, yet it imposes no requirement to put contractual safeguards in place for that information. Justice Manson’s comments clearly highlight the deficiencies of s. 39, which must be amended to include a requirement for such safeguards.

A second issue relates to the human rights-based approach to privacy, which both former Privacy Commissioner Daniel Therrien and current Commissioner Philippe Dufresne have openly supported. Justice Manson acknowledges that the Supreme Court of Canada has recognized the quasi-constitutional nature of data protection laws such as PIPEDA, because “the ability of individuals to control their personal information is intimately connected to their individual autonomy, dignity, and privacy” (at para 51). However, neither PIPEDA nor Bill C-27 takes a human rights-based approach. Rather, they place personal and commercial interests in personal data on the same footing. Justice Manson states: “Ultimately, given the purpose of PIPEDA is to strike a balance between two competing interests, the Court must interpret it in a flexible, common sense and pragmatic manner” (at para 52). The government has made rather general references to privacy rights in the preamble of Bill C-27 (though not in any preamble to the proposed Consumer Privacy Protection Act), but has steadfastly refused to reference the broader human rights context of privacy in the text of the Bill itself. We are left with a purpose clause that acknowledges “the right of privacy of individuals with respect to their personal information” in a context in which “significant economic activity relies on the analysis, circulation and exchange of personal information”. The purpose clause finishes with a reference to the need of organizations to “collect, use or disclose personal information for purposes that a reasonable person would consider appropriate in the circumstances.” While this reference to the “reasonable person” should highlight the need for a normative approach to reasonable expectations, as discussed above, the interpretive approach adopted by Justice Manson also makes clear the consequences of not adopting an explicit human rights-based approach. Privacy is thrown into a balance with commercial interests, without fundamental human rights to provide a firm backstop.

Justice Manson seems to suggest that the Commissioner’s approach in this case may flow from frustration with the limits of PIPEDA. He describes the Commissioner’s submissions as “thoughtful pleas for well-thought-out and balanced legislation from Parliament that tackles the challenges raised by social media companies and the digital sharing of personal information, not an unprincipled interpretation from this Court of existing legislation that applies equally to a social media giant as it may apply to the local bank or car dealership” (at para 90). They say that bad cases make bad law, but bad law might also make bad cases. The challenge is to ensure that Bill C-27 does not reproduce or amplify deficiencies in PIPEDA.


The government of the United Kingdom has published a consultation paper seeking input into its proposal for AI regulation. The paper is aptly titled A pro-innovation approach to AI regulation, since it restates that point insistently throughout the document. The UK proposal provides an interesting contrast to Canada’s AI governance bill currently before Parliament.

Both Canada and the UK set out to regulate AI systems with the twin goals of supporting innovation on the one hand, and building trust in AI on the other. (Note that the second goal is to build trust in AI, not to protect the public. Although protecting the public is acknowledged as one way to build trust, there is a subtle distinction between the two.) However, beyond these shared goals, the proposals are quite different. Canada’s approach in Part 3 of Bill C-27 (the Artificial Intelligence and Data Act (AIDA)) is to create a framework to regulate as yet undefined “high impact” AI. The definition of “high impact” and many other essential elements of the bill are left to be articulated in regulations. According to a recently published companion document to the AIDA, leaving so much of the detail to regulations is how the government proposes to keep the law ‘agile’ – i.e., capable of responding to a rapidly evolving technological context. The proposal would also provide some governance for anonymized data by imposing general requirements to document the use of anonymized personal information in AI innovation. The Minister of Innovation is made generally responsible for oversight and enforcement. For example, the AIDA gives the Minister the authority (eventually) to impose stiff administrative monetary penalties on bad actors. The Canadian approach is similar to that of the EU AI Act in that it aims for a broad regulation of AI technologies and chooses legislation as the vehicle to do so. It is different in that the EU AI Act is far more detailed and prescriptive, while the AIDA leaves the bulk of its actual legal requirements to be developed in regulations.

The UK proposal is notably different from either of these approaches. Rather than create a new piece of legislation and/or a new regulatory authority, the UK proposes to set out five principles for responsible AI development and use. Existing regulators will be encouraged and, if necessary, specifically empowered to regulate AI according to these principles within their spheres of regulatory authority. Examples of regulators who will be engaged in this framework include the Information Commissioner’s Office, as well as regulators for human rights, consumer protection, health care products and medical devices, and competition law. The UK scheme also accepts that there may need to be an entity within government that can perform some centralized support functions. These may include monitoring and evaluation, education and awareness, international interoperability, horizon scanning and gap analysis, and supporting testbeds and sandboxes. Because of the risk that some AI technologies or issues may fall through the cracks between existing regulatory schemes, the government anticipates that regulators will assist it in identifying gaps and proposing appropriate actions. These could include adapting the mandates of existing regulators or providing new legislative measures if necessary.

Although Canada’s federal government has labelled its approach to AI regulation as ‘agile’, it is clear that the UK approach is much closer to the concept of agile regulation. Encouraging existing regulators to adapt the stated AI principles to their remit and to provide guidance on how they will actualize these principles will allow them to move quickly, so long as there are no obvious gaps in legal authority. By contrast, even once passed, it will take at least two years for Canada’s AIDA to have its normative blanks filled in by regulations. And, even if regulations might be somewhat easier to update than statutes, guidance is even more responsive, giving regulators greater room to manoeuvre in a changing technological landscape. Embracing the precepts of agile regulation, the UK scheme emphasizes the need to gather data about the successes and failures of regulation itself in order to adapt as required. On the other hand, while empowering (and resourcing) existing regulators will have clear benefits in terms of agility, the regulatory gaps could well be important ones – with the governance of large language models such as ChatGPT as one example. While privacy regulators are beginning to flex their regulatory muscles in the direction of ChatGPT, data protection law will only address a subset of the issues raised by this rapidly evolving technology. In Canada, AIDA’s governance requirements will be specific to risk-based regulation of AI, and will apply to all those who design, develop or make AI systems available for use (unless of course they are explicitly excluded under one of the many actual and potential exceptions).

Of course, the scheme in the AIDA may end up as more of a hybrid between the EU and UK approaches, in that the definition of “high impact” AI (to which the AIDA will apply) may be shaped not just by the degree of impact of the AI system at issue but also by the existence of other suitable regulatory frameworks. The companion document suggests that some existing regulators (health, consumer protection, human rights, financial institutions) have already taken steps to extend their remit to address the use of AI technologies within their spheres of competence. In this regard, the companion document speaks of “regulatory gaps that must be filled” by a statute such as the AIDA, as well as the need for the AIDA to integrate “seamlessly with existing Canadian legal frameworks”. Although it is still unclear whether the AIDA will serve only to fill regulatory gaps or will provide two distinct layers of regulation in some cases, one of the criteria for identifying what constitutes a “high impact” system is “[t]he degree to which the risks are adequately regulated under another law”. The lack of clarity in the Canadian approach is one of its flaws.

There is a certain attractiveness to a regulatory approach like that proposed by the UK – one that begins with existing regulators being both specifically directed and further enabled to address AI regulation within their areas of responsibility. As noted earlier, it seems far more agile than Canada’s rather clunky bill. Yet such an approach is much easier to adopt in a unitary state than in a federal system such as Canada’s. In Canada, some of the regulatory gaps relate to matters otherwise under provincial jurisdiction. Thus, it is not so simple in Canada to propose to empower and resource all implicated regulators, nor is it as easy to fill gaps once they are identified. These regulators, and the gaps between them, might fall under the jurisdiction of any one of 13 different governments. The UK acknowledges (and defers) its own challenges in this regard with respect to devolution at paragraph 113 of its white paper, where it states: “We will continue to consider any devolution impacts of AI regulation as the policy develops and in advance of any legislative action”. Instead, in the AIDA, Canada leverages its general trade and commerce power in an attempt to provide AI governance that is as comprehensive as possible. It isn’t pretty (since it will not capture all AI innovation that might have impacts on people), but it is part of the reality of the federal state (or the state of federalism) in which we find ourselves.

