Teresa Scassa - Blog

Note: this is the first in a series of blog posts on Bill C-27, also known as An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act.

Bill C-27 is a revised version of the former Bill C-11 which died on the order paper just prior to the last federal election in 2021. The former Privacy Commissioner called Bill C-11 ‘a step backwards’ for privacy, and issued a series of recommendations for its reform. At the same time, industry was also critical of the Bill, arguing that it risked making the use of data for innovation too burdensome.

Bill C-27 takes steps to address the concerns of both privacy advocates and those from industry with a series of revisions, although there is much that is not changed from Bill C-11. Further, it adds an entirely new statute – the Artificial Intelligence and Data Act (AIDA) – meant to govern some forms of artificial intelligence. This series of posts will assess a number of the changes found in Bill C-27. It will also consider the AIDA.

_________________________________

The federal government has made it clear that it considers consent to be a cornerstone of Canadian data protection law. They have done so in the Digital Charter, in Bill C-11 (the one about privacy), and in the recent reincarnation of data protection reform legislation in Bill C-27. On the one hand, consent is an important means by which individuals can exercise control over their personal information; on the other hand, it is widely recognized that the consent burden has become far too high for individuals who are confronted with long, complex and often impenetrable privacy policies at every turn. At the same time, organizations that see new and emerging uses for already-collected data seek to be relieved of the burden of obtaining fresh consents. The challenge in privacy law reform has therefore been to make consent meaningful, while at the same time reducing the consent burden and enabling greater use of data by private and public sector entities. Bill C-11 received considerable criticism for how it dealt with consent (see, for example, my post here, and the former Privacy Commissioner’s recommendations to improve consent in C-11 here). Consent is back, front and centre in Bill C-27, although with some important changes.

Section 15 of Bill C-27 reaffirms that consent is the default rule for collection, use or disclosure of personal information, although the statute creates a long list of exceptions to this general rule. One criticism of Bill C-11 was that it removed the definition of consent in s. 6.1 of PIPEDA, which provided that consent “is only valid if it is reasonable to expect that an individual to whom the organization’s activities are directed would understand the nature, purpose and consequences of the collection, use or disclosure of the personal information to which they are consenting.” Instead, Bill C-11 simply relied upon a list of information that must be provided to individuals prior to consent. Bill C-27’s compromise is found in the addition of a new s. 15(4) which requires that the information provided to individuals to obtain their consent must be “in plain language that an individual to whom the organization’s activities are directed would reasonably be expected to understand.” This has the added virtue of ensuring, for example, that privacy policies for products or services directed at youth or children must take into account the sophistication of their audience. The added language is not as exigent as s. 6.1 (for example, s. 6.1 requires an understanding of the nature, purpose and consequences of the collection, use and disclosure, while s. 15(4) requires only an understanding of the language used), so it is still a downgrading of consent from the existing law. It is, nevertheless, an improvement over Bill C-11.

A modified s. 15(5) and a new s. 15(6) also muddy the consent waters. Subsection 15(5) provides that consent must be express unless it is appropriate to imply consent. The exception to this general rule is the new subsection 15(6) which provides:

(6) It is not appropriate to rely on an individual’s implied consent if their personal information is collected or used for an activity described in subsection 18(2) or (3).

Subsections 18(2) and (3) list business activities for which personal data may be collected or used without an individual’s knowledge or consent. At first glance, it is unclear why it is necessary to provide that implied consent is inappropriate in such circumstances, since no consent is needed at all. However, because s. 18(1) sets out certain criteria for collection without knowledge or consent, it is likely that the goal of s. 15(6) is to ensure that no organization circumvents the limited guardrails in s. 18(1) by relying instead on implied consent. The potential breadth of s. 18(3) (discussed below), combined with s. 2(3), makes it difficult to distinguish between activities for which implied consent might be appropriate and those falling within s. 18(3); the cautious organization will therefore comply with s. 18(3) rather than rely on implied consent in any event.

The list of business activities for which no knowledge or consent is required for the collection or use of personal information is pared down from that in Bill C-11. The list in C-11 was controversial, as it included some activities which were so broadly stated that they would have created gaping holes in any consent requirement (see my blog post on consent in C-11 here). The worst of these have been removed. This is a positive development, although the provision creates a backdoor through which other exceptions can be added by regulation. Further, Bill C-27 has added language to s. 12(1) to clarify that the requirement that the collection, use or disclosure of personal information must be “only in a manner and for purposes that a reasonable person would consider appropriate in the circumstances” applies “whether or not consent is required under this Act.”

[Note that although the exceptions in s. 18 are to knowledge as well as consent, s. 62(2)(b) of Bill C-27 will require that an organization provide plain language information about how it makes use of personal information, and how it relies upon exceptions to consent “including a description of any activities referred to in subsection 18(3) in which it has a legitimate interest”.]

Bill C-27 does, however, contain an entirely new exception permitting the collection or use of personal data without knowledge or consent. This is found in s. 18(3):

18 (3) An organization may collect or use an individual’s personal information without their knowledge or consent if the collection or use is made for the purpose of an activity in which the organization has a legitimate interest that outweighs any potential adverse effect on the individual resulting from that collection or use and

(a) a reasonable person would expect the collection or use for such an activity; and

(b) the personal information is not collected or used for the purpose of influencing the individual’s behaviour or decisions.

So as not to leave this as open-ended as it seems at first glance, a new s. 18(4) sets conditions precedent for the collection or use of personal information based on a ‘legitimate interest’:

(4) Prior to collecting or using personal information under subsection (3), the organization must

(a) identify any potential adverse effect on the individual that is likely to result from the collection or use;

(b) identify and take reasonable measures to reduce the likelihood that the effects will occur or to mitigate or eliminate them; and

(c) comply with any prescribed requirements.

Finally, a new s. 18(5) requires the organization to keep a record of its assessment under s. 18(4) and to provide a copy of that assessment to the Commissioner on request.

It is clear that industry had the ear of the Minister when it comes to the addition of s. 18(3). A ‘legitimate interest’ exception was sought in order to enable the use of personal data without consent in a broader range of circumstances. Such an exception is found in the EU’s General Data Protection Regulation (GDPR). Here is how it is worded in the GDPR:

6(1) Processing shall be lawful only if and to the extent that at least one of the following applies:

[. . . ]

(f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.

Under the GDPR, an organization that relies upon legitimate interests instead of consent must take into account, among other things:

6(4) [. . . ]

(a) any link between the purposes for which the personal data have been collected and the purposes of the intended further processing;

(b) the context in which the personal data have been collected, in particular regarding the relationship between data subjects and the controller;

(c) the nature of the personal data, in particular whether special categories of personal data are processed, pursuant to Article 9, or whether personal data related to criminal convictions and offences are processed, pursuant to Article 10;

(d) the possible consequences of the intended further processing for data subjects;

(e) the existence of appropriate safeguards, which may include encryption or pseudonymisation.

Bill C-27’s ‘legitimate interests’ exception is different in important respects from that in the GDPR. Although Bill C-27 gives a nod to the importance of privacy as a human right in a new preamble, the human rights dimensions of privacy are not particularly evident in the body of the Bill. The ‘legitimate interests’ exception is available unless there is an “adverse effect on the individual” that is not outweighed by the organization’s legitimate interest (as opposed to the ‘interests or fundamental freedoms of the individual’ under the GDPR). Presumably it will be the organization that does this initial calculation. One of the problems in data protection law has been quantifying adverse effects on individuals. Data breaches, for example, are shocking and distressing to those impacted, but it is often difficult to show actual damages flowing from the breach, and moral damages have been considerably restricted by courts in many cases. Some courts have even found that the ordinary stress and inconvenience of a data breach are not a compensable harm, since breaches have become such a routine part of life. If ‘adverse effects’ on individuals are reduced to quantifiable effects, the ‘legitimate interests’ exception will be far too broad.

This is not to say that the ‘legitimate interests’ provision in Bill C-27 is incapable of facilitating data use while at the same time protecting individuals. There is clearly an attempt to incorporate some checks and balances, such as reasonable expectations and a requirement to identify and mitigate any adverse effects. But what C-27 does is take something that, in the GDPR, was meant to be quite exceptional to consent and make it potentially a more mainstream basis for the use of personal data without knowledge or consent. It is able to do this because rather than reinforce the centrality and importance of privacy rights, it places privacy on an uneasy par with commercial interests in using personal data. The focus on ‘adverse effects’ runs the risk of equating privacy harm with quantifiable harm, thus trivializing the human and social value of privacy.


Note: The following is my response to the call for submissions on the recommendations following the third review of Canada’s Directive on Automated Decision-Making. Comments are due by June 30, 2022. If you are interested in commenting, please consult the Review Report and the Summary of Key Issues and Proposed Amendments. Comments can be submitted by e-mail.

The federal Directive on Automated Decision-Making (DADM) and its accompanying Algorithmic Impact Assessment tool (AIA) are designed to provide governance for the adoption and deployment of automated decision systems (ADS) by Canada’s federal government. Governments are increasingly looking to ADS in order to speed up routine decision-making processes and to achieve greater consistency in decision-making. At the same time, there are reasons to be cautious. Automated decision systems carry risks of incorporating and replicating discriminatory bias. They may also lack the transparency required of government decision-making, particularly where important rights or interests are at stake. The DADM, which has been in effect since April 2019 (with compliance mandatory no later than April 2020), sets out a series of obligations related to the design and deployment of automated decision-making systems. The extent of the obligations depends upon a risk assessment, and the AIA is the tool by which the level of risk of the system is assessed.

Given that this is a rapidly evolving area, the DADM provides that it will be reviewed every six months. It is now in its third review. The first two reviews led to the clarification of certain obligations in the DADM and to the development of guidelines to aid in its interpretation. This third review proposes a number of more substantive changes. This note comments on some of these changes and proposes an issue for future consideration.

Clarify and Broaden the Scope

A key recommendation in this third round of review relates to the scope of the DADM. Currently, the DADM applies only to ‘external’ services of government – in other words services offered to individuals or organizations by government. It does not apply internally. This is a significant gap when one considers the expanding use of ADS in the employment context. AI-enabled decision systems have been used in hiring processes, and they can be used to conduct performance reviews, and to make or assist in decision-making about promotions and internal workforce mobility. The use of AI tools in the employment context can have significant impacts on the lives and careers of employees. It seems a glaring oversight to not include such systems in the governance regime for ADM. The review team has recommended expanding the scope of the DADM to include internal as well as external services. They note that this move would also extend the DADM to any ADS used for “grants and contributions, awards and recognition, and security screening” (Report at 11). This is an important recommendation and one which should be implemented.

The review team also recommends a clarification of the language regarding the application of the DADM. Currently it puts within its scope “any system, tool, or statistical models used to recommend or make an administrative decision about a client”. Noting that “recommend” could be construed as including only those systems that recommend a specific outcome, as opposed to systems that process information on behalf of a decision-maker, the team proposes replacing “recommend” with “support”. This too is an important recommendation which should be implemented.

Periodic Reviews

Currently the DADM provides for its review every six months. This was always an ambitious review schedule. No doubt it was motivated by the fact that the DADM was a novel tool designed to address a rapidly emerging and evolving technology with potentially significant implications. The idea was to ensure that it was working properly and to promptly address any issues or problems. In this third review, however, the team recommends changing the review period from six months to two years. The rationale is that the six-month timetable makes it challenging for the team overseeing the DADM (which is constantly in a review cycle), and makes it difficult to properly engage stakeholders. They also cite the need for the DADM to “display a degree of stability and reliability, enabling federal institutions and the clients they serve to plan and act with a reasonable degree of confidence.” (Report at 12).

This too is a reasonable recommendation. While more frequent reviews were important in the early days of the DADM and the AIA, reviews every six months seem unduly burdensome once initial hiccups are resolved. A six-month review cycle engages the team responsible for the DADM in a constant cycle of review, which may not be the best use of resources. The proposed two-year review cycle would allow more experience to be garnered with the DADM and AIA, enabling a more substantive assessment of the issues that arise. Further, a two-year window is much more realistic if stakeholders are to be engaged in a meaningful way. Being asked to comment on reports and proposed changes every six months seems burdensome for anyone – including an already stretched civil society sector. The review document suggests that Canada’s Chief Information Officer could request completion of an off-cycle review if the need arose, leaving room for the possibility that a more urgent issue could be addressed outside of the two-year review cycle.

Data Model and Governance

The third review also proposes amendments to provide for what it describes as a more ‘holistic’ approach to data governance. Currently, the DADM focuses on data inputs – in other words, on assessing the quality, relevance and timeliness of the data used in the model. The review report recommends the addition of an obligation to establish “measures to ensure that data used and generated by the Automated Decision System are traceable, protected, and appropriately retained and disposed of in accordance with the Directive on Service and Digital, Directive on Privacy Practices, and Directive on Security Management”. It also recommends amendments to extend testing and assessment beyond data to the underlying models, in order to assess both data and algorithms for bias or other problems. These are positive amendments which should be implemented.

Explanation

The review report notes that while the DADM requires “meaningful explanations” of how automated decisions were reached, and while guidelines provide some detail as to what is meant by explainability, there is still uncertainty about what explainability entails. The Report recommends adding language in Appendix C, in relation to impact assessment, that will set out the information necessary for ‘explainability’. This includes:

  • The role of the system in the decision-making process;
  • The training and client data, their source and method of collection, if applicable;
  • The criteria used to evaluate client data and the operations applied to process it; and
  • The output produced by the system and any relevant information needed to interpret it in the context of the administrative decision.

Again, this recommendation should be implemented.

Reasons for Automation

The review would also require those developing ADM systems for government to specifically identify why it was considered necessary or appropriate to automate the existing decision-making process. The Report refers to a “clear and demonstrable need”. This is an important additional criterion as it requires transparency as to the reasons for automation – and that these reasons go beyond the fact that vendor-demonstrated technologies look really cool. As the authors of the review note, requiring justification also helps to assess the parameters of the system adopted – particularly if the necessity and proportionality approach favoured by the Office of the Privacy Commissioner of Canada is adopted.

Transparency

The report addresses several issues that are relevant to the transparency dimensions of the DADM and the accompanying AIA. Transparency is an important element of the DADM, and it is key both to the legitimacy of the adoption of ADS by government and to its ongoing use. Without transparency in government decision-making that impacts individuals, organizations and communities, there can be no legitimacy. There are a number of transparency elements that are built into the DADM. For example, there are requirements to provide notice of automated decision systems, a right to an explanation of decisions that is tailored to the impact of the decision, and a requirement not just to conduct an AIA, but to publish the results. The review report includes a number of recommendations to improve transparency. These include a recommendation to clarify when an AIA must be completed and released, greater transparency around peer review results, more explicit criteria for explainability, and adding additional questions to the AIA. These are all welcome recommendations.

At least one of these recommendations may go some way to allaying my concerns with the system as it currently stands. The documents accompanying the report (slide 3 of the summary document) indicate that there are over 300 AI projects across 80% of federal institutions. However, at the time of writing, only four AIAs had been published on the open government portal. There is clearly a substantial lag between the development of these systems and the release of the AIAs. The recommendation that an AIA be not just completed but also released prior to the production of the system is therefore of great importance to ensuring transparency.

It may be that some of the discrepancy in the numbers is attributable to the fact that compliance with the DADM became mandatory only in 2020, and projects already underway were not retroactively brought within its scope. For transparency’s sake, I would also recommend that a public register of ADS be created that contains basic information about all government ADS. This could include their existence and function, as well as some transparency regarding explainability, the reasons for adoption, and measures taken to review, assess and ensure the reliability of these systems. Although it is too late, in the case of these systems, to perform a proactive AIA, there should be some form of reporting tool that can be used to provide important information to the public for transparency purposes.

Consideration for the Future

The next review of the DADM and the AIA should also involve a qualitative assessment of the AIAs that have been published to date. If the AIA is to be a primary tool not just for assessing ADS but for providing transparency about them, then they need to be good. Currently there is a requirement to conduct an AIA for a system within the scope of the DADM – but there is no explicit requirement for it to be of a certain quality. A quick review of the four AIAs currently available online shows considerable variation in the quality of the assessments. For example, the project description for one such system is an unhelpful 9-word sentence that does not make clear how AI is actually part of the project. This is in contrast to another that describes the project in a 14-line paragraph. These are clearly highly divergent in terms of the level of clarity and detail provided.

The first of these two AIAs also seems to contain contradictory answers to the AIA questionnaire. For example, the answer to the question “Will the system only be used to assist a decision-maker” is ‘yes’. Yet the answer to the question “Will the system be replacing a decision that would otherwise be made by a human” is also ‘yes’. Either one of these answers is incorrect, or the answers do not capture how the respondent interpreted these questions. These are just a few examples. It is easy to see how use of the AIA tool can range from engaged to pro forma.

The obligations imposed on departments with respect to ADS vary depending upon the risk assessment score. This score is evaluated through the questionnaire, and one of the questions asks “Are clients in this line of business particularly vulnerable?” In the AIA for an access to information and privacy (ATIP) tool, the answer given to this question is “no”. Of course, the description of the tool is so brief that it is hard to get a sense of how it functions. However, I would think that the clientele for an ATIP portal would be quite diverse. Some users will be relatively sophisticated (e.g., journalists or corporate users). Others will be inexperienced. For some of these, the information sought may be highly important to them, as they may be seeking access to government information to right a perceived wrong, to find out more about a situation that adversely impacts them, and so on. In my view, this assessment of the vulnerability of the clients is not necessarily accurate. Yet the answer provided contributes to a lower overall score and thus a lower level of accountability. My recommendation for the next round of reviews is to assess the overall effectiveness of the AIA tool in terms of the information and answers provided and in terms of their overall accuracy.

I note that the review report recommends adding questions to the AIA in order to improve the tool. Quite a number of these are free text answers, which require responses to be drafted by the party completing the AIA. Proposed questions include ones relating to the user needs to be addressed, how the system will meet those needs, and the effectiveness of the system in meeting those needs, along with reasons for this assessment. Proposed questions will also ask whether non-AI-enabled solutions were also considered, and if so, why AI was chosen as the preferred method. A further question asks what the consequences would be of not deploying the system. This additional information is important both to assessing the tool and to providing transparency. However, as noted above, the answers will need to be clear and sufficiently detailed in order to be of any use.

The AIA is crucial to assessing the level of obligation and to ensuring transparency. If AIAs are pro forma or excessively laconic, then no matter how finely tuned the DADM may be, it will still not achieve the desired results. The review committee’s recommendation that plain language summaries of peer review assessments also be published will provide a means of assessing the quality of the AIAs, and thus it is an important recommendation to strengthen both transparency and compliance.

A final issue that I would like to address is that, to achieve transparency, people will need to be able to easily find and access the information about the systems. Currently, AIAs are published on the Open Government website. There, they are listed alphabetically by title. This is not a huge problem right now, since there are only four of them. As more are published, it would be helpful to have a means of organizing them by department or agency, or by other criteria (including risk/impact score) to improve their findability and usability. Further, it will be important that any peer review summaries are linked to the appropriate AIAs. In addition to publication on the open government portal, links to these documents should be made available from department, agency or program websites. It would also be important to have an index or registry of AI in the federal sector – including not just those projects covered by the DADM, but also those in production prior to the DADM’s coming into force.

[Note: I have written about the DADM and the AIA from an administrative law perspective. My paper, which looks at the extent to which the DADM addresses administrative law concerns regarding procedural fairness, can be found here.]

On March 30, 2022 Alberta introduced Bill 13, the Financial Innovation Act. The Bill aims to create a regulatory sandbox for innovators in the growing financial technology (fintech) sector. This is a sector in which there is already considerable innovation and development – with more to come as Canada moves towards open banking. (Canada just appointed a new open banking lead on March 22, 2022). In addition to open banking, we are seeing a proliferation of cryptocurrencies, growing interest in central bank digital currencies, and platform-based digital currencies.

The concept of a regulatory sandbox is gaining traction in different sectors. Some forms of innovation in the new digital and data-driven economy run up against regulatory frameworks designed for more conventional forms of technological development. The existing regulatory system becomes a barrier to innovation – not because the innovation is necessarily harmful or undesirable, but simply because it does not fit easily within the conventional framework. A regulatory sandbox is meant to give innovators some regulatory flexibility to develop their products or services, while at the same time allowing regulators to experiment with tailoring regulation to the emerging technological environment.

Some examples of regulatory sandboxes in Canada include one developed by the Canadian Securities Administrators largely for the emerging fintech sector (the CSA Regulatory Sandbox), a Health Canada regulatory sandbox for advanced therapeutic products, and the Law Society of Ontario’s legal tech regulatory sandbox. These are sandboxes developed by regulatory bodies which provide flexibility within their existing regulatory frameworks. What is different about Alberta’s Bill 13 is that it legislates a broader regulatory sandbox. The Bill provides for qualified participants to receive exemptions from rules within multiple existing regulatory frameworks, including rules under the Loan and Trust Corporations Act and the Credit Union Act (among others – see s. 8 of the Bill) – as well as provincial privacy legislation.

Access to and use of personal data will be necessary for fintech apps, and existing privacy legislation can create challenges in this context. Certainly, for open banking to work in Canada, the federal government’s Personal Information Protection and Electronic Documents Act will need to be amended. Bill C-11, which died on the order paper in late 2021, contained an amendment that would have allowed for the creation of sector-specific data mobility frameworks via regulation. An amendment of this kind, for example, would have facilitated open banking. With such an approach, privacy protection is not abandoned; rather, it is customized.

Alberta’s Bill 13 appears to be designed to provide some form of customization in order to protect privacy while facilitating innovation. Section 5 of the Bill provides that when a company seeks an exemption from provisions of the Personal Information Protection Act (PIPA), this application for exemption must be reviewed by Alberta’s Information and Privacy Commissioner. The Commissioner is empowered to require the company to provide it with all necessary information to assess the request. The Commissioner may then approve or deny the exemption outright, or approve it subject to terms and conditions. The Commissioner may also withdraw any previously granted approval. The role of the IPC is thus firmly embedded in the legislation. Section 8, which empowers the Minister to grant a certificate of acceptance to a sandbox participant, provides that the Minister may grant an exemption to any provision of PIPA only with the prior written approval of the Commissioner and only on terms and conditions jointly agreed to by the Minister and the Commissioner. Similarly, the Minister’s power to add, amend or revoke an exemption to PIPA in s. 10(4) of the Act can only be exercised in conjunction with the Information and Privacy Commissioner. The Commissioner retains the power to withdraw a written approval (s. 10(5)) and doing so will require the Minister to promptly revoke the exemption.

Bill 13 also provides for transparency with respect to regulatory sandbox exemptions via requirements to publish information about sandbox participants, exemptions, terms and conditions imposed on them, expiry dates, and any amendments, revocations or cancellations of certificates of acceptance.

Given the federal-provincial division of powers, the scope of Bill 13 is somewhat limited, as it cannot provide exemptions to federal regulatory requirements. While credit unions are under provincial jurisdiction, banks are federally regulated, and the federal private sector data protection law – PIPEDA – also applies to interprovincial flows of data. Nevertheless, s. 19 of the Bill provides for reciprocal agreements between Alberta and “other governments that have a regulatory sandbox framework, or agencies of those other governments”. There is room here for collaboration and co-operation.

Bill 13 is clearly designed to attract fintech startups to Alberta by providing a more supple regulatory environment in which to operate. This is an interesting bill, and one to watch as it moves through the legislature in Alberta. Not only is it a model for a legislated regulatory sandbox, but its approach to addressing privacy issues is also worth some examination.

On March 29, I appeared before Ontario's Standing Committee on Social Policy on the topic of the government's proposed Bill 88. My statement, which builds on an earlier post about this same bill, is below. Note that the Bill has since received Royal Assent. No definition of electronic monitoring (such as the one proposed below) was added to the bill by amendment. None of the amendments proposed by the Ontario Information and Privacy Commissioner were added.

Remarks by Teresa Scassa to the Standing Committee on Social Policy of the Ontario Legislature – Hearing on Bill 88, An Act to enact the Digital Platform Workers' Rights Act, 2022 and to amend various Acts

March 29, 2022

Thank you for this invitation to appear before the Standing Committee on Social Policy. My name is Teresa Scassa and I hold the Canada Research Chair in Information Law and Policy at the University of Ottawa.

The portion of Bill 88 that I wish to address in my remarks is that dealing with electronic monitoring of employees in Schedule 2. This part of the Bill would amend the Employment Standards Act to require employers with 25 or more employees to put in place a written policy on electronic monitoring and to provide employees with a copy. This is an improvement over having no requirements at all regarding employee monitoring. However, it is only a small step forward, and I will address my remarks to why it is important to do more, and where that might start.

Depending on the definition of electronic monitoring that is adopted (and I note that the bill does not contain a definition), electronic monitoring can include such diverse practices as GPS tracking of drivers and vehicles; cellphone tracking; and video camera surveillance. It can also include tracking or monitoring of internet usage, email monitoring, and the recording of phone conversations for quality control. Screen-time and key-stroke monitoring are also possible, as is tracking to measure the speed of task performance. Increasingly, monitoring tools are paired with AI-enabled analytics. Some electronic monitoring is for workplace safety and security purposes; other monitoring protects against unauthorized internet usage. Monitoring is now also used to generate employee metrics for performance evaluation, with the potential for significant impacts on employment, retention and advancement. Although monitoring was carried out prior to the pandemic, pandemic conditions and remote work have spurred the adoption of new forms of electronic monitoring. And, while electronic monitoring used to be much easier to detect (for example, surveillance cameras mounted in public view were obvious), much of it is now woven into the fabric of the workplace or embedded on workplace devices and employees may be unaware of the ways in which they are monitored and the uses to which their data will be put. The use of remote and AI-enabled monitoring services may also see employee data leaving the country, and may expose it to secondary uses (for example, in training the monitoring company’s AI algorithms).

An amendment that requires employers to establish a policy that gives employees notice of any electronic monitoring will at least address the issue of awareness of such monitoring, but it does very little for employee privacy. This is particularly disappointing since there had been some hope that a new Ontario private sector data protection law would have included protections for employee privacy. Privacy protection in the workplace is typically adapted to that context – it does not generally require employee consent for employment-related data collection. However, it does set reasonable limits on the data that is collected and on the purposes to which it is put. It also provides for oversight by a regulator like the Ontario Information and Privacy Commissioner (OIPC), and provides workers with a means of filing complaints in cases where they feel their rights have been infringed. Privacy laws also provide additional protections that are increasingly important in an era of cyber-insecurity, as they can address issues such as the proper storage and deletion of data, and data breach notification. In Canada, private sector employees have this form of privacy protection in Quebec, BC and Alberta, as well as in the federally-regulated private sector. Ontarians should have it too.

Obviously, Bill 88 will not be the place for this type of privacy protection. My focus here is on changes that could be made to Bill 88 that could enhance the small first step it takes on this important issue.

First, I would encourage this committee to recommend the addition of a definition of ‘electronic monitoring’. The broad range of technologies and applications that could constitute electronic monitoring and the lack of specificity in the Bill could lead to underinclusive policies from employers who struggle to understand the scope of the requirement. For example, do keypad entry systems constitute electronic monitoring? Are vehicular GPS systems fleet management devices or electronic employee monitoring or both? I propose the following definition:

“electronic monitoring” is the collection and/or use of information about an employee by an employer, or by a third party for the benefit of an employer, by means of electronic equipment, computer programs, or electronic networks. Without limiting the generality of the foregoing, this includes the collection and use of information gathered by employer-controlled electronic equipment, vehicles or premises, video cameras, electronic key cards and key pads, mobile devices, or software installed on computing devices or mobile devices.

Ontario’s Privacy Commissioner has made recommendations to improve the employee monitoring provisions of Bill 88. She has proposed that it be amended to require a digital copy of all electronic-monitoring policies drafted to comply with this Bill to be submitted to her office. This would be a small additional obligation that would not expose employers to complaints or liability. It would allow the OIPC to gather important data on the nature and extent of electronic workplace monitoring in Ontario. It would also give the OIPC insight into current general practices and emerging best practices. It could be used to understand gaps and shortcomings. Data gathered in this way could help inform future law and policy-making in this area. For example, I note that the Lieutenant Governor in Council will have the power under Bill 88 to make regulations setting out additional requirements for electronic monitoring policies, terms or conditions of employment related to electronic monitoring, and prohibitions related to electronic monitoring. The Commissioner’s recommendation would enhance both transparency and data gathering when it comes to workplace surveillance.

 

Note: My paper The Surveillant University: Remote Proctoring, AI and Human Rights is forthcoming in the Canadian Journal of Comparative and Contemporary Law. It explores a necessity and proportionality approach to the adoption by universities of remote proctoring solutions. Although the case discussed in the post below addresses a different set of issues, it does reflect some of the backlash and resistance to remote proctoring.

In 2020, the remote AI-enabled exam proctoring company Proctorio filed a copyright infringement and breach of confidence lawsuit against Ian Linkletter, a BC-based educational technologist. It also obtained an interim injunction prohibiting Linkletter from downloading or sharing information about Proctorio’s services from its Help Center or online Academy. Linkletter had posted links on Twitter to certain ‘unlisted’ videos on the company’s YouTube channel. His tweets were highly critical of the company and its AI-enabled exam surveillance software. He responded to the suit and the interim injunction with an application to have the underlying action thrown out under BC’s Protection of Public Participation Act (PPPA). This anti-SLAPP (strategic litigation against public participation) statute allows a court to dismiss proceedings that arise from an expression on a matter of public interest made by the applicant. On March 11, 2022, Justice Milman of the BC Supreme Court handed down his decision rejecting the PPPA application.

Linkletter first became concerned with Proctorio (a service to which the University of British Columbia (UBC) subscribed at the time) after a UBC student had her chat logs with Proctorio published online by the company when she complained about the service she received during an exam. In order to learn more about Proctorio, Linkletter developed a ‘sandbox’ course for which he was the instructor. This enabled him to access Proctorio’s online Help Center and its ‘Academy’ via UBC. These sites provide information and training to instructors. The Help Center had a number of videos available through YouTube. The URLs for these videos were unlisted, which meant that they were not searchable through YouTube’s site, although anyone with the link could access the video. Mr. Linkletter posted some of these links to Twitter, expressing his concerns with the contents of the videos. The company disabled the links, and created new ones. Linkletter also posted a screenshot of the Academy website with a message indicating that the original links were not available.

Justice Milman did not hesitate to find that the applicant had expressed himself on a matter of public interest. He noted that the software adopted by UBC “has generated controversy, there and elsewhere, due to concerns about its perceived invasiveness and what is thought by some to be its disparate and discriminatory impacts on some students.” (at para 3). The onus shifted to the respondent Proctorio to demonstrate the substantial merit of its proceedings, the lack of a valid defence by the applicant, and the seriousness of the harm it would suffer relative to the public interest in the expression. The threshold to be met by Proctorio was to demonstrate “that there are grounds to believe that its underlying claim is legally tenable and supported by evidence that is reasonably capable of belief such that the claim can be said to have a real prospect of success” (at para 56).

Proctorio’s lawsuit is essentially based on three intellectual property claims. The first of these was a breach of confidence claim relating to the unlisted YouTube video links. To succeed with this claim, the information at issue must be confidential; the circumstances under which it was communicated must give rise to an obligation of confidence; and the defendant must have made unauthorized use of the information to the detriment of the party communicating it. Justice Milman found that the respondent met the threshold of ‘substantial merit’ on this cause of action.

What Linkletter posted publicly on Twitter were links to videos. Proctorio claimed that it was these videos (along with a screenshot of a message on its Academy website) that were the confidential information it sought to protect. Although there are a number of factors that a court will take into account in assessing the confidentiality of information, the information must have a confidential nature and the party seeking to protect it must have taken appropriate steps to protect its confidentiality.

Unlisted YouTube video links are not publicly searchable, yet anyone with the link can access the content – and YouTube’s terms of service permit the sharing of unlisted links. However, Justice Milman found that Linkletter accessed Proctorio’s videos (and their links) via Proctorio’s website, which had its own terms of service to which Linkletter had clicked to agree. Those terms prohibit the copying or duplication of the materials found in their Help Center – although they do not identify any of the content as confidential information. Canadian courts have found users of websites to be bound by terms of service regardless of whether they have read them; it is not a stretch to find that Linkletter had a contractual obligation not to share the contents. However, when it comes to taking the steps necessary to protect the confidentiality of information, one can question whether terms of service buried in links on a website – and that do not specifically identify the material as confidential – constitute a confidentiality or non-disclosure agreement. There was evidence that much of the material could be found elsewhere on the internet. It was also available to tens of thousands of instructors who were given access to the site at the discretion of university clients, not Proctorio. Justice Milman noted that “none of the videos stated on their face that they were commercially sensitive or should be kept from public view” (at para 64). He also found that “the choice to make them available on a public platform like YouTube when more secure options could have been used, dilutes the strength of Proctorio’s case” (at para 64). In these circumstances, the court’s ruling that the confidential information claim had sufficient merit seems generous. In order to make out a claim of breach of confidence, it is also necessary for the plaintiff to show that the defendant made use of the information to the company’s detriment. Although the information was used to criticize the company, it is hard to see how Proctorio suffered any real damage particular to this breach of confidence. Much of the content was available through other sources, and the court described the company’s assertions that the videos could permit students to game their algorithms or could reveal their algorithmic secrets to competitors as ‘speculative’. Nonetheless, Justice Milman found enough here to satisfy Proctorio’s onus to repel the PPPA application.

The copyright infringement argument depended upon a finding that the sharing of a hyperlink amounted to the sharing of the content that could be accessed by following the link. In spite of the fact that there is Canadian case law suggesting that sharing hyperlinks is not copyright infringement, Justice Milman was prepared to distinguish these cases. He found it significant that the materials were not publicly available except to those who had access to the links; sharing the links amounted to more than just pointing people to information otherwise available on the internet. Having found likely infringement, Justice Milman next considered available defences. He found that Linkletter did not meet the test for fair dealing as set out by the Supreme Court of Canada in CCH Canadian. It was conceded by Proctorio that Linkletter passed the first part of the fair dealing test – that the dealing was for a purpose listed in ss. 29, 29.1 or 29.2 of the Copyright Act. Presumably it was for the purposes of criticism or comment, although this is not made explicit in the decision. In assessing the fair dealing criteria, however, Justice Milman found that Linkletter’s circulation of the links on social media militated against fair dealing, as did the fact that anyone who followed the link had access to the full work.

On ‘alternatives to the dealing’, Justice Milman noted that rather than share the videos publicly, Linkletter could have reported on what he saw in the videos (although he earlier had found the videos (or the links to the videos – it is not entirely clear) to be confidential information). He could also have referred to other publicly available sources on the contents of the videos to make his point. On the issue of the nature of the work, Justice Milman found that the works were confidential (thus working against a finding of fair dealing) “even if most of the information in the videos was already available elsewhere on the internet”. Oddly, then, the fair dealing analysis not only underscores the fact that the material was largely publicly available, it suggests that an alternative to providing links to the videos was to discuss their contents freely. This suggests that the issue was not really the confidentiality of the content, but the fact that Linkletter had breached contractual terms of service in order to provide access to it.

On the final fair dealing criterion, the effect of the dealing on the work, Justice Milman found that by making the videos available through their links, “Mr. Linkletter created a risk that Proctorio’s product would be rendered less effective for its intended purposes (because students could more easily anticipate how instructors can configure the settings) and its proprietary information more readily available to competitors.” (at para 112). He conceded that this risk was ‘speculative’ given the amount of information about Proctorio’s services already in the public domain. Justice Milman found that, on balance, the fair dealing defence was not available to Linkletter. He also found that the defence of ‘user-generated content’ was not applicable.

Justice Milman declined to find that there had been circumvention of technical protection measures by Linkletter. He found that Linkletter had gained access to the materials by legitimate means. His subsequent copyright infringing acts were carried out without avoiding, bypassing, removing, deactivating or impairing any effective technology, device or component as required by s. 41.1 of the Copyright Act.

The final element of the test under the PPPA is that the interest of the plaintiff in carrying on with the action must outweigh its deleterious effects on expression and public participation. Justice Milman found that this test was met, notwithstanding the fact that he also found that the “corresponding harm that Proctorio has been able to demonstrate is limited” (at para 124). He found that the risks identified by Proctorio of students circumventing its technology or competitors learning how its software worked were “unlikely to materialize”. Nonetheless, he found that Linkletter’s actions “compromised the integrity of its Help Center and Academy screens, which were put in place in order to segregate the information made available to instructors and administrators from that intended for students and members of the public” (at para 126). He credited the interim injunction for limiting the adverse impacts in this regard. However, he was critical of the broad scope of that injunction and narrowed it to ensure that Linkletter was not enjoined from sharing or linking to content available from public sources. Justice Milman also noted that Linkletter remained free to express his views, as have been others who have also criticized Proctorio online.

The breach of copyright and breach of confidence claims in this case are weak, although their consideration is admittedly superficial given that this is not a decision on the merits. The court found just enough in the copyright and breach of confidence claims to keep them on the right side of the PPPA. Clearly Proctorio objects to the provision of direct public access to its instructional videos beyond the tens of thousands of instructors who have access to them each year – and who are apparently otherwise free to discuss their content in public fora. In this case, Proctorio quickly mitigated any harm by changing the links in question. It could also deny Linkletter access to its services on the basis that he breached the terms of use, and can better protect its content by no longer providing it as unlisted content on YouTube. The narrowed injunction leaves Linkletter free to criticize Proctorio and to link to other publicly available information on the internet. In the circumstances, even if the underlying lawsuit is not a SLAPP suit, as Justice Milman concludes, it is hard to fathom why it should continue to consume scarce judicial resources.

 

On February 28, 2022, the Ontario government introduced Bill 88, titled: An Act to enact the Digital Platform Workers’ Rights Act, 2022 and to amend various Acts. The Bill is now at the second reading stage.

Most of the attention received by the bill has been directed towards provisions that establish new rights for digital platform workers. The focus of this post is on a set of amendments relating to electronic monitoring of employees.

Bill 88 will amend the Employment Standards Act, 2000 to require employers with 25 or more employees to put in place written policies regarding employee monitoring. The policies must specify whether the employer monitors employees electronically, how and in what circumstances it does so, and for what purposes. Policies must include the date that they were prepared along with any dates of amendment. Regulations may also specify additional information to be contained in the policies. Employers will also have to provide – within set time limits – copies of the policy to each employee, as well as copies of any policies that have been revised or updated. There are policy record-keeping requirements as well.

The term “electronic monitoring” is not defined in the Bill, and there may be issues regarding its scope. Certainly, it would seem likely that audio and video surveillance, as well as key-stroke monitoring and other forms of digital surveillance would be captured by the concept. Less obvious to some employers might be things such as access cards that allow employees to enter and access certain areas of the workplace. Such cards track employee movements, and thus may also count as electronic monitoring. Beyond this, the bill provides significant scope for changes to obligations via regulation – the government may exempt employees from the requirement to have policies for certain forms of electronic monitoring in specified circumstances. Regulations may also prohibit some forms of electronic monitoring.

Given the extent to which employees are increasingly subject to electronic monitoring in the workplace – including in work-from-home contexts – these new provisions are welcome. They will provide employees with a right to know how and when they are being digitally monitored and for what purposes. However, the rights do not go much beyond this. Employees can only complain if they do not receive a copy of their employer’s policy within the specified timelines; the bill states that “a person may not file a complaint alleging a contravention of any other provision of this section or have such a complaint investigated” (s. 41.1.1(6)). Further, the bill places no limits on what employers may do with the information gathered. Section 41.1.1(7) provides: “nothing in this section affects or limits an employer’s ability to use information obtained through electronic monitoring of its employees”.

In 2021, the Ontario government floated the idea of enacting its own private sector data protection law. Such a law would have most likely included provisions protecting employee workplace privacy. Indeed, the province’s White Paper proposed the following:

An organization may collect, use or disclose personal information about an employee if the information is collected, used or disclosed solely for the purposes of,

(a) establishing, managing or terminating an employment or volunteer-work relationship between the organization and the individual; or

(b) managing a post-employment or post-volunteer-work relationship between the organization and the individual.

Although such a provision gives significant room for employers to collect data about their employees, including through electronic means, there is at least a purpose limitation that is absent from the Bill 88 amendments. Including employee personal information under a general data protection law would also have brought with it other protections contained within such legislation, including the right to complain of any perceived breach. All employees – not just those in work forces of 25 or more employees – would have some rights with respect to data collected through electronic surveillance; such information would have to be collected, used or disclosed solely for the specified workplace-related purposes. Such an obligation would also be measurable against the general reasonableness requirement in privacy legislation.

The amendments to the Employment Standards Act, 2000 to address electronic surveillance of employees are better than nothing at all. Yet they do not go nearly as far as privacy legislation would in protecting employees’ privacy rights and in providing them with some recourse if they feel that employment surveillance goes beyond what is reasonably required in the employment context. With a provincial election looming it is highly unlikely that we will see a private sector data protection law introduced in the near future. One might also wonder whether the current government has lost its appetite entirely for such a move. In its submissions on the province’s White Paper, for example, the Ontario Chamber of Commerce chastised the province for considering the introduction of privacy legislation that would impose an additional burden on businesses at a time when they were seeking to recover from the effects of the pandemic. They advocated instead for reform to the federal government’s private sector data protection law which would build on the existing law and provide some level of national harmonization. Yet there are places where the federal law does not and cannot reach – and employment outside of federal sectors is one of them. Privacy protections for workers in Ontario must be grounded in provincial law; the proposed changes to the Employment Standards Act, 2000 fall far short of what a basic privacy law would provide.

 

I was invited to appear before the Standing Committee on Access to Information, Privacy and Ethics (ETHI) on February 10, 2022. The Committee was conducting hearings into the use of de-identified, aggregate mobility data by the Public Health Agency of Canada. My opening statement to the committee is below. The recording of this meeting (as well as all of the other meetings on this topic) can be found here: https://www.ourcommons.ca/Committees/en/ETHI/Meetings

Thank you for the invitation to address this Committee on this important issue.

The matter under study by this Committee involves a decision by the Public Health Agency of Canada (PHAC) to use de-identified aggregate mobility data sourced from the private sector to inform public health decision-making during a pandemic.

This use of mobility data – and the reaction to it – highlights some of the particular challenges of our digital and data society:

· It confirms that people are genuinely concerned about how their data are used. It also shows that they struggle to keep abreast of the volume of collection, the multiple actors engaged in collection and processing, and the ways in which their data are shared with and used by others. In this context, consent alone is insufficient to protect individuals.

· The situation also makes clear that data are collected and curated for purposes that go well beyond maintaining customer relationships. Data are the fuel of analytics, profiling, and AI. Some of these uses are desirable and socially beneficial; others are harmful or deeply exploitative. The challenge is to facilitate the positive uses and to stop the harmful and exploitative ones.

· The situation also illustrates how easily data now flow from the private sector to the public sector in Canada. Our current legal framework governs public and private sector uses of personal data separately. Our laws need to be better adapted to address the flow of data across sectors.

Governments have always collected data and used it to inform decision-making. Today they have access to some of the same tools for big data analytics and AI as the private sector, and they have access to vast quantities of data to feed those analytics.

We want governments to make informed decisions based on the best available data, but we want to prevent excessive intrusions upon privacy.

Both PIPEDA and the Privacy Act must be modernized so that they can provide appropriate rules and principles to govern the use of data in a transformed and transforming digital environment. The work of this Committee on the mobility data issue could inform this modernization process.

As you have heard already from other witnesses, PIPEDA and the Privacy Act currently apply only to data about identifiable individuals. This creates an uncomfortable grey zone for de-identified data. The Privacy Commissioner must have some capacity to oversee the use of de-identified data, at the very least to ensure that re-identification does not take place. For example, the province of Ontario addressed this issue in 2019 amendments to its public sector data protection law. Amendments defined de-identified information for the purposes of use by government, required the development of data standards for de-identified data, and provided specific penalties for the re-identification of de-identified personal data.

The Discussion Paper on the Modernization of the Privacy Act speaks of the need for a new framework to facilitate the use of de-identified personal information by government, but we await a Bill to know what form that might take.

The former Bill C-11 – the bill to amend the Personal Information Protection and Electronic Documents Act that died on the Order Paper last fall – specifically defined de-identified personal information. It also created exceptions to the requirements of knowledge and consent to enable organizations to de-identify personal information in their possession, and to use or disclose it in some circumstances – also without knowledge and consent. It would have required de-identification measures proportional to the sensitivity of the information, and would have prohibited the re-identification of de-identified personal information – with stiff penalties.

The former Bill C-11 would also have allowed private sector organizations to share de-identified data without knowledge or consent, with certain entities (particularly government actors), for socially beneficial purposes. This provision would have applied to the specific situation before this committee right now – it would have permitted this kind of data sharing – and without the knowledge or consent of the individuals whose data were de-identified and shared.

This same provision, or a revised version of it, will likely appear in the next bill to reform PIPEDA that is introduced into Parliament. When this happens, important questions will include the scope of the provision (how should socially beneficial purposes be defined?); the degree of transparency that should be required of organizations that share our de-identified information; and how the sharing of information for socially beneficial purposes by private sector organizations with the government will dovetail with any new obligations for the public sector – including whether there should be any prior review or approval of plans to acquire and/or use the data, and what degree of transparency is required. I hope that the work of this Committee on the mobility data issue will help to inform these important discussions.

 

Ontario has just released its Beta principles for the ethical use of AI and data enhanced technologies in Ontario. These replace the earlier Alpha principles, and are revised based upon commentary and feedback on the Alpha version. Note that these principles are designed for use in relation to AI technologies adopted for the Ontario public sector.

Below you will find a comparison table I created to provide a quick glance at what has been changed since the previous version. I have flagged significant additions with italics in the column for the Beta version. I have also flagged some words or concepts that have disappeared in the Beta version by using strikethrough in the column with the Alpha version. I have focused on the principles, and have not flagged changes to the “Why it Matters” section of each principle.

One important change to note is that the Beta version now refers not just to technologies used to make decisions, but also technologies used to assist in decision-making.

 

 

Principles for Ethical Use [Alpha]

Principles for Ethical Use [Beta]

The alpha Principles for Ethical Use set out six points to align the use of data-driven technologies within government processes, programs and services with ethical considerations and values. Our team has undertaken extensive jurisdictional scans of ethical principles across the world, in particular the US, the European Union and major research consortiums. The Ontario “alpha” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

These Principles for Ethical Use set out six points to align the use of data enhanced technologies within government processes, programs and services with ethical considerations and values.

 

The Trustworthy AI team within Ontario’s Digital Service has undertaken extensive jurisdictional scans of ethical principles across the world, in particular New Zealand, the United States, the European Union and major research consortiums.

 

The Ontario “beta” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

We’re in the early days of bringing these principles to life. We encourage you to adopt as much of the principles as possible, and to share your feedback with us by email.

 

You can also check out the Transparency Guidelines (GitHub).

1. Transparent and Explainable

 

There must be transparent and responsible disclosure around data-driven technology like Artificial Intelligence (AI), automated decisions and machine learning (ML) systems to ensure that people understand outcomes and can discuss, challenge and improve them.

 

 

Where automated decision making has been used to make individualized and automated decisions about humans, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject should be available.

 

Why it Matters

 

There is no way to hold data-driven technologies accountable, particularly as they impact various historically disadvantaged groups, if the public is unaware of the algorithms and automated decisions the government is making. Transparency of use must be accompanied with plain language explanations that are accessible to the public, not just the technical or research community. For more on this, please consult the Transparency Guidelines.

 

1. Transparent and explainable

 

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

 

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

 

Why it matters

 

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.

 

Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.

 

For more on this, please consult the Transparency Guidelines.

 

2. Good and Fair

 

Data-driven technologies should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards to ensure a fair and just society.

 

Designers, policy makers and developers should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the lifecycle of use. The definitions of good and fair are intentionally vague to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

2. Good and fair

 

Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

3. Safe

 

Data-driven technologies like AI and ML systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers and developers should implement mechanisms and safeguards, such as capacity for human determination and complete halt of the system operations, that are appropriate to the context and predetermined at initial deployment.

 


Why it matters

Creating safe data-driven technologies means embedding safeguards throughout the life cycle of the deployment of the algorithmic system. Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. Despite our best efforts there will be unexpected outcomes and impacts. Systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are no longer agreeable that a human can adapt, correct or improve the system.

3. Safe

 

Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system’s life cycle.

 

Why it matters

Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed.

 

Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

 

4. Accountable and Responsible

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the above principles. Algorithmic systems should be periodically peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Where AI is used to make decisions about individuals there needs to be a process for redress to better understand how a given decision was made.

 

Why it matters

 

In order for there to be accountability for decisions that are made by an AI or ML system a person, group of people or organization needs to be identified prior to deployment. This ensures that if redress is needed there is a preidentified entity that is responsible and can be held accountable for the outcomes of the algorithmic systems.

 

4. Accountable and responsible

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system’s life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted.

 

Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Why it matters

 

Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility.

 

While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them.

 

Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the lifecycle of the system.

 

5. Human Centric

 

The processes and outcomes behind an algorithm should always be developed with human users as the main consideration. Human centered AI should reflect the information, goals, and constraints that a human decision-maker weighs when arriving at a decision.

 

Keeping human users at the center entails evaluating any outcomes (both direct and indirect) that might affect them due to the use of the algorithm. Contingencies for unintended outcomes need to be in place as well, including removing the algorithms entirely or ending their application.

 

Why it matters

 

Placing the focus on human user ensures that the outcomes do not cause adverse effects to users in the process of creating additional efficiencies.

 

In addition, Human-centered design is needed to ensure that you are able to keep a human in the loop when ensuring the safe operation of an algorithmic system. Developing algorithmic systems with the user in mind ensures better societal and economic outcomes from the data-driven technologies.

 

5. Human centric

 

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system’s life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged.

 

Why it matters

 

Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later.

 

Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies.

 

Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies.

 

6. Sensible and Appropriate

 

Data-driven technologies like AI or ML shall be developed with consideration of how it may apply to specific sectors or to individual cases and should align with the Canadian Charter of Human Rights and Freedoms and with Federal and Provincial AI Ethical Use.

 

Other byproducts of deploying data-driven technologies such as environmental, sustainability, societal impacts should be considered as they apply to specific sectors and use cases and applicable frameworks, best practices or laws.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector and user. As a result, while the above principles are a good starting point for developing ethical data-driven technologies it is important that additional considerations be given to the specific sectors and environments to which the algorithm is applied.

 

Experts in both technology and ethics should be consulted in development of data-driven technologies such as AI to guard against any adverse effects (including societal, environmental and other long-term effects).

6. Sensible and appropriate

 

Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies it is important that additional considerations be given to the specific sectors to which the algorithm is applied.

 

Encouraging sector specific guidance also helps promote a culture of shared ethical responsibilities and a dialogue around the important issues raised by data enhanced systems.

 

 

On December 7, 2021, the privacy commissioners of Quebec, British Columbia and Alberta issued orders against the US-based company Clearview AI, following its refusal to voluntarily comply with the findings in the joint investigation report they issued along with the federal privacy commissioner on February 3, 2021.

Clearview AI gained worldwide attention in early 2020 when a New York Times article revealed that its services had been offered to law enforcement agencies for use in a largely non-transparent manner in many countries around the world. Clearview AI’s technology also has the potential for many different applications including in the private sector. It built its massive database of over 10 billion images by scraping photographs from publicly accessible websites across the Internet, and deriving biometric identifiers from the images. Users of its services upload a photograph of a person. The service then analyzes that image and compares it with the stored biometric identifiers. Where there is a match, the user is provided with all matching images and their metadata, including links to the sources of each image.
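The matching step described above can be sketched in miniature. This is purely illustrative code under assumed simplifications – a toy `embed` function stands in for a real face-embedding model, and the record fields and similarity threshold are invented, not Clearview's actual pipeline:

```python
# Toy sketch of a scrape-embed-match service. Everything here is a
# hypothetical stand-in: embed() is a trivial placeholder for a real
# face-embedding model, and the 0.99 threshold is invented.
import math

def embed(image_pixels):
    # Stand-in for a face-embedding model: derives a fixed-length vector
    # (the "biometric identifier") from an image's pixel values.
    n = len(image_pixels)
    return [sum(image_pixels) / n, max(image_pixels), min(image_pixels), n]

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(database, probe_vector, threshold=0.99):
    # Return every stored record whose identifier matches the probe,
    # along with its metadata (e.g. the URL it was scraped from).
    return [rec for rec in database
            if cosine_similarity(rec["vector"], probe_vector) >= threshold]
```

The point of the sketch is that once the identifiers are derived and stored, matching an uploaded photo against billions of scraped images is a straightforward nearest-neighbour lookup – which is why the composition of the database, not the search step, is where the privacy questions concentrate.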

Clearview AI has been the target of investigation by data protection authorities around the world. France’s Commission Nationale de l'Informatique et des Libertés has found that Clearview AI breached the General Data Protection Regulation (GDPR). Australia and the UK conducted a joint investigation which similarly found the company to be in violation of their respective data protection laws. The UK commissioner has since issued a provisional view, stating its intent to levy a substantial fine. Legal proceedings are currently underway in Illinois, a state which has adopted biometric privacy legislation. Canada’s joint investigation report issued by the federal, Quebec, B.C. and Alberta commissioners found that Clearview AI had breached the federal Personal Information Protection and Electronic Documents Act, as well as the private sector data protection laws of each of the named provinces.

The Canadian joint investigation set out a series of recommendations for Clearview AI. Specifically, it recommended that Clearview AI cease offering its facial recognition services in Canada, “cease the collection, use and disclosure of images and biometric facial arrays collected from individuals in Canada”, and delete any such data in its possession. Clearview AI responded by saying that it had temporarily ceased providing its services in Canada, and that it was willing to continue to do so for a further 18 months. It also indicated that if it offered services in Canada again, it would require its clients to adopt a policy regarding facial recognition technology, and it would offer an audit trail of searches.

On the second and third recommendations, Clearview AI responded that it was simply not possible to determine which photos in its database were of individuals in Canada. It also reiterated its view that images found on the Internet are publicly available and free for use in this manner. It concluded that it had “already gone beyond its obligations”, and that while it was “willing to make some accommodations and met some of the requests of the Privacy Commissioners, it cannot commit itself to anything that is impossible and or [sic] required by law.” (Letter reproduced at para 3 of Order P21-08).

In this post I consider three main issues that flow from the orders issued by the provincial commissioners. The first relates to the cross-border reach of Canadian law. The second relates to enforcement (or lack thereof) in the Canadian context, particularly as compared with what is available in other jurisdictions such as the UK and the EU. The third issue relates to the interest shown by the commissioners in a compromise volunteered by Clearview AI in the ongoing Illinois litigation – and what this might mean for Canadians’ privacy.

 

1. Jurisdiction

Clearview AI maintains that Canadian laws do not apply to it. It argues that it is a US-based company with no physical presence in Canada. Although it initially provided its services to Canadian law enforcement agencies (see this CBC article for details of the use of Clearview by Toronto Police Services), it has since ceased to do so – thus, it no longer has clients in Canada. It scraped its data from platform companies such as Facebook and Instagram, and while many Canadians have accounts with such companies, Clearview’s scraping activities involved access to data hosted on platforms outside of Canada. It therefore argued that not only did it not operate in Canada, but it also had no ‘real and substantial’ connection to Canada.

The BC Commissioner did not directly address this issue. In his Order, he found a hook for jurisdiction by referring to the personal data as having been “collected from individuals in British Columbia without their consent”, although it is clear there was no direct collection. He also noted Clearview’s active contemplation of resuming its services in Canada. Alberta’s Commissioner made a brief reference to jurisdiction, simply stating that “Provincial privacy legislation applies to any private sector organization that collects, uses and discloses information of individuals within that province” (at para 12). The Quebec Commissioner, by contrast, gave a thorough discussion of the jurisdictional issues. In the first place, she noted that some of the images came from public Quebec sources (e.g., newspaper websites). She also observed that nothing indicates that images scraped from Quebec sources have been removed from the database; they therefore continue to be used and disclosed by the company.

Commissioner Poitras cited the Federal Court decision in Lawson for the principle that PIPEDA could apply to a US-based company that collected personal information from Canadian sources – so long as there is a real and substantial connection to Canada. She found a connection to Quebec in the free accounts offered to, and used by, Quebec law enforcement officials. She noted that the RCMP, which operates in Quebec, had also been a paying client of Clearview’s. When Clearview AI was used by clients in Quebec, those clients uploaded photographs to the service in the search for a match. This also constituted a collection of personal information by Clearview AI in Quebec.

Commissioner Poitras found that the location of Clearview’s business and its servers is not a determinative jurisdictional factor for a company that offers its services online around the world, and that collects personal data from the Internet globally. She found that Clearview AI’s database was at the core of its services, and a part of that database was comprised of data from Quebec and about Quebeckers. Clearview had offered its service in Quebec, and its activities had a real impact on the privacy of Quebeckers. Commissioner Poitras noted that millions of images of Quebeckers were appropriated by Clearview without the consent of the individuals in the images; these images were used to build a global biometric facial recognition database. She found that it was particularly important not to create a situation where individuals are denied recourse under quasi-constitutional laws such as data protection laws. These elements in combination, in her view, would suffice to create a real and substantial connection.

Commissioner Poitras did not accept that Clearview’s suspension of Canadian activities changed the situation. She noted that information that had been collected in Quebec remained in the database, which continued to be used by the company. She stated that a company could not appropriate the personal information of a substantial number of Quebeckers, commercialise this information, and then avoid the application of the law by saying they no longer offered services in Quebec.

The jurisdictional questions are both important and thorny. This case is different from cases such as Lawson and Globe24hrs, where the connections with Canada were more straightforward. In Lawson, there was clear evidence that the company offered its services to clients in Canada. It also directly obtained some of its data about Canadians from Canadian sources. In Globe24hrs, there was likewise evidence that Canadians were being charged by the Romanian company to have their personal data removed from the database. In addition, the data came from Canadian court decisions that were scraped from websites located in Canada. In Clearview AI, while some of the scraped data may have been hosted on servers located in Canada, most were scraped from offshore social media platform servers. If Clearview AI stopped offering its services in Canada and stopped scraping data from servers located in Canada, what recourse would Canadians have? The Quebec Commissioner attempts to address this question, but her reasons are based on factual connections that might not be present in the future, or in cases involving other data-scraping respondents. There needs to be a theory of real and substantial connection that specifically addresses the scraping of data from third-party websites, contrary to those websites’ terms of use and to the legal expectations of the sites’ users – one that can anchor the jurisdiction of Canadian law even when the scraper has no other connection to Canada.

Canada is not alone with these jurisdictional issues – Australia’s orders to Clearview AI are currently under appeal, and the jurisdiction of the Australian Commissioner to make such orders will be one of the issues on appeal. A jurisdictional case – one that is convincing not just to privacy commissioners but to the foreign courts that may have to one day determine whether to enforce Canadian decisions – needs to be made.

 

2. Enforcement

At the time the facts of the Clearview AI investigation arose, all four commissioners had limited enforcement powers. The three provincial commissioners could issue orders requiring an organization to change its practices. The federal commissioner has no order-making powers, but can apply to the Federal Court to ask that court to issue orders. The relative impotence of the commissioners is illustrated by Clearview’s hubristic response, cited above, that it had already “gone beyond its obligations”. Clearly, it considered that anything the commissioners had to say on the matter did not amount to an obligation.

The Canadian situation can be contrasted with that in the EU, where commissioners’ orders requiring organizations to change their non-compliant practices are now reinforced by the power to levy significant administrative monetary penalties (AMPs). The same situation exists in the UK. There, the data commissioner has just issued a preliminary enforcement notice and a proposed fine of £17M against Clearview AI. As noted earlier, the enforcement situation is beginning to change in Canada – Quebec’s newly amended legislation permits the levying of substantial AMPs. When some version of Bill C-11 is reintroduced in Parliament in 2022, it will likely also contain the power to levy AMPs. BC and Alberta may eventually follow suit. When this happens, the challenge will be first, to harmonize enforcement approaches across those jurisdictions; and second, to ensure that these penalties can meaningfully be enforced against offshore companies such as Clearview AI.

On the enforcement issue, it is perhaps also worth noting that the orders issued by the three Commissioners in this case are all slightly different. The Quebec Commissioner orders Clearview AI to cease collecting images of Quebeckers without consent, and to cease using these images to create biometric identifiers. She also orders the destruction, within 90 days of receipt of the order, of all of the images collected without the consent of Quebeckers, as well as the destruction of the biometric identifiers. Alberta’s Commissioner orders that Clearview cease offering its services to clients in Alberta, cease the collection and use of images and biometrics collected from individuals in Alberta, and delete the same from its databases. BC’s order prohibits the offering to clients in British Columbia of Clearview AI’s services that use data collected from British Columbians without their consent. He also orders that Clearview AI use “best efforts” to cease its collection, use and disclosure of images and biometric identifiers of British Columbians without their consent, and to use the same “best efforts” to delete such images and biometric identifiers.

It is to these “best efforts” that I next turn.

 

3. The Illinois Compromise

All three Commissioners make reference to a compromise offered by Clearview AI in the course of ongoing litigation in Illinois under Illinois’ Biometric Information Privacy Act. By referring to “best efforts” in his Order, the BC Commissioner seems to be suggesting that something along these lines would be an acceptable compromise in his jurisdiction.

In its response to the Canadian commissioners, Clearview AI raised the issue that it cannot easily know which photographs in its database are of residents of particular provinces, particularly since these are scraped from the Internet as a whole – and often from social media platforms hosted outside Canada.

Yet Clearview AI has indicated that it has changed some of its business practices to avoid infringing Illinois law. This includes “cancelling all accounts belonging to any entity based in Illinois” (para 12, BC Order). It also includes blocking from any searches all images in the Clearview database that are geolocated in Illinois. In the future, it also offers to create a “geofence” around Illinois. This means that it “will not collect facial vectors from any scraped images that contain metadata associating them with Illinois” (para 12, BC Order). It will also not collect facial vectors from images stored on servers displaying Illinois IP addresses, or on websites with URLs containing keywords such as “Chicago” or “Illinois”. Clearview apparently also offers to create an “opt-out” mechanism whereby people can ask to have their photos excluded from the database. Finally, it will require its clients not to upload photos of Illinois residents. If such a photo is uploaded and it contains Illinois-related metadata, no search will be performed.
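The kind of metadata-based exclusion described in the BC Order can be illustrated with a short sketch. All field names, keywords and logic here are hypothetical stand-ins, not Clearview's actual implementation:

```python
# Illustrative sketch of a metadata "geofence" filter. The metadata keys
# (geo_state, source_url, server_ip_state) and the keyword list are
# invented for illustration only.
ILLINOIS_KEYWORDS = ("chicago", "illinois")

def should_skip_scrape(image_meta):
    """Return True if an image should be excluded from facial-vector
    collection based on Illinois-related metadata."""
    # 1. Geolocation metadata placing the photo in Illinois.
    if image_meta.get("geo_state") == "IL":
        return True
    # 2. Source URL containing Illinois-related keywords.
    url = image_meta.get("source_url", "").lower()
    if any(keyword in url for keyword in ILLINOIS_KEYWORDS):
        return True
    # 3. Server IP geolocated to Illinois (resolution assumed upstream).
    if image_meta.get("server_ip_state") == "IL":
        return True
    return False
```

Even as a sketch, the filter makes the limitation obvious: an image with no geolocation metadata, hosted on an out-of-state server at a URL with no telltale keywords, passes straight through regardless of where its subject actually lives.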

The central problem with accepting the ‘Illinois compromise’ is that it allows a service built on illegally scraped data to continue operating with only a reduced privacy impact. Ironically, it also requires individuals who wish to benefit from this compromise to provide more personal data in their online postings. Many people deliberately suppress geolocation information from their photographs to protect their privacy, yet the ‘Illinois compromise’ can only exclude photos that contain geolocation data. Even with geolocation turned on, it would not exclude the vacation pics of any BC residents taken outside of BC (for example). Further, limiting the scraping of images from Illinois-based sites will not prevent the photos of Illinois-based individuals from being included within the database a) if they are already in there, and b) if the images are posted on social media platforms hosted elsewhere.

Clearview AI is a business built upon data collection practices that are illegal in a large number of countries outside the US. The BC Commissioner is clearly of the opinion that a compromise solution is the best that can be hoped for, and he may be right in the circumstances. Yet it is a bitter pill to think that such flouting of privacy laws will ultimately be rewarded, as Clearview gets to keep and commercialize its facial recognition database. Accepting such a compromise could limit the harms of the improper exploitation of personal data, but it does not stop the exploitation of that data in all circumstances. And even this unhappy compromise may be out of reach for Canadians given the rather toothless nature of our current laws – and the jurisdictional challenges discussed earlier.

If anything, this situation cries out for global and harmonized solutions. Notably, it requires the US to do much more to bring its wild-west approach to personal data exploitation in line with the approaches of its allies and trading partners. It will also require better cooperation on enforcement across borders. It may also call for social media giants to take more responsibility when it comes to companies that flout their terms and conditions to scrape their sites for personal data. The Clearview AI situation highlights these issues, as well as the dramatic impacts data misuse may have on privacy as personal data continues to be exploited for use in powerful AI technologies.


It has been quite a while since I posted to my blog. The reason has simply been a crushing workload that has kept me from writing anything that did not have an urgent deadline! In the meantime, so much has been going on in terms of digital and data law and policy in Canada and around the world. I will try to get back on track!

Artificial intelligence (AI) has been garnering a great deal of attention globally, for its potential to drive innovation, its capacity to solve urgent challenges, and its myriad applications across a broad range of sectors. In an article that is forthcoming in the Canadian Journal of Law and Technology, Bradley Henderson, Colleen Flood and I examine issues of algorithmic and data bias leading to discrimination in the healthcare context. AI technologies have tremendous potential across the healthcare system: AI innovation can improve workflows, enhance diagnostics, accelerate research and refine treatment. Yet at the same time, AI technologies bring with them many concerns, among them bias and discrimination.

Bias can take many forms. In our paper, we focus on those manifestations of bias that can lead to discrimination of the kind recognized in human rights legislation and the Charter. Discrimination can arise from flawed assumptions coded into algorithms, from adaptive AI that makes its own correlations, from unrepresentative data, or from a combination of these.

There are some significant challenges when it comes to the data used to train AI algorithms. Available data may reflect existing disparities and discrimination within the healthcare system. For example, some communities may be underrepresented in the data because of a lack of adequate access to healthcare, or because a lack of trust in the healthcare system tends to keep them away until health issues become acute. Lack of prescription drug coverage or access to paid sick leave may also affect when and how people access health care services. Racial or gender bias in how symptoms or concerns are recorded, or in how illness is diagnosed, can also affect the quality and representativeness of existing stores of data. AI applications developed and trained on data from US-based hospitals may reflect the socio-economic biases that shape access to health care in the US, and the extent to which they are generalizable to the Canadian population or sub-populations may be questionable. In some cases, data about race or ethnicity may be important markers for understanding diseases and how they manifest themselves, but these data may be lacking.

There are already efforts afoot to ensure better access to high quality health data for research and innovation in Canada, and our paper discusses some of these. Addressing data quality and data gaps is certainly one route to tackling bias and discrimination in AI. Our paper also looks at some of the legal and regulatory mechanisms available. On the legal front, we note that there are some recourses available where things go wrong, including human rights complaints, lawsuits for negligence, or even Charter challenges. However, litigating the harms caused by algorithms and data is likely to be complex, expensive, and fraught with difficulty. It is better by far to prevent harms than to push a system to improve itself after costly litigation. We consider the evolving regulatory landscape in Canada to see what approaches are emerging to avoid or mitigate harms. These include regulatory approaches for AI-enabled medical devices and advanced therapeutic products. However, these regimes focus on harms to human health, and would not apply to AI tools developed to improve access to healthcare, manage workflows, conduct risk assessments, and so on. There are regulatory gaps, and we discuss some of these. The paper also makes recommendations regarding improving access to better data for research and innovation, with the accompanying enhancements to privacy laws and data governance regimes necessary to protect the public.

One of the proposals made in the paper is that bias and discrimination in healthcare-related AI applications should be treated as a safety issue, bringing a broader range of applications under Health Canada's regulatory regimes. We also discuss lifecycle regulatory approaches (as opposed to one-off approvals), and providing warnings about data gaps and limitations. We also consider enhanced practitioner licensing and competency frameworks, requirements at the procurement stage, certification standards, and audits. We call for reform of human rights legislation, which is currently not well adapted to the AI context.

In many ways, this paper is just a preliminary piece. It lays out the landscape and identifies areas where there are legal and regulatory gaps and a need for both law reform and regulatory innovation. The paper is part of the newly launched Machine MD project at uOttawa, which is funded by the Canadian Institutes of Health Research and will run for the next four years.

The full pre-print text of the article can be found here.

