Teresa Scassa - Blog


The government of the United Kingdom has published a consultation paper seeking input into its proposal for AI regulation. The paper is aptly titled A pro-innovation approach to AI regulation, since it restates that point insistently throughout the document. The UK proposal provides an interesting contrast to Canada’s AI governance bill currently before Parliament.

Both Canada and the UK set out to regulate AI systems with the twin goals of supporting innovation on the one hand, and building trust in AI on the other. (Note here that the second goal is to build trust in AI, not to protect the public. Although the protection of the public is acknowledged as one way to build trust, there is a subtle distinction here). However, beyond these shared goals, the proposals are quite different. Canada’s approach in Part 3 of Bill C-27 (the Artificial Intelligence and Data Act (AIDA)) is to create a framework to regulate as yet undefined “high impact” AI. The definition of “high impact” as well as many other essential elements of the bill are left to be articulated in regulations. According to a recently published companion document to the AIDA, leaving so much of the detail to regulations is how the government proposes to keep the law ‘agile’ – i.e. capable of responding to a rapidly evolving technological context. The proposal would also provide some governance for anonymized data by imposing general requirements to document the use of anonymized personal information in AI innovation. The Minister of Innovation is made generally responsible for oversight and enforcement. For example, the AIDA gives the Minister of Innovation the authority (eventually) to impose stiff administrative monetary penalties on bad actors. The Canadian approach is similar to that in the EU AI Act in that it aims for a broad regulation of AI technologies, and it chooses legislation as the vehicle to do so. It is different in that the EU AI Act is far more detailed and prescriptive; the AIDA leaves the bulk of its actual legal requirements to be developed in regulations.

The UK proposal is notably different from either of these approaches. Rather than create a new piece of legislation and/or a new regulatory authority, the UK proposes to set out five principles for responsible AI development and use. Existing regulators will be encouraged and, if necessary, specifically empowered, to regulate AI according to these principles within their spheres of regulatory authority. Examples of regulators who will be engaged in this framework include the Information Commissioner’s Office, regulators for human rights, consumer protection, health care products and medical devices, and competition law. The UK scheme also accepts that there may need to be an entity within government that can perform some centralized support functions. These may include monitoring and evaluation, education and awareness, international interoperability, horizon scanning and gap analysis, and supporting testbeds and sandboxes. Because of the risk that some AI technologies or issues may fall through the cracks between existing regulatory schemes, the government anticipates that regulators will assist government in identifying gaps and proposing appropriate actions. These could include adapting the mandates of existing regulators or providing new legislative measures if necessary.

Although Canada’s federal government has labelled its approach to AI regulation as ‘agile’, it is clear that the UK approach is much closer to the concept of agile regulation. Encouraging existing regulators to adapt the stated AI principles to their remit and to provide guidance on how they will actualize these principles will allow them to move quickly, so long as there are no obvious gaps in legal authority. By contrast, even once passed, it will take at least two years for Canada’s AIDA to have its normative blanks filled in by regulations. And, even if regulations might be somewhat easier to update than statutes, guidance is even more responsive, giving regulators greater room to manoeuvre in a changing technological landscape. Embracing the precepts of agile regulation, the UK scheme emphasizes the need to gather data about the successes and failures of regulation itself in order to adapt as required. On the other hand, while empowering (and resourcing) existing regulators will have clear benefits in terms of agility, the regulatory gaps could well be important ones – with the governance of large language models such as ChatGPT as one example. While privacy regulators are beginning to flex their regulatory muscles in the direction of ChatGPT, data protection law will only address a subset of the issues raised by this rapidly evolving technology. In Canada, AIDA’s governance requirements will be specific to risk-based regulation of AI, and will apply to all those who design, develop or make AI systems available for use (unless of course they are explicitly excluded under one of the many actual and potential exceptions).

Of course, the scheme in the AIDA may end up as more of a hybrid between the EU and the UK approaches in that the definition of “high impact” AI (to which the AIDA will apply) may be shaped not just by the degree of impact of the AI system at issue but also by the existence of other suitable regulatory frameworks. In other words, the companion document suggests that some existing regulators (health, consumer protection, human rights, financial institutions) have already taken steps to extend their remit to address the use of AI technologies within their spheres of competence. In this regard, the companion document speaks of “regulatory gaps that must be filled” by a statute such as AIDA as well as the need for the AIDA to integrate “seamlessly with existing Canadian legal frameworks”. Although it is still unclear whether the AIDA will serve only to fill regulatory gaps, or will provide two distinct layers of regulation in some cases, one of the criteria for identifying what constitutes a “high impact” system includes “[t]he degree to which the risks are adequately regulated under another law”. The lack of clarity in the Canadian approach is one of its flaws.

There is a certain attractiveness in the idea of a regulatory approach like that proposed by the UK – one that begins with existing regulators being both specifically directed and further enabled to address AI regulation within their areas of responsibility. As noted earlier, it seems far more agile than Canada’s rather clunky bill. Yet such an approach is much easier to adopt in a unitary state than in a federal system such as Canada’s. In Canada, some of the regulatory gaps are with respect to matters otherwise under provincial jurisdiction. Thus, it is not so simple in Canada to propose to empower and resource all implicated regulators, nor is it as easy to fill gaps once they are identified. These regulators and the gaps between them might fall under the jurisdiction of any one of 13 different governments. The UK acknowledges (and defers) its own challenges in this regard with respect to devolution at paragraph 113 of its white paper, where it states: “We will continue to consider any devolution impacts of AI regulation as the policy develops and in advance of any legislative action”. Instead, in the AIDA, Canada leverages its general trade and commerce power in an attempt to provide AI governance that is as comprehensive as possible. It isn’t pretty (since it will not capture all AI innovation that might have impacts on people) but it is part of the reality of the federal state (or the state of federalism) in which we find ourselves.

Published in Privacy

This post is the fifth in a series on Canada’s proposed Artificial Intelligence and Data Act in Bill C-27. It considers the federal government’s constitutional authority to enact this law, along with other roles it might have played in regulating AI in Canada. Earlier posts include ones on the purpose and application of the AIDA; regulated activities; the narrow scope of the concepts of harm and bias in the AIDA; and oversight and enforcement.

AI is a transformative technology that has the power to do amazing things, but which also has the potential to cause considerable harm. There is a global clamour to regulate AI in order to mitigate potential negative effects. At the same time, AI is seen as a driver of innovation and economic growth. Canada’s federal government wants to support and nurture Canada’s thriving AI sector while at the same time ensuring that there is public trust in AI. Facing similar issues, the EU introduced a draft AI Act, which is currently undergoing public debate and discussion (and which itself was the product of considerable consultation). The US government has just proposed its Blueprint for an AI Bill of Rights, and has been developing policy frameworks for AI, including the National Institute of Standards and Technology (NIST) Risk Management Framework. The EU and the US approaches are markedly different. Interestingly, in the US (which, like Canada, is a federal state) there has been considerable activity at the state level on AI regulation. Serious questions for Canada include what to do about AI, how best to do it – and who should do it.

In June 2022, the federal government introduced the proposed Artificial Intelligence and Data Act (AIDA) in Bill C-27. The AIDA takes the form of risk regulation; in other words, it is meant to anticipate and mitigate AI harms to the public. This is an ex ante approach; it is intended to address issues before they become problems. The AIDA does not provide personal remedies or recourses if anyone is harmed by AI – this is left for ex post regimes (ones that apply after harm has occurred). These will include existing recourses such as tort law (extracontractual civil liability in Quebec), and complaints to privacy, human rights or competition commissioners.

I have addressed some of the many problems I see with the AIDA in earlier posts. Here, I try to unpack issues around the federal government’s constitutional authority to enact this bill. It is not so much that they lack jurisdiction (although they might); rather, how they understand their jurisdiction can shape the nature and substance of the bill they are proposing. Further, the federal government has acted without any consultation on the AIDA prior to its surprising insertion in Bill C-27. Although it promises consultation on the regulations that will follow, this does not make up for the lack of discussion around how we should identify and address the risks posed by AI. This rushed bill is also shaped by constitutional constraints – it is AI regulation with structural limitations that have not been explored or made explicit.

Canada is a federal state, which means that the powers typically exercised by a nation state are divided between federal and regional governments. In theory, federalism allows for regional differences to thrive within an overarching framework. However, some digital technology issues (including data protection and AI) fit uneasily within Canada’s constitutional framework. In proposing the Consumer Privacy Protection Act part of Bill C-27, for example, the federal government appears to believe that it does not have the jurisdiction to address data protection as a matter of human rights – this belief has impacted the substance of the bill.

In Canada, the federal government has jurisdiction over criminal law, trade and commerce, banking, navigation and shipping, as well as other areas where it makes more sense to have one set of rules than to have ten. The cross-cutting nature of AI, the international competition to define the rules of the game, and the federal government’s desire to take a consistent national approach to its regulation are all factors that motivated the inclusion of the AIDA in Bill C-27. The Bill’s preamble states that “the design, development and deployment of artificial intelligence systems across provincial and international borders should be consistent with national and international standards to protect individuals from potential harm”. Since we do not yet have national or international standards, the law will also enable the creation (and imposition) of standards through regulation.

The preamble’s reference to the crossing of borders signals both that the federal government is keenly aware of its constitutional limitations in this area and that it intends to base its jurisdiction on the interprovincial and international dimensions of AI. The other elements of Bill C-27 rely on the federal general trade and commerce power – this follows the approach taken in the Personal Information Protection and Electronic Documents Act (PIPEDA), which is reformed by the first two parts of C-27. There are indications that trade and commerce is also relevant to the AIDA. Section 4 of the AIDA refers to the goal of regulating “international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements applicable across Canada, for the design, development and use of those systems.” Yet the general trade and commerce power is an uneasy fit for the AIDA. The Supreme Court of Canada has laid down rules for the exercise of this power, and one of these is that it should not be used to regulate a single industry; a legislative scheme should regulate trade as a whole.

The Minister of Industry, in discussing Canada’s AI strategy, has stated:

Artificial intelligence is a key part of our government’s plan to make our economy stronger than ever. The second phase of the Pan-Canadian Artificial Intelligence Strategy will help harness the full potential of AI to benefit Canadians and accelerate trustworthy technology development, while fostering diversity and cooperation across the AI domain. This collaborative effort will bring together the knowledge and expertise necessary to solidify Canada as a global leader in artificial intelligence and machine learning.

Clearly, the Minister is casting AI as an overall economic transformer rather than a discrete industry. Nevertheless, although it might be argued that AI is a technology that cuts across all sectors of the economy, the AIDA applies predominantly to its design and development stages, which makes it look as if it targets a particular industry. Further, although PIPEDA (and the CPPA in the first Part of Bill C-27) are linked to trade and commerce through the transactional exchange of personal data – typically collected from individuals in the course of commercial activity – the AIDA is different. Its regulatory requirements are meant to apply before any commercial activity takes place – at the design and development stage. This is worth pausing over, because the design and development stages may be non-commercial (in university-based research, for example) or purely intra-provincial. As a result, the need to comply with a law that applies at the design and development stage, but that is premised on interprovincial or international commercial activity, may only be discovered once commercialization becomes a reality – well after design and development have taken place.

Arguably, AI might also be considered a matter of ‘national concern’ under the federal government’s residual peace, order and good government power. Matters of national concern that would fall under this power would be ones that did not exist at the time of confederation. The problem with addressing AI in this way is that it is simply not obvious that provinces could not enact legislation to govern AI – as many states have begun to do in the US.

Another possible constitutional basis is the federal criminal law power. This is used, for example, in the regulation of certain matters relating to health such as tobacco, food and drugs, medical devices and controlled substances. The Supreme Court of Canada has ruled that this power “is broad, and is circumscribed only by the requirements that the legislation must contain a prohibition accompanied by a penal sanction and must be directed at a legitimate public health evil”. The AIDA contains some prohibitions and provides for both administrative monetary penalties (AMPs) and offences. Because the AIDA focuses on “high impact” AI systems, there is an argument that it is meant to target and address those systems that have the potential to cause the most harm to health or safety. (Of course, the bill does not define “high impact” systems, so this is only conjecture.) Yet, although AMPs are available in cases of egregious non-compliance with the AIDA’s requirements, AMPs are not criminal sanctions; they are “a civil (rather than quasi-criminal) mechanism for enforcing compliance with regulatory requirements”, as noted in a report from the Ontario Attorney-General. That leaves a smattering of offences such as obstructing the work of the Minister or of auditors; knowingly designing, developing or using an AI system where the data were obtained as a result of an offence under another Act; being reckless as to whether the use of an AI system made available by the accused is likely to cause harm to an individual; and using AI intentionally to defraud the public and cause substantial economic loss to an individual. Certainly, such offences are criminal in nature and could be supported by the federal criminal law power. Yet they are easily severable from the rest of the statute. For the most part, the AIDA focuses on “establishing common requirements applicable across Canada, for the design, development and use of [AI] systems” (AIDA, s. 4).

The provinces have not been falling over themselves to regulate AI, although neither have they been entirely inactive. Ontario, for example, has been developing a framework for the public sector use of AI, and Quebec has enacted some provisions relating to automated decision-making systems in its new data protection law. Nevertheless, these steps are clearly not enough to satisfy a federal government anxious to show leadership in this area. It is thus unsurprising that Canada’s federal government has introduced legislation to regulate AI. What is surprising is that they have done so without consultation – either regarding the form of the intervention or the substance. We have yet to have an informed national conversation about AI. Further, legislation of this kind was only one option. The government could have consulted and convened experts to develop something along the lines of the US’s NIST Framework that could be adopted as a common standard/approach across jurisdictions in Canada. A Canadian framework could have been supported by the considerable work on standards already ongoing. Such an approach could have involved the creation of an agency under the authority of a properly-empowered Data Commissioner to foster co-operation in the development of national standards. This could have supported the provinces in the harmonized regulation of AI. Instead, the government has chosen to regulate AI itself through a clumsy bill that staggers uneasily between constitutional heads of power, and that leaves its normative core to be crafted in a raft of regulations that may take years to develop. It also leaves it open to the first company to be hit with an AMP to challenge the constitutionality of the framework as a whole.

Published in Privacy

The Artificial Intelligence and Data Act (AIDA) in Bill C-27 will create new obligations for those responsible for AI systems (particularly high impact systems), as well as those who process or make available anonymized data for use in AI systems. In any regulatory scheme that imposes obligations, oversight and enforcement are key issues. A long-standing critique of the Personal Information Protection and Electronic Documents Act (PIPEDA) has been that it is relatively toothless. This is addressed in the first part of Bill C-27, which reforms the data protection law to provide a suite of new enforcement powers that include order-making powers for the Privacy Commissioner and the ability to impose stiff administrative monetary penalties (AMPs). The AIDA comes with ‘teeth’ as well, although these teeth seem set within a rather fragile jaw. I will begin by identifying the oversight and enforcement powers (the teeth) and will then look at the agent of oversight and enforcement (the jaw). The table below sets out the main obligations accompanied by specific compliance measures. There is also the possibility that any breach of these obligations might be treated as either a violation or offence, although the details of these require elaboration in as-yet-to-be-drafted regulations.

 

Obligation: To keep records regarding the manner in which data is anonymized and the use or management of anonymized data, as well as records of the assessment of whether an AI system is high impact (s. 10)
Oversight power: The Minister may order the record-keeper to provide any of these records (s. 13(1))

Obligation: Any record-keeping obligations imposed on any actor in as-yet undrafted regulations
Oversight power: Where there are reasonable grounds to believe that the use of a high-impact system could result in harm or biased output, the Minister can order the specified person to provide these records (s. 14)

Obligation: To comply with any of the requirements in ss. 6-12, or any order made under ss. 13-14
Oversight power: The Minister (on reasonable grounds to believe there has been a contravention) can require the person to conduct either an internal or an external audit with respect to the possible contravention (s. 15); the audit must be provided to the Minister. A person who has been audited may be ordered by the Minister to implement any measure specified in the order, or to address any matter in the audit report (s. 16)

Obligation: To cease using or making available for use a high-impact system that creates a serious risk of imminent harm
Oversight power: The Minister may order a person responsible for a high-impact system to cease using it or making it available for use if the Minister has reasonable grounds to believe that its use gives rise to a serious risk of imminent harm (s. 17)

Obligation: Transparency requirement (any person referred to in sections 6 to 12, 15 and 16)
Oversight power: The Minister may order the person to publish on a publicly available website any information related to any of these sections of the AIDA, subject to an exception for confidential business information (s. 18)

 

Compliance with orders made by the Minister is mandatory (s. 19) and there is a procedure for them to become enforceable as orders of the Federal Court.

Although the Minister is subject to confidentiality requirements, they may disclose any information they obtain through the exercise of the above powers to certain entities if they have reasonable grounds to believe that a person carrying out a regulated activity “has contravened, or is likely to contravene, another Act of Parliament or of a provincial legislature” (s. 26(1)). Those entities include the Privacy Commissioner, the Canadian Human Rights Commission, the Commissioner of Competition, the Canadian Radio-television and Telecommunications Commission, their provincial analogues, or any other person prescribed by regulation. An organization may therefore be in violation of statutes other than the AIDA and may be subject to investigation and penalties under those laws.

The AIDA itself provides no mechanism for individuals to file complaints regarding any harms they may believe they have suffered, nor is there any provision for the investigation of complaints.

The AIDA sets up the Minister as the actor responsible for oversight and enforcement, but the Minister may delegate any or all of their oversight powers to the new Artificial Intelligence and Data Commissioner who is created by s. 33. The Data Commissioner is described in the AIDA as “a senior official of the department over which the Minister presides”. They are not remotely independent. Their role is “to assist the Minister” responsible for the AIDA (most likely the Minister of Industry), and they will also therefore work in the Ministry responsible for supporting the Canadian AI industry. There is essentially no real regulator under the AIDA. Instead, oversight and enforcement are provided by the same group that drafted the law and that will draft the regulations. It is not a great look, and it certainly goes against the advice of the OECD on AI governance, as Mardi Wentzel has pointed out.

The role of Data Commissioner had been first floated in the 2019 Mandate Letter to the Minister of Industry, which provided that the Minister would: “create new regulations for large digital companies to better protect people’s personal data and encourage greater competition in the digital marketplace. A newly created Data Commissioner will oversee those regulations.” The 2021 Federal Budget provided funding for the Data Commissioner, and referred to the role of this Commissioner as to “inform government and business approaches to data-driven issues to help protect people’s personal data and to encourage innovation in the digital marketplace.” In comparison with these somewhat grander ideas, the new AI and Data Commissioner role is – well – smaller than the title. It is a bit like telling your kids you’re getting them a deluxe bouncy castle for their birthday party and then on the big day tossing a couple of couch cushions on the floor instead.

To perhaps add a gloss of some ‘independent’ input into the administration of the statute, the AIDA provides for the creation of an advisory committee (s. 35) that will provide the Minister with “advice on any matters related to this Part”. However, this too is a bit of a throwaway. Neither the AIDA nor any anticipated regulations will provide for any particular composition of the advisory committee, for the appointment of a chair with a fixed term, or for any reports by the committee on its advice or activities. It is the Minister who may choose to publish advice he receives from the committee on a publicly available website (s. 35(2)).

The AIDA also provides for enforcement, which can take one of two routes. Well, one of three routes. One route is to do nothing – after all, the Minister is also responsible for supporting the AI industry in Canada – so this cannot be ruled out. A second option will be to treat a breach of any of the obligations specified in the as-yet undrafted regulations as a “violation” and impose an administrative monetary penalty (AMP). A third option is to treat a breach as an “offence” and proceed by way of prosecution (s. 30). A choice must be made between proceeding via the AMP or the offence route (s. 29(3)). Providing false information and obstruction are distinct offences (s. 30(2)). There are also separate offences in ss. 38 and 39 relating to the use of illegally obtained data and knowingly or recklessly making an AI system available for use that is likely to cause harm.

Administrative monetary penalties under Part 1 of Bill C-27 (relating to data protection) are quite steep. However, the necessary details regarding the AMPs that will be available for breach of the AIDA are to be set out in regulations that have yet to be drafted (s. 29(4)(d)). All that the AIDA really tells us about these AMPs is that their purpose is “to promote compliance with this Part and not to punish” (s. 29(2)). Note the regulation-making power for AMPs at the bottom of the list set out in s. 29(4). This provision allows the Minister to make regulations “respecting the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the scheme.” There is a good chance that the AMPs will (eventually) be administered by the new Personal Information and Data Protection Tribunal, which is created in Part 2 of Bill C-27. This, at least, will provide some separation between the Minister and the imposition of financial penalties. If this is the plan, though, the draft law should say so.

It is clear that not all breaches of the obligations in the AIDA will be ones for which AMPs are available. Regulations will specify which breaches of the AIDA or its regulations constitute violations (s. 29(4)(a)), and whether the breach of a particular obligation is classified as minor, serious or very serious (s. 29(4)(b)). The regulations will also set out how any such proceedings will unfold, as well as the amounts or ranges of AMPs and the factors to be taken into account in imposing them.

This lack of important detail makes it hard not to think of the oversight and enforcement scheme in the AIDA as a rough draft sketched out on a cocktail napkin after an animated after-hours discussion of what enforcement under the AIDA should look like. Clearly, the goal is to be ‘agile’, but ‘agile’ should not be confused with slapdash. Parliament is being asked to enact a law that leaves many essential components undefined. With so much left to regulations, one wonders whether all the missing pieces can (or will) be put in place within this decade. There are instances of other federal laws left incomplete by never-drafted regulations. For example, we are still waiting for the private right of action provided for in Canada’s Anti-Spam Law, which cannot come into effect until the necessary regulations are drafted. A cynic might even say that failing to draft essential regulations is a good way to check the “enact legislation on this issue” box on the to-do list, without actually changing the status quo.

Published in Privacy

This is the third in my series of posts on the Artificial Intelligence and Data Act (AIDA) found in Bill C-27, which is part of a longer series on Bill C-27 generally. Earlier posts on the AIDA have considered its purpose and application, and regulated activities. This post looks at the harms that the AIDA is designed to address.

The proposed Artificial Intelligence and Data Act (AIDA), which is the third part of Bill C-27, sets out to regulate ‘high-impact’ AI systems. The concept of ‘harm’ is clearly important to this framework. Section 4(b) of the AIDA states that a purpose of the legislation is “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests”.

Under the AIDA, persons responsible for high-impact AI systems have an obligation to identify, assess, and mitigate risks of harm or biased output (s. 8). Those persons must also notify the Minister “as soon as feasible” if a system for which they are responsible “results or is likely to result in material harm”. There are also a number of oversight and enforcement functions that are triggered by harm or a risk of harm. For example, if the Minister has reasonable grounds to believe that a system may result in harm or biased output, he can demand the production of certain records (s. 14). If there is a serious risk of imminent harm, the Minister may order a person responsible to cease using a high impact system (s. 17). The Minister is also empowered to make public certain information about a system where he believes that there is a serious risk of imminent harm and the publication of the information is essential to preventing it (s. 28). Elevated levels of harm are also a trigger for the offence in s. 39, which involves “knowing or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property”.

‘Harm’ is defined in s. 5(1) to mean:

(a) physical or psychological harm to an individual;

(b) damage to an individual’s property; or

(c) economic loss to an individual.

The term “individual” in this definition places an important limit on the scope of the AIDA. First, it is unlikely that the term ‘individual’ includes a corporation. Typically, the word ‘person’ is considered to include corporations, and the word ‘person’ is used in this sense in the AIDA. This suggests that “individual” is meant to have a different meaning. The federal Interpretation Act is silent on the issue. It is a fair interpretation of the definition of ‘harm’ that “individual” is not the same as “person”, and means an individual (human) person. The French version uses the term “individu”, and not “personne”. The harms contemplated by this legislation are therefore to individuals and not to corporations.

Defining harm in terms of individuals has other ramifications. The AIDA defines high-risk AI systems in terms of their impacts on individuals. Importantly, this excludes groups and communities. It also very significantly focuses on what are typically considered quantifiable harms, and uses language that suggests quantifiability (economic loss, damage to property, physical or psychological harm). Some important harms may be difficult to establish or to quantify. For example, class action lawsuits relating to significant data breaches have begun to wash up on the beach of lost causes due to the impossibility of proving material loss either because, although thousands may have been impacted, the individual losses are impossible to quantify, or because it is impossible to prove a causal link between very real identity theft and that particular data breach. Consider an AI system that manipulates public opinion through an algorithm that drives content to individuals based on its shock value rather than its truth. Say this happens during a pandemic and it convinces people that they should not get vaccinated or take other recommended public health measures. Say some people die because they were misled in this way. Say other people die because they were exposed to infected people who were misled in this way. How does one prove the causal link between the physical harm of injury or death of an individual and the algorithm? What if there is an algorithm that manipulates voter sentiment in a way that changes the outcome of an election? What is the quantifiable economic loss or psychological harm to any individual? How could causation be demonstrated? The harm, once again, is collective.

The EU AI Act has also been criticized for focusing on individual harm, but the wording of that law is still broader than that in the AIDA. The EU AI Act refers to high-risk systems in terms of “harm to the health and safety or a risk of adverse impact on fundamental rights of persons”. This at least introduces a more collective dimension, and it avoids the emphasis on quantifiability.

The federal government’s own Directive on Automated Decision-Making (DADM) which is meant to guide the development of AI used in public sector automated decision systems (ADS) also takes a broader approach to impact. In assessing the potential impact of an ADS, the DADM takes into account: “the rights of individuals or communities”, “the health or well-being of individuals or communities”, “the economic interests of individuals, entities, or communities”, and “the ongoing sustainability of an ecosystem”.

With its excessive focus on individuals, the AIDA is simply tone deaf to the growing global understanding of collective harm caused by the use of human-derived data in AI systems.

One response of the government might be to point out that the AIDA is also meant to apply to “biased output”. Biased output is defined in the AIDA as:

content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds. (s. 5(1)) [my emphasis]

The argument here will be that the AIDA will also capture discriminatory biases in AI. However, the phrase “in relation to an individual” once again returns the focus of this definition to individuals, rather than groups. It can be very hard for an individual to demonstrate that a particular decision discriminated against them (especially if the algorithm is obscure). In any event, biased AI will tend to replicate systemic discrimination. Although it will affect individuals, it is the collective impact that is most significant – and this should be recognized in the law. The somewhat obsessive focus on individual harm in the AIDA may unwittingly help perpetuate denials of systemic discrimination.

It is also important to note that the definition of “harm” does not include “biased output”, and while the terms are used in conjunction in some cases (for example, in s. 8’s requirement to “identify, assess and mitigate the risks of harm or biased output”), other obligations relate only to “harm”. Since the two are used conjunctively in some parts of the statute, but not others, a judge interpreting the statute might presume that when only one of the terms is used, then it is only that term that is intended. Section 17 of the AIDA allows the Minister to order a person responsible for a high-impact system to cease using it or making it available if there is a “serious risk of imminent harm”. Section 28 permits the Minister to order the publication of information related to an AI system where there are reasonable grounds to believe that the use of the system gives rise to “a serious risk of imminent harm”. In both cases, the defined term ‘harm’ is used, but not ‘biased output’.

The goals of the AIDA to protect against harmful AI are both necessary and important, but in articulating the harm that it is meant to address, the Bill underperforms.

Published in Privacy

This is the second in a series of posts on Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA). The first post looked at the scope of application of the AIDA. This post considers what activities and what data will be subject to governance.

Bill C-27’s proposed Artificial Intelligence and Data Act (AIDA) governs two categories of “regulated activity” so long as they are carried out “in the course of international or interprovincial trade and commerce”. These are set out in s. 5(1):

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;

(b) designing, developing or making available for use an artificial intelligence system or managing its operations.

These activities are cast in broad terms, capturing activities related both to the general curating of the data that fuel AI and to the design, development, distribution and management of AI systems. The obligations in the statute do not apply universally to all engaged in the AI industry. Instead, different obligations apply to those performing different roles. The chart below identifies each actor and the corresponding obligations.

 

Actor: A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity (see the definition of “regulated activity” in s. 5(1))
Obligations: s. 6 (data anonymization, use and management); s. 10 (record keeping regarding measures taken under s. 6)

Actor: A person who is responsible for an artificial intelligence system (see the definition of “person responsible” in s. 5(2))
Obligations: s. 7 (assess whether a system is high impact); s. 10 (record keeping regarding reasons supporting their assessment of whether the system is high-impact under s. 7)

Actor: A person who is responsible for a high-impact system (see the definition of “person responsible” in s. 5(2) and of “high-impact system” in s. 5(1))
Obligations: s. 8 (measures to identify, assess and mitigate risk of harm or biased output); s. 9 (measures to monitor compliance with the mitigation measures established under s. 8 and the effectiveness of those measures); s. 10 (record keeping regarding measures taken under ss. 8 and 9); s. 12 (obligation to notify the Minister as soon as feasible if the use of the system results or is likely to result in material harm)

Actor: A person who makes available for use a high-impact system
Obligation: s. 11(1) (publish a plain language description of the system and other required information)

Actor: A person who manages the operation of a high-impact system
Obligation: s. 11(2) (publish a plain language description of how the system is used and other required information)

 

For most of these provisions, the details of what is actually required by the identified actor will depend upon regulations that have yet to be drafted.

A “person responsible” for an AI system is defined in s. 5(2) of the AIDA in these terms:

5(2) For the purposes of this Part, a person is responsible for an artificial intelligence system, including a high-impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation.

Thus, the obligations in ss. 7, 8, 9, 10 and 11 apply only to those engaged in the activities described in s. 5(1)(b) (designing, developing or making available an AI system or managing its operation). Further, it is important to note that, with the exception of sections 6 and 7, the obligations in the AIDA also apply only to ‘high impact’ systems. The definition of a high-impact system has been left to regulations and is as yet unknown.

Section 6 stands out somewhat as a distinct obligation relating to the governance of data used in AI systems. It applies to a person who carries out a regulated activity and who “processes or makes available for use anonymized data in the course of that activity”. Of course, the first part of the definition of a regulated activity includes someone who processes or makes available for use “any data relating to human activities for the purpose of designing, developing or using” an AI system. So, this obligation will apply to anyone “who processes or makes available for use anonymized data” (s. 6) in the course of “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” (s. 5(1)). Basically, then, for s. 6 to apply, the anonymized data must be processed for the purposes of designing, developing or using an AI system. All of this must also be in the course of international or interprovincial trade and commerce.

Note that the first of these two categories of regulated activity involves data “relating to human activities” that are used in AI. This is interesting. The new Consumer Privacy Protection Act (CPPA) that forms the first part of Bill C-27 will regulate the collection, use and disclosure of personal data in the course of commercial activity. However, it provides, in s. 6(5), that: “For greater certainty, this Act does not apply in respect of personal information that has been anonymized.” By using the phrase “data relating to human activities” instead of “personal data”, s. 5(1) of the AIDA clearly addresses human-derived data that fall outside the definition of personal information in the CPPA because of anonymization.

Superficially, at least, s. 6 of the AIDA appears to pick up the governance slack that arises where anonymized data are excluded from the scope of the CPPA. [See my post on this here]. However, for this to happen, the data have to be used in relation to an “AI system”, as defined in the legislation. Not all anonymized data will be used in this way, and much will depend on how the definition of an AI system is interpreted. Beyond that, the AIDA only applies to a ‘regulated activity’ which is one carried out in the course of international and inter-provincial trade and commerce. It does not apply outside the trade and commerce context, nor does it apply to any excluded actors [as discussed in my previous post here]. As a result, there remain clear gaps in the governance of anonymized data. Some of those gaps might (eventually) be filled by provincial governments, and by the federal government with respect to public-sector data usage. Other gaps – e.g., with respect to anonymized data used for purposes other than AI in the private sector context – will remain. Further, governance and oversight under the proposed CPPA will be by the Privacy Commissioner of Canada, an independent agent of Parliament. Governance under the AIDA (as will be discussed in a forthcoming post) is by the Minister of Industry and his staff, who are also responsible for supporting the AI industry in Canada. Basically, the treatment of anonymized data between the CPPA and the AIDA creates a significant governance gap in terms of scope, substance and process.

On the issue of definitions, it is worth making a small side-trip into ‘personal information’. The definition of ‘personal information’ in the AIDA provides that the term “has the meaning assigned by subsections 2(1) and (3) of the Consumer Privacy Protection Act.” Section 2(1) is pretty straightforward – it defines “personal information” as “information about an identifiable individual”. However, s. 2(3) is more complicated. It provides:

2(3) For the purposes of this Act, other than sections 20 and 21, subsections 22(1) and 39(1), sections 55 and 56, subsection 63(1) and sections 71, 72, 74, 75 and 116, personal information that has been de-identified is considered to be personal information.

The default rule for ‘de-identified’ personal information is that it is still personal information. However, the CPPA distinguishes between ‘de-identified’ (pseudonymized) data and anonymized data. Nevertheless, for certain purposes under the CPPA – set out in s. 2(3) – de-identified personal information is not personal information. This excruciatingly-worded limit on the meaning of ‘personal information’ is ported into the AIDA, even though the statutory provisions referenced in s. 2(3) are neither part of the AIDA nor particularly relevant to it. Since the legislator is presumed not to be daft, this must mean that some of these circumstances are relevant to the AIDA. It is just not clear how. The term “personal information” is used most significantly in the AIDA in the s. 38 offence of possessing or making use of illegally obtained personal information. It is hard to see why it would be relevant to add the CPPA s. 2(3) limit on the meaning of ‘personal information’ to this offence. If de-identified (not anonymized) personal data (from which individuals can be re-identified) are illegally obtained and then used in AI, it is hard to see why that should not also be captured by the offence.

 

Published in Privacy

This is the first of a series of posts on the part of Bill C-27 that would enact a new Artificial Intelligence and Data Act (AIDA) in Canada. Previous posts have considered the part of the bill that would reform Canada’s private sector data protection law. This series on the AIDA begins with an overview of its purpose and application.

Bill C-27 contains the text of three proposed laws. The first is a revamped private sector data protection law. The second would establish a new Data Tribunal that is assigned a role under the data protection law. The third is a new Artificial Intelligence and Data Act (AIDA). While the two other components were present in the bill’s failed predecessor Bill C-11, the AIDA is new – and for many came as a bit of a surprise. The common thread, of course, is the government’s Digital Charter, which set out a series of commitments for building trust in the digital and data economy.

The preamble to Bill C-27, as a whole, addresses both AI and data protection concerns. Where it addresses AI regulation directly, it identifies the need to harmonize with national and international standards for the development and deployment of AI, and the importance of ensuring that AI systems uphold Canadian values in line with the principles of international human rights law. The preamble also signals a need for a more agile regulatory framework – something that might go towards justifying why so much of the substance of AI governance in the AIDA has been left to the development of regulations. Finally, the preamble speaks of a need “to foster an environment in which Canadians can seize the benefits of the digital and data-driven economy and to establish a regulatory framework that supports and protects Canadian norms and values, including the right to privacy.” This, then, frames how AI regulation (and data protection) will work in Canada – an attempt to walk a tightrope between enabling fast-paced innovation and protecting norms, values and privacy rights.

Regulating the digital economy has posed some constitutional (division of powers) challenges for the federal government, and these challenges are evident in the AIDA, particularly with respect to the scope of application of the law. Section 4 sets out the dual purposes of the legislation:

(a) to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and

(b) to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

By focusing on international and interprovincial trade and commerce, the government asserts its general trade and commerce jurisdiction, without treading on the toes of the provinces, who remain responsible for intra-provincial activities. Yet, this means that there will be important gaps in AI regulation. Until the provinces act, these will be with respect to purely provincial AI solutions, whether in the public or private sectors, and, to a large extent, AI in the not-for-profit sector. However, this could get complicated since the AIDA sets out obligations for a range of actors, some of which could include international or interprovincial providers of AI systems to provincial governments.

The second purpose set out in s. 4 suggests that at least when it comes to AI systems that may result in serious harm, the federal jurisdiction over criminal law may be invoked. The AIDA creates a series of offences that could be supported by this power – yet, ultimately the offences relate to failures to meet the obligations that arise based on being engaged in a ‘regulated activity’, which takes one back to activities carried out in the course of international or interprovincial trade and commerce. The federal trade and commerce power thus remains the backbone of this bill.

Although there would be no constitutional difficulties with the federal government exerting jurisdiction over its own activities, the AIDA specifically excludes its application to federal government institutions, as defined in the Privacy Act. Significantly, it also does not apply to products, services or activities that are under the control of the Minister of National Defence, the Canadian Security Intelligence Service, the Communications Security Establishment or any other person who is responsible for a federal or provincial department or agency that is prescribed by regulation. This means that the AIDA would not apply even to those AI systems developed by the private sector for any of the listed actors. The exclusions are significant, particularly since the AIDA seems to be focussed on the prevention of harm to individuals (more on this in a forthcoming post) and the parties excluded are ones that might well develop or commission the development of AI that could (seriously) adversely impact individuals. It is possible that the government intends to introduce or rely upon other governance mechanisms to ensure that AI and personal data are not abused in these contexts. Or not. In contrast, the EU’s AI Regulation addresses the perceived need for latitude when it comes to national defence via an exception for “AI systems developed or used exclusively for military purposes” [my emphasis]. This exception is nowhere near as broad as that in the AIDA, which excludes all “products, services or activities under the control of the Minister of National defence”. Note that the Department of National Defence (DND) made headlines in 2020 when it contracted for an AI application to assist in hiring; it also made headlines in 2021 over an aborted psyops campaign in Canada. There is no reason why non-military DND uses of AI should not be subject to governance.

The government might justify excluding the federal public sector from governance under the AIDA on the basis that it is already governed by the Directive on Automated Decision-Making. This Directive applies to automated decision-making systems developed and used by the federal government, although there are numerous gaps in its application. For example, it does not apply to systems adopted before it took effect, it applies only to automated decision systems and not to other AI systems, and it currently does not apply to systems used internally (e.g., to govern public sector employees). It also does not have the enforcement measures that the AIDA has, and, since government systems could well be high-impact, this seems like a gap in governance. Consider in this respect the much-criticized ArriveCan App, designed for COVID-19 border screening and now contemplated for much broader use at border entries into Canada. The app has been criticized for its lack of transparency, and for the ‘glitch’ that sent inexplicable quarantine orders to potentially thousands of users. The ArriveCan app went through the DADM process, but clearly this is not enough to address governance issues.

Another important limit on the application of the AIDA is that most of its obligations apply only to “high impact systems”. This term is defined in the legislation as “an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations.” This essentially says that this crucial term in the Bill will mean what cabinet decides it will mean at some future date. It is difficult to fully assess the significance or impact of this statute without any sense of how this term will be defined. The only obligations that appear to apply more generally are the obligation in s. 6 regarding the anonymization of data used or intended for use in AI systems, and the obligation in s. 10 to keep records regarding the anonymization measures taken.

By contrast, the EU’s AI Regulation applies to all AI systems. These fall into one of four categories: unacceptable risk, high-risk, limited risk, and low/minimal risk. Those systems that fall into the first category are banned. Those in the high-risk category are subject to the regulation’s most stringent requirements. Limited-risk AI systems need only meet certain transparency requirements and low-risk AI is essentially unregulated. Note that Canada’s approach to ‘agile’ regulation is to address only one category of AI systems – those that fall into the as-yet undefined category of high ‘impact’. It is unclear whether this is agile or supine. It is also not clear what importance should be given to the choice of the word ‘impact’ rather than ‘risk’. However, it should be noted that risk refers not just to actual but to potential harm, whereas ‘impact’ seems to suggest actual harm. Although one should not necessarily read too much into this choice of language, the fact that this important element is left to regulations means that Parliament will be asked to enact a law without understanding its full scope of application. This seems like a problem.

 

Published in Privacy

As part of my series on Bill C-27, I will be writing about both the proposed amendments to Canada’s private sector data protection law and the part of the Bill that will create a new Artificial Intelligence and Data Act (AIDA). So far, I have been writing about privacy, and my posts on consent, de-identification, data-for-good, and the right of erasure are already available. Posts on the AIDA will follow, although I still have a bit more territory on privacy to cover first. However, in the meantime, as a teaser, perhaps you might be interested in playing a bit of statutory MadLibs…

Have you ever played MadLibs? It’s a paper-and-pencil game where someone asks the people in the room to supply a verb, noun, adverb, adjective, or body part, and the provided words are used to fill in the blanks in a story. The results are often absurd and sometimes hilarious.

The federal government’s proposal in Bill C-27 for an Artificial Intelligence and Data Act really lends itself to a game of statutory MadLibs. This is because some of the most important parts of the bill are effectively left blank – either the Minister or the Governor-in-Council is tasked in the Bill with filling out the details in regulations. Do you want to play? Grab a pencil, and here goes:

Company X is developing an AI system that will (insert definition of ‘high impact system’). It knows that this system is high impact because (insert how a company should assess impact). Company X has established measures to mitigate potential harms by (insert measures the company took to comply with the regulations) and has also recorded (insert records it kept), and published (insert information to be published).

Company X also had its system audited by an auditor who is (insert qualifications). Company X is being careful, because if it doesn’t comply with (insert a section of the Act for which non-compliance will count as a violation), it could be found to have committed a (insert degree of severity) violation. This could lead to (insert type of proceeding).

Company X, though, will be able to rely on (insert possible defence). However, if (insert possible defence) is unsuccessful, Company X may be liable to pay an Administrative Monetary Penalty if they are a (insert category of ‘person’) and if they have (insert factors to take into account). Ultimately, if they are unhappy with the outcome, they can launch a (insert a type of appeal proceeding).

Because of this regulatory scheme, Canadians can feel (insert emotion) at how their rights and interests are protected.

Published in Privacy

 

Note: The following is my response to the call for submissions on the recommendations following the third review of Canada’s Directive on Automated Decision-Making. Comments are due by June 30, 2022. If you are interested in commenting, please consult the Review Report and the Summary of Key Issues and Proposed Amendments. Comments can be submitted by e-mail.

 

The federal Directive on Automated Decision-Making (DADM) and its accompanying Algorithmic Impact Assessment tool (AIA) are designed to provide governance for the adoption and deployment of automated decision systems (ADS) by Canada’s federal government. Governments are increasingly looking to ADS in order to speed up routine decision-making processes and to achieve greater consistency in decision-making. At the same time, there are reasons to be cautious. Automated decision systems carry risks of incorporating and replicating discriminatory bias. They may also lack the transparency required of government decision-making, particularly where important rights or interests are at stake. The DADM, which has been in effect since April 2019 (with compliance mandatory no later than April 2020), sets out a series of obligations related to the design and deployment of automated decision-making systems. The extent of the obligations depends upon a risk assessment, and the AIA is the tool by which the level of risk of the system is assessed.

Given that this is a rapidly evolving area, the DADM provides that it will be reviewed every six months. It is now in its third review. The first two reviews led to the clarification of certain obligations in the DADM and to the development of guidelines to aid in its interpretation. This third review proposes a number of more substantive changes. This note comments on some of these changes and proposes an issue for future consideration.

Clarify and Broaden the Scope

A key recommendation in this third round of review relates to the scope of the DADM. Currently, the DADM applies only to ‘external’ services of government – in other words, services offered to individuals or organizations by government. It does not apply internally. This is a significant gap when one considers the expanding use of ADS in the employment context. AI-enabled decision systems have been used in hiring processes, and they can also be used to conduct performance reviews and to make or assist in decisions about promotions and internal workforce mobility. The use of AI tools in the employment context can have significant impacts on the lives and careers of employees. It seems a glaring oversight not to include such systems in the governance regime for ADS. The review team has recommended expanding the scope of the DADM to include internal as well as external services. They note that this move would also extend the DADM to any ADS used for “grants and contributions, awards and recognition, and security screening” (Report at 11). This is an important recommendation and one which should be implemented.

The review team also recommends a clarification of the language regarding the application of the DADM. Currently it puts within its scope “any system, tool, or statistical models used to recommend or make an administrative decision about a client”. Noting that “recommend” could be construed as including only those systems that recommend a specific outcome, as opposed to systems that process information on behalf of a decision-maker, the team proposes replacing “recommend” with “support”. This too is an important recommendation which should be implemented.

Periodic Reviews

Currently the DADM provides for its review every six months. This was always an ambitious review schedule. No doubt it was motivated by the fact that the DADM was a novel tool designed to address a rapidly emerging and evolving technology with potentially significant implications. The idea was to ensure that it was working properly and to promptly address any issues or problems. In this third review, however, the team recommends changing the review period from six months to two years. The rationale is that the six-month timetable makes it challenging for the team overseeing the DADM (which is constantly in a review cycle), and makes it difficult to properly engage stakeholders. They also cite the need for the DADM to “display a degree of stability and reliability, enabling federal institutions and the clients they serve to plan and act with a reasonable degree of confidence.” (Report at 12).

This too is a reasonable recommendation. While more frequent reviews were important in the early days of the DADM and the AIA, reviews every six months seem unduly burdensome once initial hiccups are resolved. A six-month review cycle engages the team responsible for the DADM in a constant cycle of review, which may not be the best use of resources. The proposed two-year review cycle would allow more experience to be garnered with the DADM and AIA, enabling a more substantive assessment of issues as they arise. Further, a two-year window is much more realistic if stakeholders are to be engaged in a meaningful way. Being asked to comment on reports and proposed changes every six months seems burdensome for anyone – including an already stretched civil society sector. The review document suggests that Canada’s Chief Information Officer could request completion of an off-cycle review if the need arose, leaving room for the possibility that a more urgent issue could be addressed outside of the two-year review cycle.

Data Model and Governance

The third review also proposes amendments to provide for what it describes as a more ‘holistic’ approach to data governance. Currently, the DADM focuses on data inputs – in other words, on assessing the quality, relevance and timeliness of the data used in the model. The review report recommends the addition of an obligation to establish “measures to ensure that data used and generated by the Automated Decision System are traceable, protected, and appropriately retained and disposed of in accordance with the Directive on Service and Digital, Directive on Privacy Practices, and Directive on Security Management”. It also recommends amendments to extend testing and assessment beyond data to the underlying models, in order to assess both data and algorithms for bias or other problems. These are positive amendments which should be implemented.

Explanation

The review report notes that while the DADM requires “meaningful explanations” of how automated decisions were reached, and while guidelines provide some detail as to what is meant by explainability, there is still uncertainty about what explainability entails. The Report recommends adding language in Appendix C, in relation to impact assessment, that will set out the information necessary for ‘explainability’. This includes:

  • The role of the system in the decision-making process;
  • The training and client data, their source and method of collection, if applicable;
  • The criteria used to evaluate client data and the operations applied to process it; and
  • The output produced by the system and any relevant information needed to interpret it in the context of the administrative decision.

Again, this recommendation should be implemented.

Reasons for Automation

The review would also require those developing ADM systems for government to specifically identify why it was considered necessary or appropriate to automate the existing decision-making process. The Report refers to a “clear and demonstrable need”. This is an important additional criterion as it requires transparency as to the reasons for automation – and that these reasons go beyond the fact that vendor-demonstrated technologies look really cool. As the authors of the review note, requiring justification also helps to assess the parameters of the system adopted – particularly if the necessity and proportionality approach favoured by the Office of the Privacy Commissioner of Canada is adopted.

Transparency

The report addresses several issues that are relevant to the transparency dimensions of the DADM and the accompanying AIA. Transparency is an important element of the DADM, and it is key both to the legitimacy of the adoption of ADS by government and to their ongoing use. Without transparency in government decision-making that impacts individuals, organizations and communities, there can be no legitimacy. A number of transparency elements are built into the DADM. For example, there are requirements to provide notice of automated decision systems, a right to an explanation of decisions that is tailored to the impact of the decision, and a requirement not just to conduct an AIA, but to publish the results. The review report includes a number of recommendations to improve transparency. These include a recommendation to clarify when an AIA must be completed and released, greater transparency around peer review results, more explicit criteria for explainability, and additional questions in the AIA. These are all welcome recommendations.

At least one of these recommendations may go some way to allaying my concerns with the system as it currently stands. The documents accompanying the report (slide 3 of summary document) indicate that there are over 300 AI projects across 80% of federal institutions. However, at the time of writing, only four AIAs were published on the open government portal. There is clearly a substantial lag between development of these systems and release of the AIAs. The recommendation that an AIA be not just completed but also released prior to the production of the system is therefore of great importance to ensuring transparency.

It may be that some of the discrepancy in the numbers is attributable to the fact that the DADM came into effect in 2020 and does not apply retroactively to projects that were already underway. For transparency’s sake, I would also recommend that a public register of ADS be created that contains basic information about all government ADS. This could include their existence and function, as well as some transparency regarding explainability, the reasons for adoption, and the measures taken to review, assess and ensure the reliability of these systems. Although it is too late, in the case of these pre-existing systems, to perform a proactive AIA, there should be some form of reporting tool that can be used to provide important information, for transparency purposes, to the public.

Consideration for the Future

The next review of the DADM and the AIA should also involve a qualitative assessment of the AIAs that have been published to date. If the AIA is to be a primary tool not just for assessing ADS but also for providing transparency about them, then the completed AIAs need to be good. Currently there is a requirement to conduct an AIA for a system within the scope of the DADM – but there is no explicit requirement for it to be of a certain quality. A quick review of the four AIAs currently available online shows some discrepancy between them in terms of the quality of the assessment. For example, the project description for one such system is an unhelpful 9-word sentence that does not make clear how AI is actually part of the project. This is in contrast to another that describes the project in a 14-line paragraph. The two are clearly highly divergent in terms of the level of clarity and detail provided.

The first of these two AIAs also seems to contain contradictory answers to the AIA questionnaire. For example, the answer to the question “Will the system only be used to assist a decision-maker” is ‘yes’. Yet the answer to the question “Will the system be replacing a decision that would otherwise be made by a human” is also ‘yes’. Either one of these answers is incorrect, or the answers do not capture how the respondent interpreted these questions. These are just a few examples. It is easy to see how use of the AIA tool can range from engaged to pro forma.

The obligations imposed on departments with respect to ADS vary depending upon the risk assessment score. This score is evaluated through the questionnaire, and one of the questions asks “Are clients in this line of business particularly vulnerable?” In the AIA for an access to information (ATIP) tool, the answer given to this question is “no”. Of course, the description of the tool is so brief that it is hard to get a sense of how it functions. However, I would think that the clientele for an ATIP portal would be quite diverse. Some users will be relatively sophisticated (e.g., journalists or corporate users). Others will be inexperienced. For some of these, information sought may be highly important to them as they may be seeking access to government information to right a perceived wrong, to find out more about a situation that adversely impacts them, and so on. In my view, this assessment of the vulnerability of the clients is not necessarily accurate. Yet the answer provided contributes to a lower overall score and thus a lower level of accountability. My recommendation for the next round of reviews is to assess the overall effectiveness of the AIA tool in terms of the information and answers provided and in terms of their overall accuracy.
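To make the stakes of such questionnaire answers concrete, here is a minimal, purely illustrative sketch of how yes/no answers might be aggregated into an impact score that determines the level of obligations. The question names, weights and thresholds below are my own hypothetical assumptions for illustration only; they are not taken from the actual AIA scoring methodology.

```python
# Purely illustrative sketch of questionnaire-based impact scoring.
# The question names, weights and thresholds are hypothetical and are
# NOT drawn from the federal Algorithmic Impact Assessment tool.

HYPOTHETICAL_WEIGHTS = {
    "system_replaces_human_decision": 3,
    "clients_are_vulnerable": 2,        # the contested question discussed above
    "decision_is_difficult_to_reverse": 2,
    "uses_personal_information": 1,
}

def impact_level(answers):
    """Sum the weights of 'yes' answers and map the total to a coarse level."""
    score = sum(w for q, w in HYPOTHETICAL_WEIGHTS.items() if answers.get(q, False))
    if score >= 6:
        return "Level IV (highest obligations)"
    if score >= 4:
        return "Level III"
    if score >= 2:
        return "Level II"
    return "Level I (lowest obligations)"

# Changing only the vulnerability answer moves this hypothetical system from
# Level III (score 5) down to Level II (score 3), lowering the obligations
# that would attach to it.
print(impact_level({"system_replaces_human_decision": True, "clients_are_vulnerable": True}))
print(impact_level({"system_replaces_human_decision": True, "clients_are_vulnerable": False}))
```

The point of the sketch is simply that a single contestable answer, such as whether clients are “particularly vulnerable”, can be enough to shift a system into a lower level of obligations, which is why the accuracy of these answers matters.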

I note that the review report recommends adding questions to the AIA in order to improve the tool. Quite a number of these call for free-text answers, which must be drafted by the party completing the AIA. Proposed questions include ones relating to the user needs to be addressed, how the system will meet those needs, and the effectiveness of the system in meeting those needs, along with reasons for this assessment. Proposed questions would also ask whether non-AI-enabled solutions were considered and, if so, why AI was chosen as the preferred method. A further question asks what the consequences would be of not deploying the system. This additional information is important both to assessing the tool and to providing transparency. However, as noted above, the answers will need to be clear and sufficiently detailed in order to be of any use.

The AIA is crucial to assessing the level of obligation and to ensuring transparency. If AIAs are pro forma or excessively laconic, then no matter how finely tuned the DADM is, it will not achieve the desired results. The review committee’s recommendation that plain language summaries of peer review assessments also be published will provide a means of assessing the quality of the AIAs, and it is thus an important recommendation for strengthening both transparency and compliance.

A final issue that I would like to address is that, to achieve transparency, people will need to be able to easily find and access the information about the systems. Currently, AIAs are published on the Open Government website. There, they are listed alphabetically by title. This is not a huge problem right now, since there are only four of them. As more are published, it would be helpful to have a means of organizing them by department or agency, or by other criteria (including risk/impact score) to improve their findability and usability. Further, it will be important that any peer review summaries are linked to the appropriate AIAs. In addition to publication on the open government portal, links to these documents should be made available from department, agency or program websites. It would also be important to have an index or registry of AI in the federal sector – including not just those projects covered by the DADM, but also those in production prior to the DADM’s coming into force.

[Note: I have written about the DADM and the AIA from an administrative law perspective. My paper, which looks at the extent to which the DADM addresses administrative law concerns regarding procedural fairness, can be found here.]

Published in Privacy

 

Ontario has just released its Beta principles for the ethical use of AI and data enhanced technologies in Ontario. These replace the earlier Alpha principles, and are revised based upon commentary and feedback on the Alpha version. Note that these principles are designed for use in relation to AI technologies adopted for the Ontario public sector.

Below you will find a comparison table I created to provide a quick glance at what has been changed since the previous version. I have flagged significant additions with italics in the column for the Beta version. I have also flagged some words or concepts that have disappeared in the Beta version by using strikethrough in the column with the Alpha version. I have focused on the principles, and have not flagged changes to the “Why it Matters” section of each principle.

One important change to note is that the Beta version now refers not just to technologies used to make decisions, but also technologies used to assist in decision-making.

 

 

Principles for Ethical Use [Alpha]

The alpha Principles for Ethical Use set out six points to align the use of data-driven technologies within government processes, programs and services with ethical considerations and values. Our team has undertaken extensive jurisdictional scans of ethical principles across the world, in particular the US, the European Union and major research consortiums. The Ontario “alpha” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

Principles for Ethical Use [Beta]

 

These Principles for Ethical Use set out six points to align the use of data enhanced technologies within government processes, programs and services with ethical considerations and values.

 

The Trustworthy AI team within Ontario’s Digital Service has undertaken extensive jurisdictional scans of ethical principles across the world, in particular New Zealand, the United States, the European Union and major research consortiums.

 

The Ontario “beta” principles complement the Canadian federal principles by addressing a gap concerning specificity. Ontario’s principles support our diverse economic ecosystem by not clashing with existing best practices, principles and frameworks. This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.

 

We’re in the early days of bringing these principles to life. We encourage you to adopt as much of the principles as possible, and to share your feedback with us. You can email the team for more details.

 

You can also check out the Transparency Guidelines (GitHub).

1. Transparent and Explainable [Alpha]

 

There must be transparent and responsible disclosure around data-driven technology like Artificial Intelligence (AI), automated decisions and machine learning (ML) systems to ensure that people understand outcomes and can discuss, challenge and improve them.

 

 

Where automated decision making has been used to make individualized and automated decisions about humans, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject should be available.

 

Why it Matters

 

There is no way to hold data-driven technologies accountable, particularly as they impact various historically disadvantaged groups if the public is unaware of the algorithms and automated decisions the government is making. Transparency of use must be accompanied with plain language explanations for the public to have access to and not just the technical or research community. For more on this, please consult the Transparency Guidelines.

 

1. Transparent and explainable [Beta]

 

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

 

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

 

Why it matters

 

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.

 

Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.

 

For more on this, please consult the Transparency Guidelines.

 

2. Good and Fair [Alpha]

 

Data-driven technologies should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards to ensure a fair and just society.

 

Designers, policy makers and developers should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the lifecycle of use. The definitions of good and fair are intentionally vague to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

2. Good and fair [Beta]

 

Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.

 

Why it matters

 

Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

 

3. Safe [Alpha]

 

Data-driven technologies like AI and ML systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers and developers should implement mechanisms and safeguards, such as capacity for human determination and complete halt of the system operations, that are appropriate to the context and predetermined at initial deployment.

 


Why it matters

Creating safe data-driven technologies means embedding safeguards throughout the life cycle of the deployment of the algorithmic system. Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. Despite our best efforts there will be unexpected outcomes and impacts. Systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are no longer agreeable that a human can adapt, correct or improve the system.

3. Safe [Beta]

 

Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed.

 

Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system’s life cycle.

 

Why it matters

Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed.

 

Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

 

4. Accountable and Responsible [Alpha]

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the above principles. Algorithmic systems should be periodically peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Where AI is used to make decisions about individuals there needs to be a process for redress to better understand how a given decision was made.

 

Why it matters

 

In order for there to be accountability for decisions that are made by an AI or ML system a person, group of people or organization needs to be identified prior to deployment. This ensures that if redress is needed there is a preidentified entity that is responsible and can be held accountable for the outcomes of the algorithmic systems.

 

4. Accountable and responsible [Beta]

 

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system’s life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted.

 

Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

 

Why it matters

 

Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility.

 

While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them.

 

Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the lifecycle of the system.

 

5. Human Centric [Alpha]

 

The processes and outcomes behind an algorithm should always be developed with human users as the main consideration. Human centered AI should reflect the information, goals, and constraints that a human decision-maker weighs when arriving at a decision.

 

Keeping human users at the center entails evaluating any outcomes (both direct and indirect) that might affect them due to the use of the algorithm. Contingencies for unintended outcomes need to be in place as well, including removing the algorithms entirely or ending their application.

 

Why it matters

 

Placing the focus on human user ensures that the outcomes do not cause adverse effects to users in the process of creating additional efficiencies.

 

In addition, Human-centered design is needed to ensure that you are able to keep a human in the loop when ensuring the safe operation of an algorithmic system. Developing algorithmic systems with the user in mind ensures better societal and economic outcomes from the data-driven technologies.

 

5. Human centric [Beta]

 

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system’s life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged.

 

Why it matters

 

Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later.

 

Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies.

 

Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies.

 

6. Sensible and Appropriate [Alpha]

 

Data-driven technologies like AI or ML shall be developed with consideration of how it may apply to specific sectors or to individual cases and should align with the Canadian Charter of Human Rights and Freedoms and with Federal and Provincial AI Ethical Use.

 

Other biproducts of deploying data-driven technologies such as environmental, sustainability, societal impacts should be considered as they apply to specific sectors and use cases and applicable frameworks, best practices or laws.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector and user. As a result, while the above principles are a good starting point for developing ethical data-driven technologies it is important that additional considerations be given to the specific sectors and environments to which the algorithm is applied.

 

Experts in both technology and ethics should be consulted in development of data-driven technologies such as AI to guard against any adverse effects (including societal, environmental and other long-term effects).

6. Sensible and appropriate [Beta]

 

Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

 

Why it matters

 

Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies it is important that additional considerations be given to the specific sectors to which the algorithm is applied.

 

Encouraging sector specific guidance also helps promote a culture of shared ethical responsibilities and a dialogue around the important issues raised by data enhanced systems.

 

Published in Privacy

 

The following is my submission to the Ontario government's Consultation on Developing Ontario's Artificial Intelligence (AI) Framework. The Consultation closed on June 4, 2021.


Thank you for the opportunity to provide input on the development of trustworthy AI in Ontario. Due to time pressures my comments will be relatively brief. Hopefully there will be other opportunities to engage with this process.

Developing a framework for the governance of AI in Ontario is important, and it is good to see that this work is underway in Ontario. I note that the current consultation focuses on AI for use in the public sector. Similar work needs to be done for the governance of AI that will be developed and deployed in the private sector context. I hope that this work is also being contemplated.

As I am sure you know, the federal government has already developed a Directive on Automated Decision-Making (DADM) which applies to a broad range of uses of AI in the federal public sector context. It comes with an algorithmic impact assessment tool. Although I appreciate the sensitivities around sovereignty within a province’s own spheres of competence, there is much to be said for more unified national approaches to many regulatory issues – particularly in the digital context. One option for Ontario is to use the DADM as a starting point for its approach to public sector AI governance, and to assess and adapt it for use in Ontario. This would allow Ontario to take advantage of an approach that is already well developed, and into which a considerable amount of thoughtful work has been invested. It is both unnecessary and counterproductive to reinvent the wheel. Serious consideration should be given – as a matter of public policy – to adopting, where possible, harmonized approaches to the governance of digital technologies.

At the same time, I note that the consultation document suggests that Ontario might go beyond a simple internal directive and actually provide an accountability framework that would give individuals direct recourse in cases where government does not meet whatever requirements are established. A public accountability framework is lacking in the federal DADM, and would be most welcome in Ontario.

The proposed public sector framework for Ontario is organized around three broad principles: No AI in secret; AI use Ontarians can trust; and AI that serves all Ontarians. These are good, if broad, principles. The real impact of this governance initiative will, of course, lie in its detail. However, it is encouraging to see a commitment to transparency, openness and public participation. It is also important that the government recognize the potential for AI to replicate or exacerbate existing inequities and to commit to addressing equity and inclusion.

My comments will address each of the principles in turn.

1. No AI in Secret

The consultation document states that “for people to trust that the use of AI is safe and appropriate they must first be aware that the AI exists. As a result, the government needs to be transparent about how, when, and why these tools are used so that people have a right to address potential biases created by the AI algorithms.” I agree. A public register of AI tools in use by government, along with access to details about these tools would be most welcome.

I do question, however, what is meant by “government” in this statement. In other words, I would be very interested to know more about the scope of what is being proposed. It was only a short while ago that we learned, for example, that police services in Ontario had made use of Clearview AI’s controversial facial recognition database. In some cases, it seems that senior ranks of the police may not even have been aware of this use. Ontario’s Privacy Commissioner at the time expressed concerns over this practice. This case raises important questions regarding the scope of the proposed commitment to transparency and AI. The first is whether police services will be included under government AI governance commitments – and if they are not, why not, and what measures will be put in place to govern AI used in the law enforcement context. It is also important to know what other agencies or departments will be excluded. A further question is whether AI-related commitments at the provincial level will be extended to municipalities, or whether they are intended only for use in the provincial public sector. Another question is whether the principles will only apply to AI developed within government or commissioned by government. In other words, will any law or guidance developed also apply to the myriad services that might otherwise be available to government? For example, will new rules apply to the decision by a department to use the services of a human resources firm that makes use of AI in its recruitment processes? Will they apply to workplace monitoring software and productivity analytics services that might be introduced in the public service? On this latter point, I note it is unclear whether the commitment to AI governance relates only to AI that affects the general population as opposed to AI used to manage government employees. These issues of application and scope of any proposed governance framework are important.

2. Use Ontarians can Trust

The second guiding principle is “Use Ontarians can Trust”. The commitment is framed in these terms: “People building, procuring, and using AI have a responsibility to the people of Ontario that AI never puts people at risk and that proper guardrails are in place before the technology is used by the government.”

One of the challenges here is that there are so many types of AI and so many contexts in which AI can be used. Risk is inevitable, and some of the risks may involve complex harms. In some cases, these harms may be difficult to foresee. The traffic-prediction algorithm used as an illustration in this part of the consultation document has fairly clear-cut risk considerations. The main issue will be whether such an algorithm reduces the risk of serious accidents, for example. The risks from an algorithm that determines who is or is not eligible to receive social assistance benefits, on the other hand, will be much more complex. One significant risk will be that people who need the benefit will not receive it. Other risks might include the exacerbation of existing inequalities, or even greater alienation in the face of a seemingly impersonal system. These risks are serious but some are intangible – they might be ignored, dismissed or underestimated. Virginia Eubanks and others have observed that experimentation with the use of AI in government tends to take place in the context of programs and services for the least empowered members of society. This is troubling. The concept of risk must be robust and multifaceted. Decisions about where to deploy AI must be equitable and unbiased – not just the AI.

One of the initial recommendations in this section is to propose “ways to update Ontario’s rules, laws and guidance to strengthen the governance of AI, including whether to adopt a risk-based approach to determine when which rules apply.” I agree that work needs to be done to update Ontario’s legal frameworks in order to better address the challenges of AI. Data protection and human rights are two obvious areas where legislative reform may be necessary. It will also be important for those reforms to be accompanied by the necessary resources to handle the complex cases likely to be generated by AI. If legal protections and processes are enhanced without additional resources, the changes will be meaningless. It may also be necessary to consider establishing a regulatory authority for AI that could provide the governance, oversight and accountability specifically required by AI systems, and that could develop the necessary expertise. Challenging algorithmic decision-making will not be easy for ordinary Ontarians. They will need expert assistance and guidance for any challenge that goes beyond asking for an explanation or a reconsideration of the decision. A properly-resourced oversight body can provide this assistance and can develop necessary expertise to assist those who develop and implement AI.

3. AI that Serves all Ontarians

The overall goal for this commitment is to ensure that “Government use of AI reflects and protects the rights and values of Ontarians.” The values that are identified are equity and inclusion, as well as accountability.

As noted above, there is a tendency to deploy AI systems in ways that impact the most disadvantaged. AI systems are in use in the carceral context, in the administration of social benefits programs, and so on. The very choices as to where to start experimenting with AI are ones that have significant impact. In these contexts, the risks of harm may be quite significant, but the populations impacted may feel most disempowered when it comes to challenging decisions or seeking recourse. This part of the consultation document suggests as a potential action the need to “Assess whether the government should prohibit the use of AI in certain use cases where vulnerable populations are at an extremely high risk.” While there likely are contexts in which a risk-based approach would warrant an early ban on AI until the risks can be properly addressed, beyond bans there should also be deliberation about how to use AI in contexts in which individuals are vulnerable. This might mean not rushing to experiment with AI in these areas until we have built a more robust accountability and oversight framework. It may also mean going slowly in certain areas – using only AI-assisted decision making, for example, and carefully studying and evaluating particular use cases.

 

In closing I would like to note as well the very thoughtful and thorough work being done by the Law Commission of Ontario on AI and Governance, which has a particular focus on the public sector. I hope that any policy development being done in this area will make good use of the Law Commission’s work.

Published in Privacy
