
The Artificial Intelligence and Data Act (AIDA) in Bill C-27 will create new obligations for those responsible for AI systems (particularly high-impact systems), as well as for those who process or make available anonymized data for use in AI systems. In any regulatory scheme that imposes obligations, oversight and enforcement are key issues. A long-standing critique of the Personal Information Protection and Electronic Documents Act (PIPEDA) has been that it is relatively toothless. This is addressed in the first part of Bill C-27, which reforms the data protection law to provide a suite of new enforcement powers, including order-making powers for the Privacy Commissioner and the ability to impose stiff administrative monetary penalties (AMPs). The AIDA comes with ‘teeth’ as well, although these teeth seem set within a rather fragile jaw. I will begin by identifying the oversight and enforcement powers (the teeth) and will then look at the agent of oversight and enforcement (the jaw). The table below sets out the main obligations and the oversight powers that accompany them. There is also the possibility that any breach of these obligations might be treated as either a violation or an offence, although the details require elaboration in as-yet-to-be-drafted regulations.

 

Obligation: To keep records regarding the manner in which data is anonymized and the use or management of anonymized data, as well as records of the assessment of whether an AI system is high impact (s. 10)

Oversight power: The Minister may order the record-keeper to provide any of these records (s. 13(1))

Obligation: To comply with any record-keeping obligations imposed on any actor in as-yet undrafted regulations

Oversight power: Where there are reasonable grounds to believe that the use of a high-impact system could result in harm or biased output, the Minister can order the specified person to provide these records (s. 14)

Obligation: To comply with any of the requirements in ss. 6-12, or with any order made under ss. 13-14

Oversight power: The Minister (on reasonable grounds to believe there has been a contravention) can require the person to conduct either an internal or an external audit with respect to the possible contravention (s. 15); the audit must be provided to the Minister. A person who has been audited may also be ordered by the Minister to implement any measure specified in the order, or to address any matter in the audit report (s. 16)

Obligation: To cease using or making available for use a high-impact system that creates a serious risk of imminent harm

Oversight power: The Minister may order a person responsible for a high-impact system to cease using it or making it available for use if the Minister has reasonable grounds to believe that its use gives rise to a serious risk of imminent harm (s. 17)

Obligation: Transparency requirements (any person referred to in sections 6 to 12, 15 and 16)

Oversight power: The Minister may order the person to publish on a publicly available website any information related to any of these sections of the AIDA, subject to an exception for confidential business information (s. 18)

Compliance with orders made by the Minister is mandatory (s. 19) and there is a procedure for them to become enforceable as orders of the Federal Court.

Although the Minister is subject to confidentiality requirements, they may disclose any information they obtain through the exercise of the above powers to certain entities if they have reasonable grounds to believe that a person carrying out a regulated activity “has contravened, or is likely to contravene, another Act of Parliament or a provincial legislature” (s. 26(1)). Those entities include the Privacy Commissioner, the Canadian Human Rights Commission, the Commissioner of Competition, the Canadian Radio-television and Telecommunications Commission, their provincial analogues, or any other person prescribed by regulation. An organization may therefore be in violation of statutes other than the AIDA and may be subject to investigation and penalties under those laws.

The AIDA itself provides no mechanism for individuals to file complaints regarding any harms they may believe they have suffered, nor is there any provision for the investigation of complaints.

The AIDA sets up the Minister as the actor responsible for oversight and enforcement, but the Minister may delegate any or all of their oversight powers to the new Artificial Intelligence and Data Commissioner created by s. 33. The Data Commissioner is described in the AIDA as “a senior official of the department over which the Minister presides”. They are not remotely independent. Their role is “to assist the Minister” responsible for the AIDA (most likely the Minister of Industry), and they will therefore also work in the Ministry responsible for supporting the Canadian AI industry. There is essentially no real regulator under the AIDA. Instead, oversight and enforcement are provided by the same group that drafted the law and that will draft the regulations. It is not a great look, and it certainly goes against the advice of the OECD on AI governance, as Mardi Wentzel has pointed out.

The role of Data Commissioner was first floated in the 2019 Mandate Letter to the Minister of Industry, which provided that the Minister would: “create new regulations for large digital companies to better protect people’s personal data and encourage greater competition in the digital marketplace. A newly created Data Commissioner will oversee those regulations.” The 2021 Federal Budget provided funding for the Data Commissioner, and described the role of this Commissioner as being to “inform government and business approaches to data-driven issues to help protect people’s personal data and to encourage innovation in the digital marketplace.” In comparison with these somewhat grander ideas, the new AI and Data Commissioner role is – well – smaller than the title. It is a bit like telling your kids you’re getting them a deluxe bouncy castle for their birthday party and then on the big day tossing a couple of couch cushions on the floor instead.

Perhaps to add a gloss of some ‘independent’ input into the administration of the statute, the AIDA provides for the creation of an advisory committee (s. 35) that will provide the Minister with “advice on any matters related to this Part”. However, this too is a bit of a throwaway. Neither the AIDA nor any anticipated regulations will provide for any particular composition of the advisory committee, for the appointment of a chair with a fixed term, or for any reports by the committee on its advice or activities. It is the Minister who may choose to publish the advice they receive from the committee on a publicly available website (s. 35(2)).

The AIDA also provides for enforcement, which can take one of two routes. Well, one of three routes. One route is to do nothing – after all, the Minister is also responsible for supporting the AI industry in Canada – so this cannot be ruled out. A second option will be to treat a breach of any of the obligations specified in the as-yet undrafted regulations as a “violation” and impose an administrative monetary penalty (AMP). A third option is to treat a breach as an “offence” and proceed by way of prosecution (s. 30). A choice must be made between proceeding via the AMP route or the offence route (s. 29(3)). Providing false information and obstruction are distinct offences (s. 30(2)). There are also separate offences in ss. 38 and 39 relating to the use of illegally obtained data and to knowingly or recklessly making available for use an AI system that is likely to cause harm.

Administrative monetary penalties under Part 1 of Bill C-27 (relating to data protection) are quite steep. However, the necessary details regarding the AMPs that will be available for breach of the AIDA are to be set out in regulations that have yet to be drafted (s. 29(4)(d)). All that the AIDA really tells us about these AMPs is that their purpose is “to promote compliance with this Part and not to punish” (s. 29(2)). Note the power at the bottom of the list of regulation-making powers for AMPs set out in s. 29(4): it allows the Minister to make regulations “respecting the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the scheme.” There is a good chance that the AMPs will (eventually) be administered by the new Personal Information and Data Protection Tribunal, which is created in Part 2 of Bill C-27. This, at least, would provide some separation between the Minister and the imposition of financial penalties. If this is the plan, though, the draft law should say so.

It is clear that not all breaches of the obligations in the AIDA will be ones for which AMPs are available. Regulations will specify which provisions of the AIDA or its regulations are ones whose breach will constitute a violation (s. 29(4)(a)), and will classify the breach of each particular obligation as minor, serious or very serious (s. 29(4)(b)). As-yet undrafted regulations will also set out how any such proceedings will unfold, the amounts or ranges of AMPs, and the factors to take into account in imposing them.

This lack of important detail makes it hard not to think of the oversight and enforcement scheme in the AIDA as a rough draft sketched out on a cocktail napkin after an animated after-hours discussion of what enforcement should look like. Clearly, the goal is to be ‘agile’, but ‘agile’ should not be confused with slapdash. Parliament is being asked to enact a law that leaves many essential components undefined. With so much left to regulations, one wonders whether all the missing pieces can (or will) be put in place within this decade. There are instances of other federal laws left incomplete by never-drafted regulations. For example, we are still waiting for the private right of action provided for in Canada’s Anti-Spam Legislation, which cannot come into effect until the necessary regulations are drafted. A cynic might even say that failing to draft essential regulations is a good way to check the “enact legislation on this issue” box on the to-do list, without actually changing the status quo.


This is the first of a series of posts on the part of Bill C-27 that would enact a new Artificial Intelligence and Data Act (AIDA) in Canada. Previous posts have considered the part of the bill that would reform Canada’s private sector data protection law. This series on the AIDA begins with an overview of its purpose and application.

Bill C-27 contains the text of three proposed laws. The first is a revamped private sector data protection law. The second would establish a new Data Tribunal that is assigned a role under the data protection law. The third is a new Artificial Intelligence and Data Act (AIDA). While the two other components were present in the bill’s failed predecessor Bill C-11, the AIDA is new – and for many it came as a bit of a surprise. The common thread, of course, is the government’s Digital Charter, which set out a series of commitments for building trust in the digital and data economy.

The preamble to Bill C-27, as a whole, addresses both AI and data protection concerns. Where it addresses AI regulation directly, it identifies the need to harmonize with national and international standards for the development and deployment of AI, and the importance of ensuring that AI systems uphold Canadian values in line with the principles of international human rights law. The preamble also signals a need for a more agile regulatory framework – something that might go towards justifying why so much of the substance of AI governance in the AIDA has been left to the development of regulations. Finally, the preamble speaks of a need “to foster an environment in which Canadians can seize the benefits of the digital and data-driven economy and to establish a regulatory framework that supports and protects Canadian norms and values, including the right to privacy.” This, then, frames how AI regulation (and data protection) will work in Canada – an attempt to walk a tightrope between enabling fast-paced innovation and protecting norms, values and privacy rights.

Regulating the digital economy has posed some constitutional (division of powers) challenges for the federal government, and these challenges are evident in the AIDA, particularly with respect to the scope of application of the law. Section 4 sets out the dual purposes of the legislation:

(a) to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and

(b) to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

By focusing on international and interprovincial trade and commerce, the government asserts its general trade and commerce jurisdiction, without treading on the toes of the provinces, which remain responsible for intra-provincial activities. Yet this means that there will be important gaps in AI regulation. Until the provinces act, these gaps will include purely provincial AI solutions, whether in the public or private sector, and, to a large extent, AI in the not-for-profit sector. However, this could get complicated, since the AIDA sets out obligations for a range of actors, some of which could include international or interprovincial providers of AI systems to provincial governments.

The second purpose set out in s. 4 suggests that at least when it comes to AI systems that may result in serious harm, the federal jurisdiction over criminal law may be invoked. The AIDA creates a series of offences that could be supported by this power – yet, ultimately the offences relate to failures to meet the obligations that arise based on being engaged in a ‘regulated activity’, which takes one back to activities carried out in the course of international or interprovincial trade and commerce. The federal trade and commerce power thus remains the backbone of this bill.

Although there would be no constitutional difficulties with the federal government exerting jurisdiction over its own activities, the AIDA specifically excludes its application to federal government institutions, as defined in the Privacy Act. Significantly, it also does not apply to products, services or activities that are under the control of the Minister of National Defence, the Canadian Security Intelligence Service, the Communications Security Establishment or any other person who is responsible for a federal or provincial department or agency that is prescribed by regulation. This means that the AIDA would not apply even to those AI systems developed by the private sector for any of the listed actors. The exclusions are significant, particularly since the AIDA seems to be focussed on the prevention of harm to individuals (more on this in a forthcoming post) and the parties excluded are ones that might well develop or commission the development of AI that could (seriously) adversely impact individuals. It is possible that the government intends to introduce or rely upon other governance mechanisms to ensure that AI and personal data are not abused in these contexts. Or not. In contrast, the EU’s AI Regulation addresses the perceived need for latitude when it comes to national defence via an exception for “AI systems developed or used exclusively for military purposes” [my emphasis]. This exception is nowhere near as broad as that in the AIDA, which excludes all “products, services or activities under the control of the Minister of National Defence”. Note that the Department of National Defence (DND) made headlines in 2020 when it contracted for an AI application to assist in hiring; it also made headlines in 2021 over an aborted psyops campaign in Canada. There is no reason why non-military DND uses of AI should not be subject to governance.

The government might justify excluding the federal public sector from governance under the AIDA on the basis that it is already governed by the Directive on Automated Decision-Making (DADM). This Directive applies to automated decision-making systems developed and used by the federal government, although there are numerous gaps in its application. For example, it does not apply to systems adopted before it took effect, it applies only to automated decision systems and not to other AI systems, and it currently does not apply to systems used internally (e.g., to govern public sector employees). It also lacks the enforcement measures that the AIDA has, and, since government systems could well be high-impact, this seems like a gap in governance. Consider in this respect the much-criticized ArriveCan app, designed for COVID-19 border screening and now contemplated for much broader use at border entries into Canada. The app has been criticized for its lack of transparency, and for the ‘glitch’ that sent inexplicable quarantine orders to potentially thousands of users. The ArriveCan app went through the DADM process, but clearly this is not enough to address governance issues.

Another important limit on the application of the AIDA is that most of its obligations apply only to “high-impact systems”. This term is defined in the legislation as “an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations.” This essentially says that this crucial term in the Bill will mean whatever cabinet decides it will mean at some future date. It is difficult to fully assess the significance or impact of this statute without any sense of how the term will be defined. The only obligations that appear to apply more generally are the obligation in s. 6 regarding the anonymization of data used or intended for use in AI systems, and the obligation in s. 10 to keep records regarding the anonymization measures taken.

By contrast, the EU’s AI Regulation applies to all AI systems. These fall into one of four categories: unacceptable risk, high-risk, limited risk, and low/minimal risk. Those systems that fall into the first category are banned. Those in the high-risk category are subject to the regulation’s most stringent requirements. Limited-risk AI systems need only meet certain transparency requirements and low-risk AI is essentially unregulated. Note that Canada’s approach to ‘agile’ regulation is to address only one category of AI systems – those that fall into the as-yet undefined category of high ‘impact’. It is unclear whether this is agile or supine. It is also not clear what importance should be given to the choice of the word ‘impact’ rather than ‘risk’. However, it should be noted that risk refers not just to actual but to potential harm, whereas ‘impact’ seems to suggest actual harm. Although one should not necessarily read too much into this choice of language, the fact that this important element is left to regulations means that Parliament will be asked to enact a law without understanding its full scope of application. This seems like a problem.

 


As part of my series on Bill C-27, I will be writing about both the proposed amendments to Canada’s private sector data protection law and the part of the Bill that will create a new Artificial Intelligence and Data Act (AIDA). So far, I have been writing about privacy, and my posts on consent, de-identification, data-for-good, and the right of erasure are already available. Posts on the AIDA will follow, although I still have a bit more territory on privacy to cover first. In the meantime, as a teaser, perhaps you might be interested in playing a bit of statutory MadLibs…

Have you ever played MadLibs? It’s a paper-and-pencil game where someone asks the people in the room to supply a verb, noun, adverb, adjective, or body part, and the provided words are used to fill in the blanks in a story. The results are often absurd and sometimes hilarious.

The federal government’s proposal in Bill C-27 for an Artificial Intelligence and Data Act really lends itself to a game of statutory MadLibs. This is because some of the most important parts of the bill are effectively left blank – either the Minister or the Governor in Council is tasked in the Bill with filling out the details in regulations. Do you want to play? Grab a pencil, and here goes:

Company X is developing an AI system that will (insert definition of ‘high-impact system’). It knows that this system is high impact because (insert how a company should assess impact). Company X has established measures to mitigate potential harms by (insert measures the company took to comply with the regulations) and has also recorded (insert records it kept), and published (insert information to be published).

Company X also had its system audited by an auditor who is (insert qualifications). Company X is being careful, because if it doesn’t comply with (insert a section of the Act for which non-compliance will count as a violation), it could be found to have committed a (insert degree of severity) violation. This could lead to (insert type of proceeding).

Company X, though, will be able to rely on (insert possible defence). However, if (insert possible defence) is unsuccessful, Company X may be liable to pay an Administrative Monetary Penalty if they are a (insert category of ‘person’) and if they have (insert factors to take into account). Ultimately, if they are unhappy with the outcome, they can launch a (insert a type of appeal proceeding).

Because of this regulatory scheme, Canadians can feel (insert emotion) at how their rights and interests are protected.

