Monday, 08 August 2022 07:58

Canada's Proposed AI & Data Act - Purpose and Application

Written by Teresa Scassa

This is the first of a series of posts on the part of Bill C-27 that would enact a new Artificial Intelligence and Data Act (AIDA) in Canada. Previous posts have considered the part of the bill that would reform Canada’s private sector data protection law. This series on the AIDA begins with an overview of its purpose and application.

Bill C-27 contains the text of three proposed laws. The first is a revamped private sector data protection law. The second would establish a new Data Tribunal that is assigned a role under the data protection law. The third is a new Artificial Intelligence and Data Act (AIDA). While the other two components were present in the bill’s failed predecessor, Bill C-11, the AIDA is new – and for many it came as a bit of a surprise. The common thread, of course, is the government’s Digital Charter, which set out a series of commitments for building trust in the digital and data economy.

The preamble to Bill C-27, as a whole, addresses both AI and data protection concerns. Where it addresses AI regulation directly, it identifies the need to harmonize with national and international standards for the development and deployment of AI, and the importance of ensuring that AI systems uphold Canadian values in line with the principles of international human rights law. The preamble also signals a need for a more agile regulatory framework – something that might go towards justifying why so much of the substance of AI governance in the AIDA has been left to the development of regulations. Finally, the preamble speaks of a need “to foster an environment in which Canadians can seize the benefits of the digital and data-driven economy and to establish a regulatory framework that supports and protects Canadian norms and values, including the right to privacy.” This, then, frames how AI regulation (and data protection) will work in Canada – an attempt to walk a tightrope between enabling fast-paced innovation and protecting norms, values and privacy rights.

Regulating the digital economy has posed some constitutional (division of powers) challenges for the federal government, and these challenges are evident in the AIDA, particularly with respect to the scope of application of the law. Section 4 sets out the dual purposes of the legislation:

(a) to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and

(b) to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

By focusing on international and interprovincial trade and commerce, the government asserts its general trade and commerce jurisdiction without treading on the toes of the provinces, which remain responsible for intra-provincial activities. Yet this means that there will be important gaps in AI regulation. Until the provinces act, these gaps will concern purely intra-provincial AI systems, whether in the public or private sectors, and, to a large extent, AI in the not-for-profit sector. However, this could get complicated, since the AIDA sets out obligations for a range of actors, some of which could include international or interprovincial providers of AI systems to provincial governments.

The second purpose set out in s. 4 suggests that, at least when it comes to AI systems that may result in serious harm, the federal jurisdiction over criminal law may be invoked. The AIDA creates a series of offences that could be supported by this power – yet, ultimately, the offences relate to failures to meet obligations that arise from engaging in a ‘regulated activity’, which takes one back to activities carried out in the course of international or interprovincial trade and commerce. The federal trade and commerce power thus remains the backbone of this bill.

Although there would be no constitutional difficulties with the federal government exerting jurisdiction over its own activities, the AIDA specifically excludes its application to federal government institutions, as defined in the Privacy Act. Significantly, it also does not apply to products, services or activities that are under the control of the Minister of National Defence, the Canadian Security Intelligence Service, the Communications Security Establishment or any other person who is responsible for a federal or provincial department or agency that is prescribed by regulation. This means that the AIDA would not apply even to those AI systems developed by the private sector for any of the listed actors. The exclusions are significant, particularly since the AIDA seems to be focussed on the prevention of harm to individuals (more on this in a forthcoming post), and the parties excluded are ones that might well develop or commission the development of AI that could (seriously) adversely impact individuals. It is possible that the government intends to introduce or rely upon other governance mechanisms to ensure that AI and personal data are not abused in these contexts. Or not. In contrast, the EU’s AI Regulation addresses the perceived need for latitude when it comes to national defence via an exception for “AI systems developed or used exclusively for military purposes” [my emphasis]. This exception is nowhere near as broad as that in the AIDA, which excludes all “products, services or activities under the control of the Minister of National Defence”. Note that the Department of National Defence (DND) made headlines in 2020 when it contracted for an AI application to assist in hiring; it also made headlines in 2021 over an aborted psyops campaign in Canada. There is no reason why non-military DND uses of AI should not be subject to governance.

The government might justify excluding the federal public sector from governance under the AIDA on the basis that it is already governed by the Directive on Automated Decision-Making (DADM). This Directive applies to automated decision-making systems developed and used by the federal government, although there are numerous gaps in its application. For example, it does not apply to systems adopted before it took effect, it applies only to automated decision systems and not to other AI systems, and it currently does not apply to systems used internally (e.g., to govern public sector employees). It also does not have the enforcement measures that the AIDA has, and, since government systems could well be high-impact, this seems like a gap in governance. Consider in this respect the much-criticized ArriveCan app, designed for COVID-19 border screening and now contemplated for much broader use at border entries into Canada. The app has been criticized for its lack of transparency, and for the ‘glitch’ that sent inexplicable quarantine orders to potentially thousands of users. The ArriveCan app went through the DADM process, but clearly this is not enough to address governance issues.

Another important limit on the application of the AIDA is that most of its obligations apply only to “high-impact systems”. This term is defined in the legislation as “an artificial intelligence system that meets the criteria for a high-impact system that are established in regulations.” This essentially says that this crucial term in the Bill will mean what cabinet decides it will mean at some future date. It is difficult to fully assess the significance or impact of this legislation without any sense of how this term will be defined. The only obligations that appear to apply more generally are the obligation in s. 6 regarding the anonymization of data used or intended for use in AI systems, and the obligation in s. 10 to keep records regarding the anonymization measures taken.

By contrast, the EU’s AI Regulation applies to all AI systems. These fall into one of four categories: unacceptable risk, high-risk, limited risk, and low/minimal risk. Those systems that fall into the first category are banned. Those in the high-risk category are subject to the regulation’s most stringent requirements. Limited-risk AI systems need only meet certain transparency requirements and low-risk AI is essentially unregulated. Note that Canada’s approach to ‘agile’ regulation is to address only one category of AI systems – those that fall into the as-yet undefined category of high ‘impact’. It is unclear whether this is agile or supine. It is also not clear what importance should be given to the choice of the word ‘impact’ rather than ‘risk’. However, it should be noted that risk refers not just to actual but to potential harm, whereas ‘impact’ seems to suggest actual harm. Although one should not necessarily read too much into this choice of language, the fact that this important element is left to regulations means that Parliament will be asked to enact a law without understanding its full scope of application. This seems like a problem.

 
