Tuesday, 19 March 2024 09:41
AI, Human Rights, and Canada's Proposed AI and Data Act

Artificial intelligence technologies have significant potential to impact human rights. Because of this, emerging AI laws make explicit reference to human rights. Already-deployed AI systems are raising human rights concerns – including bias and discrimination in hiring, healthcare, and other contexts; disruptions of democracy; enhanced surveillance; and hateful deepfake attacks. Well-documented human rights impacts also flow from the use of AI technologies by law enforcement and the state, and from the use of AI in armed conflicts.

Governments are aware that human rights issues with AI technologies must be addressed. Internationally, this is evident in declarations by the G7, UNESCO, and the OECD. It is also clear in emerging national and supranational regulatory approaches. For example, human rights are tackled in the EU AI Act, which not only establishes certain human-rights-based no-go zones for AI technologies, but also addresses discriminatory bias. The US's NIST AI Risk Management Framework (a standard, not a law – but influential nonetheless) also addresses the identification and mitigation of discriminatory bias.

Canada's Artificial Intelligence and Data Act (AIDA), proposed by the Minister of Innovation, Science and Economic Development (ISED), is currently at the committee stage as part of Bill C-27. The Bill's preamble states that "Parliament recognizes that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law". In its substantive provisions, AIDA addresses "biased output", which it defines in terms of the prohibited grounds of discrimination in the Canadian Human Rights Act. AIDA imposes obligations on certain actors to assess and mitigate the risks of biased output in AI systems. The inclusion of these human rights elements in AIDA is positive, but they are also worth a closer look.

Risk Regulation and Human Rights

Requiring developers to take human rights into account in the design and development of AI systems is important, and certainly many private sector organizations already take seriously the problems of bias and the need to identify and mitigate it. After all, biased AI systems will be unable to perform properly, and may expose their developers to reputational harm and possibly legal action. However, such attention has not been universal, and it has been addressed with varying degrees of commitment. Legislated requirements are thus necessary, and AIDA will provide these. AIDA creates obligations to identify and mitigate potential harms at the design and development stage, and there are additional documentation and some transparency requirements. The enforcement of AIDA obligations can come through audits conducted or ordered by the new AI and Data Commissioner, and there is also the potential to use administrative monetary penalties to punish non-compliance, although what this scheme will look like will depend very much on as-yet-to-be-developed regulations. AIDA, however, has some important limitations when it comes to human rights.

Selective Approach to Human Rights

Although AIDA creates obligations around biased output, it does not address human rights beyond the right to be free from discrimination. Unlike the EU AI Act, for example, there are no prohibited practices related to the use of AI in certain forms of surveillance.
A revised Article 5 of the EU AI Act will prohibit real-time biometric surveillance by law enforcement agencies in publicly accessible spaces, subject to carefully limited exceptions. The untargeted scraping of facial images to build or expand facial recognition databases (as occurred with Clearview AI) is also prohibited. Emotion recognition technologies are banned in some contexts, as are some forms of predictive policing. Some applications that are not outright prohibited are categorized as high risk and have limits imposed on the scope of their use. These "no-go zones" reflect concerns over a much broader range of human rights and civil liberties than what we see reflected in Canada's AIDA.

It is small comfort to say that the Canadian Charter of Rights and Freedoms remains as a backstop against government excess in the use of AI tools for surveillance or policing; ex ante AI regulation is meant to head off problems before they become manifest. No-go zones reflect limits on what society is prepared to tolerate; AIDA sets no such limits. Constitutional litigation is expensive, time-consuming and uncertain in outcome (just look at the 5-4 split in the recent R. v. Bykovets decision of the Supreme Court of Canada). Further, the military and intelligence services are expressly excluded from AIDA's scope (as is the federal public service).

Privacy is an important human right, and privacy rights are not part of the scope of AIDA. The obvious response is that such rights are dealt with under privacy legislation for the public and private sectors and at federal, provincial and territorial levels. However, such privacy statutes deal principally with data protection (in other words, they govern the collection, use and disclosure of personal information). AIDA could have addressed surveillance more directly. After all, the EU has top-of-class data protection laws, but still places limits on the use of AI systems for certain types of surveillance activities. Further, privacy laws in Canada (and there are many of them) are, apart from Quebec's, largely in a state of neglect and disrepair. Privacy commissioners at the federal, provincial, and territorial levels have been issuing guidance as to how they see their laws applying in the AI context, and findings and rulings in privacy complaints involving AI systems are starting to emerge. The commissioners are thoughtfully adapting existing laws to new circumstances, but there is no question that legislative reform is needed. In issuing its recent guidance on Facial Recognition and Mugshot Databases, the Office of the Information and Privacy Commissioner of Ontario specifically identified the need to issue the guidance in the face of legislative gaps and inaction that "if left unaddressed, risk serious harms to individuals' right to privacy and other fundamental human rights."

Along with AIDA, Bill C-27 contains the Consumer Privacy Protection Act (CPPA), which will reform Canada's private sector data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA). However, the CPPA has only one AI-specific amendment – a somewhat tepid right to an explanation of automated decision-making.
It does not, for example, address the data scraping issue at the heart of the Clearview AI investigation (where the core findings of the Commissioner remain disputed by the investigated company), which prompted the articulation of a no-go zone for data scraping for certain purposes in the EU AI Act.

High Impact AI and Human Rights

AIDA will apply only to "high impact" AI systems. Among other things, such systems can adversely impact human rights. While the original version of AIDA in Bill C-27 left the definition of "high impact" entirely to regulations (generating considerable and deserved criticism), the Minister of ISED has since proposed amendments to C-27 that set out a list of categories of "high impact" AI systems. While this list at least provides some insight into what the government is thinking, it creates new problems as well. This list identifies several areas in which AI systems could have significant impacts on individuals, including in healthcare and in some court or tribunal proceedings. Also included on the list is the use of AI in all stages of the employment context, and the use of AI in making decisions about who is eligible for services and at what price. Left off the list, however, is the use of AI systems (already deployed) to determine who is selected as a tenant for rental accommodation. Such tools have extremely high impact. Yet, since residential tenancies are interests in land, and not services, they are simply not captured by the current "high impact" categories. This is surely an oversight – yet it is one that highlights the rather slapdash construction of the AIDA and its proposed amendments. As a further example, a high-impact category addressing the use of biometrics to assess an individual's behaviour or state of mind could be interpreted to capture affect recognition systems or the analysis of social media communications, but this is less clear than it should be. It also raises the question of whether the best approach, from a human rights perspective, is to regulate such systems as high impact or whether limits need to be placed on their use and deployment.

Of course, a key problem is that this bill is housed within ISED. This is not a centrally developed bill that takes a broader approach to the federal government and its powers. Under AIDA, medical devices are excluded from the category of "high impact" uses of AI in the healthcare context because it is Health Canada that will regulate AI-enabled medical devices, and ISED must avoid treading on its toes. Perhaps ISED also seeks to avoid encroaching on the mandates of the Minister of Justice or the Minister of Public Safety. This may help explain some of the crabbed and clunky framing of AIDA compared to the EU AI Act. It does, however, raise the question of why Canada chose this route – adopting a purportedly comprehensive risk-management framework housed under the constrained authority of the Minister of ISED. Such an approach is inherently flawed. As discussed above, AIDA is limited in the human rights it is prepared to address, and it raises concerns about how human rights will be both interpreted and framed. On the interpretation side of things, the incorporation of the Canadian Human Rights Act's definition of discrimination in AIDA, combined with ISED's power to interpret and apply the proposed law, will give ISED interpretive authority over the definition of discrimination without the accompanying expertise of the Canadian Human Rights Commission.
Further, it is not clear that ISED is a place for expansive interpretations of human rights; human rights are not a core part of its mandate – although fostering innovation is.

All of this should leave Canadians with some legitimate concerns. AIDA may well be passed into law – and it may prove to be useful in the better governance of AI. But when it comes to human rights, it has very real limitations. AIDA cannot be allowed to end the conversation around human rights and AI at the federal level – nor at the provincial level. Much work remains to be done.
Monday, 11 December 2023 06:58
Data Governance for AI under Canada's Proposed AI and Data Act (AIDA Amendments Part IV)

The federal government's proposed Artificial Intelligence and Data Act (AIDA) (Part III of Bill C-27) contained some data governance requirements for anonymized data used in AI in its original version. These were meant to dovetail with changes to PIPEDA reflected in the Consumer Privacy Protection Act (CPPA) (Part I of Bill C-27). The CPPA provides in s. 6(5) that "this Act does not apply in respect of personal information that has been anonymized." Although no such provision is found in PIPEDA, this is, to all practical effects, the state of the law under PIPEDA. PIPEDA applies to "personal information", which is defined as "information about an identifiable individual". If someone is not identifiable, then it is not personal information, and the law does not apply. This was the conclusion reached, for example, in the 2020 Cadillac Fairview joint finding of the federal Privacy Commissioner and his counterparts from BC and Alberta. PIPEDA does apply to pseudonymized information because such information ultimately permits reidentification. The standard for identifiability under PIPEDA has been set by the courts as a "'serious possibility' that an individual could be identified through the use of that information, alone or in combination with other available information" (Cadillac Fairview at para 143). It is not an absolute standard (although the proposed definition of anonymized data in C-27 currently seems closer to absolute).

In any event, the original version of AIDA was meant to offer comfort to those concerned with the flat-out exclusion of anonymized data from the scope of the CPPA. Section 6 of AIDA provided that:

6. A person who carries out any regulated activity and who processes or makes available anonymized data in the course of that activity must, in accordance with the regulations, establish measures with respect to (a) the manner in which data is anonymized; and (b) the use or management of anonymized data.

Problematically, however, AIDA only provided for data governance with respect to this particular subset of data. It contained no governance requirements for personal, pseudonymized, or non-personal data. Artificial intelligence systems will be only as good as the data on which they are trained. Data governance is a fundamental element of proper AI regulation – and it must address more than anonymized personal data.

This is an area where the amendments to AIDA proposed by the Minister of Industry demonstrate clear improvements over the original version. To begin with, the old s. 6 is removed from AIDA. Instead of specific governance obligations for anonymized data, we see some new obligations introduced regarding data more generally. For example, as part of the set of obligations relating to general-purpose AI systems, there is a requirement to ensure that "measures respecting the data used in developing the system have been established in accordance with the regulations" (s. 7(1)(a)). There is also an obligation to maintain records "relating to the data and processes used in developing the general-purpose system and in assessing the system's capabilities and limitations" (s. 7(2)(b)). There are similar obligations in the case of machine learning models that are intended to be incorporated into high-impact systems (ss. 9(1)(a) and 9(2)(a)). Of course, whether this is an actual improvement will depend on the content of the regulations.
But at least there is a clear signal that data governance obligations are expanded under the proposed amendments to AIDA. Broader data governance requirements in AIDA are a good thing. They will apply to data generally, including personal and anonymized data. Personal data used in AI will also continue to be governed under privacy legislation, and privacy commissioners will still have a say about whether data have been properly anonymized. In the case of PIPEDA (or the CPPA if and when it is eventually enacted), the set of principles for the development and use of generative AI issued by federal, provincial, and territorial privacy commissioners on December 8, 2023 makes it clear that the commissioners understand their enabling legislation to provide them with the authority to govern a considerable number of issues relating to the use of personal data in AI, whether in the public or private sector. This set of principles sends a strong signal to federal and provincial governments alike that privacy laws and privacy regulators have a clear role to play in relation to emerging and evolving AI technologies and that the commissioners are fully engaged. It is also an encouraging example of federal, provincial and territorial co-operation among regulators to provide a coherent common position on key issues in relation to AI governance.
Friday, 08 December 2023 09:00
Oversight and Enforcement in the AIDA Amendments (Part III of a series)

This is Part III of a series of posts that look at the proposed amendments to Canada's Artificial Intelligence and Data Act (which itself is still a bill, currently before the INDU Committee for study). Part I provided a bit of context and a consideration of some of the new definitions in the Bill. Part II looked at the categories of 'high-impact' AI that the Bill now proposes to govern. This post looks at the changed role of the AI and Data Commissioner.
The original version of the Artificial Intelligence and Data Act (Part III of Bill C-27) received considerable criticism for its oversight mechanisms. Legal obligations for the ethical and transparent governance of AI, after all, depend upon appropriate oversight and enforcement for their effectiveness. Although AIDA proposed the creation of an AI and Data Commissioner (Commissioner), this was never meant to be an independent regulator. Ultimately, AIDA placed most of the oversight obligations in the hands of the Minister of Industry – the same Minister responsible for supporting the growth of Canada's AI sector. Critics considered this to be a conflict of interest. A series of proposed amendments to AIDA are meant to address these concerns by reworking the role of the Commissioner.

Section 33(1) of AIDA makes it clear that the AI and Data Commissioner will be a "senior official of the department over which the Minister presides", and their appointment involves being designated by the Minister. This has not changed, although the amendments would delete from this provision language stating that the Commissioner's role is "to assist the Minister in the administration and enforcement" of AIDA. The proposed amendments elevate the Commissioner somewhat, giving them a series of powers and duties, to which the Minister can add through delegation (s. 33(3)). So, for example, it will be the newly empowered Commissioner (Commissioner 2.0) who receives reports from those managing a general-purpose or high impact system where there are reasonable grounds to suspect that the use of the system has caused serious harm (s. 8.2(1)(e), s. 11(1)(g)). Commissioner 2.0 can also order someone managing or making available a general-purpose system to provide them with the accountability framework they are required to create under s. 12 (s. 13(1)) and can provide guidance or recommend corrections to that framework (s. 13(2)). Commissioner 2.0 can compel those making available or managing an AI system to provide the Commissioner with an assessment of whether the system is high impact and, if so, which subclass of high impact systems set out in the schedule it falls within. Commissioner 2.0 can agree or disagree with the assessment, although if they disagree, their authority seems limited to informing the entity in writing of their reasons for disagreement.

More significant are Commissioner 2.0's audit powers. Under the original version of AIDA, these were to be exercised by the Minister – the powers are now those of the Commissioner (s. 15(1)). Further, Commissioner 2.0 may order (previously this was framed as "require") that the person either conduct an audit themselves or engage the services of an independent auditor. The proposed amendments also empower the Commissioner to conduct an audit to determine if there is a possible contravention of AIDA. This strengthens the audit powers by ensuring that there is at least one option that is not under the control of the party being audited. The proposed amendments give Commissioner 2.0 additional powers necessary to conduct an audit and to carry out testing of an AI system (s. 15(2.1)). Where Commissioner 2.0 conducts an audit, they must provide the audited party with a copy of the report (s. 15(3.1)), and where the audit is conducted by the person responsible or someone retained by them, they must provide a copy to the Commissioner (s. 15(4)). The Minister still retains some role with respect to audits.
The Minister may request that the Commissioner conduct an audit. In an attempt to preserve some independence for Commissioner 2.0, the Commissioner, on receiving such a request, may either carry out the audit or decline to do so on the basis that there are no reasonable grounds for an audit, so long as they provide the Minister with their reasons (s. 15.1(1)(b)). The Minister may also order a person to take actions to bring themselves into compliance with the law (s. 16), or to cease making available or terminate the operation of a system if the Minister considers compliance to be impossible (s. 16(b)) or has reasonable grounds to believe that the use of the system "gives rise to a risk of imminent and serious harm" (s. 17(1)).

As noted above, Commissioner 2.0 (a mere employee in the Minister's department) will have order-making powers under the amendments. This is something the Privacy Commissioner of Canada – an independent agent of Parliament, appointed by the Governor in Council – is hoping to get in Bill C-27; if granted, it will be the first time the Commissioner has had such powers since the enactment of PIPEDA in 2000. Orders of Commissioner 2.0 or the Minister can become enforceable as orders of the Federal Court under s. 20. Commissioner 2.0 is also empowered to share information with a list of federal or provincial government regulators where they have "reasonable grounds to believe that the information may be relevant to the administration or enforcement by the recipient of another Act of Parliament or of a provincial legislature" (s. 26(1)). Reciprocally, under a new provision, federal regulators may also share information with the Commissioner (s. 26.1). Additionally, Commissioner 2.0 may "enter into arrangements" with different federal regulators and/or the Ministers of Health and Transport in order to assist those actors with the "exercise of their powers or the performance of their functions and duties" in relation to AI (s. 33.1). These new provisions strengthen a more horizontal, multi-regulator approach to governing AI, which is an improvement in the Bill, although this might eventually need to be supplemented by corresponding legislative amendments – and additional funding – to better enable the other commissioners to address AI-related issues that fit within their areas of competence.

The amendments also impose upon Commissioner 2.0 a new duty to report on the administration and enforcement of AIDA – such a report is to be "published on a publicly available website" (s. 35.1). The annual reporting requirement is important, as it will increase transparency regarding the oversight and enforcement of AIDA. For their part, the Minister is empowered to publish information, where it is in the public interest, regarding any contravention of AIDA or where the use of a system gives rise to a serious risk of imminent harm (ss. 27 and 28).

Interestingly, AIDA, which provides for the potential imposition of administrative monetary penalties for contraventions of the Act, does not indicate who is responsible for setting and imposing these penalties. Section 29(1)(g) makes it clear that "the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the [AMP] scheme" is left to be articulated in regulations. The AIDA also makes it an offence under s. 30 for anyone to obstruct or provide false or misleading information to "the Minister, anyone acting on behalf of the Minister or an independent auditor in the exercise of their powers or performance of their duties or functions under this Part." This remains unchanged from the original version of AIDA. Presumably, since Commissioner 2.0 would exercise a great many of the oversight functions, this is meant to apply to the obstruction or misleading of the Commissioner – but it will only do so if the Commissioner is characterized as someone "acting on behalf of the Minister". This is not language of independence, but then there are other features of AIDA that also counter any view that even Commissioner 2.0 is truly independent (and I mean others besides the fact that they are an employee under the authority of the Minister and handpicked by the Minister). Most notable of these is that should the Commissioner become incapacitated or absent, or should they simply never be designated by the Minister, it is the Minister who will exercise their powers and duties (s. 33(4)).

In sum, then, the proposed amendments to AIDA attempt to create some separation between the Minister and Commissioner 2.0 in terms of oversight and enforcement. At the end of the day, however, Commissioner 2.0 is still the Minister's hand-picked subordinate. Commissioner 2.0 does not serve for a specified term and has no security of tenure. In their absence, the Minister exercises their powers. It falls far short of independence.
Wednesday, 06 December 2023 07:16
High-Impact AI Under AIDA's Proposed Amendments (Part II of a Series)

My previous post looked at some of the new definitions in the proposed amendments to the Artificial Intelligence and Data Act (AIDA), which is Part III of Bill C-27. These include a definition of "high impact" AI and a schedule of classes of high-impact AI (the Schedule is reproduced at the end of this post). The addition of the schedule changes AIDA considerably, and that is the focus of this post.

The first two classes in the Schedule capture contexts that can clearly affect individuals. Class 1 addresses AI used in most aspects of employment, and Class 2 relates to the provision of services. On the provision of services (which could include things like banking and insurance), the wording signals that it will apply to decision-making about the provision of services, their cost, or the prioritization of recipients. To be clear, AIDA does not prohibit systems with these functions. They are simply characterized as "high impact" so that they will be subject to governance obligations. A system to determine creditworthiness can still reject individuals, and companies can still prioritize preferred customers – as long as the systems are sufficiently transparent, free from bias and do not cause harm. There is, however, one area which seems to fall through the cracks of Classes 1 and 2: rental accommodation. A lease is an interest in land – it is not a service. Human rights legislation in Canada typically refers to accommodation separately from services for this reason. AI applications are already being used to screen and select tenants for rental accommodation. In the midst of a housing crisis, this is surely an area that is high impact and where the risks of harm from flawed AI to individuals and families searching for a place to live are significant. This gap needs to be addressed – perhaps simply by adding "or accommodation" after each use of the term "service" in Class 2.

Class 3 rightly identifies biometric systems as high risk. It also includes systems that use biometrics in "the assessment of an individual's behaviour or state of mind." Key to the scope of this section will be the definition of "biometric". Some consider biometric data to be exclusively physiological data (fingerprints, iris scans, measurements of facial features, etc.). Yet others include behavioural data in this class if it is used for the second identified purpose – the assessment of behaviour or state of mind. Behavioural data, though, is potentially a very broad category. It can include data about a person's gait, or their speech or keystroke patterns. Cast even more broadly, it could include things such as "geo-location and IP addresses", "purchasing habits", "patterns of device use" or even "browser history and cookies". If that is the intention behind Class 3, then conventional biometric AI should be Part One of this class; Part Two should be the use of an AI system to assess an individual's behaviour or state of mind (without referring specifically to biometrics, in order to avoid confusion). This would also, importantly, capture the highly controversial area of AI for affect recognition. It would be unfortunate if the framing of the class as 'biometrics' led to an unduly narrow interpretation of the kind of systems or data involved.
The explanatory note in the Minister's cover letter for this provision seems to suggest (although it is not clear) that it is purely physiological biometric data that is intended for inclusion, and not a broader category. If this is so, then Class 3 seems unduly narrow.

Class 4 is likely to be controversial. It addresses content moderation and the prioritization and presentation of content online, and identifies these as high-impact algorithmic activities. Such systems are in widespread use in the online context. The explanatory note from the Minister observes that such systems "have important potential impacts on Canadians' ability to express themselves, as well as pervasive effects at societal scale" (at p. 4). This is certainly true, although the impact is less direct and obvious than the impact of a hiring algorithm, for example. Further, although an algorithm that presents a viewer of online streaming services with suggestions for content could have the effect of channeling a viewer's attention in certain directions, it is hard to see this as "high impact" in many contexts, especially since there are multiple sources of suggestions for online viewing (including word of mouth). That does not mean that feedback loops and filter bubbles (especially in social media) do not contribute to significant social harms – but it does make this high-impact class feel large and unwieldy. The Minister's cover letter indicates that each of the high-impact classes presents "distinct risk profiles and consequently will require distinct risk management strategies" (at p. 2). Further, he notes that the obligations that will be imposed "are intended to scale in proportion to the risks they present. A low risk use within a class would require correspondingly minimal mitigation effort." (at p. 2). Much will clearly depend on regulations.

Class 5 relates to the use of AI in health care or emergency services, although it explicitly excludes medical devices because these are already addressed by Health Canada (which recently consulted on the regulation of AI-enabled medical devices). This category also demonstrates some of the complexity of regulating AI in Canada's federal system. Many hospital-based AI technologies are being developed by researchers affiliated with the hospitals, who are not engaged in the interprovincial or international trade and commerce necessary for AIDA to apply. AIDA will only apply to those systems developed externally and in the context of international or interprovincial trade and commerce. While this will still capture many applications, it will not capture all – creating different levels of governance within the same health care context. It is also not clear what is meant, in Class 5, by "use of AI in matters relating to health care". This could be interpreted to mean health care that is provided within what is understood as the health care system. Understood more broadly, it could extend to health-related apps – for example, one of the many available AI-enabled sleep trackers, or an AI-enabled weight loss tool (to give just two examples). I suspect that what is intended is the former, even though, with health care in crisis and more people turning to alternate means to address their health issues, health-related AI technologies might well deserve to be categorized as high impact.
Class 6 involves the use of an AI system by a court or administrative body "in making a determination in respect of an individual who is a party to proceedings before the court or administrative body." In the first place, this is clearly not meant to apply to automated decision-making generally – it seems to be limited to judicial or quasi-judicial contexts. Class 6 must also be reconciled with s. 3 of AIDA, which provides that AIDA does not apply "with respect to a government institution as defined in s. 3 of the Privacy Act." This includes the Immigration and Refugee Board, for example, as well as the Canadian Human Rights Commission, the Parole Board, and the Veterans Review and Appeal Board. Making sense of this, then, it would be the tools used by courts or tribunals and developed or deployed in the course of interprovincial or international trade and commerce that would be considered high impact. The example given in the Minister's letter seems to support this – it is of an AI system that provides an assessment of "risk of recidivism based on historical data" (at p. 5). However, Class 6 is confusing because it identifies the context rather than the tools as high impact. Note that the previous classes address the use of AI "in matters relating to" the subject matter of the class, whereas Class 6 identifies actors – the use of AI by a court or tribunal. There is a different focus. Yet the same tools used by courts and tribunals might also be used by administrative bodies or agencies that do not hold hearings or that are otherwise excluded from the application of AIDA. For example, in Ewert v. Canada, the Supreme Court of Canada considered an appeal by a Métis man who challenged the use of recidivism-risk assessment tools by Correctional Services of Canada (to which AIDA would not apply according to s. 3). If this type of tool is high-risk, it is so whether it is used by Correctional Services or a court. This suggests that the framing of Class 6 needs some work. It should perhaps be reworded to identify tools or systems as high impact if they are used to determine the rights, entitlements or status of individuals.

Class 7 addresses the use of an AI system to assist a peace officer "in the exercise and performance of their law enforcement powers, duties and functions". Although "peace officer" receives the very broad interpretation found in the Criminal Code, that definition is modified in the AIDA by language that refers to the exercise of specific law enforcement powers. This should still capture the use of a broad range of AI-enabled tools and technologies. It is an interesting question whether AIDA might apply more fully to this class of AI systems (not just those developed in the course of interprovincial or international trade), as it might be considered to be rooted in the federal criminal law power.

These, then, are the different classes that are proposed initially to populate the Schedule if AIDA and its amendments are passed. The list is likely to spark debate, and there is certainly some wording that could be improved. And, while it provides much greater clarity as to what is proposed to be regulated, it is also evident that the extent to which obligations will apply will likely be further tailored in regulations to create sliding scales of obligation depending on the degree of risk posed by any given system.
AIDA Schedule: High-Impact Systems — Uses

1. The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.

2. The use of an artificial intelligence system in matters relating to (a) the determination of whether to provide services to an individual; (b) the determination of the type or cost of services to be provided to an individual; or (c) the prioritization of the services to be provided to individuals.

3. The use of an artificial intelligence system to process biometric information in matters relating to (a) the identification of an individual, other than in cases in which the biometric information is processed with the individual's consent to authenticate their identity; or (b) the assessment of an individual's behaviour or state of mind.

4. The use of an artificial intelligence system in matters relating to (a) the moderation of content that is found on an online communications platform, including a search engine or social media service; or (b) the prioritization of the presentation of such content.

5. The use of an artificial intelligence system in matters relating to health care or emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition "device" in section 2 of the Food and Drugs Act that is in relation to humans.

6. The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.

7. The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.
Tuesday, 05 December 2023 13:31
AIDA Evolving: A Consideration of Proposed Amendments to Canada's Bill to Enact an AI and Data Act (Part I)

Note: This is the first in a series of posts that will look at the proposed amendments to Canada's Artificial Intelligence and Data Act, which is Part III of Bill C-27, currently before Parliament. The amendments are extensive and have only just been introduced. Please consider these assessments to be preliminary.
Canada's Artificial Intelligence and Data Act (AIDA) (Part III of Bill C-27) has passed second reading and is currently before the INDU Committee for study. Early in this committee process, the Minister of Industry, François-Philippe Champagne, announced that his department was working on amendments to AIDA in response to considerable criticism. Those amendments have now been tabled for consideration by the committee.

One of the criticisms of the Bill was that it left almost all of its substance to be developed in regulations. It is unsurprising, then, that the amendments are almost as long as the original bill. While it is certainly the case that the amendments contain more detail than the original text, some of the additional length is attributable to new provisions intended to address generative AI systems. This highlights just how quickly things are moving in the AI space, as generative AI was not on anyone's legislative radar when Bill C-27 was introduced in June 2022. Another criticism of AIDA was the absence of any specific prior consultation before its appearance in Bill C-27. This, combined with its lack of substance on many issues, raised basic concerns about how the law would apply and to what. For example, AIDA was to govern "high-impact" AI systems, but the definition of such systems was left to regulations. Concerns were also raised about oversight being largely in the hands of the Minister of Industry, who is also responsible for supporting Canada's AI sector.

The proposed amendments demonstrate that ISED has been listening to the feedback it has received since June 2022, just as it has been adapting to the challenges of generative AI and engaging with its international partners on AI governance issues. The amendments, which include new definitions, more explicit obligations, and governance principles for generative AI, will make AIDA a better bill. They may be enough to garner sufficient support to pass it into law, something which the Minister describes as "pivotal". This is the first in a series of posts that will explore some of the changes proposed to AIDA – as well as some of the remaining issues. This post addresses some of the new definitions.
The proposed amendments include a new definition of "artificial intelligence system", which would read: "a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions" (s. 2). This provides greater alignment with the OECD definition of an AI system ("An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."). It is an improvement over the previous definition, which was criticized for being too specific about the types of techniques used in AI. It is unclear, though, why the new AIDA definition does not include "content" as an output, as is the case with the OECD definition.

The AIDA definition is also supplemented by a separate definition for a "general-purpose system", which is "an artificial intelligence system that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system's development" (s. 5(1)). There is a further definition for a "machine learning model", which is "a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns". A new s. 5(2) makes it clear that the definition of AI system includes general-purpose systems, and that general-purpose systems can also be high-impact. These new definitions reflect the major changes in both the technology and the evolving regulatory context in the short time since AIDA was introduced. They also shape a new framework for obligations under the legislation.

The proposed amendments also contain a definition of "high-impact system": "an artificial intelligence system of which at least one of the intended uses may reasonably be concluded to fall within a class of uses set out in the schedule" (s. 5(1)). The previous version of AIDA left the articulation of "high impact" to future regulations. The schedule sets out a list of classes that describe certain uses. These are:

High-Impact Systems — Uses

1. The use of an artificial intelligence system in matters relating to determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.

2. The use of an artificial intelligence system in matters relating to (a) the determination of whether to provide services to an individual; (b) the determination of the type or cost of services to be provided to an individual; or (c) the prioritization of the services to be provided to individuals.

3. The use of an artificial intelligence system to process biometric information in matters relating to (a) the identification of an individual, other than in cases in which the biometric information is processed with the individual's consent to authenticate their identity; or (b) the assessment of an individual's behaviour or state of mind.

4. The use of an artificial intelligence system in matters relating to (a) the moderation of content that is found on an online communications platform, including a search engine or social media service; or (b) the prioritization of the presentation of such content.

5. The use of an artificial intelligence system in matters relating to health care or emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition "device" in section 2 of the Food and Drugs Act that is in relation to humans.

6. The use of an artificial intelligence system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.

7. The use of an artificial intelligence system to assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.

(Note: the classes in this schedule will be the subject of the next blog post.)

The list is not intended to be either closed or permanent. Under a proposed s. 36.1, the Governor in Council (GinC) can enact regulations amending the schedule by adding, modifying, or deleting a category of use. Any such decision by the GinC is to be guided by criteria set out in s. 36.1. These include the risks of adverse impact on "the economy or any other aspect of Canadian society and on individuals, including on individuals' health and safety and on their rights recognized in international human rights treaties to which Canada is a party". The GinC must also consider the "severity and extent" of any adverse impacts, as well as the "social and economic circumstances of any individuals who may experience" such impacts. A final consideration is whether the uses in the category are adequately addressed under another Act of Parliament or of a provincial legislature.

The AIDA only applies to "high impact" systems, and since there is no screening or registration process, it is up to those who manage or make such systems available to identify them as such and to meet the obligations set out in the law. A proposed s. 14 would empower the AI and Data Commissioner to order a person who makes available or who manages an AI system to provide the Commissioner with their assessment of whether the system is a high impact system, a general purpose system (which can also be high impact), or a machine learning model intended to be incorporated into a high impact system. My next post will look at the classes of "high-impact" AI as set out in the Schedule.
Tuesday, 21 March 2023 06:50
Explaining the AI and Data Act
The federal government's proposed Artificial Intelligence and Data Act (AIDA) is currently before Parliament as part of Bill C-27, a bill that will also reform Canada's private sector data protection law. The AIDA, which I have discussed in more detail in a series of blog posts (here, here, and here), has been criticized for being a shell of a law with essential components (including the definition of the "high impact AI" to which it will apply) being left to as-yet undrafted regulations. The paucity of detail in the AIDA, combined with the lack of public consultation, has prompted considerable frustration and concern from AI developers and from civil society alike.

In response to these concerns, the government published, on March 13, 2023, a companion document that explains the government's thinking behind the AIDA. The document is a useful read as it makes clear some of the rationales for different choices that have been made in the bill. It also obliquely engages with many of the critiques that have been leveled at the AIDA. Unlike a consultation document, however, where feedback is invited to improve what is being proposed, the companion document is essentially an apology (in the Greek sense of the word) – something that is written in defense or explanation. At this stage, any changes will have to come as amendments to the bill. Calling this a 'companion document' also somewhat tests the notion of "companion", since it was published nine months after the AIDA was introduced in Parliament in June 2022.

The document explains that the government seeks to take "the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses." The AIDA comes on the heels of the European Union's draft AI Act – a document that is both more comprehensive and far more widely consulted upon. Pressure on Canada to regulate AI is heightened by the activity in the EU. This is evident in the introduction to the companion document, which speaks of the need to work with international partners to achieve global protection for Canadians and to ensure that "Canadian firms can be recognized internationally as meeting robust standards."

An important critique of the AIDA has been that it will apply only to "high impact" AI. By contrast, the EU AI Act sets a sliding scale of obligations, with the most stringent obligations applying to high risk applications, and minimal obligations for low risk AI. In the AIDA companion document, there is no explanation of why the AIDA is limited to high impact AI. The government explains that defining the scope of the Act in regulations will allow for greater precision, as well as for updates as technology progresses. The companion document offers some clues about what the government considers relevant to determining whether an AI system is high-impact. Factors include the type of harm, the severity of harm, and the scale of use. Although this may help understand the concept of high impact, it does not explain why governance was only considered for high and not medium or low impact AI. This is something that cannot be fixed by the drafting of regulations. The bill would have to be specifically amended to provide for governance for AI with different levels of impact according to a sliding scale of obligations.

Another important critique of the AIDA has been that it unduly focuses on individual rather than collective or broader harms.
As the US's NIST AI Risk Management Framework aptly notes, AI technologies "pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment and the planet" (at p. 1). The AIDA companion document addresses this critique by noting that the bill is concerned both with individual harms and with systemic bias (defined as discrimination). Yet, while it is crucially important to address the potential for systemic bias in AI, this is not the only collective harm that should be considered. The potential for AI to be used to generate and spread disinformation or misinformation, for example, can create a different kind of collective harm. Flawed AI could potentially also result in environmental damage that is the concern of all. The companion document does little to address a broader notion of harm – but how can it? The AIDA specifically refers to and defines "individual harm", and also addresses biased output as discriminatory within the meaning of the Canadian Human Rights Act. Only amendments to the bill can broaden its scope to encompass other forms of collective harm. Such amendments are essential.

Another critique of the AIDA is that it relies for its oversight on the same Ministry that is responsible for promoting and supporting AI innovation in Canada. The companion document tackles this concern, citing the uniqueness of the AI context, and stating that "administration and enforcement decisions have important implications for policy", such that oversight and the encouragement of innovation "would need to be [sic] work in close collaboration in the early years of the framework under the direction of the Minister." The Minister will be assisted by a Ministry staffer who will be designated the AI and Data Commissioner. The document notes that the focus in the early days of the legislation will be on helping organizations become compliant: "The Government intends to allow ample time for the ecosystem to adjust to the new framework before enforcement actions are undertaken." The ample time will include the (at least) two years before the necessary regulations are drafted (though note that if some key regulations are not drafted, the law will never take effect), as well as any subsequent 'adjustment' time. Beyond this, the document is quite explicit that compliance and enforcement should not get unnecessarily in the way of the industry.

The AIDA contains other mechanisms, including requiring companies to hire their own auditors for audits and having an appointed Ministerial advisory committee, to reassure those who remain concerned about governance. Yet these measures do nothing to address a core lack of independent oversight. This lack is particularly noteworthy given that the same government has proposed the creation of an ill-advised Personal Information and Data Protection Tribunal (in Part II of Bill C-27) in order to establish another layer between the Privacy Commissioner and the enforcement of Bill C-27's proposed Consumer Privacy Protection Act. It is difficult to reconcile the almost paranoid approach taken to the Privacy Commissioner's role with the in-house, "we're all friends here" approach to AI governance in the AIDA. It is hard to see how this lack of a genuine oversight framework can be fixed without a substantial rewrite of the bill.

And that brings us to the reality that we must confront with this bill: AI technologies are rapidly advancing and are already having significant impacts on our lives.
The AIDA is deeply flawed, and the lack of consultation is profoundly disturbing. Yet, given the scarcity of space on Parliament’s agenda and the generally fickle nature of politics, the failure of the AIDA could lead to an abandonment of attempts to regulate in this space – or could very substantially delay them. As debate unfolds over the AIDA, Parliamentarians will have to ask themselves the unfortunate question of whether the AIDA is unsalvageable, or whether it can be sufficiently amended to be better than no law at all.
Published in
Privacy
Monday, 29 August 2022 08:05
Oversight and Enforcement Under Canada's Proposed AI and Data Act
The Artificial Intelligence and Data Act (AIDA) in Bill C-27 will create new obligations for those responsible for AI systems (particularly high impact systems), as well as those who process or make available anonymized data for use in AI systems. In any regulatory scheme that imposes obligations, oversight and enforcement are key issues. A long-standing critique of the Personal Information Protection and Electronic Documents Act (PIPEDA) has been that it is relatively toothless. This is addressed in the first part of Bill C-27, which reforms the data protection law to provide a suite of new enforcement powers that include order-making powers for the Privacy Commissioner and the ability to impose stiff administrative monetary penalties (AMPs). The AIDA comes with ‘teeth’ as well, although these teeth seem set within a rather fragile jaw. I will begin by identifying the oversight and enforcement powers (the teeth) and will then look at the agent of oversight and enforcement (the jaw). The table below sets out the main obligations accompanied by specific compliance measures. There is also the possibility that any breach of these obligations might be treated as either a violation or offence, although the details of these require elaboration in as-yet-to-be-drafted regulations.
Compliance with orders made by the Minister is mandatory (s. 19) and there is a procedure for them to become enforceable as orders of the Federal Court. Although the Minister is subject to confidentiality requirements, they may disclose any information they obtain through the exercise of the above powers to certain entities if they have reasonable grounds to believe that a person carrying out a regulated activity "has contravened, or is likely to contravene, another Act of Parliament or of a provincial legislature" (s. 26(1)). Those entities include the Privacy Commissioner, the Canadian Human Rights Commission, the Commissioner of Competition, the Canadian Radio-television and Telecommunications Commission, their provincial analogues, or any other person prescribed by regulation. An organization may therefore be in violation of statutes other than AIDA and may be subject to investigation and penalties under those laws. The AIDA itself provides no mechanism for individuals to file complaints regarding any harms they may believe they have suffered, nor is there any provision for the investigation of complaints.

The AIDA sets up the Minister as the actor responsible for oversight and enforcement, but the Minister may delegate any or all of their oversight powers to the new Artificial Intelligence and Data Commissioner created by s. 33. The Data Commissioner is described in the AIDA as "a senior official of the department over which the Minister presides". They are not remotely independent. Their role is "to assist the Minister" responsible for the AIDA (most likely the Minister of Industry), and they will therefore also work in the Ministry responsible for supporting the Canadian AI industry. There is essentially no real regulator under the AIDA. Instead, oversight and enforcement are provided by the same group that drafted the law and that will draft the regulations. It is not a great look, and it certainly goes against the advice of the OECD on AI governance, as Mardi Wentzel has pointed out.

The role of Data Commissioner was first floated in the 2019 Mandate Letter to the Minister of Industry, which provided that the Minister would: "create new regulations for large digital companies to better protect people's personal data and encourage greater competition in the digital marketplace. A newly created Data Commissioner will oversee those regulations." The 2021 Federal Budget provided funding for the Data Commissioner, and described the role of this Commissioner as being to "inform government and business approaches to data-driven issues to help protect people's personal data and to encourage innovation in the digital marketplace." In comparison with these somewhat grander ideas, the new AI and Data Commissioner role is – well – smaller than the title. It is a bit like telling your kids you're getting them a deluxe bouncy castle for their birthday party and then on the big day tossing a couple of couch cushions on the floor instead.

To perhaps add a gloss of some 'independent' input into the administration of the statute, the AIDA provides for the creation of an advisory committee (s. 35) that will provide the Minister with "advice on any matters related to this Part". However, this too is a bit of a throwaway. Neither the AIDA nor any anticipated regulations will provide for any particular composition of the advisory committee, for the appointment of a chair with a fixed term, or for any reports by the committee on its advice or activities.
It is the Minister who may choose to publish advice he receives from the committee on a publicly available website (s. 35(2)).

The AIDA also provides for enforcement, which can take one of two routes. Well, one of three routes. One route is to do nothing – after all, the Minister is also responsible for supporting the AI industry in Canada – so this cannot be ruled out. A second option will be to treat a breach of any of the obligations specified in the as-yet undrafted regulations as a "violation" and impose an administrative monetary penalty (AMP). A third option is to treat a breach as an "offence" and proceed by way of prosecution (s. 30). A choice must be made between proceeding via the AMP or the offence route (s. 29(3)). Providing false information and obstruction are distinct offences (s. 30(2)). There are also separate offences in ss. 38 and 39 relating to the use of illegally obtained data and knowingly or recklessly making an AI system available for use that is likely to cause harm.

Administrative monetary penalties under Part 1 of Bill C-27 (relating to data protection) are quite steep. However, the necessary details regarding the AMPs that will be available for breach of the AIDA are to be set out in regulations that have yet to be drafted (s. 29(4)(d)). All that the AIDA really tells us about these AMPs is that their purpose is "to promote compliance with this Part and not to punish" (s. 29(2)). Note the provision at the bottom of the list of regulation-making powers for AMPs set out in s. 29(4): it allows the Minister to make regulations "respecting the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the scheme." There is a good chance that the AMPs will (eventually) be administered by the new Personal Information and Data Protection Tribunal, which is created in Part 2 of Bill C-27. This, at least, will provide some separation between the Minister and the imposition of financial penalties. If this is the plan, though, the draft law should say so.

It is clear that not all breaches of the obligations in the AIDA will be ones for which AMPs are available. Regulations will specify which provisions of the AIDA or its regulations are ones whose breach will constitute a violation (s. 29(4)(a)). The regulations will also indicate whether the breach of a particular obligation is classified as minor, serious or very serious (s. 29(4)(b)), and will set out how any such proceedings will unfold. As-yet undrafted regulations will also specify the amounts or ranges of AMPs, and the factors to take into account in imposing them.

This lack of important detail makes it hard not to think of the oversight and enforcement scheme in the AIDA as a rough draft sketched out on a cocktail napkin after an animated after-hours discussion of what enforcement under the AIDA should look like. Clearly, the goal is to be 'agile', but 'agile' should not be confused with slapdash. Parliament is being asked to enact a law that leaves many essential components undefined. With so much left to regulations, one wonders whether all the missing pieces can (or will) be put in place within this decade. There are instances of other federal laws left incomplete by never-drafted regulations. For example, we are still waiting for the private right of action provided for in Canada's Anti-Spam Law, which cannot come into effect until the necessary regulations are drafted.
A cynic might even say that failing to draft essential regulations is a good way to check the “enact legislation on this issue” box on the to-do list, without actually changing the status quo.