Displaying items by tag: AI strategy
Monday, 09 February 2026 07:15
Canada's AI Strategy: Some Reflections

The Department of Innovation, Science and Economic Development (ISED) has released the results of the consultation it carried out in advance of developing the latest iteration of its AI Strategy. The consultation had two components. The first was a Task Force on AI – a group of experts tasked with consulting their peers to develop their views. The experts were assigned to specified themes (research and talent; adoption across industry and government; commercialization of AI; scaling our champions and attracting investment; building safe AI systems and public trust in AI; education and skills; infrastructure; and security). The second component was a broad public consultation inviting either answers to an online survey or emailed free-form submissions. This post offers some reflections on the process and its outcomes.

1. The controversy over the consultation

The consultation process generated controversy. One reason for this was the sudden and short timelines. Submissions from the public were sought within a month, and Task Force members were initially expected to consult their peers and report in the month following the launch of the consultation. In the end, the Task Force reports were not published until early February – the timelines were simply unrealistic. However, there was no extension for the public consultation. The Summary of Inputs on the consultation refers to it as "the largest public consultation in the history of Innovation Science and Economic Development Canada, generating important ideas, questions and legitimate concerns to take into consideration in the drafting of the strategy" (at page 3). The response signals how important the issue is to Canadians and how much they want to be heard. One has to wonder how many submissions ISED might have received with longer timelines. Short deadlines favour those with time and resources.
Civil society organizations, small businesses, and individuals with full workloads (domestic and professional) find short timelines particularly challenging. Running a "sprint" consultation favours participation from some groups over others.

Another point of controversy was the lack of diversity on the Task Force. The government was roundly criticized for assembling a Task Force with no representation from Canada's Black communities, particularly given the risks of bias and discrimination posed by AI technologies. A letter to this effect was sent to the Minister of AI, the Prime Minister, and the leaders of Canada's other political parties by a large group of Black academics and scholars. Following this, a Black representative – a law student – was hurriedly added to the Task Force. An open letter to the Minister of Artificial Intelligence from civil society organizations and individuals also denounced the consultation, arguing that the deadline should be extended and that the Task Force should be more equitably representative. The letter noted that civil society groups, human rights experts, and others were absent from the Task Force panel. The group was also critical of the online survey for being biased towards particular outcomes, and indicated that it would be boycotting the consultation. It has since set up its own People's Consultation on AI, which is accepting submissions until March 15, 2026.

These controversies highlight a major stumble in developing the AI Strategy. The lack of consultation around the failed Artificial Intelligence and Data Act in Bill C-27, and the criticism that this generated, should have taught ISED how important the issues raised by AI are to the public, and how much the public wants to be heard. The Summary makes no mention of the controversy the consultation generated. Nevertheless, the criticisms and pushback are surely an important part of the outcome of this process.

2. Some thoughts on Transparency

ISED has not only published a summary of the results of its consultation and of the Task Force reports; it has also published the raw data from the consultation, along with the individual task force reports, in its open government portal. This seems to be in line with a new commitment to greater transparency around AI – in the fall of 2025, ISED also published a beta version of a register of AI in use within the federal public service. These are positive developments, although it is worth watching to see whether tools like the register of AI are refined, improved and updated.

ISED was also transparent about its use of generative AI to process the results of the consultation. Page 16 of the summary document explains how it used (unspecified) LLMs to create a "classification pipeline" to "clean survey responses and categorize them into a structured set of themes and subthemes". The report also describes the use of human oversight to ensure that there was "at least a 90% success rate in categorizing responses into specific intents". ISED explains that it consulted research experts about its methodology and indicates that the methods used conform with the recent Treasury Board guide on the use of generative artificial intelligence. The declaration on the use of AI indicates that the output was used to produce the final report, which is apparently a combination of human authorship and extracts from the AI-generated content.

It would frankly be astonishing if generative AI tools had not already been used in other contexts to process submissions to government consultations (though likely without having been disclosed). As a result, the level of transparency about the use here is important. This is illustrated by my colleague Michael Geist's criticisms of the results of ISED's use of AI. He ran the Task Force reports through two (identified) LLMs and noted differences between his generated analysis and ISED's.
He argues that "the government had not provided the public with the full picture" and posits that the results were softened by ISED to suggest a consensus that is not actually present. Putting a particular spin on things is not exclusively the result of the use of AI tools – humans do this all the time. However, explaining how results were arrived at using a technological system can create an impression of objectivity and scientific rigour that can mislead, and this underscores the importance of Prof. Geist's critique. It is worth noting that it is the level of transparency provided by ISED that made this analysis and critique possible. The immediacy of the publication of the data on which the report was based is important as well: prolonged access to information request processes were unnecessary here. This approach should become standard government practice.

3. AI Governance/Regulation

The consultation covered many themes, and the AI Strategy is clearly intended to be about more than just how to regulate or govern AI. In fact, one could be forgiven for thinking that the AI Strategy will be about everything except governance and regulation, given the limited expertise from these areas on the Task Force. Its focus areas emphasized adoption of, investment in, and scaling of AI innovation, as well as strengthening sovereign infrastructure. Among the focus areas, only "public trust, skills and safety" gives a rather offhand nod to governance and regulation.

That said, reading between the lines of the summary of inputs, Canadians are concerned about AI governance and regulation. This can be seen in statements such as "Respondents…urged Canada to prioritize responsible governance" (p. 7). Respondents also called for "meaningful regulation" (p. 8) and reminded the government of the need to "modernize regulations" (p. 8). There were also references to "accountable and robust governance" (p. 8) and "strict regulation, penalties for non-compliance and frameworks that uphold Canadian values" (p. 8) when it comes to generative AI. There were also calls for "strict liability laws" (p. 9), and concerns expressed over "lack of regulation and accountability" (p. 9). One finds these snippets throughout the summary document, which suggests that meaningful regulation was a matter of real concern for respondents.

However, the "Conclusions and next steps" section of the report mentions only the need for "regulatory clarity" and streamlined regulatory frameworks – neither of which is a bad thing, but neither of which is really about new regulation or governance. Instead, the report concludes that: "There was general consensus among participants that public trust depends on transparency, accountability, and robust governance, supported by certification standards, independent audits and AI literacy programs" (p. 15, my emphasis). While those tools are certainly part of a regulatory toolkit for AI, on their own and outside a framework that builds in accountability and oversight, they amount to soft law and self-regulation. This feels like a rather convenient consensus around where the government was likely heading in the first place.
Published in Privacy
Thursday, 02 October 2025 07:34
Consultation on New Canadian AI Strategy - Don't blink or you'll miss it

The federal government has just launched an AI Strategy Task Force and public engagement on a new AI strategy for Canada. Consultation is a good thing – the government took a lot of flak for the lack of consultation leading up to the ill-fated AI and Data Act that was part of the now-defunct Bill C-27. That said, there are consultations and there are consultations. Here are some of my concerns about this one.

The consultation has two parts. First, the government has convened an AI Task Force consisting of some very talented and clearly public-spirited Canadians who have expertise in AI or AI-adjacent areas. Let me be clear that I appreciate the time and energy that these individuals are willing to contribute to this task. However, if you peruse the list, you will see that few of the Task Force members are specialists in the ethical or social science dimensions of AI. There are no experts in labour and employment issues (which are top of mind for many Canadians these days), nor is there representation from those with expertise in the environmental issues we already know are raised by AI innovation. Only three people from a list of twenty-six are tasked with addressing "Building safe AI systems and public trust in AI". The composition of the Task Force seems clearly skewed towards rapid adoption and deployment of AI technologies. This is an indication that the government already has a new AI Strategy – it is just looking for "bold, pragmatic and actionable recommendations" to bolster it. It is a consultation to make the implicit strategy explicit.

The first part of the process will see the members of the Task Force "consult their networks to provide actionable insights and recommendations." That sounds a lot like insider networking, which should frankly raise concerns. It does not lend itself to ensuring fair and appropriate representation of diverse voices, and it risks creating its own echo chambers. It is also very likely to lack other elements of transparency: it is hard to see how the conversations and interactions between the private citizens who are members of the Task Force and their networks will produce records that could be requested under the Access to Information Act.

The second part of the consultation is a more conventional one, in which Canadians who are not insiders are invited to make contributions. Although the press release announcing the consultation directs people to "Consulting Canadians", it does not provide a link. Consulting Canadians is actually a Statistics Canada site. What the government probably meant was "Consulting with Canadians", which is part of the Open Canada portal (and I have provided a link). The whole process is described in the press release as a "national sprint" (which is much fancier than calling it "a mad rush to a largely predetermined conclusion"). In November, the AI Task Force members "will share the bold, practical ideas they gathered." That is asking a lot, but no doubt they will harness the power of generative AI to transcribe and summarize the input they receive.

If, in the words of the press release, "This moment demands a renewal of thinking—a collective commitment to reimagining how we harness innovation, achieve our artificial intelligence (AI) ambition and secure our digital sovereignty", perhaps it also demands a bit more time and reflection. That said, if you want to be heard, you now have less than a month to provide input – so get writing, and look for the relevant materials in the Consulting with Canadians portal.
Published in Privacy
Friday, 04 June 2021 13:00
Submission to Consultation on Developing Ontario's Artificial Intelligence (AI) Framework
The following is my submission to the Ontario government's Consultation on Developing Ontario's Artificial Intelligence (AI) Framework. The Consultation closed on June 4, 2021.

Thank you for the opportunity to provide input on the development of trustworthy AI in Ontario. Due to time pressures my comments will be relatively brief. Hopefully there will be other opportunities to engage with this process.

Developing a framework for the governance of AI in Ontario is important, and it is good to see that this work is underway. I note that the current consultation focuses on AI for use in the public sector. Similar work needs to be done for the governance of AI that will be developed and deployed in the private sector context, and I hope that this work is also being contemplated.

As I am sure you know, the federal government has already developed a Directive on Automated Decision-Making (DADM), which applies to a broad range of uses of AI in the federal public sector and comes with an algorithmic impact assessment tool. Although I appreciate the sensitivities around sovereignty within a province's own spheres of competence, there is much to be said for more unified national approaches to many regulatory issues – particularly in the digital context. One option for Ontario is to use the DADM as a starting point for its approach to public sector AI governance, and to assess and adapt it for use in Ontario. This would allow Ontario to take advantage of an approach that is already well developed, and into which a considerable amount of thoughtful work has been invested. It is both unnecessary and counterproductive to reinvent the wheel. Serious consideration should be given – as a matter of public policy – to adopting, where possible, harmonized approaches to the governance of digital technologies.
At the same time, I note that the consultation document suggests that Ontario might go beyond a simple internal directive and actually provide an accountability framework that would give individuals direct recourse in cases where government does not meet whatever requirements are established. A public accountability framework is lacking in the federal DADM and would be most welcome in Ontario.

The proposed public sector framework for Ontario is organized around three broad principles: no AI in secret; AI use Ontarians can trust; and AI that serves all Ontarians. These are good, if broad, principles. The real impact of this governance initiative will, of course, lie in its details. However, it is encouraging to see a commitment to transparency, openness and public participation. It is also important that the government recognizes the potential for AI to replicate or exacerbate existing inequities and commits to addressing equity and inclusion. My comments will address each of the principles in turn.

1. No AI in Secret

The consultation document states that "for people to trust that the use of AI is safe and appropriate they must first be aware that the AI exists. As a result, the government needs to be transparent about how, when, and why these tools are used so that people have a right to address potential biases created by the AI algorithms." I agree. A public register of AI tools in use by government, along with access to details about these tools, would be most welcome. I do question, however, what is meant by "government" in this statement. In other words, I would be very interested to know more about the scope of what is being proposed. It was only a short while ago that we learned, for example, that police services in Ontario had made use of Clearview AI's controversial facial recognition database. In some cases, it seems that the senior ranks of the police may not even have been aware of this use.
Ontario's Privacy Commissioner at the time expressed concerns over this practice. This case raises important questions regarding the scope of the proposed commitment to transparency and AI. The first is whether police services will be included under government AI governance commitments – and, if they are not, why not, and what measures will be put in place to govern AI used in the law enforcement context. It is also important to know what other agencies or departments will be excluded. A further question is whether AI-related commitments at the provincial level will be extended to municipalities, or whether they are intended only for the provincial public sector. Another question is whether the principles will apply only to AI developed within government or commissioned by government. In other words, will any law or guidance developed also apply to the myriad services that might otherwise be available to government? For example, will new rules apply to a department's decision to use the services of a human resources firm that makes use of AI in its recruitment processes? Will they apply to workplace monitoring software and productivity analytics services that might be introduced in the public service? On this latter point, I note it is unclear whether the commitment to AI governance relates only to AI that affects the general population, as opposed to AI used to manage government employees. These issues of the application and scope of any proposed governance framework are important.

2. Use Ontarians can Trust

The second guiding principle is "Use Ontarians can Trust". The commitment is framed in these terms: "People building, procuring, and using AI have a responsibility to the people of Ontario that AI never puts people at risk and that proper guardrails are in place before the technology is used by the government." One of the challenges here is that there are so many types of AI and so many contexts in which AI can be used.
Risk is inevitable – and some of the risks may be of complex harms. In some cases, these harms may be difficult to foresee. The traffic-predicting algorithm used as an illustration in this part of the consultation document has fairly clear-cut risk considerations: the main issue will be whether such an algorithm reduces the risk of serious accidents, for example. The risks from an algorithm that determines who is or is not eligible to receive social assistance benefits, on the other hand, will be much more complex. One significant risk is that people who need the benefit will not receive it. Other risks might include the exacerbation of existing inequalities, or even greater alienation in the face of a seemingly impersonal system. These risks are serious, but some are intangible – they might be ignored, dismissed or underestimated. Virginia Eubanks and others have observed that experimentation with the use of AI in government tends to take place in the context of programs and services for the least empowered members of society. This is troubling. The concept of risk must be robust and multifaceted, and decisions about where to deploy AI must themselves be equitable and unbiased – not just the AI.

One of the initial recommendations in this section is to propose "ways to update Ontario's rules, laws and guidance to strengthen the governance of AI, including whether to adopt a risk-based approach to determine when which rules apply." I agree that work needs to be done to update Ontario's legal frameworks in order to better address the challenges of AI. Data protection and human rights are two obvious areas where legislative reform may be necessary. It will also be important for those reforms to be accompanied by the resources needed to handle the complex cases likely to be generated by AI. If legal protections and processes are enhanced without additional resources, the changes will be meaningless.
It may also be necessary to consider establishing a regulatory authority for AI that could provide the governance, oversight and accountability specifically required by AI systems, and that could develop the necessary expertise. Challenging algorithmic decision-making will not be easy for ordinary Ontarians. They will need expert assistance and guidance for any challenge that goes beyond asking for an explanation or a reconsideration of the decision. A properly resourced oversight body can provide this assistance and can also develop the expertise needed to assist those who develop and implement AI.

3. AI that Serves all Ontarians

The overall goal of this commitment is to ensure that "Government use of AI reflects and protects the rights and values of Ontarians." The values identified are equity and inclusion, as well as accountability. As noted above, there is a tendency to deploy AI systems in ways that impact the most disadvantaged: AI systems are in use in the carceral context, in the administration of social benefits programs, and so on. The very choices as to where to start experimenting with AI are ones that have significant impact. In these contexts, the risks of harm may be quite significant, but the populations affected may feel most disempowered when it comes to challenging decisions or seeking recourse. This part of the consultation document suggests as a potential action the need to "Assess whether the government should prohibit the use of AI in certain use cases where vulnerable populations are at an extremely high risk." While there likely are contexts in which a risk-based approach would warrant an early ban on AI until the risks can be properly addressed, beyond bans there should also be deliberation about how to use AI in contexts in which individuals are vulnerable. This might mean not rushing to experiment with AI in these areas until we have built a more robust accountability and oversight framework.
It may also mean going slowly in certain areas – using only AI-assisted decision making, for example, and carefully studying and evaluating particular use cases.
In closing I would like to note as well the very thoughtful and thorough work being done by the Law Commission of Ontario on AI and Governance, which has a particular focus on the public sector. I hope that any policy development being done in this area will make good use of the Law Commission’s work.
Published in Privacy