Teresa Scassa - Blog

This post is the second in a series on the consultation paper published by Treasury Board Secretariat on proposed reform of the federal Privacy Act. The first can be found here. This post focuses on the first of six themes in the document: Enabling integrated services.

If I had to sum up the new consultation paper on reform of the Privacy Act, I would describe it as a document about pragmatic privacy. It is about how government will protect privacy while enabling the uses that it needs and wants to make of data. It is not about the ideal of privacy, nor is it really about where the line should be drawn between government and citizen when it comes to the use of personal data. I am not suggesting that the document ignores the importance of privacy as a value; but I am proposing that the overall approach is pragmatic.

The pragmatism is evident in the first of six themes chosen to lead the consultation paper on reform of the federal Privacy Act: “Enabling integrated services”. This set of reforms is aimed at facilitating horizontal information sharing across the federal government. Horizontal data sharing has, to date, been limited by the Privacy Act, since the vertical siloing of personal data within departments and agencies was initially seen as a way to protect privacy. Only those departments or agencies that had collected information directly from individuals had access to that data.

Horizontal sharing reflects two broad modernization goals. The first is to make it simpler for Canadians to access government services without having to provide or update the same information multiple times when dealing with programs housed in different departments. The second is less overt in the discussion paper, which describes:

[…] a new, purpose-based approach that allows government institutions to reuse and securely share personal data with each other and with their provincial, territorial, or municipal partners without asking for consent, if it clearly serves a public interest or directly benefits individuals, such as improving service delivery or program activities.

This is broad language that will surely include using data in analytics and AI systems to develop and deliver services.

The consultation paper makes it clear that horizontal data sharing will be subject to strict conditions, which will include sharing only the information that is necessary for the stated purpose, sharing in the “least privacy-invasive way possible”, and having in place strong safeguards to protect privacy. (Note: some of these issues are part of subsequent themes and proposals in the discussion document, and I will dig into them in later posts in this series.) The document also promises that individuals will be informed of any reuse or sharing of their personal data, although it seems that this will be through plain language notices “published in a central registry before the data is shared or reused.” This transparency is important, but note how the technological infrastructure to ensure transparency seems already determined. It will not be done through individual notice, nor through an Estonian-style citizen portal (called Data Tracker) that allows individuals to see who within government has accessed their personal data and when.

The general move towards horizontal data sharing is evident in the reforms of some provincial public sector data protection laws. For example, Alberta’s new Protection of Privacy Act contains, in Part 3, a framework governing “data matching”, which is defined in s. 1(f) as “linking personal information between 2 or more databases or other electronic sources of information”. Nova Scotia’s revised Freedom of Information and Protection of Privacy Act allows for personal information to be shared horizontally if it is “necessary for the delivery of a common or integrated program or activity” (s. 70, s. 71(g)). Data linking is also permitted for research or statistical purposes in s. 72. It is unsurprising, then, that a reform of the federal Privacy Act would seek to better enable horizontal data sharing. However, this objective is buried in the first theme in language about enabling better services and requiring individuals only to provide information once instead of multiple times. The broader goals of horizontal data sharing should be more explicit.

It is important to note that the data sharing envisaged is not just horizontal within the federal government, since the discussion paper refers to the potential to share information with provincial, territorial or even municipal governments. There is nothing inherently wrong with sharing information across governments. In Canada we sometimes create unnecessary barriers to getting things done, especially across layers of government. Yet there are also substantial risks with horizontal data sharing. These can include unwarranted surveillance, and problematic uses of data in AI systems that drive decision-making. Safeguards, transparency and accountability will be crucial.

As part of the infrastructure to support horizontal data sharing, the consultation paper puts forward a model which would designate “certain programs or institutions as the official sources for specific types of personal data”. TBS admits that there would be set-up time required for this infrastructure, but that it will ultimately “reduce the need for repeated data collection, lower storage costs, and simplify updates to personal data for individuals by allowing them to maintain their data in fewer trusted locations.”

The combination of discussion of privacy rules and infrastructure in the same document is part of the ‘pragmatic privacy’ approach. It highlights one of the differences between Privacy Act reform housed at TBS rather than in the Department of Justice. Past consultation papers from Justice have focused on privacy principles and reform of specific statutory provisions, with little discussion of the infrastructure required. On this model, principle precedes design. By contrast, the TBS consultation paper has one eye on privacy principles and the other on how the new data infrastructures that will be required might be built. Another difference is that past discussion papers have been very specific about which provisions of the Privacy Act are targeted for change and how they might be changed. This consultation document discusses legislative changes in more general terms.

One thing is clear: in this first theme, the discussion of reform of the Privacy Act is closely tied to new data infrastructure. Public sector data protection laws have an odd relationship to infrastructure. What the law allows and does not allow can dictate how data infrastructure is designed and built. Conversely, how data infrastructure is built can establish a reality to which privacy laws must adapt. We seem to be at a transition point, where new data infrastructure is clearly contemplated (some of it is sketched out in this document). At the same time, Privacy Act reform is underway to permit the new ways of collecting and handling data that this infrastructure will enable. Privacy reform is therefore in part about how privacy will be protected within this new infrastructure – but the new infrastructure, which will enable new uses of personal data across the federal government, will also transform long-held expectations about privacy that stem in part from what was and was not previously possible. This is a fundamental paradigm shift: the Privacy Act is being rewritten for a government that has access to more data than ever before and has tools to do more with that data than was ever imagined in 1983. The nature and scale of data use has changed. It is a vision of a Privacy Act that is about enabling use and reuse of data.

The next post in this series will consider the second theme in the document: Enhancing Accountability and Transparency.

 

Treasury Board Secretariat has published a discussion paper and launched a consultation into the long-overdue reform of the federal Privacy Act. The consultation is open until July 10, 2026.

The Privacy Act, which came into force in 1983, has not had a significant overhaul since that time, although we have seen dramatic changes in how personal data are collected and used. The Privacy Act’s woeful state of disrepair is no secret. The statute has been the subject of multiple reports and recommendations for reform from the Standing Committee on Access to Information, Privacy and Ethics, the Office of the Privacy Commissioner of Canada, the Information Commissioner, and from several public consultations. One thing that is different this time around is that responsibility for Privacy Act reform has shifted from the Department of Justice to Treasury Board Secretariat (TBS). Since Justice has failed to move privacy law reform forward over decades, this move offers some hope. Among other things, TBS is responsible for establishing and maintaining internal federal government policies on information management, privacy, automated decision-making, and cybersecurity. Taking responsibility for the legal framework that shapes these policies makes sense.

Reform of the Privacy Act is sorely needed. Both the nature and volume of information collected by government have dramatically changed since the early 1980s. So too have the uses to which such data are put. Another change is the desire of government (signaled in its strategy on the use of AI in the public service) to make greater use of data analytics and technology to derive value from data, to increase efficiency, and to improve service delivery. A 1980s-era privacy statute that relies on the strict vertical siloing of data to enhance privacy is not well adapted to an environment in which greater access to more complex data is seen as desirable. At the same time, the cybersecurity landscape has also dramatically changed, increasing the impact of privacy breaches and leaving Canadians more vulnerable as greater and greater volumes of data are collected. The Privacy Act must provide Canadians with modernized rules fit for our contemporary context. Although additional safeguards have been added over the years through directives and policies, these lack both the enforceability and the independent oversight that privacy legislation can provide. Their scope of application across the public sector is also more limited. It is clear from the discussion document that TBS sees the reform process as a way to consolidate some of the approaches currently found in directives and policies and to extend them more broadly across the federal public sector.

In framing their approach to privacy reform, TBS has identified three overarching policy approaches:

- Enabling better services to Canadians

- Strengthening privacy protections for the digital age

- Updating foundations and oversight of the federal public sector privacy regime

By setting “enabling better services to Canadians” as a priority, TBS signals that its reforms will seek to remove some of the friction experienced by Canadians when accessing government services (notably the need to provide the same personal information to multiple different departments or agencies). In this sense, one of the goals of Privacy Act reform is to make personal data more reusable by government – with appropriate safeguards in place. The safeguards, and the oversight of privacy measures, are part of the second and third policy approaches.

The recommendations in the discussion paper are organized around six broad themes. These are: enabling integrated services; enhancing accountability and transparency; advancing safeguards across the spectrum of data sensitivity; modernizing the foundation for privacy and trust; Indigenous Peoples’ access to, and protection of, their data; and updating the compliance framework. The themes and the discussion that accompanies them are not considered exhaustive or definitive, and feedback is invited.

There are a number of interesting features in this proposal for reform. Notably, it seeks to integrate Indigenous data sovereignty within a reformed Privacy Act. This builds upon considerable work done by First Nations, Métis and Inuit on data sovereignty issues over the years, as well as government efforts towards truth and reconciliation. The document also includes proposals to create new legal safeguards for public sector automated decision-making and to include (long overdue) privacy breach notification requirements. There is a proposal to formally recognize privacy as a fundamental right in the statute. New transparency measures are also proposed, both with respect to automated decision-making and the use of personal data by departments and agencies. There is also a recommendation to shift requests for access to one’s personal data to the Access to Information Act. Proposed changes would also add new compliance features, including order-making powers for the OPC, a new offence for deliberate re-identification of anonymized data, expanded judicial remedies, and a mandatory five-year review of the Privacy Act.

Taken together, there is much that is new and interesting in this document. There is also still room for criticism, comment and discussion. I will be diving into the TBS recommendations for reform over the next few weeks. My comments will be structured around each of the themes in the document. Stay tuned!

 

In November 2025, Canada’s Treasury Board Secretariat made available a minimum viable product AI register, intended to form the basis for a consultation on what a register of AI in use in the federal public sector should look like. This dataset is not meant to represent in form or content what the final product will look like. But it is a starting point for a discussion. The consultation closes on March 31, 2026.

It is worth highlighting how significant the idea of a federal AI register is. We are still in the early days of public sector AI, and there are relatively few precedents for official AI registers. That said, it is clear that this is a trend that is likely to grow. The Dutch government has a national AI register offering a public-facing searchable database that includes entries from federal and municipal governments. The UK has a register of “algorithmic tools” used in its public sector. Norway has what is described as an “overview” of AI projects in the public sector, which it cautions is a work in progress. France maintains an inventory of public sector algorithms, under the auspices of the Observatoire des algorithmes publics. In the US, Executive Order 13960 requires federal agencies to create an inventory of their AI use cases, and guidance is provided on how to do this. While overview data is provided, each department maintains its own AI Use Case Inventory Library (see an example here). Canada’s decision to create a federal AI Register is an important commitment, and its consultation on what such a register should look like is also significant.

The consultation process is nourished by a dataset made available through Canada’s open data portal. Described as a minimum viable product, this is a pretty rough set of data compiled from different sources. It is really meant as a conversation starter – it provides a glimpse into what is already happening within the federal public sector when it comes to AI, and it prompts users to think about what data they might want to have, and how they might want to see it organized.

The current dataset contains 409 separate entries, each with 23 data fields; these include both French and English versions of the same categories. The categories include a unique identifier for each system, the system’s name, and the government department or agency responsible for it. There is a short description of the system, information about primary users and about who developed the system. For procured systems, the name of the vendor is provided. The status of the system is indicated (e.g., in development, in production, or retired), as well as brief descriptions of system capabilities and data sources. Whether the system relies on personal data is also specified, as well as any relevant personal information banks. Whether users are notified of the use of the system is also indicated, and a short description is provided of the expected results of the system.
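For those who want to poke at the MVP dataset directly, a few lines of code are enough to get a feel for its structure. The sketch below is a minimal example in Python using pandas; the filename and all column names are hypothetical stand-ins based on the categories described above, not the actual field names in the published CSV.

```python
# Minimal sketch: load and inspect the MVP register dataset.
# The filename and every column name below are hypothetical stand-ins
# based on the categories described in the post; check the published
# CSV on the open data portal for the actual field names.
import pandas as pd

df = pd.read_csv("ai_register_mvp.csv")

# Expect roughly 409 rows and 23 columns (including paired EN/FR fields).
print(df.shape)

# Which departments report the most AI systems?
print(df["department_en"].value_counts().head(10))

# Systems in production that rely on personal data.
mask = (df["uses_personal_data_en"] == "Yes") & (df["status_en"] == "In production")
print(df.loc[mask, ["system_name_en", "department_en"]].to_string(index=False))
```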

The AI register seems intended to serve two broad audiences. The first is users from within the federal government. By making its uses of AI systems more transparent internally, the government can avoid duplicative efforts, allow better collaboration across departments and agencies, and perhaps also share ideas for helpful uses of AI tools to streamline different processes. A second audience is the broader public. This audience can include researchers, journalists, academics, civil society organizations, lawyers, developers, and many others seeking to understand how and where the government is using AI systems. The diversity of potential users will impact both how the data are made available and what data points may be of interest.

The fact that the federal AI register seems intended for both internal and external audiences is important and should not be taken for granted. For example, Ontario’s Responsible Use of Artificial Intelligence Directive requires ministries and agencies to report on AI use cases and risk management, with ministries reporting to the Ministry of Public and Business Service Delivery and Procurement on an annual basis. However, this reporting requirement is internal and not public. The Directive only requires public disclosure of the use of an AI system where the public interacts directly with it or where the system is used to make a decision about a member of the public.

Currently, Canada’s AI Register data is available in different formats, including CSV, JSON, TSV and XML. These formats are useful for some types of users, but they are not particularly accessible for a broader public that might require a more user-friendly interface. Ideally, the AI Register should have a public-facing site that makes it easy to search and find results offering straightforward information at a click. The UK’s register provides an interesting example in this respect: for each algorithm there is a standardized list of information provided. It would be good to have a dashboard that provides visual representations of how and where AI is used in the federal public sector. This could include overview representations of the data within the Register, but also, perhaps, information about the register itself (e.g., tracking the number of entries over time, or tracking categories of uses; for an example of a dashboard, see the one created by the Dutch government as part of its AI register). However, the more granular data should still be available through the open government portal as a downloadable dataset for those who wish to dig into it. This would be a useful resource for researchers, journalists, students, and others.
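To make the dashboard idea concrete, here is a small illustrative sketch (using the same hypothetical column names as the example above) of the kinds of aggregations a public-facing dashboard might surface from the downloadable dataset; a real dashboard would render these as charts rather than printed tables.

```python
# Illustrative dashboard-style aggregations over the register dataset.
# Column names remain hypothetical stand-ins for the published fields.
import pandas as pd

df = pd.read_csv("ai_register_mvp.csv")

# Overview: systems per department, broken down by lifecycle status.
overview = df.groupby(["department_en", "status_en"]).size().unstack(fill_value=0)
print(overview)

# Share of registered systems that report relying on personal data.
share = (df["uses_personal_data_en"] == "Yes").mean()
print(f"{share:.0%} of registered systems report using personal data")
```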

AI systems in use across the federal government may also have other data associated with them which it would be good to be able to access easily. For example, automated decision systems at the federal level are subject to the Directive on Automated Decision-Making and are supposed to have gone through an algorithmic impact assessment (AIA). These assessments are meant to be available through the open government portal (and some are). Providing links to available AIAs would be useful for those who want to know more about a particular system. Similarly, systems that use personal data will have gone through a privacy impact assessment, and many systems will also have gone through a Gender-based Analysis Plus assessment. Links to any publicly accessible evaluations would be useful, but even if these are not fully publicly available, the register could indicate whether the AI system has gone through such an evaluation, and when it might have been updated.

Other data points that could be considered might include whether there is human oversight and at what point in the process. In the current version of the Register, data sources are identified (e.g., certain categories of documents), but it might also be useful to know what specific data points are relied upon (this is something that is provided, for example, in the Dutch register).

Presumably AI systems in use in the public sector will be monitored and assessed, and data will be gathered on their performance. Are the systems reducing workload or backlogs and if so, by how much? Are they replacing humans? Saving money? Generating complaints? Are any reports, audits, and assessments publicly available? If so, where? When it comes to assessments and reports, it is not necessary for the AI register to be overburdened with too many data points. However, other relevant information that is proactively published should be easily findable.

Once TBS has decided what data should be in the register, it will need to provide a mechanism to gather this data and to ensure that it is harmonized across the federal public sector. This will likely require providing fillable forms in which terminology is carefully defined.
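As a rough illustration of what such harmonization could look like in practice, the sketch below defines a register entry with a controlled vocabulary for system status, so that every department reports the same fields using the same defined terms. None of these field names or enumerated values come from TBS; they are hypothetical, and simply show how carefully defined terminology could be enforced at the point of data collection.

```python
# Hypothetical sketch of a harmonized register-entry schema with a
# controlled vocabulary. All field names and values are illustrative.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    IN_DEVELOPMENT = "In development"
    IN_PRODUCTION = "In production"
    RETIRED = "Retired"

@dataclass
class RegisterEntry:
    system_id: str                # unique identifier
    system_name: str
    department: str
    description: str
    status: Status                # must be one of the defined terms above
    uses_personal_data: bool
    vendor: Optional[str] = None  # only for procured systems

    def validate(self) -> None:
        """Reject entries that fail basic harmonization rules."""
        if not self.system_id:
            raise ValueError("every entry needs a unique identifier")
        if self.uses_personal_data and not self.description:
            raise ValueError("systems using personal data need a description")

# A fillable web form would map onto a structure like this, with the
# Status dropdown preventing free-text variants such as "live" or "active".
```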

Generative AI and its use in the public sector will present some interesting challenges for the AI Register. Some uses of generative AI within departments or agencies are likely to be fairly ad hoc (as, for example, when AI is used to translate an email or document received in a language other than French or English). On the other hand, a deliberate choice to use genAI to translate such materials in a context in which they are frequently received might require disclosure. Similarly, the ad hoc use of genAI to summarize reading material may not require disclosure, but a systematic approach to summarizing with genAI in administrative processes should require disclosure (and might require an algorithmic impact assessment). An example of this might be the systematic use of AI to summarize evidence or submissions to an agency or tribunal. Focusing on the nature and extent of use is one way of approaching this. Another might be to assess whether there is a public-facing dimension to the use of genAI. If it is used solely for internal administrative purposes, perhaps disclosure in the register is less necessary than if it is used in a decision-making process, or in communications with the public. This latter approach could get complicated, since it may be difficult to determine which internal administrative uses end up having public-facing dimensions. For example, genAI used in summarizing and report-drafting could have very public dimensions if that research shapes policy documents, white papers, consultation materials or other public-facing content. And, as reliance on agentic AI systems expands, it will also become necessary to think about how agentic AI use cases are recorded and documented within the register.
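The two approaches sketched in the preceding paragraph can be reduced to a simple decision rule. The toy function below is purely an illustration of that policy logic, not a proposed legal test: disclosure turns on whether a genAI use is systematic rather than ad hoc, and on whether it has a public-facing dimension.

```python
# Toy encoding of the two disclosure heuristics discussed above.
# Purely illustrative of the policy logic; not a proposed rule.
def requires_register_disclosure(systematic: bool, public_facing: bool) -> bool:
    if systematic:
        # Systematic uses (e.g., routine genAI summarization built into an
        # administrative process) point toward disclosure even if internal.
        return True
    if public_facing:
        # Ad hoc uses with a public-facing dimension (decision-making,
        # communications with the public) also point toward disclosure.
        return True
    # Ad hoc, purely internal uses are the weakest case for disclosure --
    # though, as noted above, internal outputs can later become public-facing.
    return False

# Ad hoc translation of a single incoming email:
print(requires_register_disclosure(systematic=False, public_facing=False))  # False
# Systematic summarization of submissions to a tribunal:
print(requires_register_disclosure(systematic=True, public_facing=False))   # True
```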

There may also be uses that the government decides should not be in the Register for reasons related to cybersecurity, national security or law enforcement practices, for example. Certainly, disclosing what AI systems are used to protect against cyberattacks or that are used in the national security context may be contrary to the public interest. Law enforcement is a trickier category, as there are some types of systems (e.g., predictive policing, facial recognition technology) for which transparency and accountability seem squarely in the public interest. (Note that the Dutch database contains 13 entries related to policing, including both FRT and predictive policing models.) Others (e.g., particular fraud detection algorithms) may require more circumspection.

A final point is to consider how often departments and agencies will be required to update their entries. Systems evolve and acquire new functionalities all the time. Sometimes modifications are significant enough to warrant new AIAs or PIAs. Whatever choices are made for the launch of Canada’s AI Register, the Register itself should be part of an iterative process subject to periodic reviews and updates, and open to user feedback.

 

The Province of Manitoba has three bills currently before the legislature that address AI-related issues.

The first of these is Bill 2, which proposes amendments to the province’s Non-Consensual Distribution of Intimate Images Act. Unlike some of its provincial counterparts, the original law (dating from 2015) already applies to both real and fake intimate images. The amendments will change the definition of an intimate image to include images in which a person is nearly nude. It will also include personal intimate images in which the individual is not identifiable. This will address circumstances, for example, where a former partner is threatening to disclose an intimate image in which a person is not readily identifiable, but where she knows that it depicts her. The bill also creates a new tort of threatening to disclose an intimate image. It makes explicit the power of the courts to issue orders against internet intermediaries. Interestingly, the bill will also limit the liability of internet intermediaries that have “taken reasonable steps to address unlawful distribution of intimate images” in the use of their services (s. 15.1(1)).

Bill 49, The Business Practices Amendment Act, proposes amendments to the provincial statute that sets out unfair business practices. The proposed changes will address the use of algorithms and big data to generate dynamic prices that are different for different consumers. Specifically, the following two practices will be added as unfair practices:

(r.1) where the price of a part of the consumer transaction is displayed by way of an electronic shelf labelling system, demanding a higher price from the consumer at the point of sale due to personalized algorithmic pricing in respect of that consumer;

and

(v) in the case of an online retailer or online distributor, the use of personalized algorithmic pricing to increase the price of the goods demanded from the consumer.

The bill defines personalized algorithmic pricing as occurring where personal data about the consumer are “collected, analyzed or processed with or without the consumer’s consent, knowledge or involvement”. This is important as it makes any consent to use of personal information in a long and obscure privacy policy irrelevant to the issue of the fairness of the business practice. The types of personal data that might be used in this way form a lengthy list that includes browsing or purchasing history, spending patterns, inferences about the consumer’s willingness to enter into the transaction, demographics, socio-economic status, credit history, location, medical history, and so on.

This important measure comes at a time when price discrimination practices are on the rise (see research from Pascale Chapdelaine here and here); such discrimination is typically invisible to the consumer. After all, if you are shopping online and are offered goods at a particular price, it would require considerable effort to determine whether someone else is being offered the same goods at a different price. That said, the amendment does not address the potential for dynamic surge pricing. Recent reporting on patents obtained by Walmart suggests that the company may be looking to use dynamic pricing on digital price displays on store shelves to adjust prices based on demand in real time. The capacity to adjust prices based on who is shopping – and when – will have significant implications for consumers, and it will be important for consumer-oriented legislation to anticipate and address these issues.

Last but not least, Bill 51, the Public Sector Artificial Intelligence and Cybersecurity Governance Act, is highly reminiscent of Ontario’s Enhancing Digital Security and Trust Act (EDSTA), which was enacted in 2024. Like the EDSTA, Manitoba’s Bill 51 creates a legislative framework for the governance of public sector artificial intelligence (AI) on the one hand, and for cybersecurity measures for the public sector on the other. Like the EDSTA, this is a ‘plug and play’ framework. The statute itself, if enacted, will require prescribed public sector entities to comply with obligations that are established in the regulations. The goal is to have a flexible framework that can adapt to changing technologies and circumstances through amendments to regulations and/or standards, which can be achieved more quickly than legislative amendments. The catch is that without regulations, the law is nothing more than words on a page. Ontario’s EDSTA, which took effect over a year ago on January 29, 2025, has resulted in that most flexible of regulatory frameworks for public sector AI known as “none”. Although regulations have been proposed for the portions of the EDSTA dealing with Cyber Security and Digital Technology Affecting Individuals Under 18, no regulations are yet in sight for AI in the public sector. Hopefully, Manitoba’s Bill 51 will not serve as an empty policy placeholder.

 

The British Columbia Court of Appeal has ruled that the BC Privacy Commissioner’s enforcement order against Clearview AI is both reasonable and enforceable. Clearview AI is a US-based company that scrapes photographs from the internet, including from social media websites, to build a massive facial recognition database which it offers as a service to law enforcement (very broadly defined). At the time complaints were first lodged with Canadian privacy commissioners, the database held over 3 billion images. Today the number is estimated at around 70 billion.

The order against the company followed a joint investigation report (from the federal Privacy Commissioner and the Commissioners of British Columbia, Alberta and Quebec). The laws of BC, Alberta, and Canada all contain exceptions to the requirements of knowledge and consent for the collection, use and disclosure of personal information where that information is “publicly available”. Clearview AI sought to rely on that exception, arguing that it needed no consent to collect and use personal information such as photographs that were available on the internet.

The term “publicly available” is defined in narrow terms in the regulations, and the BC Court of Appeal found that the Commissioner’s interpretation of this exception to exclude information posted on social media sites was reasonable. In another judicial review application that challenged a similar order against Clearview AI from the Alberta Privacy Commissioner, the Alberta Court of King’s Bench also found the interpretation to be reasonable. However, that court struck down part of the exception in the regulations, finding that it breached Clearview AI’s right to freedom of expression under the Canadian Charter of Rights and Freedoms. Charter arguments were not raised before the BC courts, and so the reasonable interpretation of the BC regulation stands in BC. (You can find my discussion of the Alberta court decision and its implications here).

The Court also found reasonable the BC Commissioner’s ruling that the scraping of photographs from the internet to create a massive facial recognition database was not a purpose that “a reasonable person would consider appropriate in the circumstances.” This baseline privacy norm is shared by the laws of Canada, Alberta and BC. The result of the BC Court of Appeal decision is therefore a clear win for the BC Privacy Commissioner – and frankly, for BC residents. Although the window of time is still open for Clearview AI to seek leave to appeal to the Supreme Court of Canada, without a constitutional angle to this case it is hard to see why the Supreme Court would consider it necessary to review the BC Court of Appeal’s ruling on this interpretation of BC law.

What is perhaps most interesting about this decision is the strong signal it sends about privacy in a digital age. Clearview had argued (as it did in Alberta) that the province’s laws do not apply to its activities. The Court of Appeal disagreed, noting that the test for a “real and substantial connection” to the jurisdiction is necessarily contextual. It framed that context as “the internet as it exists today.” (at para 51) Writing for the unanimous court, Justice Iyer noted that “Clearview’s success as a business depends on its ability to acquire facial data on a global scale to build the databank on which its search engine runs” (at para 52). She observed that the scale of the company’s activities and its inability to exclude BC from its data scraping “supports a conclusion that BC’s relationship to Clearview is substantial, not incidental” (at para 52). She also noted that BC’s private sector data protection law is quasi-constitutional in nature, making transnational enforcement in a global digital age important. She rejected Clearview AI’s argument that just because PIPA is important within BC, its reach should not extend beyond the province’s borders, stating that: “PIPA is simply one of many legislative and common law mechanisms through which the protection of personal privacy is achieved. The importance of the public interest in protecting that fundamental right is highly relevant in the sufficient connection analysis.” (at para 54)

Clearview AI’s business model and the scale of its activities were clearly relevant to the conclusion on jurisdiction. Justice Iyer stated that:

[T]his case is not about the ‘incidental touching’ of a person’s publicly available data. It is about a systematic acquisition of facial data regardless of jurisdiction that enables an enterprise to commercially exploit that information by disclosing it to law enforcement and other entities who are interested in connecting with an individual. (at para 61)

In these circumstances, the Court concluded that BC’s Personal Information Protection Act applies, giving the Commissioner jurisdiction.

These findings on jurisdiction clearly reinforce both the importance of privacy protection and the significant impact of contemporary technology on privacy. Other statements in the decision also highlight this reality. In comments that are relevant to the anticipated reform (in the way that the arrival of the Easter Bunny is anticipated – with childlike faith that becomes cynical over the years) of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), Justice Iyer reminds us of the Federal Court of Appeal’s admonition that PIPEDA (and its substantially similar counterparts) “does not aim to balance competing rights, it balances a need [of organizations to use personal data] with a right” (at para 82). The BC Court of Appeal decision joins the growing list of decisions in Canada that highlight the importance of privacy rights – particularly in the face of invasive transnational technologies and business models.

 

Canada's AI Strategy: Some Reflections

Innovation, Science and Economic Development Canada (ISED) has released the results of the consultation it carried out in advance of developing the latest iteration of its AI Strategy. The consultation had two components. The first was a Task Force on AI: a group of experts tasked with consulting their peers to develop their views. The experts were assigned to specified themes (research and talent; adoption across industry and government; commercialization of AI; scaling our champions and attracting investment; building safe AI systems and public trust in AI; education and skills; infrastructure; and security). The second component was a broad public consultation asking for either answers to an online survey or emailed free-form submissions. This post offers some reflections on the process and its outcomes.

1. The controversy over the consultation

The consultation process generated controversy. One reason for this was the sudden launch and short timelines. Submissions from the public were sought within a month, and Task Force members were initially expected to consult their peers and report in the month following the launch of the consultation. In the end, the Task Force Reports were not published until early February – the timelines were simply unrealistic. However, there was no extension for the public consultation. The Summary of Inputs on the consultation refers to it as “the largest public consultation in the history of Innovation Science and Economic Development Canada, generating important ideas, questions and legitimate concerns to take into consideration in the drafting of the strategy” (at page 3). The response signals how important the issue is to Canadians and how much they want to be heard. One has to wonder how many submissions ISED might have received with longer timelines. Short deadlines favour those with time and resources. Civil society organizations, small businesses, and individuals with full workloads (domestic and professional) find short timelines particularly challenging. Running a “sprint” consultation favours participation from some groups over others.

Another point of controversy was the lack of diversity of the Task Force. The government was roundly criticized for putting together a Task Force with no representation from Canada’s Black communities, particularly given the risks of bias and discrimination posed by AI technologies. A letter to this effect was sent to the Minister of AI, the Prime Minister, and the leaders of Canada’s other political parties by a large group of Black academics and scholars. Following this, a Black representative – a law student – was hurriedly added to the Task Force.

An open letter to the Minister of Artificial Intelligence from civil society organizations and individuals also denounced the consultation, arguing that the deadline should be extended and that the Task Force should be more equitably representative. The letter noted that civil society groups, human rights experts, and others were absent from the Task Force panel. The group was also critical of the online survey for being biased towards particular outcomes. It indicated that it would boycott the consultation, and has since set up its own People’s Consultation on AI, which is accepting submissions until March 15, 2026.

These controversies highlight a major stumble in developing the AI Strategy. The lack of consultation around the failed Artificial Intelligence and Data Act in Bill C-27 and the criticism that this generated should have been a lesson to ISED on how important the issues raised by AI are to the public and about how they want to be heard. The Summary makes no mention of the controversy it generated. Nevertheless, the criticisms and pushbacks are surely an important part of the outcome of this process.

2. Some thoughts on Transparency

ISED has not only published a summary of the results of its consultation and of the Task Force Reports, but has also published on its open government portal the raw data from the consultation, as well as the individual task force reports. This seems to be in line with a new commitment to greater transparency around AI – in the fall of 2025, ISED also published its beta version of a register of AI in use within the federal public service. These are positive developments, although it is worth watching to see whether tools like the register of AI are refined, improved, and updated.

ISED was also transparent about its use of generative AI to process the results of the consultation. Page 16 of the summary document explains how it used (unspecified) LLMs to create a “classification pipeline” to “clean survey responses and categorize them into a structured set of themes and subthemes”. The report also describes the use of human oversight to ensure that there was “at least a 90% success rate in categorizing responses into specific intents”. ISED explains that it consulted research experts about its methodology, and indicates that the methods used conform to the recent Treasury Board Guide on the use of generative artificial intelligence. The declaration on the use of AI indicates that the output was used to produce the final report, which is apparently a combination of human authorship and extracts from the AI-generated content.
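ISED does not disclose its models, prompts, or taxonomy, but the general shape of such a pipeline is easy to sketch. The example below is illustrative only: the themes and subthemes are invented, a trivial keyword matcher stands in for the LLM call, and the spot-check function mirrors the cleaning, categorization, and human-oversight steps (with the 90% threshold) described in the report.

```python
# Illustrative sketch of a survey classification pipeline of the kind
# the report describes. The taxonomy and the classifier are stand-ins:
# a real pipeline would prompt an LLM with a defined taxonomy and parse
# a structured (theme, subtheme) answer.
import random
import re

THEMES = {
    "governance": ["regulation", "oversight", "accountability"],
    "adoption": ["industry", "government", "skills"],
}

def clean(response: str) -> str:
    """Normalize whitespace in a raw free-text survey response."""
    return re.sub(r"\s+", " ", response).strip()

def classify(response: str) -> tuple[str, str]:
    """Stand-in for the LLM call: a trivial keyword matcher."""
    text = clean(response).lower()
    for theme, subthemes in THEMES.items():
        for sub in subthemes:
            if sub in text:
                return (theme, sub)
    return ("uncategorized", "")

def spot_check(responses: list[str],
               human_labels: dict[str, tuple[str, str]],
               sample_size: int = 50,
               threshold: float = 0.90) -> bool:
    """Human-oversight step: compare model labels on a random sample
    against human labels, mirroring the report's 90% success target."""
    sample = random.sample(responses, min(sample_size, len(responses)))
    agree = sum(1 for r in sample if classify(r) == human_labels.get(r))
    return agree / len(sample) >= threshold
```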

It would frankly be astonishing if generative AI tools have not already been used in other contexts to process submissions to government consultations (but likely without having been disclosed). As a result, the level of transparency about the use here is important. This is illustrated by my colleague Michael Geist’s criticisms of the results of ISED’s use of AI. He ran the Task Force reports through two (identified) LLMs and noted differences in the results between his generated analysis and ISED’s. He argues that “the government had not provided the public with the full picture” and posits that the results were softened by ISED to suggest a consensus that is not actually present. Putting a particular spin on things is not exclusively the result of the use of AI tools – humans do this all the time. However, explaining how results were arrived at using a technological system can create an impression of objectivity and scientific rigor that can mislead, and this underscores the importance of Prof. Geist’s critique.

It is worth noting that it is the level of transparency provided by ISED that allowed this analysis and critique. The immediacy of the publication of the data on which the report was based is important as well. No prolonged access-to-information request processes were necessary here. This approach should become standard government practice.

3. AI Governance/Regulation

The consultation covered many themes, and the AI Strategy is clearly intended to be about more than just how to regulate or govern AI. In fact, one could be forgiven for thinking that the AI Strategy will be about everything except governance and regulation, given the limited expertise from these areas on the Task Force. The Task Force’s focus areas emphasized adoption of, investment in, and scaling of AI innovation, as well as strengthening sovereign infrastructure. Among the focus areas, only “public trust, skills and safety” gives a rather offhand nod to governance and regulation.

That said, reading between the lines of the summary of inputs, Canadians are concerned about AI governance and regulation. This can be seen in statements such as “Respondents…urged Canada to prioritize responsible governance” (p. 7). Respondents also called for “meaningful regulation” (p. 8) and reminded the government of the need to “modernize regulations” (p. 8). There were also references to “accountable and robust governance” (p. 8) and “strict regulation, penalties for non-compliance and frameworks that uphold Canadian values” (p. 8) when it comes to generative AI. There were also calls for “strict liability laws” (p. 9), and concerns expressed over “lack of regulation and accountability” (p. 9).

One finds these snippets throughout the summary document, which suggests that meaningful regulation was a matter of real concern for respondents. However, the “Conclusions and next steps” section of the report mentions only the need for “regulatory clarity” and streamlined regulatory frameworks – neither of which is a bad thing, but neither of which is really about new regulation or governance. Instead, the report concludes that: “There was general consensus among participants that public trust depends on transparency, accountability, and robust governance, supported by certification standards, independent audits and AI literacy programs” (p. 15, my emphasis). While those tools are certainly part of a regulatory toolkit for AI, on their own and outside of a framework that builds in accountability and oversight, they are basically soft-law and self-regulation. This feels like a rather convenient consensus around where the government was likely heading in the first place.

 

The Ontario and British Columbia Information and Privacy Commissioners each released new AI medical scribes guidance on Privacy Day (January 28, 2026). This means that, along with Alberta and Saskatchewan, a total of four provincial information and privacy commissioners have now issued similar guidance. BC’s guidance is aimed at health care practitioners running their own practices and governed by the province’s Personal Information Protection Act. It does not extend to health authorities and hospitals that fall under the province’s Freedom of Information and Protection of Privacy Act. Ontario’s guidance is for both public institutions and physicians in private practice who are governed by the Personal Health Information Protection Act.

This flurry of guidance on AI scribes shows how privacy regulators are responding to the very rapid adoption in the Canadian health sector of an AI tool that raises sometimes complicated privacy issues with a broad public impact.

At its most basic level, an AI medical scribe is a tool that records a doctor’s interaction with their patient. The recording is then transcribed by the scribe, and a summary is generated that can be cut and pasted by the doctor into the patient’s electronic medical record (EMR). The development and adoption of AI scribes has been rapid, in part because physicians have been struggling with both significant administrative burdens as well as burnout. This is particularly acute in the primary care sector. AI scribes offer the promise of better patient care (doctors are more focused on the patient as they are freed up from notetaking during appointments), as well as potentially significantly reduced time spent on administrative work.

AI medical scribes raise a number of different privacy issues. These can include issues relating to the scribe tool itself (for example, how good is the data security of the scribe company? What kind of personal health information (PHI) is stored, where, and for how long? Are secondary uses made of de-identified PHI? Is the scribe company’s definition of de-identification consistent with the relevant provincial health information legislation?) They may also include issues around how the technology is adopted and implemented by the physician (including, for example, whether the physician retains the full transcription as well as the chart summary and for how long; what data security measures are in place within the physician’s practice; and how consent is obtained from patients to the use of this tool). As the BC IPC’s guidance notes, “What distinguishes an AI scribe’s collection of personal information from traditional notetaking with a pen and notepad is that there are many processes taking place with an AI scribe that are more complex, potentially more privacy invasive, and less obvious to the average person” (at 5).

AI scribes raise issues other than privacy that touch on patient data. In their guidance, Ontario’s IPC notes the human rights considerations raised by AI scribes and refers to its recent AI Principles issued jointly with the Ontario Human Rights Commission (which I have written about here). The quality of AI technologies depends upon the quality of their training data. Where training data does not properly represent the populations impacted by the tool, there can be bias and discrimination. Concerns exist, for example, about how well AI scribes will function for people (or physicians) with accents, or for those with speech impaired by disease or disability. Certainly, the accuracy of personal health information that is recorded by the physician is a data protection issue; it is also a quality of health care issue. There are concerns that busy physicians may develop automation bias, increasingly trusting the scribe tool and reducing time spent on reviewing and correcting summaries – potentially leading to errors in the patient’s medical record.

AI scribes are being adopted by individual physicians, but they are also adopted and used within institutions – either with the engagement of the institution, or as a form of ‘shadow use’. A recent response to a breach by Ontario’s IPC relating to the use of a general-purpose AI scribe illustrates how complex the privacy issues may be in such a case (I have written about this incident here). In that case, the scribe tool ‘attended’ nephrology rounds at a hospital, transcribed the meeting, sent a summary to all 65 people on the mailing list for the meeting and provided a link to the full transcript. The summary and transcript contained the sensitive personal information of the patients seen on those rounds. Complicating the matter was the fact that the physician whose scribe attended the meeting was no longer even at the hospital.

Privacy commissioners are not the only ones who have stepped up to provide guidance and support to physicians in the choice of AI scribe tools. OntarioMD, for example, conducted an evaluation of AI medical scribes, and is assisting in assessing and recommending scribing tools that are considered safe and compliant with Ontario law.

Of course, scribe technologies are not standing still. It is anticipated that these tools will evolve to include suggestions for physicians for diagnosis or treatment plans, raising new and complex issues that will extend beyond privacy law. As the BC guidance notes, some of these tools are already being used to “generate referral letters, patient handouts, and physician reminders for ordering lab work and writing prescriptions for medication” (at 2). Further, this is a volatile area where scribe tools are likely to be acquired by EMR companies to integrate with their offerings, reducing the number of companies and changing the profile of the tools. The mutable tools and volatile context might suggest that guidance is premature; but the AI era is presenting novel regulatory challenges, and this is an example of guidance designed not to consolidate and structure rules and approaches that have emerged over time, but rather to reduce risk and harm in a rapidly evolving context. Regulator guidance may serve other goals here as well, as it signals to developers and to EMR companies the design features that will be important for legal compliance. Both the BC and Ontario guidance caution that function creep will require those who adopt and use these technologies to be alert to potential new issues that may arise as the adopted tools’ functionalities change over time.

Note: Daniel Kim and I have written a paper on the privacy and other risks related to AI medical scribes which is forthcoming in the TMU Law Review. A pre-print version can be found here: Scassa, Teresa and Kim, Daniel, AI Medical Scribes: Addressing Privacy and AI Risks with an Emergent Solution to Primary Care Challenges (January 07, 2025). (2025) 3 TMU Law Review, Available at SSRN: https://ssrn.com/abstract=5086289

 

Ontario’s Office of the Information and Privacy Commissioner (IPC) and Human Rights Commission (OHRC) have jointly released a document titled Principles for the Responsible Use of Artificial Intelligence.

Notably, this is the second collaboration of these two institutions on AI governance. Their first was a joint statement on the use of AI technologies in 2023, which urged the Ontario government to “develop and implement effective guardrails on the public sector’s use of AI technologies”. This new initiative, oriented towards “the Ontario public sector and the broader public sector” (at p. 1), is interesting because it deepens the cooperation between the IPC and the OHRC in relation to a rapidly evolving technology that is increasingly used in the public sector. It also fills a governance gap left by the province’s delay in developing its public sector AI regulatory framework.

In 2024, the Ontario government enacted the Enhancing Digital Security and Trust Act, 2024 (EDSTA), which contains a series of provisions addressing the use of AI in the broader public sector (which includes hospitals and universities). It also issued the Responsible Use of Artificial Intelligence Directive, which sets basic rules and principles for Ontario ministries and provincial agencies. The Directive is currently in force and is built around principles similar to those set out by the IPC and OHRC. It outlines a set of obligations for ministries and agencies that adopt and use AI systems. These include transparency, risk management, risk mitigation, and documentation requirements. The EDSTA, which would have a potentially broader application, creates a framework for transparency, accountability, and risk management obligations, but the actual requirements have been left to regulations. Those regulations will also determine to whom any obligations will apply. Although the EDSTA can apply to all actors within the public sector, broadly defined, its obligations can be tailored by regulations to specific departments or agencies, and can include or exclude universities and hospitals. There has been no obvious movement on the drafting of the regulations needed to breathe life into the EDSTA’s AI provisions.

It is clear that AI systems will have both privacy and human rights implications, and that both the IPC and the OHRC will have to deal with complaints about such systems in relation to matters within their respective jurisdictions. As the Commissioners put it, the principles “will ground our assessment of organizations’ adoption of AI systems consistent with privacy and human rights obligations.” (at p. 1) The document clarifies what the IPC and OHRC expect from institutions. For example, conforming to the “Valid and reliable” principle will require compliance with independent testing standards, and objective evidence will be required to demonstrate that systems “fulfil the intended requirements for a specified use or application”. (at p. 3) The safety principle also requires demonstrable cybersecurity protection and safeguards for privacy and human rights. The Commissioners also expect institutions to provide opportunities for access and correction of individuals’ personal data both used in and generated by AI systems. The “Human rights affirming” principle includes a caution that public institutions “should avoid the uniform use of AI systems with diverse groups”, since such practices could lead to adverse effects discrimination. The Commissioners also caution against uses of systems that may “unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another.” (at p. 6)

The Commissioners’ “Transparency” principle requires that the use by the public sector of AI be visible. The IPC’s mandate covers both access to information and privacy. The Principles state that the documentation required for the “public account” of AI use “may include privacy impact assessments, algorithmic impact assessments, or other relevant materials.” (at p. 6) There must also be transparency regarding “the sources of any personal data collected and used to train or operate the system, the intended purposes of the system, how it is being used, and the ways in which its outputs may affect individuals or communities.” (at p. 6)

The Principles also require that systems used in the public sector be understandable and explainable. The accountability principle requires public sector institutions to document design and application choices and to be prepared to explain how the system works to an oversight body. They should also establish mechanisms to receive and respond to complaints and concerns. The Principles call for whistleblower protections to support reporting of non-compliant systems.

The joint nature of the Principles highlights how issues relating to AI do not easily fall within the sole jurisdiction of any one regulator. It also highlights that the dependence of AI systems on data – often personal data or de-identified personal data – carries with it implications for both privacy and human rights.

That the IPC and OHRC will have to deal with complaints and investigations that touch on AI issues is indisputable. In fact, the IPC has already conducted formal and informal investigations that touch on AI-enabled remote proctoring, AI scribes, and vending machines on university campuses that incorporate face-detection technologies. The Principles offer important insights into how these two oversight bodies see privacy and human rights intersecting with the adoption and use of AI technologies, and what organizations should be doing to ensure that the systems they procure, adopt and deploy are legally compliant.

 

 

A recent communication from the Office of the Information and Privacy Commissioner of Ontario (IPC) highlights how rapidly evolving and widely available artificial intelligence-enabled tools can pose significant privacy risks for organizations.

The communication in question was a letter to an unnamed hospital (“the hospital”) which had reported a data breach to the IPC. The letter reviewed the breach, set out a series of recommendations for the hospital, and requested an update on the hospital’s response to the recommendations by late January 2026. Although the breach occurred in the health sector, with its strict privacy laws, the lessons extend more broadly to other sectors as well.

The breach involved the use of a transcription tool of a kind now in regular use by many physicians to document physician-patient interactions. AI scribe tools record and transcribe physician-patient interactions and generate summaries suitable for inclusion in electronic medical records, relieving physicians of significant note-taking and administrative burdens. Although many task-specific AI scribe tools are now commercially available, the tool used in this case was the widely available Otter.ai transcription tool, designed for use in a broad range of contexts.

This breach was complicated by the fact that the Otter.ai tool acted as an AI agent of the physician who had downloaded it. AI agents can perform a series of tasks with a degree of autonomy. In this case, the tool could be integrated with different communications platforms, as well as with the user’s digital calendar (such as Outlook). Essentially, Otter.ai can scan a user’s digital calendar and join scheduled meetings, which it then transcribes and summarizes. It can also share both the summary and the transcript with other meeting participants – all without direct user intervention.
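
To make the mechanics concrete, here is a minimal sketch of how such a calendar-integrated agent can behave. It is purely illustrative: the class and method names are hypothetical and do not reflect Otter.ai’s actual code or API. The point is the control flow – nothing in the loop checks whether the human user still intends to attend a meeting before the agent joins, transcribes, and shares.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    title: str
    start: datetime
    meeting_link: str
    attendee_emails: list

def process_upcoming_meetings(events, transcriber, mailer):
    """Join, transcribe, and share every meeting on the user's calendar."""
    now = datetime.now()
    for event in events:
        # The agent acts on the invitation alone: a stale recurring invite
        # (e.g., from a former employer) is treated like a current one.
        if not (now <= event.start <= now + timedelta(hours=24)):
            continue
        transcript = transcriber.join_and_transcribe(event.meeting_link)
        summary = transcriber.summarize(transcript)
        # Default sharing behaviour: results go to every invitee,
        # with no human review step in between.
        mailer.send(to=event.attendee_emails,
                    subject="Meeting summary: " + event.title,
                    body=summary)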

The physician had downloaded Otter.ai and provided it with access to his calendar over a year after he left the hospital that reported the breach. Because he had used his personal email, rather than his hospital email, for internal communications while at that hospital, his departure in 2023 and the deactivation of his hospital email account had not led to the removal of his personal email from meeting invitation lists. When he downloaded Otter.ai in September 2024 and gave it access to his digital calendar, he was still receiving invitations from the hospital to hepatology rounds. Although the physician did not attend these rounds following his departure, his AI agent did. It attended a September 2024 meeting, produced a transcript and meeting summary, and emailed the summary, with a link to the full transcript, to all 65 individuals on the meeting invitation list. The breach was presumably reported to the hospital by one or more of the email recipients. Seven patients had been seen during the hepatology rounds, and the transcript and summary contained their sensitive personal health information.

The hospital took immediate action to address the breach. It cancelled the physician’s digital invitation and contacted all recipients of the summary and transcript, asking them to promptly delete all copies of the rogue email and its attachments. It also sent a notice to all staff reminding them that they are not permitted to use non-approved tools in association with their hospital credentials and/or devices. It contacted the physician who had used Otter.ai and ensured that he removed all digital connections with the hospital, and it requested that he contact Otter.ai to ask that all information related to the meeting be deleted from its systems. Patients affected by the breach were also notified by the hospital. To prevent future breaches, the hospital created firewalls to block on-site access to non-approved scribing tools, updated its training materials to address the use of unapproved tools, and revised its Appropriate Use of Information and Information Technology policy. The revised policy emphasizes the importance of using only hospital-approved IT resources. It also advises regular review of meeting participant lists to ensure that AI tools or automated agents are not included.

In addition to these steps, the IPC made further recommendations, including that the hospital itself contact Otter.ai to request the deletion of any patient information it may have retained. Twelve of the 65 email recipients had not confirmed that they had deleted the emails, and the IPC recommended that the hospital follow up to ensure this had been done. It also recommended updates to the hospital’s breach protocol, as well as changes to offboarding procedures to ensure that access to hospital information systems is “immediately revoked” when personnel leave the hospital. Finally, the IPC recommended the use of mandatory meeting lobbies for all virtual meetings so that unauthorized AI agents are not permitted access to meetings.

This incident highlights some of the important challenges faced by hospitals – as well as by many other organizations – with the development of widely available generative and agentic AI tools. Where sophisticated and powerful workplace tools were once more easily controlled by the employer, it is increasingly the case that employees have independent access to such tools. Shadow AI usage is a growing concern for organizations, as it may pose unexpected – and even undetected – risks for the privacy and confidentiality of information. Rapidly evolving agentic AI tools – with their capacity to act independently – may also create challenges, particularly where employees are not familiar with their full range of functions or default settings.

Medical associations and privacy commissioners’ offices have begun developing guidance for the use of AI scribes in medical practice (see, e.g., guidance from the Saskatchewan and Alberta OIPCs). OntarioMD has gone so far as to develop a list of approved AI scribe vendors – ones that it considers to meet privacy and security standards. However, the tool used in this case was designed for all contexts and is available in both free and paid versions, which only serves to highlight the risks and challenges in this area. The widespread availability of such tools poses important governance issues for privacy- and security-conscious organizations. Even where an organization subscribes to a particular tool that has been customized to its own privacy and security standards, employees still have access to many other tools that they may already use in other contexts. The risk that an employee will simply decide to use a tool with which they are already familiar and comfortable must be considered.

More generic transcription tools may also pose other risks in the medical context, since they are not specifically trained or designed for health care. For example, they may be less adept at dealing with medical terminology, prescription drug names, or other terms of art, which could increase the incidence of errors in transcriptions or summaries.

The risk that data collected through unauthorized tools may be used to train AI systems also underscores the potential consequences for privacy and confidentiality. Under Ontario’s Personal Health Information Protection Act (PHIPA), a health information custodian is not authorized to share personal health information with third parties without the patient’s express consent. Using health-care-related transcripts or voice recordings to train third-party AI systems without this express consent is therefore not permitted. Although some services indicate that they use only “de-identified” information for system training, the term “de-identified” may not be defined in the same way as in PHIPA. For example, stripping information of all direct identifiers (names, ID numbers, etc.) does not amount to de-identification under PHIPA, which also requires the removal of information “for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual”.
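
To illustrate the gap, here is a small sketch – with entirely hypothetical field names and values – of the difference between merely stripping direct identifiers and addressing PHIPA’s broader standard:

# Hypothetical patient record; all field names and values are illustrative.
record = {
    "name": "Jane Doe",            # direct identifier
    "health_card_no": "1234-567",  # direct identifier
    "postal_code": "K1A 0B1",      # quasi-identifier
    "birth_date": "1956-03-02",    # quasi-identifier
    "diagnosis": "hepatitis C",
}

DIRECT_IDENTIFIERS = {"name", "health_card_no"}

# Removing only direct identifiers is NOT de-identification in PHIPA's
# sense: the remaining postal code plus full birth date may still make the
# individual reasonably identifiable in combination with other information.
stripped = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

# A PHIPA-aligned approach must also address quasi-identifiers, for example
# by generalizing them (forward sortation area only; year of birth only).
generalized = dict(stripped,
                   postal_code=stripped["postal_code"][:3],
                   birth_date=stripped["birth_date"][:4])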

This incident highlights the vulnerability of sensitive personal information at a time when novel and rapidly evolving technological tools for personal and professional use are proliferating. Organizations must act quickly to assess and mitigate the risks, and this will require regular engagement with and training of personnel.

Note: A pre-print version of my research paper with Daniel Kim on AI Scribes can be found here.

 

Monday, 05 January 2026 08:32

Canada's New Regulatory Sandbox Policy

In November 2025, Canada’s federal government published a new Policy on Regulatory Sandboxes in anticipation of amendments to the Red Tape Reduction Act, which had been announced in the 2024 budget. This development deserves some attention, particularly as the federal government embraces a pro-innovation agenda and shifts its approach to regulating innovative technologies such as artificial intelligence (AI).

Regulatory sandboxes have received considerable attention since the first use of one by the Financial Conduct Authority in the UK in 2017. Although they first took hold in the financial services sector, they have since attracted interest in other sectors. For example, several European data protection authorities have created privacy regulatory sandboxes (see, e.g., the UK Information Commissioner and France’s CNIL). In Canada, the Ontario Energy Board and the Law Society of Ontario – to give just two examples – both have regulatory sandboxes. Alberta also created a fintech regulatory sandbox by legislation in 2022. Regulatory sandboxes are expected to be an important component of AI regulation in the European Union. Article 57 of the EU Artificial Intelligence Act requires all member states to establish an AI regulatory sandbox – or at the very least to partner with one or more member states to jointly create such a sandbox.

Regulatory sandboxes are seen as a regulatory tool that can be effectively deployed in rapidly evolving technological contexts where existing regulations may create barriers to innovation. In some cases, innovators may hesitate to develop novel products or services where they see no clear pathway to regulatory approval. In many instances, regulators struggle to understand rapidly evolving technologies and the novel business methods they may bring with them. A regulatory sandbox is a space created by a regulator that allows selected innovators to work with regulators to explore how these innovations can be brought to market in a safe and compliant way, and to learn whether and how existing regulations might need to be adapted to a changing technological environment. It is a form of experimental regulation with benefits both for the regulator and for regulated parties.

This is the context in which the federal Policy has been introduced. It defines a regulatory sandbox in these terms:

[A] regulatory sandbox, in the context of this policy, is the practice by which a temporary authorization is provided for innovation (for example, a new product, service, process, application, regulatory and non-regulatory approaches) and is for the purpose of evaluating the real-life impacts of innovation, in order to provide information to the regulator to support the development, management and/or review and assessment of the results of regulations. This can also include for the purposes of equipping the regulatory framework to support innovation, competitiveness or economic growth.

It is important to remember that the policy is anchored in the Red Tape Reduction Act and has a particular slant that sets it apart from other sandbox initiatives. An example of the type of sandbox likely contemplated by this policy can be found in a new regulatory sandbox proposed by Transport Canada to address a very specific regulatory issue arising with respect to the design of aircraft. This sandbox is described as being for “minor change approvals used in support of a major modification.” It is narrow in scope, using modifications to existing regulations to try out a new regulatory process for the certification of major modifications to aircraft design. The end goal is to reduce regulatory burden and to relieve uncertainties caused by existing regulations. Data will be collected from the sandbox experiment to assess the impact of regulatory changes before they might be made permanent.

This approach frames sandboxing as a means to enable innovation by improving existing regulations and streamlining processes. While this is a worthy objective, there is a risk that the policy may be cast too narrowly by focusing on a regulatory sandbox as a means to improve regulation, rather than more broadly as a means of understanding how novel technologies or processes can be brought safely to market – sometimes under existing regulatory frameworks. This is reflected in the policy document, which states that sandboxes proposed under this policy “must demonstrate how regulatory regimes could be modernized”.

The definition of a regulatory sandbox in the Policy, reproduced above, essentially describes a data gathering process by the regulator “to support the development, management and/or review and assessment of the results of regulations.” This can be contrasted with the more open-ended definition adopted in the relatively recent standard for regulatory sandboxes developed by the Digital Governance Standardization Initiative (DGSI):

A regulatory sandbox is a facility created and controlled by a regulator, designed to allow the conduct of testing or experiments with novel products or processes prior to their entry into a regulated marketplace.

Rather than centring on the regulator’s assessment of its own regulations, the DGSI definition focuses on bringing novel products or processes to market, and frames sandboxes in terms of their recognized mutual benefits for both regulators and innovators. Although improving regulations and regulatory processes is a perfectly acceptable outcome of a regulatory sandbox, it is not the only possible outcome – nor is it even a necessary one. In this context, the new federal policy is rather narrow: it places the regulations themselves at the core of sandbox experiments, rather than the ways in which innovative technologies challenge regulatory frameworks.

An example of this latter approach is found in the Law Society of Ontario’s regulatory sandbox for AI-enabled access to justice innovations (A2I). In some cases, innovations of this kind might be characterized as constituting the illegal practice of law, creating a barrier to market entry. In the A2I sandbox, novel products or services are developed and live-tested under supervision to assess whether they can be deployed in a way that is sufficiently protective of the public. The issue is partly a regulatory one – but it is not that any particular regulations necessarily require changing. Rather, innovators need a level of comfort that their innovation will not be blocked by existing regulations, while the regulator needs to understand the emerging technology and how it can fulfil its public protection mandate while supporting useful innovation. One outcome of a sandbox process might be to learn that a particular innovation cannot safely be brought to market.

A similar paradigm exists with privacy regulatory sandboxes, which might either explore ways in which a novel technology can be designed to comply with the legislation, or examine how existing rules should be understood and applied in novel circumstances.

In all cases, the regulator may learn something about how existing regulations might need to adapt to an evolving technological context, and this too is a useful outcome. However, it does not have to be the principal goal of the regulatory sandbox. The federal Policy signals a welcome openness to the concept of regulatory sandboxes, but it appears to be conceived primarily as a tool to help streamline and improve regulatory processes (still a worthy goal) rather than as a more ambitious sandboxing initiative. Unfortunately, this is a rather narrow framing of the nature and potential of this regulatory tool.

 
