This is the third in a series of posts discussing the federal government’s new consultation document on reform of the federal Privacy Act. The previous posts are here and here. This post addresses the second theme in the document: Enhancing accountability and transparency.

Accountability and transparency are important privacy principles, and it is no surprise that the TBS consultation document on reform of the federal Privacy Act addresses these issues in four proposals set out in its second theme. The first of these (Proposal #3 overall in the document) would create a “legal requirement to conduct a privacy impact assessment when a program or activity uses personal data to make a decision about someone”. Privacy impact assessments are currently required under the Directive on Privacy Practices when “personal information is to be used for an administrative purpose”. The consultation paper suggests that the proposal to reform the Privacy Act would “make PIAs a legal requirement instead of a policy requirement.”

Under the proposal, PIAs would be shared with TBS and with the Privacy Commissioner of Canada, who would assess whether they comply with the Privacy Act. The consultation document notes that the incorporation of these existing policy requirements into the law would “not create an additional approval process or delay program implementation”. (See my discussion of pragmatic privacy in my second post in the series.) Although this is framed as a proposal to make an existing obligation more concrete and enforceable, according to the consultation document the PIA requirement would be triggered where there is a new program or a substantial modification to an existing program that uses personal data “to make decisions about people”. This is narrower than what the current policy on PIAs requires, and the difference is significant. I will return to this issue in the discussion of transparency, below.

TBS also proposes to leave the contents of the PIA to policy to allow “the rules to be updated more easily as technologies, risks, and best practices change over time.” This tendency to leave details to policy or regulations is becoming increasingly common in Canadian laws addressing rapidly evolving technologies. Nonetheless, although the law could simply require PIAs to be completed according to a prescribed set of requirements (for example, there is currently a PIA template document for the federal public service), basic elements should still be set out in the law. For example, Alberta’s new public sector Protection of Privacy Act sets out four statutory requirements for PIAs. They must:

26. [. . .] (a) identify and review risks associated with the public body’s collection, use and disclosure of personal information,

(b) develop mitigation strategies and safeguards respecting those risks,

(c) address how the public body will comply with its duties under this Act, and

(d) comply with the prescribed requirements.

Section 38(3) of Ontario’s Freedom of Information and Protection of Privacy Act also provides a list of essential elements of a PIA, along with “any other prescribed elements”. A reformed federal Privacy Act should take the same approach, articulating essential requirements in the law, with other more variable elements to be prescribed.

The consultation paper also proposes requiring the publication of plain language summaries of PIAs, suggesting that these would exclude information that might adversely impact “law enforcement, investigations, or national security”. The publication of plain language PIA summaries would offer an important level of transparency in an accessible format to a broader public. However, the level of detail in a full PIA could still be valuable to researchers and journalists. Both the detailed and plain language versions could be proactively published. After all, algorithmic impact assessments carried out under the Directive on Automated Decision-Making (DADM) are meant to be shared via the open government portal. In the US, PIAs under the E-Government Act of 2002 must be proactively published unless certain exceptions apply.

The second proposal under this theme (Proposal #4 overall) is to create a central registry of personal data holdings and to publish key information on personal data management practices. This system would replace the current Personal Information Banks system along with its classifications of personal data. Instead, there would be “a centralized registry of personal data holdings” (not a centralized data storage repository). The registry would include “privacy notices explaining why data is collected and how it will be used, general descriptions of how personal data is shared between programs, and summaries of PIAs.” Exceptions to disclosure would likely be created for law enforcement or national security, although the consultation document emphasizes that any exceptions should be “limited, specific, and clearly set out in the Act” and would require justification. This recommendation is aimed at modernizing how transparency is provided about government management of its personal data holdings. In the case of horizontal data sharing, it would ensure that the “flow of data between programs would be more clearly articulated”.

The third proposal under this theme (Proposal #5) would establish “transparency requirements for the use of artificial intelligence and automated decision systems that support the right to the correction of personal data”. What is contemplated is an amendment to the Privacy Act to require – at the request of an individual – an explanation of “how an ADS [automated decision system] supported a decision and what personal data was used.” An automated decision system is currently defined in the DADM as “[a]ny technology that either assists or replaces the judgment of human decision makers.” A right to verify the accuracy of the data and to ask for corrections would also be provided. Where an individual believes that an error has been made, they could request a human review of the decision.

The final proposal under this theme (Proposal #6) also deals with automated decision systems and would require notices that explain why data is being collected, for what purposes, and with whom it might be shared. The proposal would add a plain language requirement for such notices and would require them to be posted in the central registry. Additional notices would be required for ADS, and these would “provide a general explanation so the person can understand how the ADS handled their personal data and how the decision was made.” It is not entirely clear whether the ADS notice would be sent directly to affected individuals or placed in the centralized registry, but it seems that it might be the latter.

The recommendations in this part of the proposal are clearly oriented towards automated decision-making. Although the federal Directive on Automated Decision Making (DADM) sets out certain transparency requirements, the DADM does not apply to all of the institutions that fall under the Privacy Act. The proposed reform would not only elevate these transparency requirements to law, but it would also ensure that they extend further across the public sector. While this would be a positive development, it is important to note that the DADM was developed as a form of AI governance, not as a privacy measure. The scope of the DADM is therefore shaped by its focus on automated decision-making. Indeed, TBS states that the transparency/correction requirement “would only apply to ADS that use personal data to make or support decisions that directly affect individuals”, language that echoes that used in the DADM.

This is where the PIA requirement in Proposal #3 and the transparency requirement in Proposal #5 run into potential problems. As noted earlier, the PIA requirement in the consultation document would apply only where a new or modified program uses personal data “to make decisions about people”. (Compare this with the right to an explanation that featured in Bill C-27’s Consumer Privacy Protection Act, which would have applied to systems used to “make a prediction, recommendation or decision about an individual that could have a significant impact on them.”) The scope of this obligation will therefore be determined by how making “decisions about people” is defined. The DADM defines an administrative decision as one that “affects legal rights, privileges or interests”, which appears to be a relatively high threshold. The Guide on the scope of the DADM identifies a list of activities that are both in and out of scope of the Directive. In-scope activities include:

· Triaging client applications based on their complexity as determined through machine-defined criteria

· Examining a financial transaction to estimate the probability of fraud

· Generating an assessment, score or classification about the client

· Generating a summary of relevant client information for officers to determine eligibility to a program

· Presenting information from multiple sources to an officer (such as by data matching and fuzzy matching)

· Using facial recognition or other biometric technology to target subjects for additional scrutiny

· Recommending one or multiple options to the decision maker

· Using an AI resumé-screening tool or skills-based assessment tool to filter top-performing candidates to the interview stage in a recruitment process

· Reviewing client applications for benefits and recommending approval or denial to an officer

· Chatbot that officers use to recommend a course of action

These offer some examples of the fairly wide net cast by the DADM and clearly go beyond some of the most obvious forms of automated decision-making. They help clarify what “decisions about people” means, but any change to the legislation to add transparency and accountability in relation to automated decision-making will need to make crystal clear that language such as “decisions about people” and decisions that affect “legal rights, privileges or interests” is as inclusive as this list. The risk is that, without clear parameters, these rights could be interpreted too narrowly.

 

Published in Privacy

In November 2025, Canada’s Treasury Board Secretariat made available a minimum viable product AI register, intended to form the basis for a consultation on what a register of AI in use in the federal public sector should look like. This dataset is not meant to represent in form or content what the final product will look like. But it is a starting point for a discussion. The consultation closes on March 31, 2026.

It is worth highlighting how significant the idea of a federal AI registry is. We are still in the early days of public sector AI, and there are relatively few precedents for official AI registers. That said, it is clear that this is a trend that is likely to grow. The Dutch government has a national AI register offering a public-facing searchable database that includes entries from national and municipal governments. The UK has a register of “algorithmic tools” used in its public sector. Norway has what is described as an “overview” of AI projects in the public sector, which it cautions is a work in progress. France maintains an inventory of public sector algorithms under the auspices of the Observatoire des algorithmes publics. In the US, Executive Order 13960 requires federal agencies to create an inventory of their AI use cases, and guidance is provided on how to do this. While overview data is provided, each department maintains its own AI Use Case Inventory Library (see an example here). Canada’s decision to create a federal AI Register is an important commitment, and its consultation on what such a register should look like is also significant.

The consultation process is nourished by a dataset made available through Canada’s open data portal. Described as a minimum viable product, this is a pretty rough set of data compiled from different sources. It is really meant as a conversation starter – it provides a glimpse into what is already happening within the federal public sector when it comes to AI, and it prompts users to think about what data they might want to have, and how they might want to see it organized.

The current data set contains 409 separate entries, each with 23 data categories. These represent both French and English versions of the same categories. The categories include a unique identifier for each system, the system’s name and the government department or agency responsible for it. There is a short description of the system, information about primary users and about who developed the system. For procured systems, the name of the vendor is provided. The status of the system is indicated (e.g., in development, in production, or retired), as well as brief descriptions of system capabilities and data sources. Whether the system relies on personal data is also specified, as well as any relevant personal information banks. Whether users are notified of the use of the system is also indicated, and a short description is provided of the expected results of the system.

The AI register seems intended to serve two broad audiences. The first is users from within the federal government. By making its uses of AI systems more transparent internally, the government can avoid duplicative efforts, allow better collaboration across departments and agencies, and perhaps also share ideas for helpful uses of AI tools to streamline different processes. A second audience is the broader public. This audience can include researchers, journalists, academics, civil society organizations, lawyers, developers, and many others seeking to understand how and where the government is using AI systems. The diversity of potential users will impact both how the data are made available and what data points may be of interest.

The fact that the federal AI register seems intended for both internal and external audiences is important and should not be taken for granted. For example, Ontario’s Responsible Use of Artificial Intelligence Directive requires ministries and agencies to report on AI use cases and risk management, with ministries reporting to the Ministry of Public and Business Service Delivery and Procurement on an annual basis. However, this reporting requirement is internal and not public. The Directive only requires public disclosure of the use of an AI system where the public interacts directly with it or where the system is used to make a decision about a member of the public.

Currently, Canada’s AI Register data is available in different formats, including CSV, JSON, TSV and XML. These formats are useful for some types of users, but they are not particularly accessible for a broader public that might require a more user-friendly interface. Ideally, the AI Register should have a public-facing site that makes it easy to search and find results offering straightforward information at a click. The UK’s Register provides an interesting example in this respect: for each algorithm, a standardized list of information is provided. It would also be good to have a dashboard that provides visual representations of how and where AI is used in the federal public sector. This could include overview representations of the data within the Register, but also, perhaps, information about the register itself (e.g., tracking the number of entries over time, or the categories of uses; for an example of a dashboard, see the one created by the Dutch government as part of its AI Register). However, the more granular data should still be available through the open government portal as a downloadable dataset for those who wish to dig into it. This would be a useful resource for researchers, journalists, students, and others.
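For researchers who do download the dataset, a minimal sketch (in Python, using pandas) of the kind of overview analysis described above might look like the following. The file name and column labels here are assumptions for illustration only; the published CSV uses its own (bilingual) headers.

```python
import pandas as pd

# Hypothetical file name: the MVP register can be downloaded from the open
# government portal as CSV, but the exact file name and the column headers
# used below are placeholders, not the dataset's actual labels.
df = pd.read_csv("ai_register_mvp.csv")

# Dashboard-style overviews of the kind described above:
print(df["Responsible organization"].value_counts().head(10))  # which institutions report the most systems
print(df["Status"].value_counts())                              # in development / in production / retired
print(df["Uses personal data"].value_counts())                  # how many systems rely on personal data

# A simple category of use: systems supplied by external vendors.
vendor_systems = df[df["Vendor"].notna()]
print(len(vendor_systems), "procured systems")
```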

AI systems in use across the federal government may also have other data associated with them that it would be useful to be able to access easily. For example, automated decision systems at the federal level are subject to the Directive on Automated Decision-Making and are supposed to have gone through an algorithmic impact assessment (AIA). These assessments are meant to be available through the open government portal (and some are). Providing links to available AIAs would be useful for those who want to know more about a particular system. Similarly, systems that use personal data will have gone through a privacy impact assessment, and many systems will also have gone through a Gender-Based Analysis Plus assessment. Links to any publicly accessible evaluations would be useful, but even if these are not fully publicly available, the register could indicate whether the AI system has gone through such an evaluation, and when it might have been updated.

Other data points that could be considered might include whether there is human oversight and at what point in the process. In the current version of the Register, data sources are identified (e.g., certain categories of documents), but it might also be useful to know what specific data points are relied upon (this is something that is provided, for example, in the Dutch register).

Presumably AI systems in use in the public sector will be monitored and assessed, and data will be gathered on their performance. Are the systems reducing workload or backlogs, and if so, by how much? Are they replacing humans? Saving money? Generating complaints? Are any reports, audits, and assessments publicly available? If so, where? When it comes to assessments and reports, the AI register need not be overburdened with too many data points. However, other relevant information that is proactively published should be easily findable.

Once TBS has decided what data should be in the register, it will need to provide a mechanism to gather this data and to ensure that it is harmonized across the federal public sector. This will likely require providing fillable forms in which terminology is carefully defined.

Generative AI and its use in the public sector will present some interesting challenges for the AI Register. Some uses of generative AI within departments or agencies are likely to be fairly ad hoc (as, for example, when AI is used to translate an email or document received in a language other than French or English). On the other hand, a deliberate choice to use genAI to translate such materials in a context in which they are frequently received might require disclosure. Similarly, the ad hoc use of genAI to summarize reading material may not require disclosure, but a systematic approach to summarizing with genAI in administrative processes should require disclosure (and might require an algorithmic impact assessment). An example of this might be the systematic use of AI to summarize evidence or submissions to an agency or tribunal. Focusing on the nature and extent of use is one way of approaching this. Another might be to assess whether there is a public-facing dimension to the use of genAI. If it is used solely for internal administrative purposes, perhaps disclosure in the registry is less necessary than if it is used in a decision-making process or in communications with the public. This latter approach could get complicated, since it may be difficult to determine which internal administrative uses end up having public-facing dimensions. For example, genAI used in summarizing and report-drafting could have very public dimensions if that work shapes policy documents, white papers, consultation materials or other public-facing content. And, as reliance on agentic AI systems expands, it will also become necessary to think about how agentic AI use cases are recorded and documented within the register.

There may also be uses that the government decides should not be in the Register for reasons related to cybersecurity, national security or law enforcement practices, for example. Certainly, disclosing which AI systems are used to protect against cyberattacks or are deployed in the national security context may be contrary to the public interest. Law enforcement is a trickier category, as there are some types of systems (e.g., predictive policing, facial recognition technology) for which transparency and accountability seem squarely in the public interest. (Note that the Dutch database contains 13 entries related to policing, including both FRT and predictive policing models.) Others (e.g., particular fraud detection algorithms) may require more circumspection.

A final point is to consider how often departments and agencies will be required to update their entries. Systems evolve and acquire new functionalities all the time. Sometimes modifications are significant enough to warrant new AIAs or PIAs. Whatever choices are made for the launch of Canada’s AI Register, the Register itself should be part of an iterative process, subject to periodic reviews and updates, and open to user feedback.

 

Published in Privacy

The Province of Manitoba has three bills currently before the legislature that address AI-related issues.

The first of these is Bill 2, which proposes amendments to the province’s Non-Consensual Distribution of Intimate Images Act. Unlike some of its provincial counterparts, the original law (dating from 2015) already applies to both real and fake intimate images. The amendments will change the definition of an intimate image to include images in which a person is nearly nude. The definition will also extend to personal intimate images in which the individual is not identifiable. This will address circumstances, for example, where a former partner is threatening to disclose an intimate image in which a person is not readily identifiable, but where she knows that it depicts her. The bill also creates a new tort of threatening to disclose an intimate image. It makes explicit the power of the courts to issue orders against internet intermediaries. Interestingly, the bill will also limit the liability of internet intermediaries that have “taken reasonable steps to address unlawful distribution of intimate images” in the use of their services (s. 15.1(1)).

Bill 49, The Business Practices Amendment Act, proposes amendments to the provincial statute that sets out unfair business practices. The proposed changes will address the use of algorithms and big data to generate dynamic prices that are different for different consumers. Specifically, the following two practices will be added as unfair practices:

(r.1) where the price of a part of the consumer transaction is displayed by way of an electronic shelf labelling system, demanding a higher price from the consumer at the point of sale due to personalized algorithmic pricing in respect of that consumer;

and

(v) in the case of an online retailer or online distributor, the use of personalized algorithmic pricing to increase the price of the goods demanded from the consumer.

The bill defines personalized algorithmic pricing as occurring where personal data about the consumer are “collected, analyzed or processed with or without the consumer’s consent, knowledge or involvement”. This is important as it makes any consent to use of personal information in a long and obscure privacy policy irrelevant to the issue of the fairness of the business practice. The types of personal data that might be used in this way form a lengthy list that includes browsing or purchasing history, spending patterns, inferences about the consumer’s willingness to enter into the transaction, demographics, socio-economic status, credit history, location, medical history, and so on.

This important measure comes at a time when price discrimination practices are on the rise (see research from Pascale Chapdelaine here and here) and are typically invisible to the consumer. After all, if you are shopping online and are offered goods at a particular price, it would require considerable effort to determine whether someone else is being offered the same goods at a different price. That said, the amendment does not address the potential for dynamic surge pricing. Recent reporting on patents obtained by Walmart suggests that the company may be looking to use dynamic pricing on digital price displays on store shelves to adjust prices based on demand in real time. The capacity to adjust prices based on who is shopping – and when – will have significant implications for consumers, and it will be important for consumer-oriented legislation to anticipate and address these issues.

Last but not least, Bill 51, the Public Sector Artificial Intelligence and Cybersecurity Governance Act, is highly reminiscent of Ontario’s Enhancing Digital Security and Trust Act (EDSTA), which was enacted in 2024. Like the EDSTA, Manitoba’s Bill 51 creates a legislative framework for the governance of public sector artificial intelligence (AI) on the one hand, and for cybersecurity measures for the public sector on the other. Like the EDSTA, this is a ‘plug and play’ framework. The statute itself, if enacted, will require prescribed public sector entities to comply with obligations that are established in the regulations. The goal is to have a flexible framework that can adapt to changing technologies and circumstances through amendments to regulations and/or standards, which can be made more quickly than legislative amendments. The catch is that without regulations, the law is nothing more than words on a page. Ontario’s EDSTA, which took effect over a year ago on January 29, 2025, has resulted in that most flexible of regulatory frameworks for public sector AI known as “none”. Although regulations have been proposed for the portions of the EDSTA dealing with Cyber Security and Digital Technology Affecting Individuals Under 18, no regulations are yet in sight for AI in the public sector. Hopefully, Manitoba’s Bill 51 will not serve as an empty policy placeholder.

 

Published in Privacy
Monday, 09 February 2026 07:15

Canada's AI Strategy: Some Reflections

The Department of Innovation, Science and Economic Development (ISED) has released the results of the consultation it carried out in advance of its development of the latest iteration of its AI Strategy. The consultation had two components. The first was a Task Force on AI – a group of experts tasked with consulting their peers to develop their views. The experts were assigned to specified themes (research and talent; adoption across industry and government; commercialization of AI; scaling our champions and attracting investment; building safe AI systems and public trust in AI; education and skills; infrastructure; and security). The second component was a broad public consultation asking for either answers to an online survey or emailed free-form submissions. This post offers some reflections on the process and its outcomes.

1. The controversy over the consultation

The consultation process generated controversy. One reason for this was the sudden launch and short timelines. Submissions from the public were sought within a month, and Task Force members were initially expected to consult their peers and report in the month following the launch of the consultation. In the end, the Task Force Reports were not published until early February – the timelines were simply unrealistic. However, there was no extension for the public consultation. The Summary of Inputs on the consultation refers to it as “the largest public consultation in the history of Innovation Science and Economic Development Canada, generating important ideas, questions and legitimate concerns to take into consideration in the drafting of the strategy” (at page 3). The response signals how important the issue is to Canadians and how much they want to be heard. One has to wonder how many submissions ISED might have received with longer timelines. Short deadlines favour those with time and resources. Civil society organizations, small businesses, and individuals with full workloads (domestic and professional) find short timelines particularly challenging. Running a “sprint” consultation favours participation from some groups over others.

Another point of controversy was the lack of diversity of the Task Force. The government was roundly criticized for putting together a Task Force with no representation from Canada’s Black communities, particularly given the risks of bias and discrimination posed by AI technologies. A letter to this effect was sent to the Minister of AI, the Prime Minister, and the leaders of Canada’s other political parties by a large group of Black academics and scholars. Following this, a Black representative – a law student – was hurriedly added to the Task Force.

An open letter to the Minister of Artificial Intelligence from civil society organizations and individuals also denounced the consultation, arguing that the deadline should be extended and that the Task Force should be more equitably representative. The letter noted that civil society groups, human rights experts, and others were absent from the Task Force panel. The group was also critical of the online survey for being biased towards particular outcomes. It indicated that it would be boycotting the consultation and has now set up its own People’s Consultation on AI, which is accepting submissions until March 15, 2026.

These controversies highlight a major stumble in developing the AI Strategy. The lack of consultation around the failed Artificial Intelligence and Data Act in Bill C-27, and the criticism this generated, should have been a lesson to ISED on how important the issues raised by AI are to the public and how much they want to be heard. The Summary makes no mention of the controversy the consultation generated. Nevertheless, the criticisms and pushback are surely an important part of the outcome of this process.

2. Some thoughts on Transparency

ISED has not only published a summary of the results of its consultation and of the Task Force Reports, it has also published the raw data from the consultation on the open government portal, along with the individual task force reports. This seems to be in line with a new commitment to greater transparency around AI – in the fall of 2025 the federal government also published a beta version of a register of AI in use within the federal public service. These are positive developments, although it is worth watching to see if tools like the AI register are refined, improved, and updated.

ISED was also transparent about its use of generative AI to process the results of the consultation. Page 16 of the summary document explains how it used (unspecified) LLMs to create a “classification pipeline” to “clean survey responses and categorize them into a structured set of themes and subthemes”. The report also describes the use of human oversight to ensure that there was “at least a 90% success rate in categorizing responses into specific intents”. ISED explains that it consulted research experts about its methodology and indicates that the methods used conformed with the recent Treasury Board Guide on the use of generative artificial intelligence. The declaration on the use of AI indicates that the output was used to produce the final report, which is apparently a combination of human authorship and extracts from the AI-generated content.
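For readers curious about what such a classification pipeline might look like in practice, here is a minimal sketch assuming an OpenAI-style chat completion API. ISED has not disclosed the models, prompts or theme lists it used, so everything named below (the model, the themes, the prompt) is an illustrative assumption rather than the department’s actual methodology.

```python
# Illustrative sketch only: ISED's actual models, prompts and themes are not public.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder theme set; the real pipeline categorized responses into themes and subthemes.
THEMES = ["governance and regulation", "adoption", "infrastructure",
          "skills and education", "other"]

def classify_response(text: str) -> str:
    """Ask the model to assign a single theme to a cleaned survey response."""
    prompt = (
        "Classify the following consultation response into exactly one of these "
        f"themes: {', '.join(THEMES)}. Reply with the theme only.\n\n{text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,         # deterministic labelling
    )
    label = (resp.choices[0].message.content or "").strip().lower()
    return label if label in THEMES else "other"

# Human oversight of the kind described in the summary document would involve
# reviewers checking a sample of labelled responses against these outputs and
# measuring how often the assigned intent is judged correct.
```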

It would frankly be astonishing if generative AI tools have not already been used in other contexts to process submissions to government consultations (but likely without having been disclosed). As a result, the level of transparency about their use here is important. This is illustrated by my colleague Michael Geist’s criticisms of the results of ISED’s use of AI. He ran the Task Force reports through two (identified) LLMs and noted differences between the analysis he generated and ISED’s. He argues that “the government had not provided the public with the full picture” and posits that the results were softened by ISED to suggest a consensus that is not actually present. Putting a particular spin on things is not exclusively the result of the use of AI tools – humans do this all the time. However, explaining how results were arrived at using a technological system can create an impression of objectivity and scientific rigour that can mislead, and this underscores the importance of Prof. Geist’s critique.

It is worth noting that it is the level of transparency provided by ISED that made this analysis and critique possible. The immediacy of the publication of the data on which the report was based is important as well: no prolonged access to information request process was necessary here. This approach should become standard government practice.

3. AI Governance/Regulation

The consultation covered many themes, and the AI Strategy is clearly intended to be about more than just how to regulate or govern AI. In fact, one could be forgiven for thinking that the AI Strategy will be about everything except governance and regulation, given the limited expertise from these areas on the Task Force. The focus areas emphasized adoption of, investment in, and scaling of AI innovation, as well as strengthening sovereign infrastructure. Among the focus areas, only “public trust, skills and safety” gives a rather offhand nod to governance and regulation.

That said, reading between the lines of the summary of inputs, Canadians are concerned about AI governance and regulation. This can be seen in statements such as “Respondents…urged Canada to prioritize responsible governance” (p. 7). Respondents also called for “meaningful regulation” (p. 8) and reminded the government of the need to “modernize regulations” (p. 8). There were also references to “accountable and robust governance” (p. 8) and “strict regulation, penalties for non-compliance and frameworks that uphold Canadian values” (p. 8) when it comes to generative AI. There were also calls for “strict liability laws” (p. 9), and concerns expressed over “lack of regulation and accountability” (p. 9).

One finds these snippets throughout the summary document, which suggests that meaningful regulation was a matter of real concern for respondents. However, the “Conclusions and next steps” section of the report mentions only the need for “regulatory clarity” and streamlined regulatory frameworks – neither of which is a bad thing, but neither of which is really about new regulation or governance. Instead, the report concludes that: “There was general consensus among participants that public trust depends on transparency, accountability, and robust governance, supported by certification standards, independent audits and AI literacy programs” (p. 15, my emphasis). While those tools are certainly part of a regulatory toolkit for AI, on their own and outside of a framework that builds in accountability and oversight, they are basically soft-law and self-regulation. This feels like a rather convenient consensus around where the government was likely heading in the first place.

 

Published in Privacy

The Ontario and British Columbia Information and Privacy Commissioners each released new guidance on AI medical scribes on Privacy Day (January 28, 2026). This means that, along with Alberta and Saskatchewan, a total of four provincial information and privacy commissioners have now issued similar guidance. BC’s guidance is aimed at health care practitioners running their own practices and governed by the province’s Personal Information Protection Act. It does not extend to health authorities and hospitals that fall under the province’s Freedom of Information and Protection of Privacy Act. Ontario’s guidance is for both public institutions and physicians in private practice who are governed by the Personal Health Information Protection Act.

This flurry of guidance on AI scribes shows how privacy regulators are responding to the very rapid adoption in the Canadian health sector of an AI tool that raises sometimes complicated privacy issues with a broad public impact.

At its most basic level, an AI medical scribe is a tool that records a doctor’s interaction with their patient. The recording is then transcribed by the scribe, and a summary is generated that can be cut and pasted by the doctor into the patient’s electronic medical record (EMR). The development and adoption of AI scribes has been rapid, in part because physicians have been struggling with both significant administrative burdens and burnout. This is particularly acute in the primary care sector. AI scribes offer the promise of better patient care (doctors are more focused on the patient as they are freed from notetaking during appointments), as well as a potentially significant reduction in time spent on administrative work.

AI medical scribes raise a number of different privacy issues. These can include issues relating to the scribe tool itself (for example, how good is the data security of the scribe company? What kind of personal health information (PHI) is stored, where, and for how long? Are secondary uses made of de-identified PHI? Is the scribe company’s definition of de-identification consistent with the relevant provincial health information legislation?). They may also include issues around how the technology is adopted and implemented by the physician (including, for example, whether the physician retains the full transcription as well as the chart summary and for how long; what data security measures are in place within the physician’s practice; and how consent is obtained from patients to the use of this tool). As the BC IPC’s guidance notes, “What distinguishes an AI scribe’s collection of personal information from traditional notetaking with a pen and notepad is that there are many processes taking place with an AI scribe that are more complex, potentially more privacy invasive, and less obvious to the average person” (at 5).

AI scribes raise issues other than privacy that touch on patient data. In its guidance, Ontario’s IPC notes the human rights considerations raised by AI scribes and refers to its recent AI Principles issued jointly with the Ontario Human Rights Commission (which I have written about here). The quality of AI technologies depends upon the quality of their training data. Where training data does not properly represent the populations impacted by the tool, there can be bias and discrimination. Concerns exist, for example, about how well AI scribes will function for patients (or physicians) with accents, or for those with speech impaired by disease or disability. Certainly, the accuracy of personal health information that is recorded by the physician is a data protection issue; it is also a quality of health care issue. There are concerns that busy physicians may develop automation bias, increasingly trusting the scribe tool and reducing time spent on reviewing and correcting summaries – potentially leading to errors in the patient’s medical record.

AI scribes are being adopted by individual physicians, but they are also adopted and used within institutions – either with the engagement of the institution, or as a form of ‘shadow use’. A recent response by Ontario’s IPC to a breach involving the use of a general-purpose AI scribe illustrates how complex the privacy issues may be in such a case (I have written about this incident here). In that case, the scribe tool ‘attended’ nephrology rounds at a hospital, transcribed the meeting, sent a summary to all 65 people on the mailing list for the meeting, and provided a link to the full transcript. The summary and transcript contained the sensitive personal information of the patients seen on those rounds. Complicating the matter was the fact that the physician whose scribe attended the meeting was no longer even at the hospital.

Privacy commissioners are not the only ones who have stepped up to provide guidance and support to physicians in the choice of AI scribe tools. OntarioMD, for example, conducted an evaluation of AI medical scribes, and is assisting in assessing and recommending scribing tools that are considered safe and compliant with Ontario law.

Of course, scribe technologies are not standing still. It is anticipated that these tools will evolve to include suggestions for physicians for diagnosis or treatment plans, raising new and complex issues that will extend beyond privacy law. As the BC guidance notes, some of these tools are already being used to “generate referral letters, patient handouts, and physician reminders for ordering lab work and writing prescriptions for medication” (at 2). Further, this is a volatile area in which scribe tools are likely to be acquired by EMR companies to integrate with their offerings, reducing the number of companies and changing the profile of the tools. The mutable tools and volatile context might suggest that guidance is premature, but the AI era is presenting novel regulatory challenges, and this is an example of guidance designed not to consolidate and structure rules and approaches that have emerged over time, but rather to reduce risk and harm in a rapidly evolving context. Regulator guidance may serve other goals here as well, as it signals to developers and to EMR companies the design features that will be important for legal compliance. Both the BC and Ontario guidance caution that function creep will require those who adopt and use these technologies to be alert to potential new issues that may arise as the adopted tools’ functionalities change over time.

Note: Daniel Kim and I have written a paper on the privacy and other risks related to AI medical scribes which is forthcoming in the TMU Law Review. A pre-print version can be found here: Scassa, Teresa and Kim, Daniel, AI Medical Scribes: Addressing Privacy and AI Risks with an Emergent Solution to Primary Care Challenges (January 07, 2025). (2025) 3 TMU Law Review, Available at SSRN: https://ssrn.com/abstract=5086289

 

Published in Privacy

Ontario’s Office of the Information and Privacy Commissioner (IPC) and Human Rights Commission (OHRC) have jointly released a document titled Principles for the Responsible Use of Artificial Intelligence.

Notably, this is the second collaboration of these two institutions on AI governance. Their first was a joint statement on the use of AI technologies in 2023, which urged the Ontario government to “develop and implement effective guardrails on the public sector’s use of AI technologies”. This new initiative, oriented towards “the Ontario public sector and the broader public sector” (at p. 1), is interesting because it deepens the cooperation between the IPC and the OHRC in relation to a rapidly evolving technology that is increasingly used in the public sector. It also fills a governance gap left by the province’s delay in developing its public sector AI regulatory framework.

In 2024, the Ontario government enacted the Enhancing Digital Security and Trust Act, 2024 (EDSTA), which contains a series of provisions addressing the use of AI in the broader public sector (which includes hospitals and universities). It also issued the Responsible Use of Artificial Intelligence Directive, which sets basic rules and principles for Ontario ministries and provincial agencies. The Directive is currently in force and is built around principles similar to those set out by the IPC and OHRC. It outlines a set of obligations for ministries and agencies that adopt and use AI systems. These include transparency, risk management, risk mitigation, and documentation requirements. The EDSTA, which would have a potentially broader application, creates a framework for transparency, accountability, and risk management obligations, but the actual requirements have been left to regulations. Those regulations will also determine to whom any obligations will apply. Although the EDSTA can apply to all actors within the public sector, broadly defined, its obligations can be tailored by regulations to specific departments or agencies, and can include or exclude universities and hospitals. There has been no obvious movement on the drafting of the regulations needed to breathe life into EDSTA’s AI provisions.

It is clear that AI systems will have both privacy and human rights implications, and that both the IPC and the OHRC will have to deal with complaints about such systems in relation to matters within their respective jurisdictions. As the Commissioners put it, the principles “will ground our assessment of organizations’ adoption of AI systems consistent with privacy and human rights obligations.” (at p. 1) The document clarifies what the IPC and OHRC expect from institutions. For example, conforming to the “Valid and reliable” principle will require compliance with independent testing standards, and objective evidence will be required to demonstrate that systems “fulfil the intended requirements for a specified use or application”. (at p. 3) The safety principle also requires demonstrable cybersecurity protection and safeguards for privacy and human rights. The Commissioners also expect institutions to provide opportunities for access to and correction of individuals’ personal data, both used in and generated by AI systems. The “Human rights affirming” principle includes a caution that public institutions “should avoid the uniform use of AI systems with diverse groups”, since such practices could lead to adverse effects discrimination. The Commissioners also caution against uses of systems that may “unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another.” (at p. 6)

The Commissioners’ “Transparency” principle requires that the public sector’s use of AI be visible. The IPC’s mandate covers both access to information and privacy. The Principles state that the documentation required for the “public account” of AI use “may include privacy impact assessments, algorithmic impact assessments, or other relevant materials.” (at p. 6) There must also be transparency regarding “the sources of any personal data collected and used to train or operate the system, the intended purposes of the system, how it is being used, and the ways in which its outputs may affect individuals or communities.” (at p. 6)

The Principles also require that systems used in the public sector be understandable and explainable. The accountability principle requires public sector institutions to document design and application choices and to be prepared to explain how the system works to an oversight body. They should also establish mechanisms to receive and respond to complaints and concerns. The Principles call for whistleblower protections to support reporting of non-compliant systems.

The joint nature of the Principles highlights how issues relating to AI do not easily fall within the sole jurisdiction of any one regulator. It also highlights that the dependence of AI systems on data – often personal data or de-identified personal data – carries with it implications both for privacy and human rights.

That the IPC and OHRC will have to deal with complaints and investigations that touch on AI issues is indisputable. In fact, the IPC has already conducted formal and informal investigations that touch on AI-enabled remote proctoring, AI scribes, and vending machines on university campuses that incorporate face-detection technologies. The Principles offer important insights into how these two oversight bodies see privacy and human rights intersecting with the adoption and use of AI technologies, and what organizations should be doing to ensure that the systems they procure, adopt and deploy are legally compliant.

 

 

Published in Privacy
Monday, 05 January 2026 08:32

Canada's New Regulatory Sandbox Policy

In November 2025, Canada’s federal government published a new Policy on Regulatory Sandboxes in anticipation of amendments to the Red Tape Reduction Act that had been announced in the 2024 budget. This development deserves some attention, particularly as the federal government embraces a pro-innovation agenda and shifts its approach to regulation of innovative technologies such as artificial intelligence (AI).

Regulatory sandboxes have received considerable attention since the first use of one by the Financial Conduct Authority in the UK in 2017. Although they first took hold in the financial services sector, they have since attracted interest in other sectors. For example, several European data protection authorities have created privacy regulatory sandboxes (see, e.g., the UK Information Commissioner and France’s CNIL). In Canada, the Ontario Energy Board and the Law Society of Ontario – to give just two examples – both have regulatory sandboxes. Alberta also created a fintech regulatory sandbox by legislation in 2022. Regulatory sandboxes are expected to be an important component in AI regulation in the European Union. Article 57 of the EU Artificial Intelligence Act requires all member states to establish an AI regulatory sandbox – or at the very least to partner with one or more member states to jointly create such a sandbox.

Regulatory sandboxes are seen as a regulatory tool that can be effectively deployed in rapidly evolving technological contexts where existing regulations may create barriers to innovation. In some cases, innovators may hesitate to develop novel products or services where they see no clear pathway to regulatory approval. In many instances, regulators struggle to understand rapidly evolving technologies and the novel business methods they may bring with them. A regulatory sandbox is a space created by a regulator that allows selected innovators to work with regulators to explore how these innovations can be brought to market in a safe and compliant way, and to learn whether and how existing regulations might need to be adapted to a changing technological environment. It is a form of experimental regulation with benefits both for the regulator and for regulated parties.

This is the context in which the federal Policy has been introduced. It defines a regulatory sandbox in these terms:

[A] regulatory sandbox, in the context of this policy, is the practice by which a temporary authorization is provided for innovation (for example, a new product, service, process, application, regulatory and non-regulatory approaches) and is for the purpose of evaluating the real-life impacts of innovation, in order to provide information to the regulator to support the development, management and/or review and assessment of the results of regulations. This can also include for the purposes of equipping the regulatory framework to support innovation, competitiveness or economic growth.

It is important to remember that the policy is anchored in the Red Tape Reduction Act and has a particular slant that sets it apart from other sandbox initiatives. An example of the type of sandbox likely contemplated by this policy can be found in a new regulatory sandbox proposed by Transport Canada to address a very specific regulatory issue arising with respect to the design of aircraft. This sandbox is described as being for “minor change approvals used in support of a major modification.” It is narrow in scope, using modifications to existing regulations to try out a new regulatory process for the certification of major modifications to aircraft design. The end goal is to reduce regulatory burden and to relieve uncertainties caused by existing regulations. Data will be collected from the sandbox experiment to assess the impact of regulatory changes before they might be made permanent.

This approach frames sandboxing as a means to enable innovation by improving existing regulations and streamlining processes. While this is a worthy objective, there is a risk that the policy may be cast too narrowly by focusing on a regulatory sandbox as a means to improve regulation, rather than more broadly as a means of understanding how novel technologies or processes can be brought safely to market – sometimes under existing regulatory frameworks. This is reflected in the policy document, which states that sandboxes proposed under this policy “must demonstrate how regulatory regimes could be modernized”.

The definition of a regulatory sandbox in the Policy, reproduced above, essentially describes a data gathering process by the regulator “to support the development, management and/or review and assessment of the results of regulations.” This can be contrasted with the more open-ended definition adopted in the relatively recent standard for regulatory sandboxes developed by the Digital Governance Standardization Initiative (DGSI):

A regulatory sandbox is a facility created and controlled by a regulator, designed to allow the conduct of testing or experiments with novel products or processes prior to their entry into a regulated marketplace.

Rather than focus on the regulator conducting an assessment of its regulations, the DGSI definition is focused on innovative products and processes, and frames sandboxes in terms of their recognized mutual benefits for both regulators and innovators. The focus of the DGSI’s sandbox definition is on the bringing to market of novel products or processes. Although improving regulations and regulatory processes is a perfectly acceptable outcome of a regulatory sandbox, it is not the only possible outcome – nor is it even a necessary one. In this context, the new federal policy is rather narrow. It places the regulations themselves at the core of the sandbox experiments, rather than the ways in which innovative technologies challenge regulatory frameworks.

An example of this latter approach is found in the Law Society of Ontario’s regulatory sandbox for AI-enabled access to justice innovations (A2I). In some cases, innovations of this kind might be characterized as constituting the illegal practice of law, creating a barrier to market entry. In the A2I sandbox, the novel products or services are developed and live-tested under supervision to assess whether they can be deployed in a way that is sufficiently protective of the public. The issue is partly a regulatory one – but it is not that any particular regulations necessarily require changing – rather, it is that innovators need a level of comfort that their innovation will not be blocked by existing regulations. At the same time, the regulator needs to understand the emerging technology and how it can fulfil its public protection mandate while supporting useful innovation. One outcome of a sandbox process might be to learn that a particular innovation cannot safely be brought to market.

A similar paradigm exists with privacy regulatory sandboxes, which might either explore ways in which a novel technology can be designed to comply with the legislation, or examine how existing rules should be understood and applied in novel circumstances.

In all cases, the regulator may learn something about how existing regulations might need to adapt to an evolving technological context, and this too is a useful outcome. However, it does not have to be the principal goal of the regulatory sandbox. While the federal Policy signals a welcome openness to the concept of regulatory sandboxes, it seems narrowly focused. It appears to be conceived of primarily as a tool to help streamline and improve regulatory processes (still a worthy goal) rather than as a more ambitious sandboxing initiative. Unfortunately, this is a rather narrow framing of the nature and potential of this regulatory tool.

 

Published in Privacy
Saturday, 29 November 2025 14:42

Canada launches its beta AI Register

Canada’s federal government has just released an early version of the AI Register it promised after its election earlier this year.

An AI Register is an important transparency tool – it will help researchers and the broader public understand what AI-enabled tools are in use in the federal public sector and will provide basic information about them. The government also intends the register to be a resource for the public sector – allowing different departments and agencies to better see what others are doing, so as to avoid duplication and to learn from each other.

The information accompanying the Register (which is published on Canada’s open government portal) indicates that this is a “Minimum Viable Product”. This means that it is “an early version with only basic features and content that is used to gather feedback.” It will be interesting to see how it develops over time.

One interesting aspect of the register is that it states that it was “assembled from existing sources of information, including Algorithmic Impact Assessments, Access to Information requests, responses to Parliamentary Questions, Personal Information Banks, and the GC Service Inventory.” Since it contains 409 entries at the time of writing, and since there are only a few dozen published Algorithmic Impact Assessments (AIAs), this suggests that the database was compiled largely using sources other than AIAs. The reference to access to information requests suggests that some of the data may have been gathered using the TAG Register Canada, laboriously compiled by Joanna Redden and her team at Western University. The sources for the TAG Register also included access to information requests and responses to questions by Members of Parliament. Prior to the development of the federal AI Register, the TAG Register was probably the most important source of information about public sector AI in Canada. The TAG Register is not made redundant by the new AI Register – it contains additional information about the systems, derived from its source materials.

The federal AI Register sets out the name of each system and provides a description. It indicates who the primary users are and which government organization is responsible for it. Other fields indicate whether the system was developed in-house or furnished by a vendor (and if so, which one), and whether it is in development, in production, or retired. There is a brief description of the system’s capabilities, some information about the data sources used, and an indication of whether it uses personal data. The register also indicates whether users are given notice of its use, and includes a brief description of the expected outcomes of using the system.
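To make the structure of a register entry concrete, the fields described above might be represented roughly as follows. This is a minimal sketch in Python; the field names and types are my own shorthand, not the register’s actual column names or controlled vocabularies.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIRegisterEntry:
        """One entry in the register (hypothetical field names)."""
        name: str                      # name of the AI-enabled system
        description: str               # plain-language description of the system
        responsible_organization: str  # department or agency responsible for the system
        primary_users: str             # who the primary users are
        developed_in_house: bool       # True if built in-house, False if vendor-supplied
        vendor: Optional[str]          # vendor name, where applicable
        status: str                    # e.g. "in development", "in production", "retired"
        capabilities: str              # brief description of the system's capabilities
        data_sources: str              # information about the data sources used
        uses_personal_data: bool       # whether the system uses personal data
        notice_to_users: bool          # whether users are given notice of the system's use
        expected_outcomes: str         # brief description of expected outcomes of use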

All in all, it’s a good start, and clearly the developers of this database are open to feedback. (For example, I would like to see a link to the Algorithmic Impact Assessment under the Directive on Automated Decision-Making, if such an assessment has been carried out).

This is an important transparency initiative, and it will be a good source of data for researchers interested in public sector AI. It is also an interesting model that provincial governments might want to consider as they also roll out AI use across their public sectors.

 

Published in Privacy

Regulatory sandboxes are a relatively recent innovation in regulation (the first was launched by the UK Financial Conduct Authority in 2015). Since that time, they have spread rapidly in the fintech sector. The EU’s new Artificial Intelligence Act has embraced this new tool, making AI regulatory sandboxes mandatory for member states. In its most recent budget, Canada’s federal government also revealed a growing interest in advancing the use of regulatory sandboxes, although sandboxes are not mentioned in the ill-fated Artificial Intelligence and Data Act in Bill C-27.

Regulatory sandboxes are seen as a tool that can support innovation in areas where complex technology evolves rapidly, creating significant regulatory hurdles for innovators to overcome. The goal is not to evade or dilute regulation; rather, it is to create a space where regulators and innovators can explore how regulations designed to protect the public should be applied to technologies that were unforeseen at the time the regulations were drafted. The sandbox is meant to be a learning experience for both regulators and innovators. Outcomes can include new guidance that can be shared with all innovators; recommendations for legislative or regulatory reform; or even decisions that a particular innovation is not yet capable of safe deployment.

Of course, sandboxes can raise issues about regulatory capture and the independence of regulators. They are also resource-intensive, requiring regulators to make choices about how best to allocate limited resources in pursuit of their goals. They require careful design to minimize risks and maximize returns, as well as the interest and engagement of regulated parties.

In the autumn of 2023, Elif Nur Kumru and I began an SSHRC-funded project to explore the potential for a privacy regulatory sandbox for Ontario. Working in partnership with the Office of the Information and Privacy Commissioner of Ontario, we examined the history and evolution of regulatory sandboxes. We met with representatives of data protection authorities in the United Kingdom, Norway and France to learn about the regulatory sandboxes they had developed to address privacy issues raised by emerging technologies, including artificial intelligence. We identified some of the challenges and issues, as well as key features of regulatory sandboxes. Our report is now publicly available in both English and French.

Published in Privacy

A recent decision of the Federal Court of Canada (Ali v. Minister of Public Safety and Emergency Preparedness) highlights the role of judicial review in addressing automated decision-making. It also prompts reflection on the limits of emerging codified rights to an explanation.

In July 2024, Justice Battista overturned a decision of the Refugee Protection Division (RPD) which had vacated the refugee status of the applicant, Mr. Ali. The RPD’s decision was based largely on a photo comparison that led the RPD to conclude that Mr. Ali was not a Somali refugee as he had claimed. Rather, it concluded that he was a Kenyan student who had entered Canada on a student visa in 2016, a few months prior to Mr. Ali’s refugee protection claim.

Throughout the proceedings the applicant had sought information about how photos of the Kenyan student had been found and matched with his own. He was concerned that facial recognition technology (FRT) – which has had notorious deficiencies when used to identify persons of colour – had been used. In response, the Minister denied the use of FRT, maintaining instead that the photographs had been found and analyzed through a ‘manual process’. A Canada Border Services Agency agent subsequently provided an affidavit to the effect that “a confidential manual investigative technique was used” (at para 15). The RPD was satisfied with this assurance. It considered that how the photographs had been gathered was irrelevant to its own capacity as a tribunal to decide based on the photographs before it, and it concluded that Mr. Ali had misrepresented his identity.

On judicial review, Justice Battista found that the importance of the decision to Mr. Ali and the quasi-judicial nature of the proceedings meant that he was owed a high level of procedural fairness. Because a decision of the RPD cannot be appealed, and because the consequences of revocation of refugee status are very serious (including loss of permanent resident status and possible removal from the country), Justice Battista found that “it is difficult to find a process under [the Immigration and Refugee Protection Act] with a greater imbalance between severe consequences and limited recourse” (at para 23). He found that the RPD had breached Mr. Ali’s right to procedural fairness “when it denied his request for further information about the source and methodology used by the Minister in obtaining and comparing the photographs” (at para 28).

Justice Battista ruled that, given the potential consequences for the applicant, disclosure of the methods used to gather the evidence against him “had to be meaningful” (at para 33). He concluded that it was unfair for the RPD “to consider the photographic evidence probative enough for revoking the Applicant’s statuses and at the same time allow that evidence to be shielded from examination for reliability” (at para 37).

In addition to finding a breach of procedural fairness, Justice Battista also found that the RPD’s decision was unreasonable. He noted that there had been sufficiently credible evidence before the original RPD refugee determination panel to find that Mr. Ali was a Somali national entitled to refugee protection. None of this evidence had been assessed in the decision of the panel that vacated Mr. Ali’s refugee status. Justice Battista noted that “[t]he credibility of this evidence cannot co-exist with the validity of the RPD vacation panel’s decision” (at para 40). He also noted that the applicant had provided an affidavit describing differences between his photo and that of the Kenyan student; this evidence had not been considered in the RPD’s decision, contributing to its unreasonableness. The RPD also dismissed evidence from a Kenyan official that, based on biometric records analysis, there was no evidence that Mr. Ali was Kenyan. Justice Battista noted that this dismissal of the applicant’s evidence was in “stark contrast to its treatment of the Minister’s photographic evidence” (at para 44).

The Ali decision and the right to an explanation

Ali is interesting to consider in the context of the emerging right to an explanation of automated decision-making. Such a right is codified for the private sector context in the moribund Bill C-27, and Quebec has enacted a right to an explanation for both the public and private sector contexts. These rights would apply in cases where an automated decision system (ADS) has been used (and, in the case of Quebec, only where the decision is based “exclusively on an automated processing” of personal information). Yet in Ali there is no proof that the decision was made or assisted by an AI technology – in part because the Minister refused to explain their ‘confidential’ process. Further, the ultimate decision was made by humans. It is unclear how a codified right to an explanation would apply if the threshold for the exercise of the right is based on the obvious and/or exclusive use of an ADS.

It is also interesting to consider the outcome here in light of the federal Directive on Automated Decision-Making (DADM). The DADM, which largely addresses the requirements for the design and development of ADS in the federal public sector, incorporates principles of fairness. It applies to “any system, tool, or statistical model used to make an administrative decision or a related assessment about a client”. It defines an “automated decision system” as “[a]ny technology that either assists or replaces the judgment of human decision-makers […].” In theory, this would include the use of automated systems such as FRT that assist in human decision-making. Where an ADS is developed and used, the DADM imposes transparency obligations, which include an explanation in plain language of:

  • the role of the system in the decision-making process;
  • the training and client data, their source, and method of collection, as applicable;
  • the criteria used to evaluate client data and the operations applied to process it;
  • the output produced by the system and any relevant information needed to interpret it in the context of the administrative decision; and
  • a justification of the administrative decision, including the principal factors that led to it. (Appendix C)

The catch, of course, is that it might be impossible for an affected person to know whether a decision has been made with the assistance of an AI technology, as was the case here. Further, the DADM is not effective at capturing informal or ‘off-the-books’ uses of AI tools. The decision in Ali therefore does two important things in the administrative law context. First, it confirms that, in the case of a high-impact decision, the individual has a right, as a matter of procedural fairness, to an explanation of how the decision was reached. Judicial review thus provides recourse for affected individuals – something that the more prophylactic DADM does not. Second, this right includes an obligation to provide details that could either explain or rule out the use of an automated system in the decisional process. In other words, procedural fairness includes a right to know whether and how AI technologies were used in reaching the contested decision. Mere assertions that no algorithms were used in gathering evidence or in making the decision are insufficient – if an automated system might have played a role, the affected individual is entitled to know the details of the process by which the evidence was gathered and the decision reached. Ultimately, what Justice Battista crafts in Ali is not simply a right to an explanation of automated decision-making; rather, it is a right to an explanation of administrative decision-making processes that accounts for the AI era. In a context in which powerful computing tools are available for both general and personal use, and are not limited to purpose-specific, carefully governed and auditable in-house systems, the ability to demand an explanation of the decisional process in order to rule out the non-transparent use of AI systems seems increasingly important.

Note: The Directive on Automated Decision-Making is currently undergoing its fourth review. You may participate in consultations here.

Published in Privacy
