Displaying items by tag: personal data
Monday, 20 April 2026 06:47
Privacy Act Reform: Enhancing accountability and transparency
This is the third in a series of posts discussing the federal government’s new consultation document on reform of the federal Privacy Act. The previous posts are here and here. This post addresses the second theme in the document: Enhancing accountability and transparency. Accountability and transparency are important privacy principles, and it is no surprise that the TBS consultation document on reform of the federal Privacy Act addresses these issues in four proposals set out in its second theme. The first of these (Proposal #3 overall in the document) would create a “legal requirement to conduct a privacy impact assessment when a program or activity uses personal data to make a decision about someone”. Privacy impact assessments are currently required under the Directive on Privacy Practices when “personal information is to be used for an administrative purpose”. The consultation paper suggests that the proposal to reform the Privacy Act would “make PIAs a legal requirement instead of a policy requirement.” Under the proposal, PIAs would be shared with the Privacy Commissioner of Canada, who would assess whether they comply with the Privacy Act, and also with TBS. The consultation document notes that the incorporation of these existing policy requirements into the law would “not create an additional approval process or delay program implementation”. (See my discussion of pragmatic privacy in my second post in the series).

Although this is framed as a proposal to make an existing obligation more concrete and enforceable, according to the consultation document, the PIA requirement would be activated where there is a new program or a substantial modification to an existing program that uses personal data “to make decisions about people”. This is narrower than what the current policy on PIAs requires, and the difference is significant. I will return to this issue in the discussion of transparency, below.
TBS also proposes to leave the contents of the PIA to policy to allow “the rules to be updated more easily as technologies, risks, and best practices change over time.” This tendency to leave details to regulations is becoming increasingly common in Canadian laws addressing rapidly evolving technologies. Nonetheless, although the law could simply require PIAs to be completed according to a prescribed set of requirements (for example, there is currently a PIA template document for the federal public service), basic elements should still be set out in the law. For example, Alberta’s new public sector Protection of Privacy Act sets out four statutory requirements for PIAs. They must:

26. [. . .]
(a) identify and review risks associated with the public body’s collection, use and disclosure of personal information,
(b) develop mitigation strategies and safeguards respecting those risks,
(c) address how the public body will comply with its duties under this Act, and
(d) comply with the prescribed requirements.

Section 38(3) of Ontario’s Freedom of Information and Protection of Privacy Act also provides a list of essential elements of a PIA, along with “any other prescribed elements”. A reformed federal Privacy Act should take the same approach, articulating essential requirements in the law, with other more variable elements to be prescribed. The consultation paper also proposes requiring the publication of plain language summaries of PIAs, suggesting that these would exclude information that might adversely impact “law enforcement, investigations, or national security”. The publication of plain language PIA summaries would offer an important level of transparency in an accessible format to a broader public. However, the level of detail in a full PIA could still be valuable to researchers and journalists. Both the detailed and plain language versions could be proactively published.
After all, algorithmic impact assessments carried out under the Directive on Automated Decision-Making (DADM) are meant to be shared via the open government portal. In the US, PIAs under the E-Government Act of 2002 must be proactively published unless certain exceptions apply.

The second proposal under this theme (Proposal #4 overall) is to create a central registry of personal data holdings and to publish key information on personal data management practices. This system would replace the current Personal Information Banks system along with its classifications of personal data. Instead, there would be “a centralized registry of personal data holdings” (not a centralized data storage repository). The registry would include “privacy notices explaining why data is collected and how it will be used, general descriptions of how personal data is shared between programs, and summaries of PIAs.” Exceptions to disclosure would likely be created for law enforcement or national security, although the consultation document emphasizes that any exceptions should be “limited, specific, and clearly set out in the Act” and would require justification. This recommendation is aimed at modernizing how transparency is provided about government management of its personal data holdings. In the case of horizontal data sharing, it would ensure that the “flow of data between programs would be more clearly articulated”.

The third proposal under this theme (Proposal #5) would establish “transparency requirements for the use of artificial intelligence and automated decision systems that support the right to the correction of personal data”.
What is contemplated is an amendment to the Privacy Act to require – at the request of an individual – an explanation of “how an ADS [automated decision system] supported a decision and what personal data was used.” An automated decision system is currently defined in the DADM as “[a]ny technology that either assists or replaces the judgment of human decision makers.” A right to verify the accuracy of the data and to ask for corrections would also be provided. Where an individual believes that an error has been made, they could request a human review of the decision.

The final proposal under this theme (Proposal #6) also deals with automated decision systems and would require notices that explain why data is being collected, for what purposes, and with whom it might be shared. The proposal would add a plain language requirement for such notices and would require them to be posted in the central registry. Additional notices would be required for ADS, and these would “provide a general explanation so the person can understand how the ADS handled their personal data and how the decision was made.” It is not entirely clear whether the ADS notice would be sent directly to affected individuals or placed in the centralized registry, but it seems that it might be the latter. The recommendations in this part of the proposal are clearly oriented towards automated decision-making. Although the federal Directive on Automated Decision-Making sets out certain transparency requirements, the DADM does not apply to all of the institutions that fall under the Privacy Act. The proposed reform would not only elevate these transparency requirements to law, but it would also ensure that they extend further across the public sector. While this would be a positive development, it is important to note that the DADM was developed as a form of AI governance, not as a privacy measure. The scope of the DADM is therefore shaped by its focus on automated decision-making.
Indeed, TBS states that the transparency/correction requirement “would only apply to ADS that use personal data to make or support decisions that directly affect individuals”, language that echoes that used in the DADM. This is where the PIA requirement in Proposal #3 and the transparency requirement in Proposal #5 run into potential problems. As noted earlier, the PIA requirement in the consultation document would apply only where a new or modified program uses personal data “to make decisions about people”. (Compare this with the right to an explanation that featured in Bill C-27’s Consumer Privacy Protection Act, which would have applied to systems used to “make a prediction, recommendation or decision about an individual that could have a significant impact on them.”) The scope of this obligation will therefore be determined by how making “decisions about people” is defined. The DADM defines an administrative decision as one that “affects legal rights, privileges or interests”, which appears to be a relatively high threshold. The Guide on the scope of the DADM identifies a list of activities that are both in and out of scope of the Directive. 
In-scope activities include:

- Triaging client applications based on their complexity as determined through machine-defined criteria
- Examining a financial transaction to estimate the probability of fraud
- Generating an assessment, score or classification about the client
- Generating a summary of relevant client information for officers to determine eligibility to a program
- Presenting information from multiple sources to an officer (such as by data matching and fuzzy matching)
- Using facial recognition or other biometric technology to target subjects for additional scrutiny
- Recommending one or multiple options to the decision maker
- Using an AI resumé-screening tool or skills-based assessment tool to filter top-performing candidates to the interview stage in a recruitment process
- Reviewing client applications for benefits and recommending approval or denial to an officer
- A chatbot that officers use to recommend a course of action

These offer some examples of the fairly wide net cast by the DADM and clearly go beyond some of the most obvious forms of automated decision-making. They help clarify what “decisions about people” means, but any change to the legislation to add transparency and accountability in relation to automated decision-making will need to make crystal clear that the scope of language such as “decisions about people” and decisions that affect “legal rights, privileges or interests” is as inclusive as this list. The risk is that without clear parameters, the interpretation of these rights could be too narrow.
Published in
Privacy
Monday, 14 August 2023 06:06
Use by the Public Sector of Private Sector Personal Data

The following is a short excerpt from a new paper which looks at the public sector use of private sector personal data (Teresa Scassa, “Public Sector Use of Private Sector Personal Data: Towards Best Practices”, forthcoming in (2024) 47:2 Dalhousie Law Journal). The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Governments seeking to make data-driven decisions require the data to do so. Although they may already hold large stores of administrative data, their ability to collect new or different data is limited both by law and by practicality. In our networked, Internet of Things society, the private sector has become a source of abundant data about almost anything – but particularly about people and their activities. Private sector companies collect a wide variety of personal data, often in high volumes, rich in detail, and continuously over time. Location and mobility data, for example, are collected by many different actors, from cellular service providers to app developers. Financial sector organizations amass rich data about the spending and borrowing habits of consumers. Even genetic data is collected by private sector companies. The range of available data is constantly broadening as more and more is harvested, and as companies seek secondary markets for the data they collect. Public sector use of private sector data is fraught with important legal and public policy considerations. Chief among these is privacy, since access to such data raises concerns about undue government intrusion into private lives and habits. Data protection issues implicate both public and private sector actors in this context, and include notice and consent, as well as data security.
And, where private sector data is used to shape government policies and actions, important questions about ethics, data quality, the potential for discrimination, and broader human rights questions also arise. Alongside these issues are interwoven concerns about transparency, as well as necessity and proportionality when it comes to the conscription by the public sector of data collected by private companies. This paper explores issues raised by public sector access to and use of personal data held by the private sector. It considers how such data sharing is legally enabled and within what parameters. Given that laws governing data sharing may not always keep pace with data needs and public concerns, this paper also takes a normative approach that examines whether and in what circumstances such data sharing should take place. To provide a factual context for discussion of the issues, the analysis in this paper is framed around two recent examples from Canada that involved actual or attempted access by government agencies to private sector personal data for public purposes. The cases chosen are different in nature and scope. The first is the attempted acquisition and use by Canada’s national statistics organization, Statistics Canada (StatCan), of data held by credit monitoring companies and financial institutions to generate economic statistics. The second is the use, during the COVID-19 pandemic, of mobility data by the Public Health Agency of Canada (PHAC) to assess the effectiveness of public health policies in reducing the transmission of COVID-19 during lockdowns. The StatCan example involves the compelled sharing of personal data by private sector actors, while the PHAC example involves a government agency that contracted for the use of anonymized data and analytics supplied by private sector companies. Each of these instances generated significant public outcry. This negative publicity no doubt exceeded what either agency anticipated.
Both believed that they had a legal basis to gather and/or use the data or analytics, and both believed that their actions served the public good. Yet the outcry is indicative of underlying concerns that had not properly been addressed. Using these two quite different cases as illustrations, the paper examines the issues raised by the use of private sector data by government. Recognizing that such practices are likely to multiply, it also makes recommendations for best practices. Although the examples considered are Canadian and are shaped by the Canadian legal context, most of the issues they raise are of broader relevance. Part I of this paper sets out the two case studies that are used to tease out and illustrate the issues raised by public sector use of private sector data. Part II discusses the different issues and makes recommendations. The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632
Tuesday, 22 January 2019 16:56
Canada's Shifting Privacy Landscape

Note: This article was originally published by The Lawyer’s Daily (www.thelawyersdaily.ca), part of LexisNexis Canada Inc.

In early January 2019, Bell Canada caught the media spotlight over its “tailored marketing program”. The program will collect massive amounts of personal information, including “Internet browsing, streaming, TV viewing, location information, wireless and household calling patterns, app usage and the account information”. Bell’s background materials explain that “advertising is a reality” and that customers who opt into the program will see ads that are more relevant to their needs or interests. Bell promises that the information will not be shared with third party advertisers; instead it will enable Bell to offer those advertisers the ability to target ads to finely tuned categories of consumers. Once consumers opt in, their consent is presumed for any new services that they add to their account. This is not the first time Bell has sought to collect vast amounts of data for targeted advertising purposes. In 2015, it terminated its short-lived and controversial “Relevant Ads” program after an investigation initiated by the Privacy Commissioner of Canada found that the “opt out” consent model chosen by Bell was inappropriate given the nature, volume and sensitivity of the information collected. Nevertheless, the Commissioner’s findings acknowledged that “Bell’s objective of maximizing advertising revenue while improving the online experience of customers was a legitimate business objective.” Bell’s new tailored marketing program is based on “opt in” consent, meaning that consumers must choose to participate and are not automatically enrolled. This change and the OPC’s apparent acceptance of the legitimacy of targeted advertising programs in 2015 suggest that Bell may have brought its scheme within the parameters of PIPEDA.
Yet media coverage of the new tailored ads program generated public pushback, suggesting that the privacy ground has shifted since 2015. The rise of big data analytics and the stunning recent growth of artificial intelligence have sharply changed the commercial value of data, its potential uses, and the risks it may pose to individuals and communities. After the Cambridge Analytica scandal, there is also much greater awareness of the harms that can flow from consumer profiling and targeting. While conventional privacy risks of massive personal data collection remain (including the risk of data breaches, and enhanced surveillance), there are new risks that impact not just privacy but consumer choice, autonomy, and equality. Data misuse may also have broader impacts than just on individuals; such impacts may include group-based discrimination, and the kind of societal manipulation and disruption evidenced by the Cambridge Analytica scandal. It is not surprising, then, that both the goals and potential harms of targeted advertising may need rethinking, along with the nature and scope of data on which they rely. The growth of digital and online services has also led to individuals effectively losing control over their personal information. There are too many privacy policies, they are too long and often obscure, products and services are acquired on the fly with little time to reflect, and most policies are “take-it-or-leave-it”. A growing number of voices are suggesting that consumers should have more control over their personal information, including the ability to benefit from its growing commercial value. They argue that companies that offer paid services (such as Bell) should offer rebates in exchange for the collection or use of personal data that goes beyond what is needed for basic service provision.
No doubt, such advocates would be dismayed by Bell’s quid pro quo for its collection of massive amounts of detailed and often sensitive personal information: “more relevant ads”. Yet money-for-data schemes raise troubling issues, including the possibility that they could make privacy something that only the well-heeled can afford. Another approach has been to call for reform of the sadly outdated Personal Information Protection and Electronic Documents Act. Proposals include giving the Privacy Commissioner enhanced enforcement powers, and creating ‘no go zones’ for certain types of information collection or uses. There is also interest in creating new rights such as the right to erasure, data portability, and rights to explanations of automated processing. PIPEDA reform, however, remains a mirage shimmering on the legislative horizon. Meanwhile, the Privacy Commissioner has been working hard to squeeze the most out of PIPEDA. Among other measures, he has released new Guidelines for Obtaining Meaningful Consent, which took effect on January 1, 2019. These guidelines include a list of “must dos” and “should dos” to guide companies in obtaining adequate consent. While Bell checks off many of the ‘must do’ boxes with its new program, the Guidelines indicate that “risks of harm and other consequences” of data collection must be made clear to consumers. These risks – which are not detailed in the FAQs related to the program – obviously include the risk of data breach. The collected data may also be of interest to law enforcement, and presumably it would be handed over to police with a warrant. A more complex risk relates to the fact that internet, phone and viewing services are often shared within a household (families or roommates) and targeted ads based on viewing/surfing/location could result in the disclosure of sensitive personal information to other members of the household.
Massive data collection, profiling and targeting clearly raise issues that go well beyond simple debates over opt-in or opt-out consent. The privacy landscape is changing – both in terms of risks and responses. Those engaged in data collection would be well advised to be attentive to these changes.