Teresa Scassa - Blog

Thursday, 07 February 2019 08:09

Ontario Launches Data Strategy Consultation

On February 5, 2019 the Ontario Government launched a Data Strategy Consultation. This comes after a year of public debate and discussion about data governance issues raised by the proposed Quayside smart cities development in Toronto. It also comes at a time when the data-thirsty artificial intelligence industry in Canada is booming – and hoping very much to be able to continue to compete at the international level. Add to the mix the view that greater data sharing between government departments and agencies could make government ‘smarter’, more efficient, and more user-friendly. The context might be summed up in these terms: the public is increasingly concerned about the massive and widespread collection of data by governments and the private sector; at the same time, both governments and the private sector want easier access to more and better data.

Consultation is a good thing – particularly with as much at stake as there is here. This consultation began with a press release that links to a short text about the data strategy and to a survey through which the public can provide feedback in the form of answers to specific questions. The survey is open until March 7, 2019. It seems that the government will then create a “Minister’s Task Force on Data” and that this body will be charged with developing a draft data strategy that will be opened for further consultation. The overall timeline seems remarkably short, with the process targeted to wrap up by Fall 2019.

The press release telegraphs the government’s views on what the outcome of this process must address. It notes that 55% of Canada’s big data vendors are located in Ontario, and that the government plans “to make life easier for Ontarians by delivering simpler, faster and better digital services.” The goal is clearly to develop a data strategy that harnesses the power of data for use in both the private and public sectors.

If the Quayside project has taught anyone anything, it is that people do care about their data in the hands of both public and private sector actors. The press release acknowledges this by referencing the need for “ensuring that data privacy and protection is paramount, and that data will be kept safe and secure.” Yet perhaps the Ontario government has not been listening to all of the discussions around Quayside. While the press release and the introduction to the survey talk about privacy and security, neither document addresses the broader concerns that have been raised in the context of Quayside, nor those that are raised in relation to artificial intelligence more generally. There are concerns about bias and discrimination, transparency in algorithmic decision-making, profiling, targeting, and behavioural modification. Seamless sharing of data within government also raises concerns about mass surveillance. There is also a need to consider innovative solutions to data governance and the role the government might play in fostering or supporting these.

There is no doubt that the issues underlying this consultation are important ones. It is clear that the government intends to take steps to facilitate intra-governmental sharing of data as well as greater sharing of data between government and the private sector. It is also clear that much of that data will ultimately be about Ontarians. How this will happen, and what rights and values must be protected, are fundamental questions.

As is the case at the provincial and federal level across the country, the laws which govern data in Ontario were written for a different era. Not only are access to information and protection of privacy laws out of date; data-driven practices also increasingly impact areas such as consumer protection, competition, credit reporting, and human rights. An effective data strategy may need to reach across these different areas of law and policy.

Privacy and security – the issues singled out in the government’s documents – are important, but privacy must mean more than the narrow view of protecting identifiable individuals from identity theft. We need robust safeguards against undue surveillance, assurances that our data will not be used to profile or target us or our communities in ways that create or reinforce exclusion or disadvantage; we need to know how privacy and autonomy will be weighed in the balance against the stimulation of the economy and the encouragement of innovation. We also need to consider whether there are uses to which our data should simply not be put. Should some data be required to be stored in Canada, and if so in what circumstances? These and a host of other questions need to be part of the data strategy consultation. Perhaps a broader question might be why we are talking only about a data strategy and not a digital strategy. The approach of the government seems to focus on the narrow question of data as both an input and output – but not on the host of other questions around the digital technologies fueled by data. Such questions might include how governments should go about procuring digital technologies, the place of open source in government, the role and implication of technology standards – to name just a few.

With all of these important issues at stake, it is hard not to be disappointed by the form and substance of at least this initial phase of the government's consultation. It is difficult to say what value will be derived from the survey which is the vehicle for initial input. Some of the questions are frankly vapid. Consider question 2:

2. I’m interested in exploring the role of data in:

creating economic benefits

increasing public trust and confidence

better, smarter government

other

There is no box in which to write in what the “other” might be. And questions 9 to 11 provide sterling examples of leading questions:

9. Currently, the provincial government is unable to share information among ministries requiring individuals and businesses to submit the same information each time they interact with different parts of government. Do you agree that the government should be able to securely share data among ministries?

Yes

No

I’m not sure

10. Do you believe that allowing government to securely share data among ministries will streamline and improve interactions between citizens and government?

Yes

No

I’m not sure

11. If government made more of its own data available to businesses, this data could help those firms launch new services, products, and jobs for the people of Ontario. For example, government transport data could be used by startups and larger companies to help people find quicker routes home from work. Would you be in favour of the government responsibly sharing more of its own data with businesses, to help them create new jobs, products and services for Ontarians?

Yes

No

I’m not sure

In fairness, there are a few places in the survey where respondents can enter their own answers, including questions about what issues should be put to the task force and what skills and experience members should have. Those interested in data strategy should be sure to provide their input – both now and in the later phases to come.

Tuesday, 22 January 2019 16:56

Canada's Shifting Privacy Landscape

Note: This article was originally published by The Lawyer’s Daily (www.thelawyersdaily.ca), part of LexisNexis Canada Inc.

In early January 2019, Bell Canada caught the media spotlight over its “tailored marketing program”. The program will collect massive amounts of personal information, including “Internet browsing, streaming, TV viewing, location information, wireless and household calling patterns, app usage and the account information”. Bell’s background materials explain that “advertising is a reality” and that customers who opt into the program will see ads that are more relevant to their needs or interests. Bell promises that the information will not be shared with third party advertisers; instead it will enable Bell to offer those advertisers the ability to target ads to finely tuned categories of consumers. Once consumers opt in, their consent is presumed for any new services that they add to their account.

This is not the first time Bell has sought to collect vast amounts of data for targeted advertising purposes. In 2015, it terminated its short-lived and controversial “Relevant Ads” program after an investigation initiated by the Privacy Commissioner of Canada found that the “opt out” consent model chosen by Bell was inappropriate given the nature, volume and sensitivity of the information collected. Nevertheless, the Commissioner’s findings acknowledged that “Bell’s objective of maximizing advertising revenue while improving the online experience of customers was a legitimate business objective.”

Bell’s new tailored marketing program is based on “opt in” consent, meaning that consumers must choose to participate and are not automatically enrolled. This change and the OPC’s apparent acceptance of the legitimacy of targeted advertising programs in 2015 suggest that Bell may have brought its scheme within the parameters of PIPEDA. Yet media coverage of the new tailored ads program generated public pushback, suggesting that the privacy ground has shifted since 2015.

The rise of big data analytics and the stunning recent growth of artificial intelligence have sharply changed the commercial value of data, its potential uses, and the risks it may pose to individuals and communities. After the Cambridge Analytica scandal, there is also much greater awareness of the harms that can flow from consumer profiling and targeting. While conventional privacy risks of massive personal data collection remain (including the risk of data breaches, and enhanced surveillance), there are new risks that impact not just privacy but consumer choice, autonomy, and equality. Data misuse may also have broader impacts than just on individuals; such impacts may include group-based discrimination, and the kind of societal manipulation and disruption evidenced by the Cambridge Analytica scandal. It is not surprising, then, that both the goals and potential harms of targeted advertising may need rethinking, along with the nature and scope of the data on which they rely.

The growth of digital and online services has also led to individuals effectively losing control over their personal information. There are too many privacy policies, they are too long and often obscure, products and services are needed on the fly and with little time to reflect, and most policies are ‘take-it-or-leave-it’. A growing number of voices are suggesting that consumers should have more control over their personal information, including the ability to benefit from its growing commercial value. They argue that companies that offer paid services (such as Bell) should offer rebates in exchange for the collection or use of personal data that goes beyond what is needed for basic service provision. No doubt, such advocates would be dismayed by Bell’s quid pro quo for its collection of massive amounts of detailed and often sensitive personal information: “more relevant ads”. Yet money-for-data schemes raise troubling issues, including the possibility that they could make privacy something that only the well-heeled can afford.

Another approach has been to call for reform of the sadly outdated Personal Information Protection and Electronic Documents Act. Proposals include giving the Privacy Commissioner enhanced enforcement powers, and creating ‘no go zones’ for certain types of information collection or uses. There is also interest in creating new rights such as the right to erasure, data portability, and rights to explanations of automated processing. PIPEDA reform, however, remains a mirage shimmering on the legislative horizon.

Meanwhile, the Privacy Commissioner has been working hard to squeeze the most out of PIPEDA. Among other measures, he has released new Guidelines for Obtaining Meaningful Consent, which took effect on January 1, 2019. These guidelines include a list of “must dos” and “should dos” to guide companies in obtaining adequate consent.

While Bell checks off many of the ‘must do’ boxes with its new program, the Guidelines indicate that “risks of harm and other consequences” of data collection must be made clear to consumers. These risks – which are not detailed in the FAQs related to the program – obviously include the risk of data breach. The collected data may also be of interest to law enforcement, and presumably it would be handed over to police with a warrant. A more complex risk relates to the fact that internet, phone and viewing services are often shared within a household (families or roommates) and targeted ads based on viewing/surfing/location could result in the disclosure of sensitive personal information to other members of the household.

Massive data collection, profiling and targeting clearly raise issues that go well beyond simple debates over opt-in or opt-out consent. The privacy landscape is changing – both in terms of risks and responses. Those engaged in data collection would be well advised to be attentive to these changes.

In Netlink Computer Inc. (Re), the British Columbia Supreme Court dismissed an application for leave to sue a trustee in bankruptcy for an alleged improper disposal of assets of a bankrupt company that contained the personal information of the company’s customers.

The issues at the heart of the application first reached public attention in September 2018 when a security expert described in a blog post how he noticed that servers from the defunct company were listed for sale on Craigslist. Posing as an interested buyer, he examined the computers and found that their unwiped hard drives contained what he reported as significant amounts of sensitive customer data, including credit card information and photographs of customer identification documents. Following the blog post, the RCMP and the BC Privacy Commissioner both launched investigations. Kipling Warner, who had been a customer of the defunct company Netlink, filed lawsuits against Netlink, the trustee in bankruptcy which had disposed of Netlink’s assets, the auction company Able Solutions, which sold the assets, and Netlink’s landlord. All of the lawsuits include claims of breach of statutory obligations under the Personal Information Protection and Electronic Documents Act, breach of B.C.’s Privacy Act, and breach of B.C.’s Personal Information Protection Act. The plan was to have the lawsuits certified as class action proceedings. The action against Netlink was stayed due to the bankruptcy. The B.C. Supreme Court decision deals only with the action against the trustee, as leave of the court must be obtained in order to sue a trustee in bankruptcy.

As Master Harper explained in his reasons for decision, the threshold for granting leave to sue a trustee in bankruptcy is not high. The evidence presented in the claim must advance a prima facie case. Leave to proceed will be denied if the proposed action is considered frivolous or vexatious, since such a lawsuit would “interfere with the due administration of the bankrupt’s estate by the trustee” (at para 9). Essentially the court must balance the competing interests of the party suing the trustee and the interest in the efficient and timely wrapping up of the bankrupt’s estate.

The decision to dismiss the application in this case was based on a number of factors. Master Harper was not impressed by the fact that the multiple lawsuits brought against different actors all alleged the same grounds. He described this as a “scattergun approach” that suggested a weak evidentiary foundation. The application was supported by two affidavits: one from Mr. Warner, which he described as being based on inadmissible ‘double hearsay’, and one from the blogger, Mr. Doering. While Master Harper found that the Doering affidavit contained first-hand evidence from Doering’s investigation into the servers sold on Craigslist, he noted that Doering himself had not been convinced by the seller’s statements about how he came to be in possession of the servers. The Master noted that this did not provide a basis for finding that it was the trustee in bankruptcy who was responsible. The Master also noted that although an RCMP investigation had been launched at the time of the blog post, it had since concluded with no charges being laid. The Master’s conclusion was that there was no evidence to support a finding that any possible privacy breach “took place under the Trustee’s ‘supervision and control’.” (at para 58)

Although the application was dismissed, the case does highlight some important concerns about the handling of personal information in bankruptcy proceedings. Not only can customer databases be sold as assets in bankruptcy proceedings, Mr. Doering’s blog post raised the spectre of computer servers and computer hard drives being disposed of without properly being wiped of the personal data that they contain. Although he dismissed the application to file suit against the Trustee, Master Harper did express some concern about the Trustee’s lack of engagement with some of the issues raised by Mr. Warner. He noted that no evidence was provided by the Trustee “as to how, or if, the Trustee seeks to protect the privacy of customers when a bankrupt’s assets (including customer information) are sold in the bankruptcy process.” (at para 44) This is an important issue, but it is one on which there is relatively little information or discussion. A 2009 blog post from Quebec flags some of the concerns raised about privacy in bankruptcy proceedings; a more recent post suggests that while larger firms are more sophisticated in how they deal with personal information assets, the data in the hands of small and medium-sized firms that experience bankruptcy may be more vulnerable.

Digital and data governance is challenging at the best of times. It has been particularly challenging in the context of Sidewalk Labs’ proposed Quayside development for a number of reasons. One of these is (at least from my point of view) an ongoing lack of clarity about who will ‘own’ or have custody or control over all of the data collected in the so-called smart city. The answer to this question is a fundamentally important piece of the data governance puzzle.

In Canada, personal data protection is a bit of a legislative patchwork. In Ontario, the collection, use or disclosure of personal information by the private sector, and in the course of commercial activity, is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA). However, the collection, use and disclosure of personal data by municipalities and their agencies is governed by the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA), while the collection, use and disclosure of personal data by the province is subject to the Freedom of Information and Protection of Privacy Act (FIPPA). The latter two statutes – MFIPPA and FIPPA – contain other data governance requirements for public sector data. These relate to transparency, and include rules around access to information. The City of Toronto also has information management policies and protocols, including its Open Data Policy.

The documentation prepared for the December 13, 2018 Digital Strategy Advisory Panel (DSAP) meeting includes a slide that sets out implementation requirements for the Quayside development plan in relation to data and digital governance. A key requirement is: “Compliance with or exceedance of all applicable laws, regulations, policy documents and contractual obligations” (page 95). This is fine in principle, but it is not enough on its own to say that the Quayside project must “comply with all applicable laws”. At some point, it is necessary to identify what those applicable laws are. This has yet to be done. And the answer to the question of which laws apply in the context of privacy, transparency and data governance, depends upon who ultimately is considered to ‘own’ or have ‘custody or control’ of the data.

So – whose data is it? It is troubling that this remains unclear even at this stage in the discussions. The fact that Sidewalk Labs has been asked to propose a data governance scheme suggests that Sidewalk and Waterfront may be operating under the assumption that the data collected in the smart city development will be private sector data. There are indications buried in presentations and documentation that also suggest that Sidewalk Labs considers that it will ‘own’ the data. There is a great deal of talk in meetings and in documents about PIPEDA, which also suggests a shared assumption among the parties that the data is private sector data. But what is the basis for this assumption? Governments can contract with a private sector company for data collection, data processing or data stewardship – but the private sector company can still be considered to act as an agent of the government, with the data being legally under the custody or control of the government and subject to public sector privacy and freedom of information laws. The presence of a private sector actor does not necessarily make the data private sector data.

If the data is private sector data, then PIPEDA will apply, and there will be no applicable access to information regime. PIPEDA also has different rules regarding consent to collection than are found in MFIPPA. If the data is considered ultimately to be municipal data, then it will be subject to MFIPPA’s rules regarding access and privacy, and it will be governed by the City of Toronto’s information management policies. These are very different regimes, and so the question of which one applies is quite fundamental. It is time for there to be a clear and forthright answer to this question.

Law and the “Sharing Economy”: Regulating Online Market Platforms is a new, peer-reviewed collection of papers co-edited by Derek McKee, Finn Makela and myself. The book is the product of a workshop held in January 2017. It was published in late November 2018 by the University of Ottawa Press in both print and open access PDF formats.

The title of the book uses scare quotes around ‘Sharing Economy’ because of the deep ambivalence felt about the term amongst contributors to the volume, and the inability to find a suitable alternative. The term ‘sharing economy’ is used by some to suggest an alternative to the market; others have used it to describe activities taking place over large, commercial platforms. And, while some of the platforms use the rhetoric of helping ordinary individuals make ends meet by providing them with the ability to commercialize (‘share’) underutilized resources, the reality is that the rise of large platform companies has drawn other resources into the ‘sharing economy’. These resources may include, for example, living spaces once rented out on a long term basis that now turn greater profits as short term accommodation. Platform companies have had broad disruptive impacts. Our authors consider their impacts on licensing regimes, alternative dispute resolution, legal normativity, local governance, specific industries, and labour rights. They also consider platform companies’ digital data, their relationship to international trade agreements, and the competition law and policy issues they raise.

The collection of papers in this book offers “a set of diverse lenses through which we can examine both the sharing economy and its broader social impacts, and from which certain key themes emerge” (introduction, p. 5). The book is organized into five broad themes: Technologies of Regulation; Regulating Technology; The Space of Regulation – Local to Global; Regulating Markets; and Regulating Labour. The papers reflect a diversity of perspectives. Some explore issues in the context of specific platforms such as Airbnb or Uber; others consider the issues raised by the ‘sharing economy’ more broadly. A Table of Contents for the book is found below.

 

Law and the “Sharing Economy”: Regulating Online Market Platforms

Derek McKee, Finn Makela, Teresa Scassa, eds.

 

Table of contents

Introduction

Derek McKee, Finn Makela and Teresa Scassa

Technologies of regulation

Peer Platform Markets and Licensing Regimes

Derek McKee

The False Promise of the Sharing Economy

Harry Arthurs

The Fast to the Furious

Nofar Sheffi

Regulating technology

The Normative Ecology of Disruptive Technology

Vincent Gautrais

Information Law in the Platform Economy: Ownership, Control and Reuse of Platform Data

Teresa Scassa

The space of regulation: local to global

Urban Cowboy E-Capitalism Meets Dysfunctional Municipal Policy-Making: What the Uber Story Tells Us About Canadian Local Governance

Mariana Valverde

The Sharing Economy and Trade Agreements: The Challenge to Domestic Regulation

Michael Geist

Regulating markets

Should Licence Plate Owners be Compensated when Uber Comes to Town?

Eran Kaplinsky

Competition Law and Policy Issues in the Sharing Economy

Francesco Ducci

Regulating labour

The Legal Framework for Digital Platform Work: The French Experience

Marie-Cécile Escande-Varniol

Uber and the Unmaking and Remaking of Taxi Capitalisms: Technology, Law and Resistance in Historical Perspective

Eric Tucker

Making Sense of the Public Discourse on Airbnb and Labour: What About Labour Rights?

Sabrina Tremblay-Huet


On November 23, 2018, Waterfront Toronto hosted a Civic Labs workshop in Toronto. The theme of the workshop was Smart City Data Governance. I was asked to give a 10 minute presentation on the topic. What follows is a transcript of my remarks.

Smart city governance relates to how smart cities govern themselves and their processes; how they engage citizens and how they are transparent and accountable to them. Too often the term “smart city” is reduced to an emphasis on technology and on technological solutionism – in other words “smart cities” are presented as a way in which to use technology to solve urban problems. In its report on Open Smart Cities, Open North observes that “even when driven in Canada by good intentions and best practices in terms of digital strategies, . . . [the smart city] remains a form of innovation and efficient driven technological solutionism that is not necessarily integrated with urban plans, with little or no public engagement and little to no relation to contemporary open data, open source, open science or open government practices”.

Smart cities governance puts the emphasis on the “city” rather than the “smart” component, focusing attention on how decisions are made and how the public is engaged. Open North’s definition of the Open Smart City is in fact a normative statement about digital urban governance:

An Open Smart City is where residents, civil society, academics, and the private sector collaborate with public officials to mobilize data and technologies when warranted in an ethical, accountable and transparent way to govern the city as a fair, viable and liveable commons and balance economic development, social progress and environmental responsibility.

This definition identifies the city government as playing a central role, with engagement from a range of different actors, and with particular economic, social and environmental goals in mind. This definition of a smart city involves governance in a very basic and central way – stakeholders are broadly defined and they are engaged not just in setting limits on smart cities technology, but in deciding what technologies to adopt and deploy and for what purposes.

There are many interesting international models of smart city governance – many of them arise in the context of specific projects, often of a relatively modest scale. Many involve attempts to find ways to include city residents in both identifying and solving problems, and the use of technology is relevant both to this engagement and to finding solutions.

The Sidewalk Toronto project is somewhat different, since it is not a City of Toronto smart city initiative. Rather, it is the tri-governmental entity Waterfront Toronto that has been given the lead governance role. This has proved challenging: while Waterfront Toronto has a public-oriented mandate, it is not a democratically elected body, and its core mission is to oversee the transformation of specific brownfield lands into viable communities. This is important to keep in mind in thinking about governance issues. Waterfront Toronto has had to build public engagement into its governance framework in ways that differ from those of a municipal government. The participation of federal and provincial privacy commissioners, and of representatives from federal and provincial governments, feeds into governance, as does the DSAP, and there has been public outreach. There will also be review of and consultation on the Master Innovation Development Plan (MIDP) once it is publicly released. But this is a different model from city government, and it may set the project apart in important ways from other smart city initiatives in Canada and around the world.

Setting aside for a moment the smart cities governance issue, let’s discuss data governance. The two are related – especially with respect to the issue of what data is collected in the smart city and for what purposes.

Broadly speaking, data governance goes to the question of how data will be stewarded (and by whom) and for what purposes. Data governance is about managing data. As such, it is not a new concept. Data governance is a practice that is current in both private and public sector contexts. Most commonly it takes place within a single organization which develops practices and protocols to manage its existing and future data. Governance issues include considering who is responsible for the data, who is entitled to set the rules for access to and reuse of it, how those rules will be set, and who will profit/benefit from the data and on what terms. It also includes addressing issues such as data security, standards, interoperability, and localization. Where the data include personal information, compliance with privacy laws is an aspect of data governance. But governance is not limited to compliance – for example, an organization may adopt higher standards than those required by privacy law, or may develop novel approaches to managing and protecting personal information.

There are many different data governance models. Some (particularly in the public sector) are shaped by legislation, regulations and government policies. Others may be structured by internal policies, standards, industry practice, and private law instruments such as contracts or trusts. As the term is commonly used, data governance does not necessarily implicate citizen involvement or participation in the same way as “smart city governance” does – it is the “city” part of “smart city governance” that brings in to focus democratic principles of transparency, accountability, engagement and so on. However, where there is a public sector dimension to the collection or control of data, then public sector laws, including those relating to transparency and accountability, may apply.

With the rise of the data economy, data sharing is becoming an important activity for both public and private sector actors. As a result, new models of data governance are needed to facilitate data sharing. There are many different benefits that flow from data sharing. It may be carried out for financial gain, or it may be done to foster innovation, enable new insights, stimulate the economy, increase transparency, solve thorny problems, and so on. There are also different possible beneficiaries. Data may be shared amongst a group of entities each of which will find advantages in the mutual pooling of their data resources. Or it may be shared broadly in the hope of generating new data-based solutions to existing problems. In some cases, data sharing has a profit motive. The diversity of actors, beneficiaries, and motivations, makes it necessary to find multiple, diverse and flexible frameworks and principles to guide data sharing arrangements.

Open government data regimes are an important example of a data governance model for data sharing. Many governments have decided that opening government data is a significant public policy goal, and have done a tremendous amount of work to create the infrastructure not just for sharing data, but for doing so in a useful, accessible and appropriate manner. This means the development of standards for data and metadata, and the development of portals and search functions. It has meant paying attention to issues of interoperability. It has also required governments to consider how best to protect privacy and confidential information, or information that might impact on security issues. Once open, the sharing frameworks are relatively straightforward – open data portals typically offer data to anyone, with no registration requirement, under a simple open licence.

Governments are not the only ones developing open data portals – research institutions are increasingly searching for ways in which to publicly share research outputs including publications and data. Some research data infrastructures support sharing, but not necessarily on fully open terms – this requires another level of consideration as to the policy reasons for limiting access, how to limit access effectively, and how to set and ensure respect for appropriate limits on reuse.

The concept of a data trust has also received considerable attention as a means of data sharing. The term “data trust” is now so widely and freely used that it lacks a precise meaning. In its publication “What is a Data Trust”, the Open Data Institute (ODI) identifies at least five different concepts of a data trust, providing examples of each:

· A data trust as a repeatable framework of terms and mechanisms.

· A data trust as a mutual organisation.

· A data trust as a legal structure.

· A data trust as a store of data.

· A data trust as public oversight of data access.

The diversity of “data trusts” means that there are a growing number of models to study and consider. However, it also makes it a little dangerous to talk about “data trust” as if it has a precise meaning. With data trusts, the devil is very much in the details. If Sidewalk Labs is to propose a ‘data trust’ for the management of data gathered in the Sidewalk Toronto development, then it will be important to probe into exactly what the term means in this context.

What Sidewalk Labs is proposing is a particular vision of a data trust as a data governance model for data sharing in a smart cities development. It is admittedly a work in progress, and it has some fairly particular characteristics. For example, not only is it a framework to set the parameters for sharing the subset of project data that Sidewalk Labs defines as “urban data”, it also contemplates providing governance for any proposals by third parties who might want to engage in the collection of new kinds, categories or volumes of data.

In thinking about the proposed ‘trust’, some questions I would suggest considering are:

1) What is the relationship between the proposed trust and the vision for smart city governance? In other words, to what extent are the public and/or public sector decision-makers engaged in determining what data will be governed by the trust, for whose benefit, and on what terms sharing will take place?

2) A data governance model does not make up for the absence of robust smart city governance up front (in identifying the problems to be solved, the data to be collected to solve them, etc.). If this piece is missing, then discussion of the trust may involve discussing the governance of data where there is no group consensus or input as to its collection. How should this be done (if at all)?

3) A data governance model can be created for the data of a single entity (e.g. an open government portal, or a data governance framework for a corporation); but it can also be developed to facilitate data sharing between entities, or even between a group of entities and a broader public. So an important question in the ST context is what model is this? Is this Sidewalk Labs data that is being shared? Or is it Waterfront’s? Or the City’s? Who has custody/control or ownership of the data that will be governed by the ‘trust’?

4) Data governance is crucial with respect to all data held by an entity. Not all data collected through the Sidewalk Toronto project will fall within Sidewalk’s definition of “urban data” (for which the ‘trust’ is proposed). If the data governance model under consideration only deals with a subset of data, then there must be some form of data governance for the larger set. What is it? And who determines its parameters?

The following is a copy of remarks I made in an appearance before the Standing Senate Committee on Banking, Trade and Commerce, on November 21, 2018. The remarks are about proposed amendments to the Trade-marks Act found in the omnibus Bill C-86. I realize that not everyone is as interested in official marks as I am. To get a sense of what the fuss is all about, you can have a look at my posts about the overprotection of Olympic Marks, the dispute over the Spirit Bear mark, the struggle to register a trademark in the face of a wrongly granted and since-abandoned official mark, Canada Post’s official marks for POSTAL CODE and CODE POSTAL, and a previous private member’s bill to reform official marks.

Canada’s official marks regime has long been criticized by lawyers, academics and the Federal Court. In fact, it is the Federal Court that has, over the years, created some much needed boundaries for these “super marks”. The problems with official marks are well known, but they have largely been ignored by Parliament. It is therefore refreshing to see the proposed changes in ss. 215 and 216 of Bill C-86.

Sections 215 and 216 address only one of the problems with the official marks regime. Although it is an important one, it is worth noting that there is more that could be done. The goal of my remarks will be to identify what I see as two shortfalls of ss. 215 and 216.

Official marks are a subcategory of prohibited marks, which may not be adopted, used or registered unless consent is provided. They are available to “public authorities”. A public authority need only ask the Registrar of Trademarks to give public notice of its adoption and use of an official mark for that mark to be protected. There are no limits to what can be adopted. There are no registration formalities, no examination or opposition proceedings. Until the very recent decision of the Federal Court in Quality Program Services Inc. v. Canada, it seemed nothing prevented a public authority from obtaining an official mark that was identical to or confusing with an already registered trademark. While Quality Program Services at least provides consequences for adopting a confusing official mark, it is currently under appeal and it is not certain that the decision will be upheld. This is another instance of the Federal Court trying to set boundaries for official marks that simply have not been set in the legislation.

Official marks are theoretically perpetual in duration. They remain on the register until they are either voluntarily withdrawn by the ‘owner’ (and owners rarely think to do this), or until a successful (and costly) action for judicial review results in one being struck from the Register. Until the Ontario Association of Architects decision in 2002 tightened up the meaning of ‘public authority’, official marks were handed out like Halloween candy, and many entities that were not ‘public authorities’ were able to obtain official marks. Many of these erroneously-issued official marks continue to exist today; in fact, the Register of Trademarks has become cluttered with official marks that are either invalid or no longer in use.

Sections 215 and 216 address at least part of this last problem. They provide an administrative process through which either the Registrar or any person prepared to pay the prescribed fee can have an official mark invalidated if the entity that obtained the mark “is not a public authority or no longer exists.” This is a good thing. I would, however, suggest one amendment to the proposed new s. 9(4). Where it is the case (as per the new s. 9(3)) that the entity that obtained the official mark was not a public authority or has ceased to exist, s. 9(4) allows the Registrar to give public notice that subparagraph (1)‍(n)‍(iii) “does not apply with respect to the badge, crest, emblem or mark”. As it is currently worded, this is permissive – the Registrar “may” give public notice of non-application. In my view, it should read:

(4) In the circumstances set out in subsection (3), the Registrar may, on his or her own initiative or shall, at the request of a person who pays a prescribed fee, give public notice that subparagraph (1)‍(n)‍(iii) does not apply with respect to the badge, crest, emblem or mark.

There is no reason why a person who has paid a fee to establish the invalidity of an official mark should not have the Registrar give public notice of that invalidity.

I would also suggest that the process for invalidating official marks should extend to those that have not been used within the preceding three years – in other words, something parallel to s. 45 of the Trade-marks Act which provides an administrative procedure to remove unused registered trademarks from the Register. There are hundreds of ‘public authorities’ at federal and provincial levels across Canada, and they adopt official marks for all sorts of programs and initiatives, many of which are relatively transient. There should be a means by which official marks can simply be cleared from the Register when they are no longer used. Thus, I would recommend adding new subsections 9(5) and (6) to the effect that:

(5) The Registrar may at any time – and, at the written request of a person who pays a prescribed fee, made after three years from the date that public notice was given of an official mark, shall, unless the Registrar sees good reason to the contrary – give notice to the public authority requiring it to furnish, within three months, an affidavit or a statutory declaration showing that the official mark was in use at any time during the three year period immediately preceding the date of the notice and, if not, the date when it was last so in use and the reason for the absence of such use since that date.

(6) Where, by reason of the evidence furnished to the Registrar or the failure to furnish any evidence, it appears to the Registrar that an official mark was not used at any time during the three year period immediately preceding the date of the notice and that the absence of use has not been due to special circumstances that excuse it, the Registrar shall give public notice that subparagraph (1)‍(n)‍(iii) does not apply with respect to the badge, crest, emblem or mark.

These are my comments on changes to the official marks regime that most closely relate to the amendments in Bill C-86. The regime has other deficiencies which I would be happy to discuss.

A Global News story about Statistics Canada’s collection of detailed financial data of a half million Canadians has understandably raised concerns about privacy and data security. It also raises interesting questions about how governments can or should meet their obligations to produce quality national statistics in an age of big data.

According to Andrew Russell’s follow-up story, Stats Canada plans to collect detailed customer information from Canada’s nine largest banks, including account balances, transaction data, and credit card and bill payments. It is unclear whether the collection has started.

As a national statistical agency, Statistics Canada is charged with the task of collecting and producing data that “ensures Canadians have the key information on Canada's economy, society and environment that they require to function effectively as citizens and decision makers.” Canadians are perhaps most familiar with providing census data to Statistics Canada, including more detailed data through the long form census. However, the agency’s data collection is not limited to the census.

Statistics Canada’s role is important, and the agency has considerable expertise in carrying out its mission and in protecting privacy in the data it collects. This is not to say, however, that Statistics Canada never makes mistakes and never experiences privacy breaches. One of the concerns, therefore, with this large-scale collection of frankly sensitive data is the increased risk of privacy breaches.

The controversial collection of detailed financial data finds its legislative basis in this provision of the Statistics Act:

13 A person having the custody or charge of any documents or records that are maintained in any department or in any municipal office, corporation, business or organization, from which information sought in respect of the objects of this Act can be obtained or that would aid in the completion or correction of that information, shall grant access thereto for those purposes to a person authorized by the Chief Statistician to obtain that information or aid in the completion or correction of that information. [My emphasis]

Essentially, it confers enormous power on Stats Canada to request “documents or records” from third parties. Non-compliance with a request is an offence under s. 32 of the Act, punishable on conviction by a fine of up to $1000. A 2017 amendment to the legislation removed the possibility of imprisonment for this offence.

In case you were wondering whether Canada’s private sector data protection legislation offers any protection when it comes to companies sharing customer data with Statistics Canada, rest assured that it does not. Paragraph 7(3)(c.1) of the Personal Information Protection and Electronic Documents Act provides that an organization may disclose personal information without the knowledge or consent of an individual where the disclosure is:

(c.1) made to a government institution or part of a government institution that has made a request for the information, identified its lawful authority to obtain the information and indicated that

[. . .]

(iii) the disclosure is requested for the purpose of administering any law of Canada or a province

According to the Global News story, Statistics Canada notified the Office of the Privacy Commissioner about its data collection plan and obtained the Commissioner’s advice. In his recent Annual Report to Parliament the Commissioner reported on Statistics Canada’s growing practice of seeking private sector data:

We have consulted with Statistics Canada (StatCan) on a number of occasions over the past several years to discuss the privacy implications of its collection of administrative data – such as individuals’ mobile phone records, credit bureau reports, electricity bills, and so on. We spoke with the agency about this again in the past year, after a number of companies contacted us with concerns about StatCan requests for customer data.

The Commissioner suggested that Stats Canada might consider collecting only data that has been deidentified at source, rather than detailed personal information. He also recommended an ongoing assessment of the necessity and effectiveness of such programs.
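To make the suggestion of deidentification at source more concrete, the processing might look roughly like the sketch below, applied before any data leaves the bank. This is purely illustrative: the field names and the salted-hash pseudonymization are my assumptions, and hashing direct identifiers is only one element of a real deidentification strategy (coarsening, aggregation and re-identification risk assessment would also be needed).

```python
import hashlib

# Secret salt held by the bank and never shared with the recipient,
# so the recipient cannot reverse tokens back to account numbers.
SALT = "institution-held-secret"

def deidentify_record(record):
    """Strip direct identifiers before transmission, keeping only the
    fields needed for statistical analysis (field names are assumed)."""
    token = hashlib.sha256((SALT + record["account_id"]).encode()).hexdigest()
    return {
        "account_token": token,                      # stable pseudonym
        "balance": record["balance"],                # analytic field retained
        "postal_prefix": record["postal_code"][:3],  # coarsened geography
        # name, full postal code and account number are dropped entirely
    }

raw = {"account_id": "123456", "name": "J. Doe",
       "balance": 1042.17, "postal_code": "K1A0B1"}
print(sorted(deidentify_record(raw)))  # ['account_token', 'balance', 'postal_prefix']
```

The stable token still allows Statistics Canada to link records about the same account across periods without ever receiving the account number itself.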

The Commissioner also indicated that one of the problems with the controversial data collection by Statistics Canada is its lack of openness. He stated: “many Canadians might be surprised to learn the government is collecting their information in this way and for this purpose.” While part of this lack of transparency lies in the decision not to be more upfront about the data collection, part of it lies in the fact that the legislation itself – while capable of being read to permit this type of collection – clearly does not expressly contemplate it. Section 13 was drafted in a pre-digital, pre-big data era. It speaks of “documents or records”, and not “data”. While it is possible to interpret it so as to include massive quantities of data, the original drafters no doubt contemplated a collection activity on a much more modest scale. If Section 13 really does include the power to ask any organization to share its data with Stats Canada, then it has become potentially limitless in scope. At the time it was drafted, the limits were inherent in the analogue environment. There was only so much paper Stats Canada could ask for, and only so much paper it had the staff to process. In addition, there was only so much data that entities and organizations collected because they experienced the same limitations. The digital era means not only that there is a vast and increasing amount of detailed data collected by private sector organizations, but that this data can be transferred in large volumes with relative ease, and can be processed and analyzed with equal facility.

Statistics Canada is not the only national statistics organization to be using big data to supplement and enhance its data collection and generation. In some countries where statistical agencies struggle with a lack of human resources and funding, big data from the private sector offer opportunities to meet the data needs of their governments and economies. Statistical agencies everywhere recognize the potential of big data to produce more detailed, fine-grained and reliable data about many aspects of the economy. For example, the United Nations maintains a big data project inventory that catalogues experiments by national statistical agencies around the world with big data analytics. Remember the cancellation of the long form census by the Harper government? This was not a measure to protect Canadians’ privacy by collecting less information; it was motivated by a belief that better and more detailed data could be sought using other means – including reliance on private sector data.

It may well be that Statistics Canada needs the power to collect digital data to assist in data collection programs that serve national interests. However, the legislation that authorizes such collection must be up-to-date with our digital realities. Transparency requires an amendment to the legislation that would specifically enable the collection and use of digital and big data from the private sector for statistical purposes. Debate over the scope and wording of such a provision would give both the public and the potential third party data sources an opportunity to identify their concerns. It would also permit the shaping of limits and conditions that are specific to the nature and risks of this form of data collection.

Late in the afternoon of Monday, October 15, 2018, Sidewalk Labs released a densely-packed slide-deck which outlined its new and emerging data governance plan for the Sidewalk Toronto smart city development. The plan was discussed by Waterfront Toronto’s Digital Strategy Advisory Panel at their meeting on Thursday, October 18. I am a member of that panel, and this post elaborates upon the comments I made at that meeting.

Sidewalk Labs’ new data governance proposal builds upon the Responsible Data Use Policy Framework (RDUPF) document released by Sidewalk Labs in May 2018. It is, however, far more than an evolution of that document – it is a different approach reflecting a different smart city concept. It is so different that Ann Cavoukian, advisor to Sidewalk Labs on privacy issues, resigned on October 19. The RDUPF had made privacy by design its core focus and promised the anonymization of all sensor data. Cavoukian cited as a reason for her resignation the fact that the new data governance framework contemplated that not all personal information would be deidentified.

Neither privacy by design nor data anonymization is a privacy panacea, and the RDUPF document had a number of flaws. One of them was that by championing deidentification of personal information as the key to responsible data use, it very clearly addressed privacy concerns relating to only a subset of the data that would inevitably be collected in the proposed smart city. In addition, by focusing on privacy by design, it did little to address the many other data governance issues the project faced.

The new proposal embraces a broader concept of data governance. It is cognizant of privacy issues but also considers issues of data control, access, reuse, and localization. In approaching data governance, Sidewalk is also proposing using a ‘civic data trust’ as a governance model. Sidewalk has made it clear that this is a work in progress and that it is open to feedback and comment. It received some at the DSAP meeting on Thursday, and more is sure to come.

My comments at the DSAP focused on two broad issues. The first was data and the second was governance. I prefaced my discussion of these by warning that in my view it is a mistake to talk about data governance using either of the Sidewalk Labs documents as a departure point. This is because these documents embed assumptions that need to be examined rather than simply accepted. They propose a different starting point for the data governance conversation than I think is appropriate, and as a result they unduly shape and frame that discussion.

Data

Both the RDUPF and the current data governance proposal discuss how the data collected by the Sidewalk Toronto development will be governed. However, neither document actually presents a clear picture of what those data are. Instead, each discusses only a subset of the data. The RDUPF discussed only depersonalized data collected by sensors. The new proposal discusses only what it defines as “urban data”:

Urban Data is data collected in a physical space in the city, which includes:

● Public spaces, such as streets, squares, plazas, parks, and open spaces

● Private spaces accessible to the public, such as building lobbies, courtyards, ground-floor markets, and retail stores

● Private spaces not controlled by those who occupy them (e.g. apartment tenants)

This is very clearly only a subset of smart cities data. (It is also a subset that raises a host of questions – but those will have to wait for another blog post.)

In my view, any discussion of data governance in the Sidewalk Toronto development should start with a mapping out of the different types of data that will be collected, by whom, for what purposes, and in what form. It is understood that this data landscape may change over time, but at least a mapping exercise may reveal the different categories of data, the issues they raise, and the different governance mechanisms that may be appropriate depending on the category. By focusing on deidentified sensor data, for example, the RDUPF did not address personal information collected in relation to the consumption of many services that will require identification – e.g., for billing or metering purposes. In the proposed development, what types of services will require individuals to identify themselves? Who will control such data? How will it be secured? What will the policies be with respect to disclosure to law enforcement without a warrant? What transparency measures will be in place? Will service consumption data also be deidentified and made available for research? In what circumstances? I offer this as an example of a different category of data that still requires governance, and that still needs to be discussed in the context of a smart cities development. This type of data would also fall outside the category of “urban data” in the second governance plan, making that plan only a piece of the overall data governance required, as there are many other categories of data that are not captured by “urban data”. The first step in any data governance exercise must be for all involved to understand what data is being collected, how, why, and by whom.
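One way to begin the mapping exercise described above is simply to build a structured inventory of data categories. The sketch below is a hypothetical illustration only; the categories, fields and governance labels are my own assumptions, not anything proposed by Sidewalk Labs.

```python
from dataclasses import dataclass

@dataclass
class DataCategory:
    """One row in a hypothetical smart-city data inventory."""
    name: str
    collected_by: str
    purpose: str
    identifiable: bool   # does collection require identifying individuals?
    governance: str      # which governance regime is assumed to apply

inventory = [
    DataCategory("pedestrian counts", "sensor operator", "planning",
                 False, "proposed 'urban data' trust"),
    DataCategory("utility billing records", "service provider", "billing",
                 True, "private sector privacy law (PIPEDA)"),
    DataCategory("municipal traffic cameras", "city", "traffic management",
                 True, "provincial/municipal privacy law"),
]

# Categories requiring identification fall outside a deidentified-only
# framework and need their own governance discussion:
print([c.name for c in inventory if c.identifiable])
# → ['utility billing records', 'municipal traffic cameras']
```

Even a toy inventory like this makes visible which data fall outside any single proposed framework, which is precisely the gap identified above.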

The importance of this is also made evident by the fact that between the RDUPF and the new governance plan, the very concept of the Sidewalk Toronto smart city seems to have changed. The RDUPF envisioned a city in which sensors were installed by Sidewalk and Sidewalk was committing to the anonymization of any collected personal information. In the new version, the model seems to be of the smart city as a technology platform on which any number of developers will be invited to build. As a result, the data governance model proposes an oversight body to provide approval for new data collection in public spaces, and to play some role in the sharing of the collected data if appropriate. This is partly behind the resignation of Ann Cavoukian. She objected to the fact that this model accepts that some new applications might require the collection of personal information and so deidentification could not be an upfront promise for all data collected.

The technology-platform model seems responsive to concerns that the smart city would effectively be subsumed by a single corporation. It allows other developers to build on the platform – and by extension to collect and process data. Yet from a governance perspective this is much messier. A single corporation can make bold commitments with respect to its own practices; it may be difficult or inappropriate to impose these on others. It also makes it much more difficult to predict what data will be collected and for what purposes. This does not mean that the data mapping exercise is not worthwhile – many kinds and categories of data are already foreseeable and mapping data can help to understand different governance needs. In fact, it is likely that a project this complex will require multiple data governance models.

Governance

The second point I tried to make in my 5 minutes at the Thursday meeting was about data governance. The new data governance plan raises more questions than it answers. One glaring issue seems to be the place for our already existing data governance frameworks. These include municipal and provincial Freedom of Information and Protection of Privacy Acts and PIPEDA. They may also include the City of Toronto’s open data policies and platforms. There are very real questions to be answered about which smart city data will be private sector data and which will be considered to be under the custody or control of a provincial or municipal government. Government has existing legal obligations about the management of data that are under its custody or control, and these obligations include the protection of privacy as well as transparency. A government that decides to implement a new data collection program (traffic cameras, GPS trackers on municipal vehicles, etc.) would be the custodian of this data, and it would be subject to relevant provincial laws. The role of Sidewalk Labs in this development challenges, at a very fundamental level, the understanding of who is ultimately responsible for the collection and governance of data about cities, their services and infrastructure. Open government data programs invite the private sector to innovate using public data. But what is being envisaged in this proposal seems to be a privatization of the collection of urban data – with some sort of ‘trust’ model put in place to soften the reality of that privatization.

The ‘civic data trust’ proposed by Sidewalk Labs is meant to be an innovation in data governance, and I am certainly not opposed to the development of innovative data governance solutions. However, the use of the word “trust” in this context feels wrong, since the model proposed is not a data trust in any real sense of the word. This view seems to be shared by civic data trust advocate Sean MacDonald in an article written in response to the proposal. It is also made clear in this post by the Open Data Institute which attempts to define the concept of a civic data trust. In fact, it is hard to imagine such an entity being created and structured without significant government involvement. This perhaps is at the core of the problem with the proposal – and at the root of some of the pushback the Sidewalk Toronto project has been experiencing. Sidewalk Labs is a corporation – an American one at that – and it is trying to develop a framework to govern vast amounts of data collected about every aspect of city life in a proposed development. But smart cities are still cities, and cities are public institutions created and structured by provincial legislation and with democratically elected councils. If data is to be collected about the city and its residents, it is important to ask why government is not, in fact, much more deeply implicated in any development of both the framework for deciding who gets to use city infrastructure and spaces for data collection, and what data governance model is appropriate for smart cities data.

A lawsuit filed in Montreal this summer raises novel copyright arguments regarding AI-generated works. The plaintiffs are artist Amel Chamandy and Galerie NuEdge Fine Arts (which sells and exhibits her art). They are suing artist Adam Basanta for copyright and trademark infringement. (The trademark infringement arguments are not discussed in this post). Mr Basanta is a world-renowned new media artist who experiments with AI in his work. (See the Globe and Mail story by Chris Hannay on this lawsuit here).

According to a letter dated July 4, filed with the court, Mr. Basanta’s current project is “to explore connections between mass technologies, using those technologies themselves.” He explains his process in a video which can be found here. Essentially, he has created what he describes as an “art-factory” that randomly generates images without human input. The images created are then “analyzed by a series of deep-learning algorithms trained on a database of contemporary artworks in economic and institutional circulation” (see artist’s website). The images used in the database of artworks are found online. Where the analysis finds a match of more than 83% between one of the randomly generated images and an image in the database, the randomly generated image is presented online with the percentage match, the title of the painting it matches, and the artist’s name. This information is also tweeted out. The image of the painting that matches the AI image is not reproduced or displayed on the website or on Twitter.
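As a rough illustration, the matching step described above can be sketched as a similarity comparison between feature vectors extracted from images. This is emphatically not Mr Basanta's actual code: the feature extraction, the cosine-similarity metric, and the interpretation of the 83% threshold are all assumptions made for illustration.

```python
import math

MATCH_THRESHOLD = 0.83  # the 83% threshold reported for the project

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_matches(generated, database):
    """Compare a generated image's feature vector against a database of
    artwork feature vectors; report any match above the threshold."""
    return [(title, round(cosine_similarity(generated, feats) * 100, 2))
            for title, feats in database.items()
            if cosine_similarity(generated, feats) > MATCH_THRESHOLD]

# Toy vectors; in practice these would come from a deep-learning model
# applied to the actual images.
db = {"Your World Without Paper": [0.9, 0.4, 0.1],
      "Some Other Work": [0.1, 0.2, 0.9]}
generated = [0.85, 0.45, 0.15]
print(find_matches(generated, db))
```

Note that under a pipeline like this, the database images themselves need only be processed once into feature vectors; it is that initial copying and processing that grounds the infringement claim discussed below.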

One of Mr Basanta’s images was an 85.81% match with a painting by Ms Chamandy titled “Your World Without Paper”. This information was reported on Mr Basanta’s website and Twitter accounts along with the machine-generated image which resulted in the match.

The copyright infringement allegation is essentially that “the process used by the Defendant to compare his computer generated images to Amel Chamandy’s work necessarily required an unauthorized copy of such a work to be made.” (Statement of Claim, para 30). Ms Chamandy claims statutory damages of up to $20,000 for the commercial use of her work. Mr Basanta, for his part, argues that there is no display of Ms Chamandy’s work, and therefore no infringement.

AI has been generating much attention in the copyright world. AI algorithms need to be ‘trained’ and this training requires that they be fed a constant supply of text, data or images, depending upon the algorithm. Rights holders argue that the use of their works in this way without consent is infringement. The argument is that the process requires unauthorized copies to be fed into the system for algorithmic analysis. Debates have raged in the EU over a text-and-data mining exception to copyright infringement which would make this type of use of copyright protected works acceptable so long as it is for research purposes. Other uses would require clearance for a fee. There has already been considerable debate in Europe over whether research is a broad enough basis for the exception and what activities it would include. If a similar exception is to be adopted in Canada in the next round of copyright reform, we will face similar challenges in defining its boundaries.

Of course, the Chamandy case is not the conventional text and data mining situation. The copied image is not used to train algorithms. Rather, it is used in an analysis to assess similarities with another image. But such uses are not unknown in the AI world. Facial recognition technologies match live captured images with stored face prints. In this case, the third party artwork images are like the stored face prints. It is AI, just not the usual text and data mining paradigm. This should also raise questions about how to draft exceptions or to interpret existing exceptions to address AI-related creativity and innovation.

In the US, some argue that the 'fair use' exception to infringement is broad enough to support text and data mining uses of copyright protected works since the resulting AI output is transformative. Canada's fair dealing provisions are less generous than US fair use, but it is still possible to argue that text and data mining uses might be 'fair'. Canadian law recognizes fair dealing for the purposes of research or private study, so if an activity qualifies as 'research' it might be fair dealing. The fairness of any dealing requires a contextual analysis. In this case the dealing might be considered fair since the end result only reports on similarities but does not reproduce any of the protected images for public view.

The problem, of course, with fair dealing defences is that each case turns on its own facts. The fact-dependent inquiry necessary for a fair dealing defence could be a major brake on innovation and creativity – either by dissuading uses out of fear of costly infringement claims or by driving up the cost of innovation by requiring rights clearance in order to avoid being sued.

The claim of statutory damages here is also interesting. Statutory damages were introduced in s. 38.1 of the Copyright Act to give plaintiffs an alternative to proving actual damage. For commercial infringements, statutory damages can range from $500 to $20,000 per work infringed; for non-commercial infringement the range is $100 to $5,000 for all infringements and all works involved. A judge’s actual award of damages within these ranges is guided by factors that include the need for deterrence, and the conduct of the parties. Ms Chamandy asserts that Mr Basanta’s infringement is commercial, even though the commercial dimension is difficult to see. It would be interesting to consider whether the enhancement of his reputation or profile as an artist or any increase in his ability to obtain grants would be considered “commercial”. Beyond the challenge of identifying what is commercial activity in this context, it opens a window into the potential impact of statutory damages in text and data mining activities. If such activities are considered to infringe copyright and are not clearly within an exception, then in Canada, a commercial text and data miner who consumes, say, 500,000 different images to train an algorithm might find themselves, even at the low end of the spectrum, liable for $250 million in statutory damages. Admittedly, the Act contains a clause that gives a judge the discretion to reduce an award of statutory damages if it is “grossly out of proportion to the infringement”. However, not knowing what a court might do or by how much the damages might be reduced creates uncertainty that can place a chill on innovation.
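The scale of that exposure is easy to make concrete. The quick calculation below simply multiplies the per-work statutory damages range for commercial infringement, as set out above, by a hypothetical number of works consumed; it is an arithmetic illustration, not legal advice.

```python
# Per-work statutory damages range for commercial infringement
# under s. 38.1 of the Copyright Act, as described in the text.
COMMERCIAL_LOW = 500
COMMERCIAL_HIGH = 20_000


def commercial_exposure(works_infringed):
    # Statutory damages for commercial infringement accrue per work,
    # so total exposure scales linearly with the number of works.
    return (works_infringed * COMMERCIAL_LOW,
            works_infringed * COMMERCIAL_HIGH)


low, high = commercial_exposure(500_000)
# At the very bottom of the range, 500,000 works already
# yields $250,000,000 in potential statutory damages.
```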

Although in this case, there may well be a good fair dealing defence, the realities of AI would seem to require either a clear set of exceptions to clarify infringement issues, or some other scheme to compensate creators which expressly excludes resort to statutory damages. The vast number of works that might be consumed to train an algorithm for commercial purposes makes statutory damages, even at the low end of the scale, potentially devastating and creates a chill.

 
