Teresa Scassa - Blog

Monday, 09 February 2026 07:15

Canada's AI Strategy: Some Reflections

Innovation, Science and Economic Development Canada (ISED) has released the results of the consultation it carried out in developing the latest iteration of its AI Strategy. The consultation had two components. The first was a Task Force on AI – a group of experts tasked with consulting their peers to develop their views. The experts were assigned to specified themes (research and talent; adoption across industry and government; commercialization of AI; scaling our champions and attracting investment; building safe AI systems and public trust in AI; education and skills; infrastructure; and security). The second was a broad public consultation inviting either answers to an online survey or emailed free-form submissions. This post offers some reflections on the process and its outcomes.

1. The controversy over the consultation

The consultation process generated controversy. One reason was the sudden launch and short timelines. Submissions from the public were sought within a month, and Task Force members were initially expected to consult their peers and report within the month following the launch of the consultation. In the end, the Task Force reports were not published until early February – the timelines were simply unrealistic. There was, however, no extension for the public consultation. The Summary of Inputs refers to the consultation as “the largest public consultation in the history of Innovation Science and Economic Development Canada, generating important ideas, questions and legitimate concerns to take into consideration in the drafting of the strategy” (at page 3). The response signals how important the issue is to Canadians and how much they want to be heard. One has to wonder how many submissions ISED might have received with longer timelines. Short deadlines favour those with time and resources. Civil society organizations, small businesses, and individuals with full workloads (domestic and professional) find short timelines particularly challenging. Running a “sprint” consultation favours participation from some groups over others.

Another point of controversy was the lack of diversity on the Task Force. The government was roundly criticized for putting together a Task Force with no representation from Canada’s Black communities, particularly given the risks of bias and discrimination posed by AI technologies. A letter to this effect was sent to the Minister of AI, the Prime Minister, and the leaders of Canada’s other political parties by a large group of Black academics and scholars. Following this, a Black representative – a law student – was hurriedly added to the Task Force.

An open letter to the Minister of Artificial Intelligence from civil society organizations and individuals also denounced the consultation, arguing that the deadline should be extended and that the Task Force should be more equitably representative. The letter noted that civil society groups, human rights experts, and others were absent from the Task Force panel. The group was also critical of the online survey for being biased towards particular outcomes. It indicated that it would be boycotting the consultation, and has now set up its own People’s Consultation on AI, which is accepting submissions until March 15, 2026.

These controversies highlight a major stumble in developing the AI Strategy. The lack of consultation around the failed Artificial Intelligence and Data Act in Bill C-27, and the criticism that this generated, should have been a lesson to ISED on how important the issues raised by AI are to the public and how much they want to be heard. The Summary makes no mention of the controversy the consultation generated. Nevertheless, the criticisms and pushback are surely an important part of the outcome of this process.

2. Some thoughts on Transparency

ISED has not only published a summary of the results of its consultation and of the Task Force reports; it has also published the raw data from the consultation, as well as the individual Task Force reports, on its open government portal. This seems to be in line with a new commitment to greater transparency around AI – in the fall of 2025, ISED also published a beta version of a register of AI in use within the federal public service. These are positive developments, although it is worth watching to see whether tools like the AI register are refined, improved and updated.

ISED was also transparent about its use of generative AI to process the results of the consultation. Page 16 of the summary document explains how it used (unspecified) LLMs to create a “classification pipeline” to “clean survey responses and categorize them into a structured set of themes and subthemes”. The report also describes the use of human oversight to ensure that there was “at least a 90% success rate in categorizing responses into specific intents”. ISED explains that it consulted research experts about its methodology and that the methods used conform to the recent Treasury Board guide on the use of generative artificial intelligence. The declaration on the use of AI indicates that the output was used to produce the final report, which is apparently a combination of human authorship and extracts from the AI-generated content.
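The report does not publish the pipeline itself, but the general shape of such a system is easy to sketch. The following is a minimal illustration and emphatically not ISED’s code: the `classify` function here is a trivial keyword matcher standing in for an LLM call, and the themes, sample data, and review-sampling rate are all invented for the example.

```python
import random
import re

# Illustrative themes only; the real pipeline used ISED's own theme/subtheme set.
THEMES = ["research and talent", "adoption", "safety and trust", "infrastructure"]

def clean(response: str) -> str:
    """Normalize whitespace before classification."""
    return re.sub(r"\s+", " ", response).strip()

def classify(response: str) -> str:
    """Stand-in for an LLM call: assign the first theme mentioned in the text."""
    text = response.lower()
    for theme in THEMES:
        if theme in text:
            return theme
    return "uncategorized"

def run_pipeline(responses, sample_rate=0.1, seed=0):
    """Categorize cleaned responses and draw a random sample for human review,
    mirroring the human-oversight step described in the summary document."""
    labelled = [(clean(r), classify(clean(r))) for r in responses]
    rng = random.Random(seed)
    k = max(1, int(len(labelled) * sample_rate))
    review_sample = rng.sample(labelled, k)
    return labelled, review_sample

labelled, sample = run_pipeline([
    "We need more investment in AI research and talent.",
    "Adoption across industry should be a priority.",
    "Please just fix the potholes.",
])
```

In a real deployment, the human-reviewed sample is what supports a claim like a “90% success rate”: reviewers check the sampled labels and the measured agreement generalizes (with sampling error) to the full set.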

It would frankly be astonishing if generative AI tools have not already been used in other contexts to process submissions to government consultations (likely without being disclosed). As a result, the level of transparency about their use here is important. This is illustrated by my colleague Michael Geist’s criticisms of the results of ISED’s use of AI. He ran the Task Force reports through two (identified) LLMs and noted differences between his generated analysis and ISED’s. He argues that “the government had not provided the public with the full picture” and posits that ISED softened the results to suggest a consensus that is not actually present. Putting a particular spin on things is not exclusively a result of the use of AI tools – humans do this all the time. However, explaining how results were arrived at using a technological system can create a misleading impression of objectivity and scientific rigour, which underscores the importance of Prof. Geist’s critique.

It is worth noting that it is the level of transparency provided by ISED that allowed this analysis and critique. The immediacy of the publication of the data on which the report was based matters as well: no prolonged access-to-information process was necessary here. This approach should become standard government practice.

3. AI Governance/Regulation

The consultation covered many themes, and the AI Strategy is clearly intended to be about more than just how to regulate or govern AI. In fact, one could be forgiven for thinking that the AI Strategy will be about everything except governance and regulation, given the limited expertise from these areas on the Task Force. The Task Force’s focus areas emphasized the adoption, investment in, and scaling of AI innovation, as well as strengthening sovereign infrastructure. Among the focus areas, only “public trust, skills and safety” gives a rather offhand nod to governance and regulation.

That said, reading between the lines of the summary of inputs, Canadians are concerned about AI governance and regulation. This can be seen in statements such as “Respondents…urged Canada to prioritize responsible governance” (p. 7). Respondents also called for “meaningful regulation” (p. 8) and reminded the government of the need to “modernize regulations” (p. 8). There were also references to “accountable and robust governance” (p. 8) and to “strict regulation, penalties for non-compliance and frameworks that uphold Canadian values” (p. 8) when it comes to generative AI. There were also calls for “strict liability laws” (p. 9), and concerns expressed over a “lack of regulation and accountability” (p. 9).

One finds these snippets throughout the summary document, which suggests that meaningful regulation was a matter of real concern for respondents. However, the “Conclusions and next steps” section of the report mentions only the need for “regulatory clarity” and streamlined regulatory frameworks – neither of which is a bad thing, but neither of which is really about new regulation or governance. Instead, the report concludes that: “There was general consensus among participants that public trust depends on transparency, accountability, and robust governance, supported by certification standards, independent audits and AI literacy programs” (p. 15, my emphasis). While those tools are certainly part of a regulatory toolkit for AI, on their own and outside of a framework that builds in accountability and oversight, they are basically soft-law and self-regulation. This feels like a rather convenient consensus around where the government was likely heading in the first place.

 

Published in Privacy
Saturday, 29 November 2025 14:42

Canada launches its beta AI Register

Canada’s federal government has just released an early version of the AI Register it promised after its election earlier this year.

An AI Register is an important transparency tool – it will help researchers and the broader public understand what AI-enabled tools are in use in the federal public sector and provide basic information about them. The government also intends the register to be a resource for the public sector itself – allowing different departments and agencies to better see what others are doing, so as to avoid duplication and to learn from each other.

The information accompanying the Register (which is published on Canada’s open government portal) indicates that this is a “Minimum Viable Product”. This means that it is “an early version with only basic features and content that is used to gather feedback.” It will be interesting to see how it develops over time.

One interesting aspect of the register is that it states that it was “assembled from existing sources of information, including Algorithmic Impact Assessments, Access to Information requests, responses to Parliamentary Questions, Personal Information Banks, and the GC Service Inventory.” Since it contains 409 entries at the time of writing, and since there are only a few dozen published Algorithmic Impact Assessments (AIAs), this suggests that the database was compiled largely using sources other than AIAs. The reference to access to information requests suggests that some of the data may have been gathered using the TAG Register, laboriously compiled by Joanna Redden and her team at Western University. The sources for the TAG Register also included access to information requests and responses to questions by Members of Parliament. Prior to the development of the federal AI Register, the TAG Register was probably the most important source of information about public sector AI in Canada. The TAG Register is not made redundant by the new AI Register – it contains additional information about the systems, derived from its source materials.

The federal AI Register sets out the name of each system and provides a description. It indicates who the primary users are, and which government organization is responsible for it. Other fields provide data about whether the system is designed in-house or is furnished by a vendor (and if so, which one). It also indicates whether the system is in development, in production, or retired. There is a brief description of the system’s capabilities, some information about the data sources used, and an indication of whether it uses personal data. The register also indicates whether users are given notice of use. There is a brief description of the expected outcomes of the system use.
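For readers who think in data terms, the fields described above map naturally onto a simple record structure. The sketch below is my own: the field names are invented for illustration, and the actual register’s column names and value formats may differ.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegisterEntry:
    """Hypothetical record loosely mirroring the federal AI Register's fields."""
    name: str
    description: str
    primary_users: str
    responsible_org: str
    vendor: Optional[str]       # None if the system was designed in-house
    status: str                 # e.g. "in development", "in production", "retired"
    capabilities: str
    data_sources: str
    uses_personal_data: bool
    notice_given: bool          # whether users are notified of the system's use
    expected_outcomes: str

# Invented example entry for illustration only.
example = RegisterEntry(
    name="Correspondence triage assistant",
    description="Sorts incoming public correspondence by topic.",
    primary_users="Program staff",
    responsible_org="Example Department",
    vendor=None,
    status="in development",
    capabilities="text classification",
    data_sources="internal correspondence",
    uses_personal_data=True,
    notice_given=False,
    expected_outcomes="faster routing of inquiries",
)
```

A structured schema like this is also what makes the register useful to researchers: consistent fields allow filtering (say, all in-production systems that use personal data without notice) rather than reading entries one by one.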

All in all, it’s a good start, and clearly the developers of this database are open to feedback. (For example, I would like to see a link to the Algorithmic Impact Assessment under the Directive on Automated Decision-Making, if such an assessment has been carried out).

This is an important transparency initiative, and it will be a good source of data for researchers interested in public sector AI. It is also an interesting model that provincial governments might want to consider as they also roll out AI use across their public sectors.

 

Published in Privacy

 

Research for this article was made possible with the support of the Heinrich Boell Foundation Washington, DC.

This piece was originally published by Heinrich Boell Stiftung as part of their series on the broad impacts of the COVID-19 pandemic. The original publication can be found here.

 

 

A strong sense of regional sovereignty in the Canadian health care system may lead to different choices for technologies to track and contain the spread of the coronavirus. A multiplicity of non-interoperable apps could call their effectiveness into question and could create regional differences in approaches to privacy.

By Teresa Scassa

Canada’s national capital Ottawa is located in the province of Ontario but sits on the border with Quebec. As soon as restrictions on movement and activities due to the coronavirus begin to lift, the workforce will once again flow in both directions across a river that separates the two provinces. As with other countries around the world, Canada is debating how to use technology to prevent a second wave of infections. Yet as it stands right now, there is a chance that commuters between Ontario and Quebec could have different contact-tracing apps installed on their phone to track their movements, and that these apps might not be fully interoperable.

Innovation in contact-tracing apps is happening in real time, and amid serious concerns about privacy and security. In Canada, many provinces are on the threshold of adopting contact-tracing apps. Canadian app developers, building on technologies adopted elsewhere, will be offering solutions that rely on decentralized, centralized, or partially centralized data storage. At least one Canadian-built app proposes broader functionalities, including AI-enhancement. And, as is so often the case in Canada, its federal structure could lead to a multiplicity of different apps being adopted across the country. Similar challenges may be faced in the United States.

One app to rule them all?

Canada is a federal state, with 10 provinces and 3 territories. Under its constitution, health care is a matter of provincial jurisdiction, although the federal government regulates food and drug safety. It has also played a role in health care through its spending power, often linking federal health spending to particular priorities. However, when it comes to on-the-ground decision-making around the provision of health care services and public health on a regional level, the provinces are sovereign. Canadian federalism has been tested over the years by Quebec’s independence movement, and more recently by dissatisfaction from Western provinces, particularly Alberta. These tensions mean that co-operation and collaboration are not always top of mind.

When it comes to the adoption of contact-tracing apps, there is the distinct possibility in Canada that different provinces will make different choices. On May 1, Alberta became the first Canadian province to launch a contact-tracing app. There have been reports, for example, that New Brunswick is considering a contact-tracing app from a local app developer, and the government of Newfoundland and Labrador has also indicated it is considering an app. Other governments contemplating contact-tracing apps include Manitoba and Saskatchewan. The possibility that multiple different apps will be adopted across the country is heightened by reports that one municipal entity – Ottawa Public Health – may also have plans to adopt its own version of a contact-tracing app.

Although different contact-tracing apps may not seem like much of an issue with most Canadians under orders to stay home, as restrictions begin to loosen, the need for interoperability will become more acute. If non-interoperable contact-tracing apps were to be adopted in Ontario and Quebec (or even in Ontario, Quebec and Ottawa itself), their individual effectiveness would be substantially undermined. Similar situations could play out in border areas across the country, as well as more generally as Canadians begin to travel across the country.

On May 5, 2020, Doug Ford, the premier of Ontario, Canada’s most populous province, called for a national strategy for contact tracing apps in order to prevent fragmentation. His call for cohesion no doubt recognizes the extent to which Canada’s sometimes shambolic federalism could undermine collective public health goals. Yet with so many provinces headed in so many different directions, often with local app developers as partners, it remains to be seen what can be done to harmonize efforts.

Privacy and contact tracing in Canada

The international privacy debate around contact-tracing apps has centred on limiting the ability of governments to access data that reveals individuals’ patterns of movement and associations. Attention has focused on the differences between centralized and decentralized storage of data collected by contact-tracing apps. With decentralized data storage, all data is locally stored on the app user’s phone; public health authorities are able to carry out contact-tracing based on app data only through a complex technological process that keeps user identities and contacts obscure. This model would be supported by the Google/Apple API, and seems likely to be adopted in many EU states. These apps will erase contact data after it ceases to be relevant, and will cease to function at the end of the pandemic period.
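In broad strokes (and greatly simplified relative to the actual Google/Apple specification), the decentralized model can be sketched as follows: each phone derives rotating identifiers from a secret daily key that never leaves the device, and matching against the keys later published by diagnosed users happens entirely on the phone. The key sizes, interval counts, and function names below are illustrative assumptions, not the real protocol.

```python
import hashlib
import hmac
import os

def daily_key() -> bytes:
    """A fresh random key generated on the phone each day; never leaves the device."""
    return os.urandom(16)

def ephemeral_id(key: bytes, interval: int) -> bytes:
    """Rotating identifier broadcast over Bluetooth. Derived with a keyed hash,
    so observers cannot link one interval's ID to the next without the key."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

def local_match(observed_ids, published_keys, intervals=range(144)):
    """Runs on the user's phone: re-derive IDs from keys voluntarily published
    by diagnosed users, and check for overlap with locally stored observations."""
    derived = {ephemeral_id(k, i) for k in published_keys for i in intervals}
    return [eid for eid in observed_ids if eid in derived]

# Illustrative flow: a phone records two broadcasts from "Alice"; if Alice is
# later diagnosed and publishes her key, the match is computed locally.
alice_key = daily_key()
broadcasts = [ephemeral_id(alice_key, 3), ephemeral_id(alice_key, 77)]
matches = local_match(broadcasts, [alice_key])
```

The privacy property the public debate turns on is visible in the sketch: the health authority only ever sees the daily keys of users who test positive and consent to upload; everyone else's contact history stays on their own device.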

By contrast, with centralized data storage, data about app registrants and their contacts is stored on a central server accessible to public health authorities. A compromise position is found with apps in which data is initially stored only on a user’s phone. If a user tests positive for COVID-19, their data is shared with authorities who then engage in contact-tracing. As an additional privacy protection, express consent can be required before users upload their data to central storage. This is a feature of both the Australian and Alberta models.

Decentralized storage has gained considerable traction in the EU where there are deep concerns about function creep and about the risk that user contact data could be used to create ‘social graphs’ of individuals. The European privacy debates are influenced by the General Data Protection Regulation (GDPR) and its shift toward greater individual control over personal data. In Canada, although the federal privacy commissioner has been advancing a ‘privacy as a human right’ approach to data protection, and although there has been considerable public frustration over the state of private sector data protection, little public sentiment seems to have galvanized around contact-tracing apps. Although Canadians have reacted strongly against perceived overcollection of personal data by public sector bodies in the past, in the pandemic context there seems to be a greater public willingness to accept some incursions on privacy for the public good. What incursions will be acceptable remains to be seen. The federal, provincial and territorial privacy commissioners (with the notable exception of the Alberta commissioner whose hands have been somewhat tied by the launch of the Alberta app) have issued a joint statement on the privacy requirements to be met by contact-tracing apps.

The Alberta contact-tracing app has received the cautious endorsement of the province’s Privacy Commissioner who described it as a “less intrusive” approach (presumably than full centralized storage). She noted that she had reviewed the Privacy Impact Assessment (PIA) (a study done to assess the privacy implications of the app), and was still seeking assurances that collected data would not be used for secondary purposes. She also indicated that the government had committed to the publication of a summary of the Privacy Impact Assessment, although no date was provided for its eventual publication.

Given the attention already paid to privacy in Europe and elsewhere, and given that Australia’s similar app was launched in conjunction with the public release of its full PIA, the Alberta launch should set off both privacy and transparency alarms in Canada. In a context in which decisions are made quickly and in which individuals are asked to sacrifice some measure of privacy for the public good, sound privacy decision-making, supported by fully transparent PIAs and an iterative process for rectifying privacy issues as they emerge, seems a minimum requirement. The release of the Alberta app has also created a gap in the common front of privacy commissioners, and raises questions about the interoperability of contact-tracing apps across Canada. It remains to be seen whether Canada’s federal structure will lead not just to different apps in different provinces, but to different levels of transparency and privacy as well.

 

Published in Privacy

Clearview AI and its controversial facial recognition technology have been making headlines for weeks now. In Canada, the company is under joint investigation by federal and provincial privacy commissioners. The RCMP is being investigated by the federal Privacy Commissioner after having admitted to using Clearview AI. The Ontario privacy commissioner has expressed serious concerns about reports of Ontario police services adopting the technology. In the meantime, the company is dealing with a reported data breach in which hackers accessed its entire client list.

Clearview AI offers facial recognition technology to ‘law enforcement agencies.’ The term is not defined on their site, and at least one newspaper report suggests that it is defined broadly, with private security (for example university campus police) able to obtain access. Clearview AI scrapes images from publicly accessible websites across the internet and compiles them in a massive database. When a client provides them with an image of a person, they use facial recognition algorithms to match the individual in the image with images in its database. Images in the database are linked to their sources which contain other identifying information (for example, they might link to a Facebook profile page). The use of the service is touted as speeding up all manner of investigations by facilitating the identification of either perpetrators or victims of crimes.
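Clearview AI’s system is proprietary, but the basic mechanics of this kind of matching are well understood: scraped images are converted into numerical “embeddings,” and a query photo is identified by finding stored embeddings close to it, each linked back to its source page. The toy sketch below uses invented three-dimensional vectors and hypothetical URLs; real systems use embeddings with hundreds of dimensions and approximate nearest-neighbour indexes rather than a linear scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical database: face embeddings mapped to the scraped source pages
# that contain the identifying information.
DATABASE = {
    "https://example.com/profile/1": [0.9, 0.1, 0.2],
    "https://example.com/profile/2": [0.1, 0.8, 0.3],
}

def match(query_embedding, threshold=0.9):
    """Return source pages whose stored embedding is close to the query."""
    hits = [(url, cosine(query_embedding, emb)) for url, emb in DATABASE.items()]
    return [url for url, score in hits if score >= threshold]
```

The sketch makes the privacy stakes concrete: the match result is not just “a face” but a link back to the source profile, which is precisely what turns a photo into an identification.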

This post addresses a number of different issues raised by the Clearview AI controversy, framed around the two different sets of privacy investigations. The post concludes with additional comments about transparency and accountability.

1. Clearview AI & PIPEDA

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) applies to the collection, use and disclosure of personal information by private sector organizations engaged in commercial activities. Although Clearview AI is a U.S. company, PIPEDA will still apply if there is a sufficient nexus to Canada. In this case, the service clearly captures data about Canadians, and the facial recognition services are marketed to Canadian law enforcement agencies. This should be enough of a connection.

The federal Privacy Commissioner is joined in his investigation by the Commissioners of Quebec, B.C. and Alberta. Each of these provinces has its own private sector data protection law that applies to organizations that collect, use and disclose personal information within the borders of the province. The joint investigation signals the positive level of collaboration and co-operation that exists between privacy commissioners in Canada. However, as I explain in an earlier post, the relevant laws are structured so that only one statute applies to a particular set of facts. This joint investigation may therefore raise jurisdictional questions similar to those raised in the Facebook/Cambridge Analytica joint investigation, which were not satisfactorily resolved in that case. It is a technical issue, but one that is relevant and interesting from a privacy governance perspective.

The federal Commissioner’s investigation will focus on whether Clearview AI complied with PIPEDA when it collected, used and disclosed the personal information which populates its massive database. Clearview AI’s position on the legality of its actions is clearly based on U.S. law. It states on its website that: “Clearview searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts.” In the U.S., there is much less in the way of privacy protection for information in ‘public’ space. In Canada however, the law is different. Although there is an exception in PIPEDA (and in comparable provincial private sector laws) to the requirement of consent for the collection, use or disclosure of “publicly available information”, this exception is cast in narrow terms. It is certainly not broad enough to encompass information shared by individuals through social media. Interestingly, in hearings into PIPEDA reform, the House of Commons ETHI Committee at one point seemed swayed by industry arguments that PIPEDA should be amended to include websites and social media within the exception for “publicly available personal information”. In an earlier post, I argued that this was a dangerous direction in which to head, and the Clearview AI controversy seems to confirm this. Sharing photographs online for the purposes of social interaction should not be taken as consent to use those images in commercial facial recognition technologies. What is more, the law should not be amended to deem it to be so.

To the extent, then, that the database contains personal information of Canadians that was collected without their knowledge or consent, the conclusion will likely be that there has been a breach of PIPEDA. The further use and disclosure of personal information without consent will also amount to a breach. An appropriate remedy would include ordering Clearview AI to remove from its database all personal information of Canadians that was collected without consent. Unfortunately, the federal Commissioner does not have order-making powers. If the investigation finds a breach of PIPEDA, it will still be necessary to go to Federal Court to ask that court to hold its own hearing, reach its own conclusions, and make an order. This is what is currently taking place in relation to the Facebook/Cambridge Analytica investigation, and it makes somewhat of a mockery of our privacy laws. Stronger enforcement powers are on the agenda for legislative reform of PIPEDA, and it is to be hoped that something will be done about this before too long.

 

2. The Privacy Act investigation

The federal Privacy Commissioner has also launched an investigation into the RCMP’s now admitted use of Clearview AI technology. The results of this investigation should be interesting.

The federal Privacy Act was drafted for an era in which government institutions generally collected the information they needed directly from individuals. Governments, in providing all manner of services, would compile significant amounts of data, and public sector privacy laws set the rules for governance of this data. These laws were not written for our emerging context, in which government institutions increasingly rely on data analytics and data-fuelled AI services provided by the private sector. In the Clearview AI situation, it is not the RCMP that has collected a massive database of images for facial recognition. Nor has the RCMP contracted with a private sector company to build this service for it. Instead, it is using Clearview AI’s services to make presumably ad hoc inquiries, seeking identity information in specific instances. It is not clear whether or how the federal Privacy Act will apply in this context. If the focus is on the RCMP’s ‘collection’ and ‘use’ of personal information, it is arguable that this is confined to the details of each separate query, and not to the use of facial recognition on a large scale. The Privacy Act might simply not be up to addressing how government institutions should interact with these data-fuelled private sector services.

The Privacy Act is, in fact, out of date and clearly acknowledged to be so. The Department of Justice has been working on reforms and has attempted some initial consultation. But the Privacy Act has not received the same level of public and media attention as has PIPEDA. And while we might see reform of PIPEDA in the not too distant future, reform of the Privacy Act may not make it onto the legislative agenda of a minority government. If this is the case, it will leave us with another big governance gap for the digital age.

If the Privacy Act is not to be reformed any time soon, it will be very interesting to see what the Privacy Commissioner’s investigation reveals. The interpretation of section 6(2) of the Privacy Act could be of particular importance. It provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” In 2018 the Supreme Court of Canada issued a rather interesting decision in Ewert v. Canada, which I wrote about here. The case involved a Métis man’s challenge to the use of actuarial risk-assessment tests by Correctional Services Canada to make decisions related to his incarceration. He argued that the tests were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12). The Corrections and Conditional Release Act contained language very similar to s. 6(2) of the Privacy Act. The Supreme Court of Canada ruled that this language placed an onus on the CSC to ensure that all of the data it relied upon in its decision-making about inmates met that standard – including the data generated from the use of the assessment tools. This ruling may have very interesting implications not just for the investigation into the RCMP’s use of Clearview’s technology, but also for public sector use of private sector data-fueled analytics and AI where those tools are based upon personal data. The issue is whether, in this case, the RCMP is responsible for ensuring the accuracy and reliability of the data generated by a private sector AI system on which they rely.

One final note on the use of Clearview AI’s services by the RCMP – and by other police services in Canada. A look at Clearview AI’s website reveals its own defensiveness about its technologies, which it describes as helping “to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.” Police service representatives have also responded defensively to media inquiries, and their admissions of use come with very few details. If nothing else, this situation highlights the crucial importance of transparency, oversight and accountability in relation to these technologies, which have privacy and human rights implications. Transparency can help to identify and examine concerns, and to ensure that the technologies are accurate, reliable and free from bias. Policies need to be put in place to reflect clear decisions about what crimes or circumstances justify the use of these technologies (and which ones do not). Policies should specify who is authorized to make the decision to use this technology and according to what criteria. There should be record-keeping and an audit trail. Keep in mind that technologies of this kind, if unsupervised, can be used to identify, stalk or harass strangers. It is not hard to imagine someone using this technology to identify a person seen with an ex-spouse, or even to identify an attractive woman seen at a bar. They can also be used to identify peaceful protestors. The potential for misuse is enormous. Transparency, oversight and accountability are essential if these technologies are to be used responsibly. The sheepish and vague admissions of use of Clearview AI technology by Canadian police services are a stark reminder that there is much governance work to be done around such technologies in Canada, even beyond privacy law issues.

Published in Privacy

An interesting case from Quebec demonstrates the tension between privacy and transparency when it comes to public registers that include personal information. It also raises issues around ownership and control of data, including the measures used to prevent data scraping. The way the litigation was framed means that not all of these questions are answered in the decision, leaving some lingering public policy questions.

Quebec’s Enterprise Registrar oversees a registry, in the form of a database, of all businesses in Quebec, including corporations, sole proprietorships and partnerships. The Registrar is empowered to do so under the Act respecting the legal publicity of enterprises (ALPE), which also establishes the database. The Registrar is obliged to make this register publicly accessible, including remotely by technological means, and basic use of the database is free of charge.

The applicant in this case is OpenCorporates, a U.K.-based organization dedicated to ensuring total corporate transparency. According to its website, OpenCorporates has created and maintains “the largest open database of companies in the world”. It currently has data on companies located in over 130 jurisdictions. Most of this data is drawn from reliable public registries. In addition to providing a free, searchable public resource, OpenCorporates also sells structured data to financial institutions, government agencies, journalists and other businesses. The money raised from these sales finances its operations.

OpenCorporates gathers its data using a variety of means. In 2012, it began to scrape data from Quebec’s Enterprise Register. Data scraping involves the use of ‘bots’ to visit and automatically harvest data from targeted web pages. It is a common data-harvesting practice, widely used by journalists, civil society actors and researchers, as well as companies large and small. As common as it may be, it is not always welcome, and there has been litigation in Canada and around the world about the legality of data scraping practices, chiefly in contexts where the defendant is attempting to commercialize data scraped from a business rival.

In 2016 the Registrar changed the terms of service for the Enterprise Register. These changes essentially prohibited web scraping activities, as well as the commercialization of data extracted from the site. The new terms also prohibit certain types of information analyses; for example, they bar searches for data according to the name and address of a particular person. All visitors to the site must agree to the Terms of Service. The Registrar also introduced technological measures to make it more difficult for bots to scrape its data.

Opencorporates Ltd. c. Registraire des entreprises du Québec is not a challenge to the Register’s new, restrictive terms and conditions. Instead, because the Registrar also sent OpenCorporates a cease and desist letter demanding that it stop using the data it had collected prior to the change in Terms of Service, OpenCorporates sought a declaration from the Quebec Superior Court that it was entitled to continue to use this earlier data.

The Registrar acknowledged that nothing in the ALPE authorizes it to control uses made of any data obtained from its site. Further, until it posted the new terms and conditions for the site, nothing limited what users could do with the data. The Registrar argued that it had the right to control the pre-2016 data because of the purpose of the Register. It argued that the ALPE established the Register as the sole source of public data on Quebec businesses, and that the database was designed to protect the personal information that it contained (i.e. the names and addresses of directors of corporations). For example, it does not permit extensive searches by name or address. OpenCorporates, by contrast, permits the searching of all of its data, including by name and address.

The court characterized the purpose of the Register as being to protect individuals and corporations that interact with other corporations by assuring them easy access to identity information, including the names of those persons associated with a corporation. An electronic database gives users the ability to make quick searches from a distance. Quebec’s Act to Establish a Legal Framework for Information Technology provides that where a document contains personal information and is made public for particular purposes, any extensive searches of the document must be limited to those purposes. This law places the onus on the person responsible for providing access to the document to put in place appropriate technological protection measures. Under the ALPE, the Registrar can carry out more comprehensive searches of the database on behalf of users, who must make their request to the Registrar. Even then, the ALPE prohibits the Registrar from using the name or address of an individual as a basis for a search. According to the Registrar, a member of the public has the right to know, once they have the name of a company, with whom they are dealing; they do not have the right to determine the number of companies to which a physical person is linked. By contrast, this latter type of search is one that could be carried out using the OpenCorporates database.

The court noted that it was not its role to consider the legality of OpenCorporates’ database, nor to consider the use made by others of that database. It also observed that individuals concerned about potential privacy breaches facilitated by OpenCorporates might have recourse under Quebec privacy law. Justice Rogers’ focus was on the specific question of whether the Registrar could prevent OpenCorporates from using the data it gathered prior to the change of terms of service in 2016. On this point, the judge ruled in favour of OpenCorporates. In her view, OpenCorporates’ gathering of this data was not in breach of any law that the Registrar could rely upon (leaving aside any potential privacy claims by individuals whose data was scraped). Further, she found that nothing in the ALPE gave the Registrar a monopoly on the creation and maintenance of a database of corporate data. She observed that the use made by OpenCorporates of the data was not contrary to the purpose of the ALPE, which was to create greater corporate transparency and to protect those who interacted with corporations. She ruled that nothing in the ALPE obligated the Registrar to eliminate all privacy risks. The names and addresses of those involved with corporations are public information; the goal of the legislation is to facilitate digital access to the data while at the same time placing limits on bulk searches. Nothing in the ALPE prevented another organization from creating its own database of Quebec businesses. Since OpenCorporates did not breach any laws or terms of service in collecting the information between 2012 and 2016, nothing prevented it from continuing to use that information in its own databases. Justice Rogers issued a declaration to the effect that the Registrar was not permitted to prevent OpenCorporates from publishing and distributing the data it collected from the Register prior to 2016.

While this was a victory for OpenCorporates, it did not do much more than ensure its right to continue to use data that will become increasingly dated. There is perhaps some value in the Court’s finding that the existence of a public database does not, on its own, preclude the creation of derivative databases. However, the decision leaves some important questions unanswered. In the first place, it alludes to but offers no opinion on the ability to challenge the inclusion of the data in the OpenCorporates database on privacy grounds. While a breach of privacy argument might be difficult to maintain in the case of public data regarding corporate ownership, it is difficult to predict how such an argument might play out in court. This is far less sensitive data than that involved in the scraping of court decisions litigated before the Federal Court in A.T. v. Globe24hr.com; there is a public interest in making the specific personal information available in the Registry; and the use made by OpenCorporates is far less exploitative than in Globe24hr. Nevertheless, the privacy issues remain a latent difficulty. Overall, the decision tells us little about how to strike an appropriate balance between the values of transparency and privacy. The legislation and the Registrar’s approach are designed to make it difficult to track corporate ownership or involvement across multiple corporations. Information with low privacy value and a strong public dimension receives rigorous protection, with transparency weakened as a result. It is worth noting that another lawsuit against the Register may be in the works. It is reported that the CBC is challenging the decision of the Registrar to prohibit searches by names of directors and managers of companies as a breach of the right to freedom of expression.

Because the terms of service were not directly at issue in the case, there is also little to go on with respect to the impact of such terms. To what extent can terms of service limit what can be done with publicly accessible data made available over the Internet? The recent U.S. case of hiQ Labs Inc. v. LinkedIn Corp. raises interesting questions about freedom of expression and the right to harvest publicly accessible data. This and other important issues remain unaddressed in what is ultimately an interesting but unsatisfying court decision.

 

Published in Privacy

In October 2016, the data analytics company Geofeedia made headlines when the California chapter of the American Civil Liberties Union (ACLU) issued the results of a major study which sought to determine the extent to which police services in California were using social media data analytics. These analytics were based upon geo-referenced information posted by ordinary individuals to social media websites such as Twitter and Facebook. Information of this kind is treated as “public” in the United States because it is freely contributed by users to a public forum. Nevertheless, the use of social media data analytics by police raises important civil liberties and privacy questions. In some cases, users may not be aware that their tweets or posts contain additional meta data including geolocation information. In all cases, the power of data analytics permits rapid cross-referencing of data from multiple sources, permitting the construction of profiles that go well beyond the information contributed in single posts.

The extent to which social media data analytics are used by police services is difficult to assess because there is often inadequate transparency both about the actual use of such services and the purposes for which they are used. Through a laborious process of filing freedom of information requests the ACLU sought to find out which police services were contracting for social media data analytics. The results of their study showed widespread use. What they found in the case of Geofeedia went further. Although Geofeedia was not the only data analytics company to mine social media data and to market its services to government authorities, its representatives had engaged in email exchanges with police about their services. In these emails, company employees used two recent sets of protests against police as examples of the usefulness of social media data analytics. These protests were those that followed the death in police custody of Freddie Gray, a young African-American man who had been arrested in Baltimore, and the shooting death by police of Michael Brown, an eighteen-year-old African-American man in Ferguson, Missouri. By explicitly offering services that could be used to monitor those who protested police violence against African Americans, the Geofeedia emails aggravated a climate of mistrust and division, and confirmed a belief held by many that authorities were using surveillance and profiling to target racialized communities.

In a new paper, just published in the online, open-access journal SCRIPTed, I use the story around the discovery of Geofeedia’s activities and the backlash that followed to frame a broader discussion of police use of social media data analytics. Although this paper began as an exploration of the privacy issues raised by the state’s use of social media data analytics, it shifted into a paper about transparency. Clearly, privacy issues – as well as other civil liberties questions – remain of fundamental importance. Yet, the reality is that without adequate transparency there simply is no easy way to determine whether police are relying on social media data analytics, on what scale and for what purposes. This lack of transparency makes it difficult to hold anyone to account. The ACLU’s work to document the problem in California was painstaking and time-consuming, as was a similar effort by the Brennan Center for Justice, also discussed in this paper. And, while the Geofeedia case provided an important example of the real problems that underlie such practices, it only came to light because Geofeedia’s employees made certain representations by email instead of in person or over the phone. A company need only direct that email not be used for these kinds of communications for the content of these communications to disappear from public view.

My paper examines the use of social media data analytics by police services, and then considers a range of different transparency issues. I explore some of the challenges to transparency that may flow from the way in which social media data analytics are described or characterized by police services. I then consider transparency from several different perspectives. In the first place I look at transparency in terms of developing explicit policies regarding social media data analytics. These policies are not just for police, but also for social media platforms and the developers that use their data. I then consider transparency as a form of oversight. I look at the ways in which greater transparency can cast light on the activities of the providers and users of social media data and data analytics. Finally, I consider the need for greater transparency around the monitoring of compliance with policies (those governing police or developers) and the enforcement of these policies.

A full text of my paper is available here under a CC Licence.

Published in Privacy

As part of Right to Know week, I participated in a conference organized by Canada’s Office of the Information Commissioner. My panel was asked to discuss Bill C-58, an Act to amend the Access to Information Act. I have discussed other aspects of this bill here and here. Below are my thoughts on the Commissioner’s order-making powers under that Bill.

Bill C-58, the Act to amend the Access to Information Act will, if passed into law, give the Information Commissioner order-making powers. This development has been called for repeatedly over the years by the Commissioner as well as by access to information advocates. Order-making powers transform the Commissioner’s recommendations into requirements; they provide the potential to achieve results without the further and laborious step of having to go to the Federal Court. This is, at least, the theory. For many, the presence of order-making powers is one of the strengths of C-58, a Bill that has otherwise been criticized for not going far enough to reform a badly outdated access to information regime.

Before one gets too excited about the order-making powers in Bill C-58, however, it is worth giving them a closer look. The power is found in a proposed new s. 36.1, which reads:

36.‍1 (1) If, after investigating a complaint described in any of paragraphs 30(1)‍(a) to (d.‍1), the Commissioner finds that the complaint is well-founded, he or she may make any order in respect of a record to which this Part applies that he or she considers appropriate, including requiring the head of the government institution that has control of the record in respect of which the complaint is made

(a) to disclose the record or a part of the record; and

(b) to reconsider their decision to refuse access to the record or a part of the record.

Although this appears promising, there is a catch. Any such order will not take effect until after the expiry of certain periods of time. The first of these is designed to allow the head of the institution to ask the Federal Court to review “the matter that is the subject of the complaint.” The second time period is to allow third parties (for example, someone whose personal information or confidential commercial information might be affected by the proposed order) or the federal Privacy Commissioner to apply to the Federal Court for a review. (The reason why the Privacy Commissioner might be seeking a review is the subject of an earlier post here).

The wording of these provisions makes it clear that recourse to the Federal Court is neither an appeal of the Commissioner’s order, nor an application for judicial review. Instead, the statute creates a right to request a hearing de novo before the Federal Court on “the matter that is the subject of the complaint”. As we know from experience with the Personal Information Protection and Electronic Documents Act, such a proceeding de novo does not require any deference to be given to the Commissioner’s report, conclusions or order.

One need only compare these order-making powers with those of some of the Commissioner’s provincial counterparts to see how tentative the drafters of Bill C-58 have been. Alberta’s Freedom of Information and Protection of Privacy Act states simply “An order made by the Commissioner under this Act is final.”(s. 73) British Columbia’s statute takes an approach which at first glance looks similar to what is in C-58. Section 59 provides:

59. (1) Subject to subsection (1.1), not later than 30 days after being given a copy of an order of the commissioner, the head of the public body concerned or the service provider to whom the order is directed, as applicable, must comply with the order unless an application for judicial review of the order is brought before that period ends.

Like C-58, s. 59 of B.C.’s Freedom of Information and Protection of Privacy Act provides for a delay in the order’s taking effect depending on whether the head of the institution seeks to challenge it. However, unlike C-58, the head of the institution must seek judicial review of the order (not the matter more generally). Judicial review is based on the record that was before the original adjudicator. It is also a process that requires some deference to be shown to the Commissioner.

A report on the modernization of Canada’s access to information regime compared the current ombuds model with the order-making model. It found that the order making model was preferable for a number of cogent reasons. Two of these were:

  • It gives a clear incentive to institutions to apply exemptions only where there is sufficient evidence to support non-disclosure and then put this evidence before the adjudicator, as judicial review before the Court is based on the record that was before the adjudicator.
  • The grounds on which the order can be set aside are limited and the institution cannot introduce new evidence or rely on new exemptions, as it is the adjudicator’s, and not the institution’s, decision that is under review before the Court.

These are very sound reasons for moving to an order-making model. Unfortunately, the model provided in Bill C-58 does not provide these advantages. Because it allows for a hearing de novo, there is no incentive to put everything before the adjudicator – new evidence and arguments can be introduced before the Federal Court. This will do nothing to advance the goals of accountability and transparency; it might even help to obstruct them.

Published in Privacy

Toronto Star journalist Theresa Boyle has just won an important victory for access to information rights and government transparency – one that is likely to be challenged before the Ontario Court of Appeal. On June 30, 2017, three justices of the Ontario Divisional Court unanimously upheld an adjudicator’s order that the Ministry of Health and Long-Term Care disclose the names, annual billing amounts and fields of medical specialization of the 100 top-billing physicians in Ontario. The application for judicial review of the order was brought by the Ontario Medical Association, along with many of the doctors on the disputed list (the Applicants).

The amount that the Ontario Health Insurance Program (OHIP) pays physicians for services rendered is government information. Under the Freedom of Information and Protection of Privacy Act (FOIPPA), the public has a right of access to government information – subject to specific exceptions that serve competing public interests. One of these is privacy – a government institution can refuse to disclose information if it would reveal personal information. The Ministry had been willing to disclose the top 100 amounts billed to OHIP, but it refused to disclose the names of the doctors or some of the areas of specialization (which might lead to their identification) on the basis that this was the physicians’ personal information. The Adjudicator disagreed and found that the billing information, including the doctors’ names, was not personal information. Instead, it identified the physicians in their professional capacity. FOIPPA excludes this sort of information from the definition of personal information.

The Applicants accepted that the physicians were named in the billing records in their professional capacity. However, they argued that when those names were associated with the gross amounts, this revealed “other personal information”. In other words, they argued that the raw billing information did not reflect the business overhead expenses that physicians had to pay from their earnings. As a result, this information, if released, would be misinterpreted by the public as information about their net incomes. They argued that this converted it into “other personal information relating to the individual” (s. 2(1)(h)). How much doctors bill OHIP should be public information. The idea that the possibility that such information might be misinterpreted could be a justification for refusal to disclose it is paternalistic. It also has the potential to stifle access to information. The argument deserved the swift rejection it received from the court.

The Applicants also argued that the adjudicator erred by not following earlier decisions of the Office of the Information and Privacy Commissioner (OIPC) that had found that the gross billing amounts associated with physician names constituted personal information. Adjudicator John Higgins ruled that “Payments that are subject to deductions for business expenses are clearly business information.” (at para 18) The Court observed that the adjudicator was not bound to follow earlier OIPC decisions. Further, the issue of consistency could be looked at in two ways. As the adjudicator himself had noted, the OIPC had regularly treated information about the income of non-medical professionals as non-personal information subject to disclosure under the FOIPPA, but for some reason had treated physician-related information differently. Thus, while one could argue that the adjudicator’s decision was inconsistent with earlier decisions about physician billing information, it was entirely consistent with decisions about monies paid by government to other professionals. The Court found no fault with the adjudicator’s approach.

The Applicants had also argued that Ms Boyle “had failed to establish a pressing need for the information or how providing it to her would advance the objective of transparency in government.” (para 31). The court gave this argument the treatment it deserved – they smacked it down. Justice Nordheimer observed that applicants under the FOIPPA are not required to provide reasons why they seek information. Rather, the legislation requires that information of this kind “is to be provided unless a privacy exception is demonstrated.” (at para 32) Justice Nordheimer went on to note that under access to information legislation, “the public is entitled to information in the possession of their governments so that the public may, among other things, hold their governments accountable.” He stated that “the proper question to be asked in this context, therefore, is not “why do you need it?” but rather is “why should you not have it.”” (at para 34).

This decision of the Court is to be applauded for making such short work of arguments that contained little of the public interest and a great deal of private interest. Transparency within a publicly-funded health care system is essential to accountability. Kudos to Theresa Boyle and the Toronto Star for pushing this matter forward. The legal costs of $50,000 awarded to them make it clear that transparency and accountability often do not come cheaply or without significant effort. And those costs continue to mount as the issues must now be hammered out again before the Ontario Court of Appeal.

Published in Privacy

How does one balance transparency with civil liberties in the context of election campaigns? This issue is at the core of a decision just handed down by the Supreme Court of Canada.

B.C. Freedom of Information and Privacy Association v. Attorney-General (B.C.) began as a challenge by the appellant organization to provisions of B.C.’s Election Act that required individuals or organizations who “sponsor election advertising” to register with the Chief Electoral Officer. Information on the register is publicly available. The underlying public policy goal is to allow the public to see who is sponsoring advertising campaigns during the course of elections. The Supreme Court of Canada easily found this objective to be “pressing and substantial”.

The challenge brought by the B.C. Freedom of Information and Privacy Association (BCFIPA) was based on the way in which the registration requirement was framed in the Act. The Canada Elections Act also contains a registration requirement, but the requirement is linked to a spending threshold. In other words, under the federal statute, those who spend more than $500 on election advertising are required to register; others are not. The B.C. legislation is framed instead in terms of a general registration requirement for all sponsors of election advertising. BCFIPA’s concern was that this would mean that any individual who placed a handmade sign in their window, who wore a t-shirt with an election message, or who otherwise promoted their views during an election campaign would be forced to register. Not only might this chill freedom of political expression in its own right, it would raise significant privacy issues for individuals since they would have to disclose not just their names, but their addresses and other contact information in the register. Thus, the BCFIPA sought to have the registration requirement limited by the Court to only those who spent more than $500 on an election campaign.

The problem in this case was exacerbated by the position taken by B.C.’s Chief Electoral Officer. In a 2010 report to the B.C. legislature, he provided his interpretation of the application of the legislation. He expressed the view that it did not “distinguish between those sponsors conducting full media campaigns and individuals who post handwritten signs in their apartment windows.” (at para 19). This interpretation of the Election Act was accepted by both the trial judge and at the Court of Appeal, and it shaped the argument before those courts as well as their decisions.

The Supreme Court of Canada took an entirely different approach. They interpreted the language “sponsor election advertising” to mean something other than the expression of political views by individuals. In other words, the statute applied only to those who sponsored election advertising – i.e., those who paid for election advertising to be conducted or who received such services as a contribution. The Court was of the view that the public policy behind registration requirements was generally sound. It found that a legislature could mitigate the impact on freedom of expression by either setting a monetary threshold to trigger the requirement (as is the case at the federal level) or by defining sponsorship to exclude individual expression (as was the case in B.C.). While it is true that the B.C. statute could still capture organized activities involving expenditures of less than $500, and might thus have some limiting effect, the Court found that this would not be significant for a number of reasons, and that such impacts were easily reconcilable with the benefits of the registration scheme.

The decision of the Supreme Court of Canada will be useful in clarifying the scope and impact of the Election Act and in providing guidance for similar statutes. It should be noted, however, that the case travelled to the Supreme Court of Canada at great cost both to BCFIPA and to the taxpayer because of either legislative inattention to the need to clarify the scope of the legislation or an over-zealous interpretation of the statute by the province’s Chief Electoral Officer. The situation highlights the need for careful attention to be paid at the outset of such initiatives to the balance that must be struck between transparency and other competing values such as civil liberties and privacy.

 

Published in Privacy

The federal government has just released for public comment its open government plan for 2016-2018. This is the third such plan since Canada joined the Open Government Partnership in 2012. The two previous plans were released by the Conservative government, and were called Canada’s Action Plan on Open Government 2012-2014 and Canada’s Action Plan on Open Government 2014-2016. This most recent plan is titled Canada’s New Plan on Open Government (“New Plan”). The change in title signals a change in approach.

The previous government structured its commitments around three broad themes: Open Data, Open Information and Open Dialogue. It is fair to say that it was the first of these themes that received the greatest attention. Under the Conservatives there were a number of important open data initiatives: the government developed an open data portal, an open government licence (modeled on the UK Open Government Licence), and a Directive on Open Government. It also committed to funding the Open Data Exchange (ODX) (a kind of incubator hub for open data businesses in Canada), and supported a couple of national open data hackathons. Commitments under Open Information were considerably less ambitious. While important improvements were made to online interfaces for making access to information requests, and while more information was provided about already filled ATIP requests, it is fair to say that improving substantive access to government information was not a priority. Open dialogue commitments were also relatively modest.

Canada’s “New Plan” is considerably different in style and substance from its predecessors. This plan is structured around four broad themes: open by default; fiscal transparency; innovation, prosperity and sustainable development; and engaging Canadians and the world. Each theme comes with a number of commitments and milestones, and each speaks to an aspirational goal for open government, better articulating why this is an initiative worth an investment of time and resources.

Perhaps because there was so great a backlash against the previous government’s perceived lack of openness, the Liberals ran on an election platform that stressed openness and transparency. The New Plan reflects many of these election commitments. As such, it is notably more ambitious than the previous two action plans. The commitments are both deeper (for example, the 2014-2016 action plan committed to a public database disclosing details of all government contracts over $10,000; the New Plan commits to revealing details of all contracts over $1), and more expansive (with the government committing to new openness initiatives not found in earlier plans).

One area where the previous government faced considerable criticism (see, for example, Mary Francoli’s second review of Canada’s open government commitments) was in respect of the access to information regime. That government’s commitments under “open information” aimed to improve access to information processes without addressing substantive flaws in the outdated Access to Information Act. The new government’s promise to improve the legislation is up front in the New Plan. Its first commitment is to enhance access to information through reforms to the legislation. According to the New Plan, these include order-making powers for the Commissioner, extending the application of the Access to Information Act to the Prime Minister and his Ministers’ Offices, and mandatory 5-year reviews of the legislation. Although these amendments would be a positive step, they fall short of those recommended by the Commissioner. It will also be interesting to see whether everything on this short list comes to pass (order-making powers in particular are something to watch here). The House of Commons Standing Committee on Access to Information, Privacy and Ethics has recently completed hearings on this legislation, and what actually emerges from that process will bear watching. As many cynics (realists?) have observed, it is much easier for opposition parties to be in favour of open and transparent government than it is for parties in power. Whether the Act gets the makeover it requires remains to be seen.

One of the interesting features of this New Plan is that many of the commitments are ones that go to supporting the enormous cultural shift that is required for a government to operate in a more open fashion. Bureaucracies develop strong cultures, often influenced by long-cherished policies and practices. Significant change often requires more than just a new policy or directive; the New Plan contains commitments for the development of clear guidelines and standards for making data and information open by default, as well as commitments to training and education within the civil service, performance metrics, and new management frameworks. While not particularly ‘exciting’, these commitments are important and they signal a desire to take the steps needed to effect a genuine cultural shift within government.

The New Plan identifies fiscal transparency as an overarching theme. It contains several commitments to improve fiscal transparency, including more extensive and granular reporting of information on departmental spending, greater transparency of budget data and of fiscal analysis, and improved openness of information around government grants and other contributions. The government also commits to creating a single portal for Canadians who wish to search for information on Canadian businesses, whether they are incorporated federally or in one of the provinces or territories.

On the theme of Innovation, Prosperity and Sustainable Development, the New Plan also reflects commitments to greater openness in relation to federal science activities (a sore point with the previous government). It also builds upon a range of commitments that were present in previous action plans, including the use of the ODX to stimulate innovation, the development of open geospatial data, the alignment of open data at all levels of government in Canada, and the implementation of the Extractive Sector Transparency Measures Act. The New Plan also makes commitments to show leadership in supporting openness and transparency around the world.

The government’s final theme is “Engaging Canadians and the World”. This is the part where the government addresses how it plans to engage civil society. It plans to disband the Advisory Panel established by the previous government (of which I was a member). While the panel constituted a broad pool of expertise on which the government could draw, it was significantly under-utilized, and clearly this government plans to try something new. They state that they will “develop and maintain a renewed mechanism for ongoing, meaningful dialogue” between the government and civil society organizations – whatever that means. Clearly, the government is still trying to come up with a format or framework that will be most effective.

The government also commits in rather vague terms to fostering citizen participation and engagement with government on open government initiatives. It would seem that the government will attempt to “enable the use of new methods for consulting and engaging Canadians”, and will provide support and resources to government departments and agencies that require assistance in doing so. The commitments in this area are inward-looking – the government seems to acknowledge that it needs to figure out how to encourage and enhance citizen engagement, but at the same time is not sure how to do so effectively.

In this respect, the New Plan offers perhaps a case in point. This is a detailed and interesting plan that covers a great deal of territory and that addresses many issues that should be of significant concern to Canadians. It was released on June 16, with a call for comments by June 30. Such a narrow window of time in which to comment on such a lengthy document does not encourage engagement or dialogue. While the time constraints may be externally driven (by virtue of OGP targets and deadlines), and while there has been consultation in the lead up to the drafting of this document, it is disappointing that the public is not given more time to engage and respond.

For those who are interested in commenting, it should be noted that the government is open to comments/feedback in different forms. Comments may be made by email, or they can be entered into a comment box at the bottom of the page where the report is found. These latter comments tend to be fairly short and, once they pass through moderation, are visible to the public.

