Monday, 02 March 2015 08:58

Back to the Future I: What Past Privacy Findings Tell Us About the Future of Big Data and Privacy

Written by Teresa Scassa

A long-past and largely forgotten ‘finding’* from the Office of the Privacy Commissioner of Canada offers important insights into the challenges that big data and big data analytics will pose for the protection of Canadians’ privacy and consumer rights.

Thirteen years ago, former Privacy Commissioner George Radwanski issued his findings on a complaint that had been brought against a bank. The complainant alleged that the bank had wrongfully denied her access to her personal information. The requirement to provide access is found in the Personal Information Protection and Electronic Documents Act (PIPEDA). The right of access also comes with a right to demand the correction of any errors in the personal information in the hands of the organization. This right is fundamentally important, and not just to privacy: without access to the personal information being used to inform decision-making, consumers have very little recourse of any kind against adverse or flawed decisions.

The complainant in this case had applied for and been issued a credit card by the bank. What she sought was access to the credit score that had been used to determine her entitlement to the card. The bank had relied upon two credit scores in reaching its decision. The first was the type produced by a credit reporting agency – in this case, Equifax. The second was an internal score generated by the bank using its own data and algorithm. The bank was prepared to release the former to the complainant, but refused to give her access to the latter. The essence of the complaint, therefore, was whether the bank had breached its obligations under PIPEDA to give her access to the personal information it held about her.

The Privacy Commissioner’s views on the interpretation and application of the statute in this case are worth revisiting thirteen years later, as big data analytics now fuel so much decision-making regarding consumers and their entitlement to or eligibility for a broad range of products and services. Credit reporting agencies are heavily regulated to ensure that decisions about credit-worthiness are made fairly and equitably, and to ensure that individuals have clear rights to access and to correct information in their files. For example, credit reporting legislation may limit the types of information and the data sources that credit reporting agencies may use in arriving at their credit scores. But big data analytics are now increasingly relied upon by all manner of organizations that are not regulated in the same way as credit reporting agencies. These analytics are used to make decisions of similar importance to consumers – including decisions about credit-worthiness. There are few limits on the data used to fuel these analytics, and there is little transparency in the process.

In this case, the bank justified its refusal to disclose its internal credit score on two main grounds. First, it argued that this information was not “personal information” within the meaning of PIPEDA because it was ‘created’ internally rather than collected from the consumer or any other source. On this reasoning, the bank did not have to provide access; in any event, the right of access was linked to the right to request correction, and information generated by a proprietary algorithm did not consist of “facts” open to correction.

The argument that generated information is not personal information is a dangerous one, as it could lead to a total failure of accountability under data protection laws. The Commissioner rejected this argument. In his view, it did not matter whether the information was generated or collected; nor did it matter whether it was subject to correction or not. The information was personal information because it related to the individual. He noted that “opinions” about an individual were still considered to be personal information, even though they are not subject to correction. This view of ‘opinions’ is consistent with subsequent findings and decisions under PIPEDA and comparable Canadian data protection laws. Thus, in the view of the Commissioner, the bank’s internally generated credit score was the complainant’s personal information and was subject to PIPEDA.

The bank’s second argument was more successful, and is problematic for consumers. The bank argued that releasing the credit score to the complainant would reveal confidential commercial information. Under s. 9(3)(b) of PIPEDA, an organization is not required to release personal information in such circumstances. The bank was not arguing that the complainant’s score itself was confidential commercial information; rather, what was confidential were the algorithms used to arrive at the score. The bank argued that these algorithms could be reverse-engineered from a relatively small sample of credit scores. A finding that such credit scores must be released to individuals would thus leave the bank open to a hypothetical scenario in which a rival organizes or pays 20 or so individuals to seek access to their internally generated credit scores, and then uses that set of scores to reconstruct the confidential algorithms. The Commissioner referred this issue to an expert on algorithms and concluded that “although an exact determination of a credit-scoring model was difficult and highly unlikely, access to customized credit scores would definitely make it easier to approximate a bank’s model.”
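
To see why the reverse-engineering concern is at least technically plausible, here is a deliberately simplified sketch in Python. Everything in it is an assumption for illustration – a purely linear model, twelve scoring factors, twenty recruited individuals – none of which comes from the finding itself:

    # Hypothetical: the internal score is a linear combination of k known
    # application attributes, weighted by a confidential vector.
    import numpy as np

    rng = np.random.default_rng(0)
    k = 12                                       # assumed number of scoring factors
    secret_weights = rng.uniform(-5, 5, size=k)  # the bank's confidential weights

    # A rival recruits n individuals; each knows their own application
    # attributes and obtains their customized score via an access request.
    n = 20
    X = rng.uniform(0, 1, size=(n, k))   # the recruits' attribute vectors
    scores = X @ secret_weights          # the scores the bank would release

    # Ordinary least squares recovers the weights whenever n >= k and the
    # attribute vectors are linearly independent.
    recovered, *_ = np.linalg.lstsq(X, scores, rcond=None)
    print(np.allclose(recovered, secret_weights))  # True: model approximated

Real scoring models are rarely this simple, which is presumably why the expert found an exact determination “difficult and highly unlikely” even while approximation from customized scores remained feasible.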

The Commissioner noted that under s. 9(3)(b) there has to be some level of certainty that the disclosure of personal information will reveal confidential commercial information before disclosure can be refused. In this case, the Commissioner indicated that he had “some difficulty believing that either competitors or rings of algorithmically expert fraud artists would go to the lengths involved.” He went on to say that “[t]he spectre of the banks falling under systematic assault from teams of loan-hungry mathematicians is simply not one I find particularly persuasive.” Notwithstanding this, he ruled in favour of the bank. He noted that other banks shared the same view as the respondent bank, and that competition in the banking industry was high. Since he had found it was technically possible to reverse-engineer the algorithm, he was of the view that he had to find that the release of the credit score would reveal confidential commercial information. He was satisfied with the evidence the bank supplied to demonstrate how closely guarded the credit-scoring algorithm was. He noted that in the UK and Australia, relatively new guidelines required organizations to provide only general information regarding why credit was denied.

The lack of transparency of the algorithms used in the big data environment becomes increasingly problematic the more such algorithms are used. Big data analytics can be used to determine credit-worthiness – and such determinations are made not just by banks but by all manner of companies that extend consumer credit through loans, don’t-pay-for-a-year deals, purchase-by-installment plans, store credit cards, and so on. They can also be used to determine who is entitled to special offers or promotions, for price discrimination (where some customers are offered better prices for the same products or services), and in a wide range of other contexts. Analytics may also be used by prospective employers, landlords or others whose decisions may have important impacts on people’s lives. Without algorithmic transparency, it might be impossible to know whether the assumptions, weightings or scoring factors are biased, influenced by sexism or racism (or other discriminatory considerations), or simply flawed.
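
The worry about hidden bias can be made concrete with a toy example – again, entirely hypothetical and not drawn from the finding. A scoring formula that never uses a protected attribute directly can still penalize it through a correlated proxy, and the bias is only easy to detect by someone who can inspect the weights or probe the scores at scale:

    # Hypothetical: a proxy feature (say, a postal-code band) correlated
    # with a protected attribute quietly carries that attribute's weight.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, size=n)          # hypothetical protected attribute
    income = rng.normal(50, 10, size=n)         # a legitimate scoring factor
    proxy = group + rng.normal(0, 0.3, size=n)  # proxy correlated with group

    # The formula never sees `group`, yet its weight on the proxy encodes it.
    score = 2.0 * income - 15.0 * proxy

    for g in (0, 1):
        print(f"group {g}: mean score {score[group == g].mean():.1f}")
    # Output shows roughly a 15-point gap between the two groups' means –
    # visible here only because we can see the formula and the group labels.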

There may be some comfort in the fact that, in this case, the Commissioner was allowed access to the scoring model. He stated that he found it innocuous – although it is not clear what kind of scrutiny he gave it. After all, his mandate extended only to decisions relating to the management of personal information, and did not extend to issues of discrimination. It is also worth noting that the Commissioner seemed to suggest that each case must be decided on its own facts, and that what the complainant stood to gain and the respondent stood to lose were relevant considerations. In this case, the complainant had not been denied credit, so in the Commissioner’s view there was little benefit to her in the release of the information to be weighed against the potential harm to the bank. Nevertheless, the decision raises a red flag around transparency in the big data context.

In the next week or so I will be posting a ‘Back to the Future II’ account of another, not quite so old, PIPEDA finding that is also significant in the big data era. Disturbingly, this decision eats away at Commissioner Radwanski’s conclusion on the issue of “personal information” as it relates to generated or inferred information about individuals. Stay tuned!



* Because the Privacy Commissioner of Canada has no order-making powers, he can only issue “findings” in response to complaints filed with the office. These findings are essentially opinions as to how the Act applies in the circumstances of the complaint. If a complaint is considered well-founded, the Commissioner can also make recommendations as to how the organization should correct its practices. To obtain binding orders or compensation, a complainant must first go through the complaints process and then take the matter to the Federal Court; few complainants do so. Thus, while findings are non-binding and set no precedent, they do provide some insight into how the Commissioner would interpret and apply the legislation.
