The (Canadian) Social Dilemma: calls for reform to privacy legislation

If you are anything like me, you have turned to Netflix more than once throughout the pandemic for a welcome distraction.

Recently, I watched Netflix’s “The Social Dilemma,” which focuses on the impact the rising use of social media has had, and will likely continue to have, on society. In particular, the film delves into how social media algorithms powered by artificial intelligence harvest the personal data of social media users. AI recognizes patterns and behaviours, and that information is used to target users not only with directed ads, but with content in general. An underlying theme of the film is that the data mining used by most social media platforms has so far gone largely unregulated.

In light of the rapid advancements in technology, the average person today is a user of many different apps and social media platforms: Facebook, Instagram, Uber, Uber Eats, LinkedIn, and TikTok, to name just a few.

Out of curiosity, I wondered whether any steps have been taken to update Canadian privacy legislation to account for the changes in the ways in which our society now uses (and arguably, is used by) social media.

As it turns out, the Office of the Privacy Commissioner of Canada recently released key recommendations for regulating artificial intelligence. The recommendations arise out of a public consultation that was launched earlier this year.

The OPC’s news release explains that it is calling for legislation that will “help reap the benefits of AI while upholding individuals’ fundamental right to privacy,” and has recommended amending the Personal Information Protection and Electronic Documents Act (PIPEDA) to, among other things:

  • allow personal information to be used for new purposes towards responsible AI innovation and societal benefits;
  • authorize these uses within a rights-based framework that would entrench privacy as a human right;
  • strengthen accountability by requiring a demonstration of privacy compliance upon request by the regulator;
  • empower the OPC to issue binding orders and proportional financial penalties to incentivize compliance with the law; and
  • require organizations to design AI systems from their conception in a way that protects privacy and human rights.

The OPC also published a separate report, authored by Ignacio Cofone, Assistant Professor at McGill University’s Faculty of Law, that further informs the recommendations for reform; it can be found here.

While the OPC recognizes that AI has the potential to be used for significant societal good, such as detecting and analyzing patterns in medical images to assist doctors in diagnosing illness, improving energy efficiency, and providing students with individualized learning, it has also recognized that uses of AI that rely on individuals’ personal information pose “serious consequences for their privacy”. The technological advancements we have witnessed in the past 20 years have created the need to “develop regulations to curb their dangers while reaping their benefits.”

Thanks for reading!

Sydney Osmar