Five key elements of Canada’s new Online Harms Act

 

Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. Below, Schwartz Reisman Institute researchers take us on a tour through key aspects of the bill, including its taxonomy of harms, new expectations and requirements for social media platforms, and new kinds of protections for children and youth online.


Recognizing that Canadians should expect to be safe in online environments, just as we expect to be safe in our homes and communities, Canada’s federal government has released the latest draft of its online harms bill, otherwise known as Bill C-63. The bill, the culmination of several years of consultations with stakeholders, international experts, and Canadian citizens, was presented by Justice Minister Arif Virani on February 26, 2024, following an unsuccessful attempt at legislation in 2021.

The bill proposes changes to the Criminal Code of Canada and the Canadian Human Rights Act: it recommends stiffer sentences for hate speech-related offenses and takes a new approach to combating hate speech by classifying it as discrimination under the Canadian Human Rights Act.

However, most of the bill is devoted to the Online Harms Act, an important piece of new legislation that creates a new administrative apparatus, imposes new obligations on social media companies and online platforms, and addresses important issues like non-consensual intimate imagery and safety for children online.

Let’s look at five key elements of the new act. 

1. The act creates three new institutions

The Digital Safety Commission is tasked with administering and enforcing the act, including investigating complaints. The commission can also conduct investigations into potential violations of the act, issue compliance orders that are enforceable by court order, and impose substantial monetary penalties. Members of the commission are appointed by cabinet, with the chairperson requiring approval by a full vote of Parliament.

The Digital Safety Ombudsperson performs roles similar to those of other government ombudspersons, such as providing support to the general public regarding their rights under the act and advocating for the public interest in policy discussions.

Finally, the Digital Safety Office provides administrative support to both the commission and the ombudsperson. The bill also introduces a new method of funding these regulatory institutions: cabinet is empowered to levy charges on the operators of social media services to fund the operations of the commission, ombudsperson, and office.

2. The act proposes to regulate seven kinds of online harm

  1. Intimate content shared without consent

  2. Content that sexually victimizes a child or revictimizes a survivor

  3. Content that induces a child to harm themselves

  4. Content used to bully a child

  5. Content that foments hatred

  6. Content that incites violence

  7. Content that incites violent extremism or terrorism 

The first two categories focus on sexual content, attempting to legislate against the creation and distribution of so-called “revenge pornography” and child sexual exploitation material. The third and fourth categories centre on children’s physical and mental health online, specifically addressing content that advocates self-harm, disordered eating, or suicide, and content that aims to threaten, intimidate, or humiliate a child. Finally, the fifth, sixth, and seventh categories revolve around hatred, violence, and terrorism: content such as hate speech and content encouraging people to commit acts of physical violence or property damage.

In the previous version of the bill, the government outlined five categories of online harms and proposed a 24-hour time frame for social media companies to remove harmful content. This drew criticism over potential constraints on freedom of expression, given that such short timelines might result in the removal of content that does not warrant it. This time around, the revised bill limits the 24-hour removal requirement to two categories: non-consensual sexual content and child sexual exploitation material, which must be made inaccessible within 24 hours of a complaint being filed.

3. The act offers two methods for removal of harmful content

The act defines platforms as social media sites, live-streaming platforms, and “user-uploaded adult content” services. Private communications, such as internet messaging, are generally exempt. The bill places several duties on platforms. For example, operators of social media services have a duty to act responsibly, including implementing “adequate measures” to mitigate the risk of exposure to harmful content. This involves submitting a digital safety plan to the Digital Safety Commission of Canada. Another major duty imposed on social media operators is the protection of children.

A third duty concerns the removal of harmful online content, which can proceed through one of two routes: obligations imposed directly on large online platforms, or a complaint to the commission. However, these methods only pertain to some, not all, of the harms listed under the act.

Duties of online platforms: Operators also have a duty to make certain content inaccessible. As mentioned above, for content that sexually victimizes a child or revictimizes a survivor, or intimate content communicated without consent, the operator has 24 hours after a complaint to make the content inaccessible to anyone in Canada.

Complaints to the commission: Anyone can make a complaint to the commission about content that sexually victimizes a child or revictimizes a survivor, or intimate content communicated without consent. The commission must either dismiss the complaint or order the online platform operator to make the content inaccessible to anyone in Canada, pending a final determination.

Companies that don’t comply could face steep penalties, with fines of up to $10 million or six per cent of global revenue. Violating an order of the commission constitutes an offense attracting fines of up to $25 million or eight per cent of global revenue. 
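To make the arithmetic concrete, here is a minimal sketch, assuming each ceiling works out to the greater of the flat amount and the stated percentage of gross global revenue; the operator’s revenue figure below is purely hypothetical and the bill’s exact formula should be checked against its text:

```python
def penalty_cap(global_revenue: float, flat_cap: float, revenue_share: float) -> float:
    """Illustrative sketch only: assumes the ceiling is the greater of a flat
    dollar cap and a fixed share of gross global revenue."""
    return max(flat_cap, revenue_share * global_revenue)

# Hypothetical operator with $2 billion in gross global revenue.
revenue = 2_000_000_000

# Non-compliance penalty: up to $10 million or 6% of global revenue.
print(penalty_cap(revenue, 10_000_000, 0.06))  # 120000000.0

# Offence of violating a commission order: up to $25 million or 8% of global revenue.
print(penalty_cap(revenue, 25_000_000, 0.08))  # 160000000.0
```

For a large operator, the revenue-based cap would typically dominate, which appears to be the point of tying penalties to global revenue rather than a fixed amount.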

4. The act creates new protections for children online

Bill C-63 responds to longstanding calls to make the digital environment safer for children, and identifies three types of content-based harms that impact children. 

The first harm is content that induces a child to harm themselves, defined as content that encourages self-harm, eating disorders, or suicide, or that counsels a child to commit or engage in these acts.

The second harm is content that sexually victimizes a child or revictimizes a survivor. Seven detailed examples are provided, including visual representations, written material, and audio recordings that explicitly depict, describe, or represent sexual activity, suggested sexual touching or exposure, and degrading physical violence.

The third harm is content used to bully a child, referring to content that can seriously impact a child’s mental or physical health, or otherwise threaten, intimidate or humiliate the child. It must be “reasonably suspected” that this kind of content was communicated for these purposes.  

One particularly interesting element is that the bill imposes on platforms a duty to protect children through design features, although we have yet to see what these features would be. Beyond parental controls, it’s expected this will cover designs and algorithms that stimulate ‘addictive’ behaviours through persuasive design features created for inherently commercial purposes. These include reward and gamification strategies (notifications, nudges, summonses, ‘likes’) and stickiness features that supply an endless array of autoplayable content. These protections would work alongside the privacy protections in Bill C-27, assuming both become law.

5. The act addresses the unique harms posed by deepfakes and automated communication

The act recognizes the harms posed by deepfakes—synthetic, hyper-realistic audio or visual content created through artificial intelligence. Deepfakes have raised unique concerns, including the authenticity of online content, the potential to defame reputations and steal identities, and the undermining of national security and democratic integrity. The bill does restrict the spread of deepfakes, but only in the context of pornography, as a form of non-consensual communication of intimate content.

The bill also requires operators of social media services to label harmful content that has been repeatedly communicated by automated systems, like spambots. Spambots can cause serious harm by communicating inaccurate information to large numbers of people. By imposing a labelling requirement on such content, the government is taking steps to combat this harm while recognizing that a more drastic remedy, such as mandatory content removal, could constitute censorship and infringe rights to free speech and expression.

Where do we go from here?

The Online Harms Act builds substantial regulatory infrastructure aimed at bolstering online safety. It recognizes that harms and violence—including incitement to violence, bullying, discriminatory harassment, and sexual violence—can and do proliferate on the internet. In particular, the bill offers multiple forms of recourse for people who have experienced these kinds of harms, and attempts to proactively limit the amount of harmful content distributed over the internet.

However, some important questions about the bill remain unanswered. Minister Virani made clear that the bill seeks to appropriately balance very real concerns about online harms with the need to uphold fundamental freedoms of speech and expression. For example, the question of what constitutes “adequate measures” to mitigate the risk of exposure to harmful content will need to be balanced with users' rights to express themselves. 

Finally, it is worth noting that though the law is called the Online Harms Act, its subject matter is largely pre-existing real-world harms—like incitement to violence, bullying, harassment, and sexual violence—that just happen to be taking place online. The bill does not attempt to comprehensively address the wide range of harms posed by artificial intelligence and other emerging digital technologies. As discussed above, the harmful impacts of deepfakes extend beyond the coverage of the act—such as the spread of disinformation around political elections and world-historical events. Additionally, the act’s focus on online harms to individuals offers little clarity on how collective and public harms, like online threats to democratic discourse, will be addressed.

Though the bill is a step forward for regulating existing harms that happen to occur online, it does comparatively little to address the novel kinds of harms that have emerged specific to the online space.


About the authors

David Baldridge is a policy researcher at the Schwartz Reisman Institute for Technology and Society. A recent graduate of the JD program at the University of Toronto’s Faculty of Law, he has previously worked for the Canadian Civil Liberties Association and the David Asper Centre for Constitutional Rights. His interests include the constitutional dimensions of surveillance and AI regulation, as well as the political economy of privacy and information governance.

Michael Beauvais is a doctoral (SJD) candidate at the University of Toronto’s Faculty of Law. His dissertation develops a conception of the informational privacy of children from their parents. He is interested in thinking through privacy issues for vulnerable groups from a variety of perspectives, including law, media studies, surveillance studies, bioethics, and political theory. He also writes about legal and ethical issues in biomedical research.

Alicia Demanuele is a policy researcher at the Schwartz Reisman Institute for Technology and Society. Following her BA in political science and criminology at the University of Toronto, she completed a Master of Public Policy in Digital Society at McMaster University. Demanuele brings experience from the Enterprise Machine Intelligence and Learning Initiative, Innovate Cities, and the Centre for Digital Rights where her work spanned topics like digital agriculture, data governance, privacy, interoperability, and regulatory capture. Her current research interests revolve around AI-powered mis/disinformation, internet governance, consumer protection and competition policy.

Leslie Regan Shade is a professor in the University of Toronto’s Faculty of Information and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society. Shade’s research focus since the mid-1990s has been on the social and policy aspects of information and communication technologies (ICTs), with particular concerns towards issues of gender, youth, and political economy. Her research promotes the notion of the public interest in ICT policy; publications, community outreach, and student supervision have as their goal the promotion of a wider popular discourse on information and communication policy issues and media reform in Canada and internationally for a diverse public and policy audience. This includes an ongoing commitment to building participatory scholar-activist networks.

