A Game Changer for Social Media Accountability?
Tech companies and social media platforms have generally avoided legal liability for the content that users distribute through their networks. But a case now before the U.S. Supreme Court could change that – with far-reaching consequences for their business models and bottom lines.
Limited liability
To what extent are online networks like YouTube or Facebook responsible for the content that users post? Are they like a newspaper (responsible for all published content) or a telephone service (with no responsibility for the content of the conversations they enable)?
The networks themselves argue that they are more like a telephone company than a newspaper. If this is true, Facebook, Twitter, and TikTok are no more accountable for the content being spread using their platforms than Verizon is when someone commits a crime using their cell phone.
Since 1996, U.S. law has agreed, protecting platforms in the name of online freedom of speech. Section 230 of the Communications Decency Act states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Changing perceptions
While the law may still be on social media platforms’ side, public perception of their role is changing. As Harvard Business Review writes, “we’ve also learned just how much social devastation these platforms can cause, and that has forced us to confront previously unimaginable questions about accountability.” From facilitating the planning of the January 6 Capitol insurrection in the U.S. to terrorist recruiting, social media platforms have been breeding grounds for a number of illegal activities, as well as an amplification device for conspiracy theories and misinformation.
According to The Verge, U.S. President Joe Biden has proposed repealing Section 230 altogether: “It should be revoked because [Facebook] … is not merely an internet company. It is propagating falsehoods they know to be false.” While this has not been a main focus of the Biden administration, the courts have been busy addressing this issue.
The public vs. social media
Governments may be hesitant — or even powerless — to act, but individuals have been testing the limits of social media giants’ legal immunity. For instance, a number of lawsuits have been filed against Facebook, Twitter, and Google by families of people killed in ISIS attacks. While the suits have been largely unsuccessful, some have produced results. As HBR reported, “In a June 25, 2021 decision, for example, the Texas Supreme Court ruled that Facebook is not shielded by Section 230 for sex-trafficking recruitment that occurs on its platform.”
Hosting vs. recommending
While attempts to hold networks responsible for hosting content have had limited success, cases targeting the actions of their recommendation algorithms are a different matter. Hosting content is arguably a neutral process, while recommending content based on individuals’ browsing behavior is undeniably an active intervention.
Now the matter has made its way to the Supreme Court in the case known as Gonzalez v. Google.
Covington’s Inside Privacy blog explains the case this way: “In Gonzalez, the estate of Nohemi Gonzalez, a victim of the November 2015 terrorist attacks in Paris, sought to hold Google liable under the Anti-Terrorism Act for recommending videos through its YouTube algorithms that were posted by the terrorist group ISIS.
A court in the Northern District of California found that Section 230(c)(1) protected Google against the claims, and the U.S. Court of Appeals for the Ninth Circuit affirmed.” But the case has been appealed all the way to the highest court in the land.
Algorithms in the dock
Interestingly, the algorithms at the heart of the case are those that platforms like Facebook and YouTube rely on to recommend content and, therefore, generate revenue. Politico says, “the court could rule that platforms would not be allowed to use computer algorithms to recommend content to users….” This kind of ruling could cripple the business model used by almost every social media platform.
The Gonzalez petition essentially acknowledges that Section 230 protects Google against liability for the videos that people post. Instead, it takes aim at the act of recommending content. The case alleges that Google recommended ISIS videos to users and therefore bears responsibility for the violence that those videos incite.
As Politico puts it: “The question before the court is whether Section 230 grants immunity for recommendations made by algorithms pushing certain content for users or if it only applies to editorial changes — like content removal — made by the platforms.”
Algorithms everywhere
The case has enormous implications for the whole online universe. As the Washington Post puts it: “Recommendation algorithms underlie almost every interaction people have online, from innocuous song suggestions on Spotify to more nefarious prompts to join groups about conspiracy theories on Facebook.”
Google defends the recommending of content as fundamental to the use of the Internet, and disputes that recommendations amount to endorsements. “Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack,” the company argues.
The push to extremes
But as well as providing useful signposts to users, the recommendation mechanism also has the potential to be the world’s greatest ‘radicalizer’. New York Times journalist Zeynep Tufekci describes how, after researching Trump rallies on YouTube, the platform “started to recommend and ‘autoplay’ videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.”
The algorithm consistently suggested content more extreme than what she had just watched: “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.”
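The drift Tufekci describes can be illustrated with a toy model: a recommender that always proposes content slightly more “intense” than the user’s recent viewing history. To be clear, this sketch is purely illustrative — the `recommend` function, the intensity scores, and the upward `bias` parameter are invented here and bear no relation to YouTube’s actual, undisclosed system.

```python
# Toy model of recommendation drift. Each piece of content is reduced to a
# single "intensity" score between 0.0 (mild) and 1.0 (extreme). This is an
# invented illustration, not any platform's real algorithm.

def recommend(history, catalog, bias=0.1):
    """Pick the catalog item whose intensity is closest to the user's
    recent average intensity plus a small upward bias."""
    target = sum(history) / len(history) + bias
    return min(catalog, key=lambda intensity: abs(intensity - target))

# Catalog of items at intensities 0.00, 0.01, ..., 1.00.
catalog = [i / 100 for i in range(101)]

# Simulate ten viewing steps, starting from mild content (intensity 0.2),
# always feeding the last three views back into the recommender.
history = [0.2]
for _ in range(10):
    history.append(recommend(history[-3:], catalog))

print([round(x, 2) for x in history])
```

Even with a tiny per-step bias, the simulated viewing history climbs steadily toward more intense content — a crude but concrete picture of how engagement-tuned recommendations can ratchet users away from where they started.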
Threading the needle
The matter in front of the Supreme Court is a momentous question with far-reaching implications. The social and economic benefits of digital network technologies, developed behind the shield of ‘freedom of speech’ legislation like Section 230, must be weighed against their capacity to cause serious harm. Raising accountability without ‘ruining the Internet’, as online news site Vox recently put it, will require exceptional finesse.
Stakeholders are watching the Supreme Court closely.