Kevin Reed
On February 21, the US Supreme Court heard oral arguments in the case of Gonzalez v. Google. The lawsuit seeks to hold Google’s YouTube responsible for the death of Nohemi Gonzalez, a 23-year-old college student who was killed during a terrorist attack in Paris in November 2015.
The suit—which was dismissed by the US District Court for the Northern District of California, with the dismissal then upheld by the Ninth Circuit Court of Appeals—was brought by the Gonzalez family in 2016. The family asserted that YouTube's algorithms promoted videos produced by ISIS and posted on the platform, and that this promotion violated US laws against aiding and abetting terrorists.
The lawsuit asserts that YouTube helped to spread the ISIS video content, contributed to the radicalization of users and their recruitment as terrorists and, therefore, assisted the deadly attack in Paris that killed Nohemi Gonzalez.
For its part, Google has argued that the Gonzalez family’s claims that YouTube gave support to terrorists are based on “threadbare assertions” and “speculative” arguments. The Electronic Frontier Foundation and the American Civil Liberties Union have filed amicus briefs supporting Google on the grounds that the lawsuit represents a threat to First Amendment rights and freedom of speech online.
The legal issue at the heart of the case is the federal law known as Section 230 of the Communications Decency Act—part of the Telecommunications Act of 1996—which protects online services from liability for the content posted by users of their platforms. The 1996 law was an update to the Communications Act of 1934 that created the Federal Communications Commission (FCC) and regulated telephone, telegraph and radio communications in that era.
The core language of Section 230 is as follows: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” To the extent that the courts have adhered to this aspect of the law, Section 230 has functioned as a shield that protects internet companies from being liable for establishing the legal or illegal character of speech on their platforms.
While the law has significant First Amendment implications, its original intent was to ensure that online and internet innovation was not stifled by costly litigation. The law was passed at a time when the World Wide Web was in its infancy, and proprietary message boards and online services such as CompuServe, Prodigy and America Online (AOL) dominated the internet. In 1996, there were 36 million users of online services, or 0.9 percent of the world's population.
The 1996 law was also shaped by an environment in which information and news distribution were still dominated by print media. In that era, a liability line had been drawn between "publishers" and "distributors" of content: a publisher was legally responsible for the material it printed, while a distributor, presumed to be unaware of the material's contents, was immune from liability.
During the ensuing 27 years, new forms of online communication have revealed significant contradictions within Section 230. For example, the categories of "interactive computer service," "publisher or speaker" and "information content provider" have undergone a profound transformation brought on by the wireless and mobile technologies now used by more than 5.5 billion people, or 69 percent of the world's population.
In this environment, where nearly every individual on earth is an online consumer as well as a "publisher" or "content provider"—with added facilities for "sharing" and/or "liking" the content of others—the rules established by Section 230 have become obsolete. Meanwhile, the demarcation between an "interactive computer service" and a "publisher" has been blurred by algorithms that recommend content to users and accelerate its circulation, or throttle it, based on what brings in the most advertising revenue.
What the transformation of global online activity and technology since 1996 has demonstrated—and this can never be addressed by the US Supreme Court or Congress—is the need for platforms such as Google, Facebook, YouTube and Twitter to be made public utilities. The continued ownership of these advanced technologies by a handful of billionaires seeking to expand their personal wealth threatens free speech and risks turning the platforms into tools of authoritarianism.
The provisions of Section 230 that are the subject of conflict within the political establishment and are being argued before the Supreme Court are known as the "Good Samaritan" protections. Standing in tension with the shield portion of the law, these provisions protect online services that "remove" or "moderate" content deemed "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
In other words, under this broad definition of "objectionable" material, online services are expected to censor content on their platforms in violation of First Amendment principles—with the proviso that they act "in good faith"—without fear of being held liable for acts against free speech.
Along these lines, one aspect of the Gonzalez v. Google case before the Supreme Court is the assertion that YouTube failed to find and remove the objectionable ISIS content. The suit argues that the platform recommended the terrorist videos through its "user-persuasion" algorithm. These attention-getting and attention-holding techniques are driven not by an evaluation of the content itself but by the advertising revenue it generates.
While the lower courts have upheld the immunity shield of Section 230 in Gonzalez v. Google, the decision of the right-wing-dominated Supreme Court to hear the case comes at a time when political censorship and control of online content are being sought by all factions of the political establishment.
On this question, one faction of the ruling establishment considers the existing content moderation rules a relic of the freewheeling, buccaneering "Wild West" era of the early internet, holding that they have become insufficient and need to be abolished in favor of a more effective regime of censorship.
Another faction of the political establishment, more closely aligned with the tech giants—and especially with the massive profits they generate for billionaires on Wall Street—argues that Section 230 can and should be utilized more effectively for censorship. In this view, the law does not need to be abolished because the tech platforms are more than capable of imposing the regulation and control being demanded by the entire ruling class. These objectives also lie behind the various congressional and regulatory initiatives aimed at "taking down big tech."
The political offensive against both Section 230 and the technology monopolies is directed, above all, against the growth of anti-war, anti-imperialist, left-wing and socialist politics online. It is additionally focused on blocking the working class from using social media platforms to organize its struggles against the capitalist system.
The major reasons for the ongoing public campaigns against "big tech" are that sections of the intelligence and political establishment are dissatisfied with the progress of the platforms' self-imposed censorship and fear that large numbers of the tech companies' employees are sympathetic to left-wing and socialist politics.
Significant in this regard is the censorship by Google, beginning in the spring of 2017, that suppressed socialist, left-wing and alternative news sources. After the World Socialist Web Site (WSWS) mounted a campaign against it, Google CEO Sundar Pichai admitted during congressional testimony that the number one search engine was indeed censoring socialists online.
Meanwhile, amid the "fake news" hysteria whipped up during the first year of the coronavirus pandemic and the 2020 presidential election, far-right Supreme Court Justice Clarence Thomas said it "behooves" the court to find a case in which to review Section 230.
Thomas said the courts have broadly interpreted the law to “confer sweeping immunity on some of the largest companies in the world.” In a 2021 opinion, Justice Thomas suggested that Donald Trump’s Twitter account, shut down by the platform after he used it to attempt the overthrow of the US Constitution on January 6, 2021, resembles “a constitutionally protected public forum.”
News coverage of the arguments before the Supreme Court on February 21 emphasizes that it is difficult to determine how the majority will decide the crucial case, or whether it will rule at all. Echoing the original intent of Section 230, several of the justices expressed concern about the financial impact that lifting the liability shield would have on the corporations.
A CNBC report titled "Supreme Court justices in Google case express hesitation about upending Section 230" said that "Justices across the ideological spectrum expressed concern with breaking the delicate balance set by Section 230," while some justices suggested that "a narrower reading of the liability shield could sometimes make sense."
Eric Goldman, a professor at Santa Clara University School of Law, told CNBC that the arguments left him more optimistic that the high court would uphold Section 230, though he remained concerned about the law's future. "I remain petrified that the opinion is going to put all of us in an unexpected circumstance," Goldman said.