Defense Counsel Journal

Social Media Immunity in 2021 and Beyond: Will Platforms Continue to Avoid Litigation Exposure Faced by Offline Counterparts?

Volume 88, No. 3

October 20, 2021

Peter J. Pizzi

Peter J. Pizzi is a partner at Walsh Pizzi O’Reilly Falanga LLP, a majority women-owned firm with offices in Newark, Philadelphia, and Manhattan. A business litigator with more than 40 years of experience in commercial litigation, class action defense, internal investigations, technology litigation, and employee mobility law, Peter holds the CIPP/US certification and is a Certified Civil Trial Attorney, a designation of the Supreme Court of New Jersey. Peter represents corporate clients in a broad array of industries, including healthcare, food distribution, pharma, financial services, and information technology. Peter received the IADC’s Yancey Award in 2022 and is past Chair of the IADC Cyber Security, Data Privacy and Technology Committee. Currently, Peter serves as a Vice Chair of the IADC Corporate Compliance and Government Investigations Committee and of its Amicus Committee. He is also Co-Chair of the Privacy, Data Security, and Information Technology Litigation Committee of the NYSBA Commercial & Federal Litigation Section.

DURING the 2020 election cycle, both major United States presidential candidates called for changes to Section 230 of the Communications Decency Act of 1996, which protects online service providers like social media companies from being held liable for transmitting or taking down user-generated content (UGC). Twice in 2020, the Senate held hearings, with both sides demanding change to the statute and to platform moderation practices.[1: Cat Zakrzewski and Rachel Lerman, The election was a chance for Facebook and Twitter to show they could control misinformation. Now lawmakers are grilling them on it, Wash. Post (Nov. 17, 2020), https://www.washingtonpost.com/technology/2020/11/17/tech-hearing-dorsey-zuckerberg/; Sara Morrison, Republicans accuse Twitter’s Jack Dorsey and other Big Tech CEOs of violating their free speech rights, Recode (Oct. 28, 2020), https://www.vox.com/recode/2020/10/28/21536780/facebook-twitter-google-zuckerberg-dorsey-pichai-senate-section-230-hearing.] Then, following the January 6, 2021 Capitol Hill riot, Donald Trump and others within his circle were de-platformed by Twitter, Facebook, and YouTube.[2: Following a decision supporting the de-platforming by the Facebook Oversight Board, the company extended the ban through January 2023. See Nick Clegg, In Response to Oversight Board, Trump Suspended for Two Years; Will Only Be Reinstated if Conditions Permit, Facebook (June 4, 2021), https://about.fb.com/news/2021/06/facebook-response-to-oversight-board-recommendations-trump/.] Participants used these same platforms, as well as others, to plan the incursion into the Capitol, which millions around the world watched in real time.[3: Rebecca Heilweil and Shirin Ghaffary, How Trump’s internet built and broadcast the Capitol insurrection, Recode (Jan. 8, 2021), https://www.vox.com/recode/22221285/trump-online-capitol-riot-far-right-parler-twitter-facebook.] As we move further into the 117th Congress, a continuation of the status quo hardly seems possible, as demands for change to Section 230 continue to intensify, especially with the large platforms themselves advertising in favor of change.

This article will describe the elements of the statutory immunity granted by Section 230 and summarize complaints against the current scope of Section 230 as interpreted by the courts. We will also explore whether the problems observed with social media have resulted from Section 230 itself. In other words, would the world look much different today had online industries faced litigation, and the threat of litigation, in the same way as traditional businesses?

I. Origins

Defamation actions brought in the New York courts in the 1990s against two different online platforms, Prodigy and CompuServe, created concern that the internet would be awash in litigation before it had achieved its full potential. One court granted CompuServe summary judgment based upon that service’s showing that CompuServe did not police its bulletin boards and therefore had no “publisher” or “distributor” liability for the defamatory utterances in question.[4: Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 139 (S.D.N.Y. 1991).] A second court, however, found that because Prodigy portrayed its service as “actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and ‘bad taste,’” the service was “clearly making decisions as to content” and therefore was a “publisher” of the offending speech for the purpose of libel law.[5: Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).]

The two lawsuits, each involving allegedly defamatory speech, led lawmakers to include in the soon-to-be-misnamed Communications Decency Act of 1996[6: The provisions of the CDA which sought to prohibit obscene and like content were struck down in Reno v. ACLU, 521 U.S. 844 (1997); Section 230 was not.] a provision that would protect internet service providers from liability. This provision did not eliminate lawsuits, but it certainly made an adverse outcome far less likely. According to its sponsors, the principal goal of Section 230 was to establish “Good Samaritan” protection so that platforms which chose to exercise some degree of editorial control over content would not thereby be subject to publisher liability. The statute contains several “findings” and “policy” statements that extol the virtues of the internet and the desire to promote its “continued development.”[7: 47 U.S.C. § 230(a) and (b).]

II. The Communications Decency Act of 1996

As enacted, Section 230 contains two primary provisions that create immunity from liability. Section 230(c)(1) states that interactive computer service providers and users may not be treated as the publisher or speaker of information provided by another information content provider. Section 230(c)(2) states that a provider or user which voluntarily restricts access to content will not be held liable for its own “moderation” decisions. The discretion granted by the latter provision is broad, immunizing actions “taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”[8: Id. at § 230(c)(2)(A) (emphasis added).]

There are several exceptions to Section 230 immunity. Section 230 does not bar liability in lawsuits: (1) under federal criminal laws; (2) under intellectual property laws; (3) based upon any state law that is “consistent with” Section 230; and (4) under the Electronic Communications Privacy Act of 1986. In 2018, Congress created a further exception for certain civil actions and state prosecutions where the underlying conduct violates specified federal laws prohibiting sex trafficking; that amendment, the Allow States and Victims to Fight Online Sex Trafficking Act, is commonly known as SESTA/FOSTA. The 2018 amendment demonstrates the challenge of revising Section 230, as many asserted that SESTA/FOSTA did little to help solve the problem of sex trafficking and only served to further marginalize an already at-risk community.[9: In the 116th Congress, the SAFE SEX Workers Study Act sought to study the unintended effects of SESTA/FOSTA. See H.R. 5448. The bill’s “Findings” section discusses those adverse impacts. See id. at Sec. 2, available at https://www.congress.gov/bill/116th-congress/house-bill/5448/text.]

III. Critiques of Section 230

Attacks on Section 230 by political leaders break down along party lines. Critics on the Left fault platforms for inadequate moderation measures that allow the spread of hate speech, misinformation, and calls to violence, including violence directed at political leaders.

Republicans in the Senate accuse the platforms of using the statutory immunity to “censor” conservative voices.[10: See supra note 1.] In May 2020, Twitter added labels that read “Get the facts about mail-in ballots” to two tweets by President Trump predicting mass ballot fraud, causing Trump to issue an executive order calling the labels “selective censorship that is harming our national discourse.”[11: See Preventing Online Censorship, 85 Fed. Reg. 34,079 (May 28, 2020). A just-released study from NYU’s Stern School of Business tested the conservative complaint of excessive moderation aimed at conservative voices and found that the statistics do not bear out the claim. See Paul M. Barrett and J. Grant Sims, False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives, NYU Stern Center for Business and Human Rights (Feb. 2021), https://static1.squarespace.com/static/5b6df958f8370af3217d4178/t/60187b5f45762e708708c8e9/1612217185240/NYU+False+Accusation_2.pdf. The report collected the results of numerous studies of Facebook, Twitter, and YouTube content and found that, on Facebook, “right-leaning U.S. Facebook pages dominate the list of sources producing the most-engaged-with posts containing links.” Id. at 14.]

Of course, the latter criticism – that platforms “censor” speech and that this practice somehow violates the free speech rights of those affected – distorts, rather than advances, understanding of Section 230. The First Amendment proscribes interference with speech by government actors, not by private individuals or businesses, so neither the First Amendment nor “free speech” rights enter into the equation.[12: “Congress shall make no law … abridging the freedom of speech, or of the press ….” U.S. Const. amend. I. See also Mary Anne Franks, The Free Speech Black Hole: Can the Internet Escape the Gravitational Pull of the First Amendment?, Knight First Amendment Institute at Colum. Univ. (Aug. 21, 2019), https://knightcolumbia.org/content/the-free-speech-black-hole-can-the-internet-escape-the-gravitational-pull-of-the-first-amendment.] Courts have consistently held that the removal of a user from a platform does not violate the user’s First Amendment rights, because the platforms are private corporations and not state actors.[13: See generally cases collected at Technology & Marketing Law Blog, available at https://blog.ericgoldman.org/archives/2021/01/google-and-twitter-defeat-lawsuit-over-account-suspensions-terminations-delima-v-google.htm.]

The First Amendment would apply, however, if Section 230 were altered to mandate specific content moderation outcomes. By virtue of the First Amendment, neither the state nor the courts may interfere in decisions by private actors regarding the speech that they choose to display.[14: Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241 (1974) (invalidating a Florida statute granting a political candidate a right to equal space to reply to a newspaper’s editorial criticism).] At least one court has held that platform search results are protected from court interference by the First Amendment and, therefore, that a lawsuit for damages based upon search results could not proceed.[15: Jian Zhang v. Baidu.com Inc., 10 F. Supp. 3d 433, 438 (S.D.N.Y. 2014) (Baidu.com’s refusal to display Chinese dissident content in its search engine is protected speech under the First Amendment).] Platform decisions about which messages to leave up and which to take down are themselves an exercise of First Amendment rights. The so-called “Good Samaritan” protections provided by Section 230(c)(2), therefore, replicate protections already granted by the First Amendment, although those same provisions make it procedurally more expedient for platforms to exit misguided lawsuits.[16: See Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019) and Mary Anne Franks, Section 230 and the Anti-Social Contract, Lawfare (Feb. 2021), https://assets.documentcloud.org/documents/20489870/section-230-and-the-anti-social-contract.pdf.]

IV. Legislative Proposals

Dozens of proposed changes to Section 230 have surfaced over the past several years.[17: Eric Goldman, While Our Country Is Engulfed by Urgent Must-Solve Problems, Congress Is Working Hard to Burn Down Section 230, Technology & Marketing Law Blog (Aug. 4, 2020), https://blog.ericgoldman.org/archives/2020/08/while-our-country-is-engulfed-by-urgent-must-solve-problems-congress-is-working-hard-to-burn-down-section-230.htm.] Some were proposed as acts of political theater; a few are better thought out.

The SAFE TECH Act (Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms Act), sponsored by three Democratic senators, would create a series of exceptions to Section 230. According to its sponsors, the legislation would eliminate Section 230 protection for “ads or other paid content” and would make the immunity unavailable in suits seeking injunctive relief, actions based upon civil rights laws or laws that address stalking, cyber-stalking, harassment, or intimidation based upon protected classes, wrongful death actions, and actions under the Alien Tort Claims Act.[18: Sen. Mark R. Warner, Press Release, “Warner, Hirono, Klobuchar Announce the SAFE TECH Act to Reform Section 230” (Feb. 5, 2021), https://www.warner.senate.gov/public/index.cfm/2021/2/warner-hirono-klobuchar-announce-the-safe-tech-act-to-reform-section-230.]

During the last Congress, the Platform Accountability and Consumer Transparency Act (PACT Act)[19: Sen. Brian Schatz, Press Release, “Schatz, Thune Introduce New Legislation to Update Section 230, Strengthen Rules, Transparency on Online Content Moderation, Hold Internet Companies Accountable for Moderation Practices” (June 24, 2020), https://www.schatz.senate.gov/press-releases/schatz-thune-introduce-new-legislation-to-update-section-230-strengthen-rules-transparency-on-online-content-moderation-hold-internet-companies-accountable-for-moderation-practices.] drew the bipartisan support of Sens. Brian Schatz (D-HI) and John Thune (R-SD). Key provisions include: (i) imposing a requirement that large online platforms maintain a defined complaint system that processes reports and notifies users of moderation decisions within 14 days, and that allows consumers to appeal the platforms’ content moderation decisions within the relevant company; (ii) amending Section 230 to require large online platforms to remove court-determined illegal content and activity within 24 hours, with illegal content defined as any content determined by a state or federal court to violate federal criminal or civil law or a state libel law; (iii) requiring online platforms to explain their content moderation practices in an acceptable use policy that is easily accessible to consumers; and (iv) calling for quarterly reporting.

The PACT Act’s biggest impact likely would be its designation of Section 230 as an affirmative defense which the platform bears the burden of proving by a preponderance of the evidence. That change means that Section 230 would no longer be a ticket out of litigation at the motion to dismiss stage, enabling plaintiffs to gain the fruits of discovery and, thereby, peek behind the curtain to examine intermediary business practices. Justice Thomas advocated for such a change in Section 230 jurisprudence in a “statement” accompanying a denial of review during the recently concluded Supreme Court term.[20: Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 208 L. Ed. 2d 197 (2020).] Such a change alone could have a transformative impact.[21: See Jayne Ponder and Madeline Salinas, SAFE TECH Act Would Limit Scope and Redesign Framework of Section 230 Immunity, Covington & Burling (Feb. 14, 2021), https://www.insideprivacy.com/united-states/congress/safe-tech-act-would-limit-scope-and-redesign-framework-of-section-230-immunity/.]

One ill-conceived provision of the PACT Act would deprive platforms of immunity against enforcement “by the Federal Government” of any “Federal criminal or civil statute, or any regulations of an Executive agency as defined in section 105 of title 5, United States Code.” Even apart from the challenge of understanding how this immunity carve-out would work,[22: Daphne Keller, CDA 230 Reform Grows Up: The PACT Act Has Problems, But It’s Talking About The Right Things, Stanford Law School Center for Internet and Society (July 16, 2020), https://cyberlaw.stanford.edu/blog/2020/07/cda-230-reform-grows-pact-act-has-problems-it%E2%80%99s-talking-about-right-things.] one has to be concerned about the potential chilling effect on internet content if a government official exploited this exemption to achieve political goals, especially when most government agencies are led by political appointees or are susceptible to political pressure.[23: Laurenz Ennser-Jedenastik, The Politicization of Regulatory Agencies: Between Partisan Influence and Formal Independence, J. of Pub. Adm. Research and Theory 507–518 (2016), https://watermark.silverchair.com/muv022.pdf.]

While the PACT Act would allow small online platforms more flexibility in responding to user complaints, removing illegal content, and acting on illegal activity, there is no doubt that the statute would impose burdens on a vast number of sites that rely upon user content or postings to generate traffic. Imagine a local restaurant which encourages visitors to post reviews or the many retailers which seek product reviews. Both kinds of sites are currently protected by Section 230, and both would be within the scope of legislation like the PACT Act.

V. Algorithmic Boosting as the Villain

The major platforms’ use of algorithms to microtarget segments of users has become a major focus for those seeking to reform Section 230. At a recent industry conference, Apple’s Tim Cook decried what he described as Facebook’s “theory of technology” that prizes engagement, and algorithms that help spread disinformation and conspiracy theories, in order to collect user data for advertising.[24: Tim Higgins, Apple, Facebook Trade Barbs Over Privacy-Focused Business Models, Wall St. J. (Jan. 28, 2021), https://www.wsj.com/articles/apple-to-roll-out-privacy-measures-despite-facebook-objections-11611810002?mod=hp_lead_pos11.] Through a recent update to its operating system, Apple now enables users to block platforms from tracking their online activity for the purpose of microtargeting.[25: Id.]

Section 230’s protections for content posted by others do not apply if the site becomes an “information content provider” through the “creation or development” of the content in question.[26: 47 U.S.C. § 230(f)(3).] In one of the most famous decisions under Section 230, an online roommate matching service which asked discriminatory questions of users forfeited immunity because the questionnaire was its own content and caused the development of discriminatory responses from users.[27: Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1170 (9th Cir. 2008).]

Some have argued that Facebook’s personalization of content through machine-learning algorithms should be deemed the “development” of content and, as such, should not qualify for immunity:

Many of the most successful internet companies, … design their applications to collect, analyze, sort, reconfigure, and repurpose user data for their own commercial reasons, unrelated to the original interest in publishing material or connecting users.[28: Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 204, 217 (2017).]

The Second Circuit declined to impose liability on Facebook in a case in which victims of an act of terrorism sued Facebook for connecting the perpetrators with each other. The majority held that depriving the site of Section 230 immunity for “organizing and displaying content exclusively provided by third parties” would eviscerate protection for an “essential result of publishing.” Judge Katzmann, dissenting in part, criticized the majority for holding that Section 230 “immunizes … providers for allegedly connecting terrorists to one another.”[29: Force v. Facebook, Inc., 934 F.3d 53, 66, 74 (2d Cir. 2019), cert. denied, 140 S. Ct. 2761, 206 L. Ed. 2d 936 (2020).]

Multiple bills in Congress seek to alter Section 230’s protections for sites which rely upon algorithms to deliver to users the kind of content calculated to promote engagement. Examples include the Protecting Americans from Dangerous Algorithms Act (H.R. 8638) and the Biased Algorithm Deterrence Act of 2019 (H.R. 492, Gohmert). The former would eliminate Section 230 protection for a site which “used an algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information (including any text, image, audio, or video post, page, group, account, or affiliation) provided to a user of the service if the information is directly relevant to the claim.”

The constitutionality of such legislation is questionable in light of the First Amendment issues around search discussed earlier, although the pernicious effect of algorithms is not really debatable. Companies like Facebook and Google spend spectacular sums on content moderation, and the employees performing that function suffer from the work itself,[30: Casey Newton, Bodies in Seats, The Verge (June 19, 2019), https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa.] but that aspect of the business is at war with the other side of the business – advertising revenue driven by user engagement.

VI. Welcoming Litigation

Among many others, Professor Mary Anne Franks of the University of Miami School of Law would restructure Section 230 to right what she perceives as a colossal imbalance created by the statute.[31: Mary Anne Franks, Reforming Section 230 and Platform Liability, Stanford Cyber Policy Center (2021), https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/cpc-reforming_230_mf_v2.pdf.] Professor Franks posits that the Section 230(c)(1) protection is vastly overbroad, as it has been “read to provide the same immunity to providers who do nothing at all to stop harmful conduct – and, even more perversely, extends that same immunity to providers who actively profit from or solicit harmful conduct.”[32: Id. at 10.] Thus, she writes, “Section 230(c)(1) has been invoked to protect message boards like 8chan (now 8kun), which provide a platform for mass shooters to spread terrorist propaganda, online firearms marketplaces such as Armslist, which facilitate the illegal sale of weapons used to murder domestic violence victims, and to classifieds sites like Backpage (now defunct), which was routinely used by sex traffickers to advertise underage girls for sex.”[33: Id.] By protecting platforms which glean revenue from illegal and harmful conduct, Section 230(c)(1) creates a classic “‘moral hazard,’ ensuring that the multibillion-dollar corporations that exert near-monopoly control of the Internet are protected from the costs of their risky ventures even as they reap the benefits.”[34: Id.]

Professor Franks argues that the absence of litigation, and of the threat of it, has spawned behaviors by social media platforms that have given rise to the examples she cites above and has created a two-track system of liability whereby online conduct receives vastly greater protection than is afforded to businesses in physical space. In the physical world, if operators fail to take care of their products, customers, or premises, lawsuits get filed and businesses may be held liable. Social media platforms face no such exposure; indeed, Section 230 demands precisely the opposite outcome. It is refreshing, to say the least, to come across a commentator explaining how exposure to litigation risk may achieve a societal good, whereas complete freedom from adverse lawsuit outcomes can be shown to have contributed to societal harms, e.g., mass shootings, pandemic misinformation, and an attempted government insurrection, among others.

Professor Franks would modify Section 230(c)(1) to ensure that only the speech of others falls within the immunity, not other behaviors that do not qualify as speech. Her proposed revision would replace “any information provided” with “any speech wholly provided” and add a new proviso at the end:

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech wholly provided by another information content provider, unless such provider or user intentionally encourages, solicits, or generates revenue from this speech.[35: Id. at 11.]

The language added at the end of the provision would also withdraw immunity from platforms which shape or profit from the offending content in question, thereby creating exposure for sites which produce harm or injury through algorithmic boosting.

VII. Whither Antitrust?

Where do recently filed antitrust actions fit into the analysis? The state antitrust plaintiffs argue that Facebook’s gargantuan size likely contributed to its outsized impact on the 2016 election and to disinformation efforts in advance of the 2020 election. Had Facebook not acquired or snuffed out actual and potential competitors, a greater number of credible sources of information and social media discourse might have dissipated its singular impact. One commentator summed up the analysis as follows:

The first part of the problem is that Google and Facebook compete against news publishers for user attention, data, and ad dollars. They both have business incentives to keep users within their digital walls.

The second part is that because Google and Facebook lack competition, two dominant algorithms control the flow of information. So, purveyors of fake news only have to exploit the weaknesses of one algorithm to potentially deceive hundreds of millions of people. Facebook has 2 billion active monthly users. Google accounts for 80 percent of internet searches worldwide.[36: Sean Illing, Why “fake news” is an antitrust problem, Vox (July 18, 2018), https://www.vox.com/technology/2017/9/22/16330008/eu-fines-google-amazon-monopoly-antitrust-regulation (quoting Sally Hubbard, Director of Enforcement Strategy, Open Markets Institute).]

On the other hand, the concentration of platforms made de-platforming Trump and others following the events of January 6, 2021, immediately impactful. There were not two dozen social media outlets to be concerned about. Instead, there were Facebook, Twitter, Google, YouTube, and AWS. Given the choice, a proliferation of credible platforms would appear preferable to having one gargantuan platform with 2.8 billion users.[37: Statista, Number of daily active Facebook users worldwide as of 1st quarter 2021 (April 2021), https://www.statista.com/statistics/346167/facebook-global-dau/.]

VIII. Time for a Digital Regulatory Agency?

The PACT Act, described above, calls for disclosure of content moderation policies and practices and imposes sanctions for a failure to adhere to those requirements. NYU’s Stern Center for Business and Human Rights issued a proposal that applauds the requirement of disclosure in exchange for Section 230 immunity, then goes several steps further, requiring platforms to “explain publicly how their content moderation policies work and provide far more detailed statistics than they do now on items removed, down-ranked, or demonetized.”[38: Paul M. Barrett, Regulating Social Media: The Fight Over Section 230 — and Beyond, NYU Stern Center for Business and Human Rights, at 16 (Sept. 2020), https://bhr.stern.nyu.edu/section-230-report-release-page?_ga=2.260451469.1686657636.1612015447-581250165.1612015447.]

The Center’s report also calls upon platforms to disclose “what content is being promoted to whom and more about how platform advertising works in practice.” Finally, the report urges that immunity be granted only to platforms that “remove, rather than merely label or down-rank, content that their fact-checkers have determined is demonstrably false.”[39: Id.] To track compliance with these directives, the Center advocates the creation of a Digital Regulatory Agency whose job would be to “oversee and enforce the new platform responsibilities just mentioned.”[40: Id.]

IX. Application of International Human Rights Principles

The foregoing discussion has a decidedly United States-centric cant. As creatures of U.S. law, the major platforms have flourished under the protection of the U.S. legal regime. Each platform operates globally, however, which raises the question whether international law might suggest a regulatory approach worthy of examination. Non-U.S. democracies may not have a First Amendment, but they certainly value the speech rights of citizens and organizations operating within their borders. One author, Evelyn Douek, marries the international law norm of proportionality[41: See, e.g., David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, at 4, U.N. Doc. A/HRC/38/35 (Apr. 6, 2018) (identifying “proportionality” as one of the requirements for “State limitations on freedom of expression”).] with digital-age capabilities for assessing probabilities to argue for continued advancements in the use of artificial intelligence (AI) tools to improve content moderation outcomes, coupled with transparency imposed by focused adjustments to the regulatory scheme.[42: Evelyn Douek, Governing Online Speech: From “Posts-As-Trumps” To Proportionality and Probability, 121 Colum. L. Rev. 759 (2021).] Because of the unfathomable scale of online content, individualized decision-making is impossible, the author argues. She calls for “systemic balancing,” the central point of which “demands transparency and candor so that trade-offs can be meaningfully debated, experimented with, and ultimately accepted, or at least acquiesced to, including by those who disagree with substantive outcomes.”[43: Id. at 821.] Toward that end, the PACT Act, among other pending legislative proposals, would mandate greater disclosure by platforms of their algorithmic decision structures.

X. Conclusion

It is difficult to address Section 230 without the exchange becoming a political debate. Eliminating Section 230 would not eliminate misinformation or illegal or otherwise actionable speech, nor would it bridge the political chasm which has characterized the United States in recent years. The forces giving rise to these phenomena would persist.

Outright elimination of platform immunity would expose sites of all kinds to lawsuits based upon the content of users and the moderation decisions made by the platforms with respect to that content. Perhaps the larger platforms could withstand the resulting litigation pressure, but smaller sites hosting user content likely could not. Consider, for example, a small-town newspaper’s comments section,[44: Tim Wu, Why Both Liberals and Conservatives Are Completely Wrong About Section 230, ProMarket (Dec. 13, 2020), https://promarket.org/2020/12/13/liberals-conservatives-wrong-section-230-reform-repeal/.] a restaurant site encouraging patron responses, or a retailer inviting users to comment about product offerings.

Professor Franks’ language changes – which seek to limit immunity to genuine “speech” and not to acts of wrongdoing which happen to be carried out with words – strike at the heart of Section 230’s overbreadth by attempting to reduce the protection for internet companies, which are now gargantuan and nothing like what Congress had in mind in 1996. Professor Franks’ changes seek to use recourse through the judicial system as a motivational tool to force online platforms to police their sites and refrain from seeking to monetize actionable content. In this manner, litigation, and the threat of litigation, can serve as an ameliorative force, just as the plaintiffs’ bar would argue that threats of product liability actions lead to safer consumer goods. Courts will play a central role in policing platforms, a responsibility that the legislative branch has declined to undertake.
