Can Twitter and Other Online Platforms Legally Moderate Content?
By Julian Oscar Alvarez, Litigation Department and Nathan M. Shaw, Corporate Department
In recent months, a growing number of commentators have suggested that an internet platform such as Twitter commits censorship when it removes content a user posts on one of its webpages, or bans that user altogether.[1] These critics, who now include Supreme Court Justices Samuel Alito and Clarence Thomas, suggest that such content moderation might be illegal under the First Amendment.[2]
Although the First Amendment constrains the government from “abridging the freedom of speech,”[3] private companies such as Twitter are not obligated to guarantee free speech on their platforms. In fact, courts have found that such companies enjoy their own constitutional rights, including free speech. Recent disclosures by Twitter that it was pressured by government agencies to withhold (or post) information are not the subject of this article, but they raise questions about whether the government may be abridging First Amendment rights by proxy.
A company like Twitter enjoys free speech protections when it moderates its platform by removing user content or making it less visible. Until this year, judges had consistently ruled that such activity is akin to a newspaper exercising editorial judgment over its articles. As the law stands now, Twitter rarely (if ever) must host content or users it does not want on its platform.
Circuit Split on Two Laws Regulating Content Moderation
Two federal circuit courts of appeals recently came out on opposite sides of this debate, creating a circuit split on an unresolved question of law. Two of the country’s most populous states, Florida and Texas, adopted laws aimed at regulating how a social media company may moderate its platforms and apply its terms of service. Between the two laws, the states for the first time attempted to limit when such a company may ban a user or delete, hide, or add disclaimers to a user’s post, among other constraints.
In NetChoice, LLC v. Moody, two technology industry trade associations, NetChoice and the Computer & Communications Industry Association (CCIA), promptly sued the State of Florida to stop its law, SB 7072, from being enforced against technology companies. In June 2021, a federal trial court in Tallahassee sided with the trade associations and enjoined enforcement of certain provisions of the law as unconstitutional on free speech grounds.[4] Florida appealed to the Eleventh Circuit, which likewise sided with the technology companies and their trade associations on most of the law’s provisions for the same free speech reasons. The parties are now asking the Supreme Court to hear the case.
Around the same time, NetChoice and the CCIA sued the State of Texas (NetChoice, LLC v. Paxton) to prevent a similar social media moderation law, HB 20, from being enforced against their members. In December 2021, a federal trial court in Austin sided with the trade associations.[5] The court, citing the Moody opinions, held that the First Amendment grants social media platforms editorial discretion to moderate content on their websites. Everything changed a few months ago, however, when the Fifth Circuit reversed, holding that the Texas law was constitutional because it chilled censorship rather than speech. At the technology companies’ request, the Fifth Circuit stayed its decision pending the Supreme Court’s resolution of the Florida litigation.
To summarize, the Fifth and Eleventh Circuits reached opposite conclusions on whether the First Amendment protects a social media company’s content moderation. The Eleventh Circuit in Moody found that it does, comparing content moderation to a newspaper’s exercise of editorial discretion. In contrast, the Fifth Circuit in Paxton held that the First Amendment does not protect a company’s content moderation, characterizing such moderation instead as censorship rather than speech.
Update: In its October 2023 Term, the Supreme Court granted certiorari in Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton.[6]
Update 2: On July 1, 2024, the Supreme Court handed down its decision in Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The Court vacated the judgments of both the Eleventh Circuit and the Fifth Circuit because neither had considered the full range of applications of the challenged laws, as a facial First Amendment challenge requires. A majority of the justices, however, endorsed the Eleventh Circuit’s free speech reasoning and criticized the Fifth Circuit for incorrectly applying the First Amendment.[7]
Communications Decency Act
Beyond Moody and Paxton, the Supreme Court recently took up two cases dealing with Section 230 of the Communications Decency Act (CDA). Section 230(c)(1) immunizes an interactive computer service, such as a website, from civil liability arising from content its users create or develop. Section 230(c)(2), in turn, immunizes such a service from liability for taking down content that (a) was posted to one of its pages and (b) it finds objectionable. Both complementary immunities are subject to exceptions codified in Section 230(e); for example, Section 230 might not grant immunity from liability for aiding and abetting terrorism.
In October 2022, the Supreme Court granted certiorari in Gonzalez v. Google and Twitter v. Taamneh, two California cases heard by the Ninth Circuit dealing with a potential terrorism exception to Section 230. In both cases, the plaintiffs sued the respective tech giant for prioritizing and recommending terrorist recruitment content after a family member died in an attack waged by the Islamic State of Iraq and Syria. The Ninth Circuit, deciding both cases together, held that the plaintiffs’ aiding-and-abetting claims could proceed on the facts of Taamneh, but not on those of Gonzalez.[8] This ruling contrasts with Force v. Facebook, in which the Second Circuit found in 2019 that similar causes of action did not pierce the tech companies’ Section 230 immunity.[9] The Supreme Court now has an opportunity to shape the law of internet speech platform regulation, at least as it pertains to CDA Section 230.[10]
Content Moderation’s Future
Content moderation on social media platforms could change depending on how the Supreme Court resolves these recent cases concerning free speech on the internet. Before the Fifth Circuit’s decision in Paxton, internet platforms had nearly unfettered discretion under the First Amendment to take down content and ban users. Future developments in the law of internet speech platform regulation, however, may force social media companies to host or carry content and users they find “objectionable.” That possibility would create a wide variety of new legal issues for such companies to navigate.
Authors:
Julian Oscar Alvarez, Associate, Litigation Department
Nathan M. Shaw, Associate, Corporate Department
[9] Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), cert. denied, No. 19-859 (U.S. May 18, 2020), https://casetext.com/case/force-v-facebook-inc.
Jeffer Mangels Butler & Mitchell LLP is a full-service law firm committed to providing clients with outstanding results. Our corporate lawyers serve numerous middle-market companies, large publicly traded corporations and emerging entrepreneurial businesses with a full range of financing, transactional and operational counsel, as well as in all aspects of mergers and acquisitions.