Last week, President Trump signed an executive order on preventing online censorship. While the order is explicitly aimed at curbing “selective censorship” of content by “large, powerful social media companies,” the effects could be far broader.
The order’s stated justification is that “[o]nline platforms are engaging in selective censorship that is harming our national discourse.” Specifically, the order addresses online platforms flagging content as inappropriate even though it does not violate the stated terms of service; changing company policies to disfavor certain viewpoints; deleting content or accounts without warning, explanation, or recourse; and demonstrating “political bias” through labels placed on certain user content. In other words, the order argues that certain online platforms are not moderating content in “good faith.” The order specifically pertains to certain provisions within Section 230(c) of the Communications Decency Act, which protects online platforms when they restrict access to or availability of content.
Communications Decency Act Section 230
In October 1994, an internet forum user created a post defaming an investment bank. The investment bank sued the user and the owner of the forum, Prodigy Communications Corporation, for defamation. The New York Supreme Court held that, because Prodigy actively moderated the forum — deleting certain users’ posts on the basis of offensiveness or bad taste — it could be held liable as a publisher of the defamatory post. Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 BL 129 (N.Y. Sup. Ct. May 24, 1995).
For lawmakers hoping to reduce internet obscenity, the Prodigy ruling revealed a policy problem. If a website allowed all user-generated content with no moderation, it bore no liability for that content. But once a website removed or otherwise moderated objectionable content, as Prodigy did, it became liable for anything that it missed.
In 1996, Congress passed Section 230 of the Communications Decency Act to eliminate this dilemma. Specifically, Section 230(c)(2) states:
No provider or user of an interactive computer service shall be held liable on account of —
- (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected …
This key provision allows websites to moderate user-generated content without being held liable for all user-generated content.
Executive Order Preventing Online Censorship
The executive order first states that Section 230(c)(2) protections do not apply to an online platform that moderates content in a deceptive or pretextual way, aiming to stifle viewpoints with which it disagrees. To advance this position, the order directs the secretary of commerce to file a petition for rulemaking with the Federal Communications Commission (FCC), asking the agency to propose regulations clarifying Section 230.
Second, the order directs executive departments and agencies to review their marketing spending on online platforms to assess whether those platforms are “problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.”
Third, the order states that “large online platforms, such as Twitter and Facebook” are equivalent to the modern public square and should not restrict protected speech.
Fourth, the order suggests that online platforms may be misrepresenting their moderation policies to the public and encourages the Federal Trade Commission (FTC) to consider taking action to prohibit these “unfair or deceptive acts or practices” under 15 U.S.C. § 45.
Fifth, for large online platforms, the FTC is directed to consider whether reports of political viewpoint-based moderation “allege violations of law.”
Sixth, the attorney general is directed to establish a working group regarding the potential enforcement of state statutes that prohibit online platforms from engaging in unfair or deceptive acts or practices. This group shall collect publicly available information regarding a variety of alleged moderation practices.
Seventh and finally, the attorney general is directed to develop proposed federal legislation to further promote the objectives of the order.
What Will Change?
The executive order is ambitious, but sweeping changes are unlikely because many of its objectives could prove difficult to achieve. The order’s interpretation of Section 230(c)(2), which would revoke protections for online platforms that moderate content in a deceptive or pretextual way to stifle certain viewpoints, does not bind any court. Nor is the FCC likely to be particularly helpful: it has not traditionally issued regulations regarding Section 230, and even its authority to interpret the statute is uncertain. Moreover, any holding that large online platforms cannot restrict protected speech because they function as the modern public square would require major changes in First Amendment jurisprudence.
In the near term, however, executive departments and agencies may reduce marketing spending on large social media platforms that are viewed as censoring certain political viewpoints. The order’s other provisions, which encourage prosecution of certain online platforms for unfair and deceptive trade practices (specifically, moderating in ways that do not comport with their stated moderation policies), will likely be more effective as a political tool than a legal one, discouraging what could be considered overtly political moderation decisions. Finally, while the stated purpose of the order is to target large social media companies, its interpretation of Section 230, and any regulations that follow, would apply broadly to any company with an interactive online service, forum, or interface that hosts user-generated content.
In light of the potential confusion surrounding the executive order, it is helpful to remember the core of Section 230. A provider of an interactive computer service (i.e., a website or app) is generally not liable for what other users post. The provider is, of course, liable for content it posts itself. Good-faith moderation of user-submitted content to restrict access to or availability of material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable does not increase a provider’s liability.
With the order’s increased scrutiny on content moderation, companies should take inventory of whether they host user-generated content and, if so, where. Is there a comment section beneath the company blog? Can reviews be posted within the company app? Does anyone moderate this user-generated content and, if so, who? Is that person familiar with the company’s moderation policy, which should reflect the company’s user-facing terms and conditions?
Any company that hosts user-submitted content should ensure that its moderation practices match the policies outlined in its terms of service. Removing discrepancies between stated policies and actual practices is the best way to demonstrate good faith and reduce the risk of an unfair and deceptive trade practices lawsuit.
As directed by the order, further guidance and clarifications of Section 230(c) are forthcoming. We will continue to monitor any guidance and proposed changes. Although sweeping changes are unlikely, it will be important for companies that host user-submitted content to be aware of the potential legal implications of moderating such content.