High Court Reviews Role of Content Moderation
Supreme Court hears arguments on whether state laws in Texas and Florida should go into effect limiting the ability of social media platforms to remove content.

Case Pits Constraining Conservative Voices Against Deleting Harmful Content

After four hours of oral arguments, U.S. Supreme Court justices appeared reluctant to let laws in Texas and Florida go into effect that would prevent social media companies from moderating content on their platforms. The court’s ruling, expected in June, will have a significant impact on First Amendment rights.

A comment by Chief Justice John Roberts seemed to reflect the court’s prevailing view. The First Amendment, Roberts said, prohibits the government from censoring speech, not private entities.

“The First Amendment restricts what the government can do, and what the government [in Texas and Florida] is doing here is saying, you must do this, you must carry these people; you’ve got to explain if you don’t,” Roberts explained. “That’s not the First Amendment.”

Backers of the Texas and Florida statutes claim social media content moderation silences conservative voices, denying the public access to different points of view. They compared technology companies to phone companies that are barred from denying service to anyone. Texas Solicitor General Aaron Nielson said removing content diminishes the role of social media as a public square.

Moderation or Censorship
Justice Samuel Alito appeared sympathetic and asked whether Gmail has a First Amendment right to delete email accounts of conservative commentator Tucker Carlson or liberal commentator Rachel Maddow. Justice Ketanji Brown Jackson raised similar concerns about Facebook’s messaging feature.

However, a majority of justices seemed to agree with technology companies that content moderation is a form of “editorial discretion,” similar to newspapers that decide what to publish, bookstores that decide what to promote in their windows, and theaters that decide what films to show.

A spokesperson for NetChoice, a coalition opposed to the Texas and Florida laws, said content moderation is important to “protect youth online.” Justice Elena Kagan picked up that theme in comments that centered on misinformation. When platforms remove misinformation, Kagan said, they are exercising judgments “about the kind of speech they want on the site and the kinds of speech they think is intolerable.”

Alito pressed NetChoice’s representative to define content moderation, asking whether the term was “anything more than a euphemism for censorship.” “If the government’s doing it, then content moderation might be a euphemism for censorship,” said NetChoice attorney Paul Clement. “If a private party is doing it, content moderation is a euphemism for editorial discretion.”

Alito and Clement had a follow-up exchange regarding Section 230 protection for technology companies. “Either it’s your message or it’s not your message. I don’t understand how it can be both,” Alito said. “It’s your message when you want to escape state regulation, but it’s not your message when you want to escape liability.”

Clement disputed the characterization, arguing that Section 230 protects companies from lawsuits over their decisions to remove content from their websites. The whole point of the provision, Clement said, was to allow online platforms to “essentially exercise editorial discretion” in removing harmful content without fear that doing so would expose them to liability as a publisher of user speech they don’t moderate.

Section 230 and Protecting Youth
If the Texas and Florida laws were to take effect, Clement added, platforms would be forced to carry the type of content that Congress was trying to prevent when it drafted Section 230 nearly 30 years ago.

Section 230 was enacted as part of the Communications Decency Act of 1996 and generally provides immunity for online computer services regarding third-party content generated by users. The section also has been construed to allow web operators to moderate user speech and content under First Amendment protection.

Social media platforms have come under increasing pressure to block harmful content, especially content aimed at young users. States and Congress have debated a range of proposals to address youth access, parental controls and heightened content moderation. Critics have accused major platforms such as Facebook and Instagram of balking at controls that would cut into their profits.

Biden Administration Advice
Justices seemed interested in a suggestion from Solicitor General Elizabeth B. Prelogar, representing the Biden administration. She urged the justices to rule narrowly that the Texas and Florida laws are unconstitutional, without addressing other aspects of the laws.

“It’s not like the government lacks tools,” Prelogar said, pointing to “a whole body of government regulation,” such as antitrust, data privacy and consumer protection laws, that could target platform conduct without raising First Amendment concerns.

The Supreme Court agreed to hear the case after two federal appellate courts issued conflicting rulings. The appellate court in Florida concluded that state’s law likely violated the First Amendment. The appellate court in Texas upheld that state’s law barring removal of content based on political ideology.

Judge Kevin Newsom of the 11th U.S. Circuit Court of Appeals, which reviewed Florida’s law, said social media platforms are distinct from other communications services and utilities that carry data from point A to point B, and that their “content-moderation decisions constitute the same sort of editorial judgments” entitled to First Amendment protections when made by a newspaper or other media outlet.

Judge Andrew Oldham of the 5th U.S. Circuit Court of Appeals, which reviewed Texas’ law, ruled the opposite. He wrote that social media companies had turned the First Amendment on its head by suggesting that a corporation has an “unenumerated right to muzzle speech” by banning users or removing certain posts. Oldham compared social media platforms to “common carriers” such as telephone companies.

Newsom and Oldham were both appointed to the federal bench by former President Donald Trump.

EU and Content Moderation
The European Union’s Digital Services Act (DSA), which went into effect last August, aims to hold tech companies responsible for content posted on their platforms. The companies also are under pressure from news media outlets that complain some of their content has been removed.

The EU’s goal is to foster safer online environments by preventing and removing posts containing illegal goods, services or content while simultaneously giving users the means to report this type of content.

The DSA bans advertising targeted to a person’s sexual orientation, religion, ethnicity or political beliefs and puts restrictions on ads targeting children. It requires more transparency on how social media algorithms work and forces large platforms to give users the right to opt out of recommendation systems and profiling. DSA provisions apply to 19 platforms including Facebook, Instagram, LinkedIn, Twitter, Google Search, Snapchat and TikTok.

Meta, TikTok and Snapchat have given users the ability to opt out of their algorithms or personalized feeds. Snapchat also has scrapped ads targeting users in the 13-17 age range.