Senate Panels to Explore Content Moderation on Social Media

Election day is near, presidential debates are over and Amy Coney Barrett is seated on the Supreme Court. All that’s left are congressional hearings where the heads of Facebook and Twitter will be grilled over moderating election-related content.

Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg will appear once and maybe twice before Senate committees to address the issue of content moderation, which some critics equate with censorship and others view as necessary to blunt the impact of disinformation campaigns.

Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey and Google’s Sundar Pichai appear today before the Senate Commerce, Science and Transportation Committee. Separately, Senate Judiciary Chair Lindsey Graham and the panel’s Republicans voted last week to subpoena Zuckerberg and Dorsey to appear before next week’s election; committee Democrats have asked for the two to testify after the election.

Congressional Republicans want to press Zuckerberg and Dorsey on their allegedly biased handling of the Hunter Biden story published in the New York Post. Democrats want to probe existing law that shields social media platforms from lawsuits over what is posted on them by users.

The hearings come in the lingering shadow of the 2016 presidential election and amid warning lights flashing ahead of the 2020 vote. Because social media is highly accessible and generally free, it can be used to spew disinformation, as was the case in 2016, according to U.S. intelligence sources. Content moderation, often performed by computers and backed up by human monitors, is intended to block offensive or intentionally false posts.

Content moderation is used to block posts with nudity, offensive symbols, hate speech, violence and weapons on social media. In response to criticism following the 2016 election, Facebook and Twitter have also taken steps to moderate political content, especially just before and just after the 2020 election. Both have placed restrictions on paid political advertisements.

You could say the two largest social media sites in the politisphere want to avoid being the cannon that shoots off an October surprise that turns out to be a dud.

That brings the conversation to the Hunter Biden story. Republicans, seemingly spearheaded by Rudy Giuliani on behalf of President Trump, have been scavenging for proof of Hunter Biden corruption in Ukraine and China that rubbed off on his dad, Joe Biden, the Democratic presidential nominee. Based on a laptop that allegedly contains incriminating emails, the New York Post ran a story that circulated widely in the conservative media universe but didn’t get coverage in mainstream media.

No less than Donald Trump Jr. and Trump attorneys pitched the same story to The Wall Street Journal, which wasn’t granted access to the laptop and declined to publish a story. Trump’s dust-up with Lesley Stahl during a pre-election interview for CBS’ 60 Minutes occurred in part because Stahl said the show wouldn’t air unsubstantiated claims about the Biden family.

Twitter chose to block users who tried to share the New York Post story, which prompted Republican Senators Ted Cruz and Josh Hawley to cry foul and claim censorship. Facebook also limited distribution of the story pending fact-checking.

Those actions served to tee up congressional review of Section 230, which was adopted in the 1996 Communications Decency Act and became part of the federal Communications Act of 1934. Section 230 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” There is widespread agreement that this freedom from lawsuits enabled the internet generally, and social media specifically, to flourish.

Frustration with social media has led some, including the king of tweeters Donald Trump, to call for repeal of Section 230. Republican Senate Judiciary Chair Lindsey Graham and Ranking Democrat Dianne Feinstein have cosponsored the EARN IT Act, which would require social media platforms to earn their legal immunity under Section 230 by complying with new government rules intended to combat child trafficking and exploitation.

The push for some form of content moderation isn’t just an American preoccupation. This spring, France enacted its “Fighting Hate on the Internet” law, based broadly on Germany’s 2017 Network Enforcement Act, which imposes stringent intermediary liability provisions. Germany’s law requires social network companies to immediately take down material deemed “obviously illegal” or face heavy fines, without judicial oversight or safeguards.

The French Constitutional Council struck down the “Fighting Hate” measure, while German lawmakers toughened their law by requiring social media platforms to report to police any material they remove.

Brazil waded into similar waters, approving a controversial and ultimately watered-down measure that requires social media platforms to issue transparency reports and ensure due process for content moderation decisions.

One of the best descriptions of the thinking behind Section 230 came in a pair of 1990s lawsuits involving CompuServe and Prodigy. One judge dismissed a lawsuit against CompuServe, calling it the equivalent of a newsstand or bookstore that provides material without editing it. Another court let a similar lawsuit against Prodigy continue, likening the online information service, which moderated content, to the editor of an editorial page. The message of sorts back then was that immunity is strongest when editing or moderation is least. That view appears to be changing, though to what is not yet clear.

Content moderation is not all the same. It can occur before a post goes live on social media, after something offensive is posted or in reaction to a post. Moderation can limit or prevent paid or viral distribution. A flagged post can remain online with an accompanying warning or a link to a source of unbiased information. There are even companies that specialize in content moderation for websites, social media sites and technology companies.
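To make those options concrete, here is a minimal, hypothetical sketch in Python of how such a decision flow might look. Every name, category and threshold below is invented for illustration; it does not reflect any actual platform’s system or policy.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Possible moderation outcomes, mirroring the options described above."""
    ALLOW = auto()               # post goes live untouched
    LABEL = auto()               # stays online with a warning or context link
    LIMIT_DISTRIBUTION = auto()  # remains visible but isn't amplified or promoted
    ESCALATE = auto()            # routed to a human reviewer
    REMOVE = auto()              # taken down entirely


@dataclass
class Post:
    text: str
    violation_score: float   # hypothetical classifier confidence (0.0-1.0) that the post breaks policy
    is_paid_ad: bool = False


def moderate(post: Post) -> Action:
    """Toy decision rule combining automated scoring with human backup.

    Thresholds are arbitrary placeholders, not real platform policy.
    """
    if post.violation_score >= 0.9:
        return Action.REMOVE
    if post.violation_score >= 0.6:
        # Uncertain cases go to human monitors -- the "computers
        # backed up by human monitors" model described above.
        return Action.ESCALATE
    if post.violation_score >= 0.3:
        # Borderline content stays up, but paid amplification is
        # curtailed; organic posts get a warning label instead.
        return Action.LIMIT_DISTRIBUTION if post.is_paid_ad else Action.LABEL
    return Action.ALLOW
```

The point of the sketch is the escalation path: automated scoring handles the clear-cut cases at either end, while ambiguous material in the middle is routed to human reviewers or handled with softer tools like labels and distribution limits rather than outright removal.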

Some observers believe trying to legislate content moderation rules is a hopeless quagmire. A better approach, they contend, is to break up large technology companies. A step in that direction has just been taken with a Trump administration antitrust action against Google and its parent company, Alphabet.