Facebook Inc. Chief Executive Mark Zuckerberg has been a lightning rod for criticism over his company’s handling of content during the 2016 and 2020 presidential elections.
Election watchers bracing for a meltdown on Facebook Inc. and Twitter Inc. were relieved to find no smoking guns. Alphabet Inc.’s YouTube, however, had some major cleanup to do post-election.
Social media platforms generally acquitted themselves reasonably well during national election week, avoiding the scathing criticism they endured four years ago.
But they needed to work overtime to flag or outright remove misinformation — especially about results of the presidential race. YouTube in particular continued to allow videos filled with outright falsehoods.
The first major test came at approximately 3 a.m. ET Wednesday, when Twitter flagged a tweet from President Donald J. Trump that falsely claimed victory. Twitter later tagged a Trump campaign tweet prematurely declaring a win in Pennsylvania. Biden was called the winner on Saturday.
Shortly before the Associated Press and major television networks declared Biden the winner, Twitter flagged a Trump tweet that read, “I WON THIS ELECTION, BY A LOT!”
Twitter permanently suspended an account belonging to former White House chief strategist Steve Bannon after he suggested the beheading of Dr. Anthony Fauci and FBI Director Christopher Wray in a video posted to his Twitter, Facebook, and YouTube accounts on Thursday.
The video was live on Bannon’s Facebook page for about 10 hours Thursday before it was removed. Earlier Thursday night, YouTube removed the video for violating its policy against “inciting violence.”
It wasn’t all positive for Twitter, however. The company hastily removed at least three fake Associated Press accounts Wednesday that prematurely declared Biden the winner in Michigan; hours later, the state was officially called for Biden.
The most alarming example of dereliction of duty came on Google’s YouTube, which continued to serve 2 million subscribers videos of a Thursday Trump press conference riddled with falsehoods aimed at undermining the democratic process. “If you count all the legal votes, I easily win,” Trump falsely claimed.
Another video, from the Trump-leaning news site OANN, declaring his re-election remained on the site for more than a day.
It was an exhausting, trying stretch for the major social-media platforms as they scrambled to stanch a firestorm of disinformation: Twitter flagged more than a dozen Trump tweets, and Facebook Inc. took down a group that made baseless claims about a stolen election.
“Twitter did the best of the three, and YouTube did the worst job,” John Marcinuk, a social-media expert at digital agency Blue Fountain Media, told MarketWatch. “Facebook is trying, but it should have addressed these problems years ago. Its actions almost seem like platitudes.”
It was that kind of week for Twitter and Facebook, both of which suspended or tagged left- and right-leaning news accounts that violated their policies with posts about voting in the hotly contested U.S. election.
On Thursday, as results continued to trickle in from several battleground states, Facebook removed the fast-growing group “Stop the Steal,” whose organizers alleged widespread voter fraud. In less than 24 hours, it had accumulated more than 360,000 members.
“The group was organized around the delegitimization of the election process, and we saw worrying calls for violence from some members of the group,” a Facebook spokesman told MarketWatch.
Consumer advocates, however, counter that Facebook “knew this was coming, and still did next to nothing,” Emma Ruby-Sachs, executive director of the consumer watchdog group SumOfUs, told MarketWatch. “Facebook’s pattern of negligence has continued to be a threat to our lives, our safety and our democracy.”
Indeed, heading into the election, company officials were holding their collective breath. Facebook Chief Executive Mark Zuckerberg all but admitted his trepidation last week when he opened the company’s earnings call with analysts by issuing a warning.
“I’m worried with election results taking up to days and weeks, that there may be civil unrest in our country,” Zuckerberg said. “Next week will certainly be a test for Facebook,” he said. “I know our work does not stop on Nov. 3.”
The mood was uniformly twitchy at Alphabet Inc.’s Google, which also prepared for worst-case misinformation scenarios. Yet for a day its YouTube video service carried a video that falsely claimed Trump won the presidential election and that Democrats committed voter fraud.
The “Trump won” video, posted by the conservative cable network One America News on Wednesday morning, made unsubstantiated claims of “rampant voter fraud” against Republican ballots while urging viewers to “take action” against Democrats. The video had more than 300,000 views before it was tagged with a warning note. This was not supposed to happen.
Anxiety over the spread of misinformation led to special election-day features prominently displayed at the top of each service. Twitter’s “Election Hub” showed tweets and information from reliable news sources. Facebook and Instagram placed confirmed results and warnings at the tops of their apps and Web pages. Google search displayed an in-depth information panel above relevant search results, which relayed election information from the Associated Press; YouTube showed election results from AP when searching for election-related terms.
Those features illustrated what went well for social media. The downside, however, was the rampant extremist content and advertising that coursed through their platforms in the weeks leading up to Nov. 3.
Zuckerberg’s ominous foreshadowing led several analysts, including Needham’s Laura Martin, to do some serious hand-wringing over the company’s near-term prospects, and Facebook shares fell 6% on Friday.
Read more: Facebook sales were great, but ‘headwinds’ spook analysts
In the months leading up to Election Day on Tuesday, the company seemingly adopted a whack-a-mole strategy that both appeased and alarmed security experts.
Facebook claimed it helped an estimated 4.4 million people register to vote across Facebook, Instagram, and Messenger; directed more than 39 million people to its Voting Information Center; and helped about 100,000 people sign up to be poll workers.
“They react. They never proactively do the right thing,” Kiersten Todt, managing director of the Cyber Readiness Institute, told MarketWatch. “They don’t have a strategy. It’s a pinball machine approach.”
If a major post-election flare-up eventually surfaces, critics contend, it will be because of Facebook.
“The policies about ads are good, but the nuts and bolts of the system [in weeding out hate speech and misinformation] hasn’t meaningfully changed,” Angelo Carusone, president of Media Matters for America, told MarketWatch.
In the weeks leading up to Nov. 3, Facebook said it would ban new political advertising the week before the election. It removed misleading Trump ads after the first presidential debate on Sept. 29 and weeded out content from QAnon and Holocaust deniers. It did everything, it seemed, but ban President Trump’s comments on his page, which has 31.9 million followers.
Despite its repeated efforts, however, Facebook appeared to be an unwitting or tolerant enabler of misinformation shared by groups associated with President Trump and by conservative accounts with vast online followings, from a pro-Trump super PAC to Donald Trump Jr., according to a Washington Post analysis of several months of posts and ad spending, as well as internal company documents.
Last month, Sen. Edward Markey, D-Mass., pressed Zuckerberg during a Senate Commerce Committee hearing to take down pages that recruit for extremist groups until U.S. election results are certified. Zuckerberg said Facebook was in the process of doing so.