As Trump challenges social-media companies, how Twitter, Facebook and YouTube deal with misinformation and glorification of violence

As President Trump seeks to limit social-media companies’ power, the internet’s biggest platforms have come under scrutiny yet again for how they deal with controversial content — whether it’s posted by the president or the average user.

Trump signed an executive order last week challenging the protections afforded by Section 230 of the 1996 Communications Decency Act, which says that online platforms shouldn’t be held liable for content provided by users. The order claims the law was not intended to afford large social-media platforms “blanket immunity when they use their power to censor content and silence viewpoints that they dislike.” Legal experts have deemed the order mostly toothless, but say it could pave the way for legislation.

The order came two days after Twitter (TWTR) affixed fact-check labels to two Trump tweets that made unsubstantiated claims about mail-in ballots. Mark Zuckerberg, the CEO of rival Facebook (FB), responded by claiming that “Facebook shouldn’t be the arbiter of truth of everything that people say online,” echoing his previous comments on the matter. (As MarketWatch has pointed out, Facebook removed a Trump campaign ad in March to prevent confusion over the 2020 Census.)

Twitter made another unprecedented move Friday morning after the president wrote that “when the looting starts, the shooting starts” in a tweet about protests over the death of George Floyd, a black man who died in Minneapolis police custody. The tweet, while still viewable because “it may be in the public’s interest for the Tweet to remain accessible,” is shielded by a message noting that it violates Twitter’s rules about glorifying violence.

Trump attempted to clarify his comments in a tweet Friday afternoon.

“Looting leads to shooting, and that’s why a man was shot and killed in Minneapolis on Wednesday night – or look at what just happened in Louisville with 7 people shot. I don’t want this to happen, and that’s what the expression put out last night means….” he wrote. “It was spoken as a fact, not as a statement. It’s very simple, nobody should have any problem with this other than the haters, and those looking to cause trouble on social media. Honor the memory of George Floyd!”

When it comes to judging content, Twitter, Facebook and Google-owned YouTube (GOOG) are all essentially trying to predict the harm that might be associated with a given piece of content and then moderate based on that predicted harm, said Cliff Lampe, a professor of information at the University of Michigan.

“[They’re] looking at things that pretty much any reasonable person would agree would harm society, whether that be a crime, violence against people, or misinformation,” he said. “Of course, misinformation is the most contentious of those — because in this current context, one person’s misinformation might be another person’s truth.”

Though all of these platforms are wrestling with these issues, their approaches to them — and how those actually play out on the platforms — differ slightly, said Henry Fernandez, a senior fellow at the Center for American Progress’s Action Fund and the co-chair of the Change the Terms Coalition, a group of 40 organizations working to combat hateful content on technology platforms.

For example, Twitter allows white supremacists and white nationalists to have accounts, whereas Facebook and YouTube generally do not. (All three will typically remove a user for repeatedly inciting violence.) The platforms also have policies regulating content that promotes misinformation, particularly around voting.

“Where they have gotten into difficulty is around the issues of how they will enforce their rules when you’re talking about elected officials,” Fernandez said. “All of the platforms have drawn distinctions on elected officials.” In other words, their content is typically not fact-checked or removed in the same way a similar sentiment would be if it came from a regular user.

Twitter’s steps this week to alert users when content from elected officials either provides misinformation about voting or incites violence contrast with how the other platforms treat this content, Fernandez said. The move “represents a remarkable effort by Twitter and its leadership to protect the First Amendment,” he said.

Of course, moderation efforts can have their shortcomings, Lampe said. For example, Twitter will get pushback on flagged content as people argue about what is and isn’t true, he said; plus, “there’s just too much content for them to do it evenly across the board, so it’s going to feel unfair to a lot of people.” And Facebook’s approach of shutting down groups helps to take “a big node out of a network,” he said, but it’s easy enough to create new groups in their place, “and, of course, you don’t have to create groups to have bad content and share misinformation.”

Here’s what the world’s biggest social-media companies have said in recent months about harmful content, misinformation and hate speech:

Twitter

Earlier this month, Twitter announced it would introduce new labels and warnings to combat “potentially harmful and misleading content” related to COVID-19. Labels on such content would link users to information from an “external trusted source” or a page curated by Twitter, the company said.

The company may also apply a warning that a tweet conflicts with public-health guidance before users are allowed to view it, it said, “depending on the propensity for harm and type of misleading information.”

Twitter outlined in a chart how it would (or wouldn’t) act on false or misleading content, depending on the propensity for harm: Misleading information with a severe propensity for harm warrants removal, for example, while disputed information with a severe propensity for harm receives a warning. Meanwhile, the company said it would take no action against unverified claims.
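
For concreteness, that chart reduces to a small lookup. Here is a minimal sketch in Python of the three outcomes the chart names, with everything the article doesn’t spell out defaulting to no action; the function name and that default are illustrative assumptions, not Twitter’s implementation.

```python
# A minimal sketch of the decision matrix described above, based only on the
# outcomes named in Twitter's chart as reported here. Any combination the
# article doesn't spell out falls through to "no action" in this toy version;
# the real chart covers more cases.

def moderation_action(claim_type: str, harm: str) -> str:
    """Return Twitter's documented response for a (claim type, harm) pair.

    claim_type: "misleading", "disputed" or "unverified"
    harm: "severe" or "moderate" (propensity for harm)
    """
    if claim_type == "misleading" and harm == "severe":
        return "remove"
    if claim_type == "disputed" and harm == "severe":
        return "warning"
    if claim_type == "unverified":
        return "no action"
    return "no action"  # simplification: unreported cells default to no action


if __name__ == "__main__":
    print(moderation_action("misleading", "severe"))  # remove
    print(moderation_action("disputed", "severe"))    # warning
    print(moderation_action("unverified", "severe"))  # no action
```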

Twitter had earlier updated its rules for “synthetic and manipulated media” in February, laying out new criteria for labeling and removing such posts that might impact public safety or cause serious harm. Harms under consideration include threats to a person or group’s physical safety; threats to privacy, freedom of expression or civic participation; and risk of mass violence.

“You may not deceptively share synthetic or manipulated media that are likely to cause harm,” head of site integrity Yoel Roth and group product manager Ashita Achuthan wrote in an official Twitter blog post. “In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.”

The company also has a “glorification of violence” policy that prohibits celebrating, praising or condoning violent crimes, violent events that targeted people because of their protected-group status, and perpetrators of such violence.

A Twitter spokeswoman declined to comment for this story.

Facebook

Facebook, which has come under fire in the past for allowing political misinformation to proliferate on its platform, says in its “false news” policy that it wants to keep users informed “without stifling productive public discourse.”

“There is also a fine line between false news and satire or opinion,” the company says. “For these reasons, we don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.”

On the COVID-19 front, Facebook said in April that it was connecting users to credible public-health resources and stemming the spread of “misinformation and harmful content” by enlisting a growing army of fact-checking organizations.

Facebook has said it will provide News Feed messages to people who previously interacted with harmful, since-removed COVID-19 misinformation, and connect them with information from authoritative sources.

“Once a piece of content is rated false by fact-checkers, we reduce its distribution and show warning labels with more context,” Facebook said. “Based on one fact-check, we’re able to kick off similarity detection methods that identify duplicates of debunked stories.” The company later said it had applied warning labels to some 50 million pieces of COVID-19-related content in April based on about 7,500 articles by fact-checking partners.
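
Facebook hasn’t described how its similarity detection works. As a rough illustration of the general technique, here is a sketch that flags near-duplicates of a debunked story by comparing overlapping character fragments; the shingle size, the Jaccard measure and the 0.8 threshold are all assumptions for the example, not Facebook’s method.

```python
# An illustrative sketch of near-duplicate detection, a common way to match
# copies of a debunked story. Facebook hasn't published its method; the
# 5-character shingles and the 0.8 threshold here are arbitrary assumptions.

def shingles(text: str, k: int = 5) -> set:
    """Break text into overlapping k-character fragments ("shingles")."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of the overlap divided by size of the union."""
    return len(a & b) / len(a | b)

def is_duplicate(candidate: str, debunked: str, threshold: float = 0.8) -> bool:
    return jaccard(shingles(candidate), shingles(debunked)) >= threshold


if __name__ == "__main__":
    story = "Miracle cure stops the virus in 24 hours, doctors say."
    copy = "Miracle cure stops the virus in 24 hours, doctors say!!"
    unrelated = "City council approves new budget for road repairs."
    print(is_duplicate(copy, story))       # True
    print(is_duplicate(unrelated, story))  # False
```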

Facebook, alongside YouTube, removed a viral video this month billed as a trailer for a conspiracy theory-fueled film called “Plandemic.” But by the time Facebook took it down, it had already accumulated 1.8 million views and almost 150,000 shares, Digital Trends reported.

The company is also cracking down on event pages that promote the violation of public-health guidance, telling news outlets last month that “events that defy governments’ guidance on social distancing aren’t allowed on Facebook.”

Facebook’s community standards claim the site will “remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation” and “remove language that incites or facilitates serious violence.” Facebook and its subsidiary Instagram had left Trump’s “shooting” post intact without any label.

Facebook did not immediately return a MarketWatch request for comment about its policies.

YouTube

YouTube has instituted policies to combat the spread of COVID-19 medical misinformation “that poses a serious risk of egregious harm,” including content that contradicts public-health officials’ guidance on medical treatment, prevention methods, diagnostic methods and transmission information.

The company has removed thousands of COVID-19-related videos in violation of its misinformation policies, YouTube chief product officer Neal Mohan told Axios in April, adding that it is promoting content from health officials and news outlets. The company also removes content that provides misinformation about voting — for example, an incorrect voting date or content that provides false information about a candidate’s eligibility for office.

Meanwhile, the video platform says it removes “content promoting violence or hatred against individuals or groups” based on race, sex, gender identity, disability status, religion, sexual orientation and many other attributes.

“This means we don’t allow content that dehumanizes individuals or groups with these attributes, claims they are physically or mentally inferior, or praises or glorifies violence against them,” the company says. “We also don’t allow use of stereotypes that incite or promote hatred based on these attributes.”

In some cases, content directed at an individual that doesn’t rise to the level of hate speech may still be removed under YouTube’s harassment or violence policies, according to its website.

As is the case with most social-media companies, YouTube’s policies on these topics are evolving. The company updated its hate-speech policy in June 2019 to specifically ban videos that glorify Nazi ideology or claim that well-documented violent events, such as the Holocaust or the Sandy Hook shooting, didn’t take place.

To enforce its hate-speech policy, the company uses machine learning to help identify content for human review, according to its website. “The overwhelming majority of content that’s removed on YouTube is removed by automation,” said Fernandez, “whereas flagging is a much more important tool on Twitter, and Facebook kind of falls more in the middle of those.”
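
No platform has published its enforcement pipeline, but the division of labor Fernandez describes is often implemented as score thresholds: a classifier rates each post, automation acts on the confident extremes, and the ambiguous middle goes to human reviewers. A minimal sketch of that pattern, with invented thresholds:

```python
# A hedged sketch of the triage pattern described above: a classifier scores
# each post, automation handles the confident extremes, and the ambiguous
# middle is queued for human review. The thresholds are stand-ins; no
# platform has published its actual values.

AUTO_REMOVE = 0.95   # assumed: near-certain violations are removed automatically
HUMAN_REVIEW = 0.60  # assumed: uncertain cases go to a person

def route(post_id: str, violation_score: float) -> str:
    """Route a post based on a classifier's estimated violation probability."""
    if violation_score >= AUTO_REMOVE:
        return f"{post_id}: removed automatically"
    if violation_score >= HUMAN_REVIEW:
        return f"{post_id}: queued for human review"
    return f"{post_id}: left up"


if __name__ == "__main__":
    for pid, score in [("a1", 0.99), ("b2", 0.72), ("c3", 0.10)]:
        print(route(pid, score))
```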