The insurrection at the U.S. Capitol has led to the biggest reckoning in the history of the internet, and will likely be a tipping point for new regulation of American social-media companies.
The unprecedented suspension of President Trump from platforms owned by some of the biggest companies in technology, including Alphabet Inc.’s YouTube, Twitter Inc. and Facebook Inc., has fueled new calls for laws to hold social-media companies accountable for what they publish and what they won’t publish.
U.S. legislators have avoided creating new legislation for years, while Big Tech has, until now, taken a mostly hands-off approach to content moderation. We have seen the results: dangerous groups using these platforms to spread misinformation, attract new conspirators and plan attacks.
Read also: Twitter permanently suspends President Trump’s account
The question that U.S. legislators and leaders have avoided for years is what useful reform looks like. It is a central issue that the incoming Biden Administration, along with Congress, is going to have to confront.
Section 230 is focus of proposed reform
As animosity toward Big Tech has reached a crescendo, attention has centered on the main legislation that governs the internet, known as Section 230, a provision of the Communications Decency Act enacted 25 years ago. It remains the only major internet law the U.S. has managed to pass.
Section 230 provides wide protections for companies that publish third-party content, and is described as the “most valuable tool for protecting freedom of expression and innovation on the internet,” by the Electronic Frontier Foundation, a San Francisco nonprofit focused on digital rights.
Under Section 230, no online service or intermediary shall be held legally responsible “for information provided by another information content provider.”
As social-media companies have become a venue for hate speech, bullying and, most recently, the planning of an insurrection against the U.S. government, ire and frustration with these companies are renewing calls to revise or rescind Section 230. It is a political hot potato: Democrats want more content moderation, while Republicans say there is too much moderation, and even censorship.
“There is a moment now, and an inflection point, because the companies have been suddenly stepping up in a way that they had not before,” said Irina Raicu, internet ethics program director at the Markkula Center for Applied Ethics at Santa Clara University. “There is this looming threat that is much more real and immediate than we have seen in past years. Then maybe this is the right time for those conversations to happen.”
More from Therese: Regulating Big Tech will be hard, and California is proving it
Raicu said recent demands in Congress for better content moderation and legal accountability by the companies are unfortunately being propelled by misinformation touted by conservatives, who claim social media is biased against their point of view when it is not.
Social-media giants have done a far better job of fact-checking content on their sites as calls, and actual proposals, to remove or rewrite Section 230 have increased. Just because one side of the aisle posts and spreads more false information does not mean there is a bias against it; the bias is against misinformation.
After misinformation, foreign interference, and the Cambridge Analytica scandal had a big impact on the 2016 U.S. presidential election, social-media companies began taking a more active role in content moderation than ever before.
At the onset of the coronavirus pandemic, Facebook, Twitter, Google and others joined forces to fight misinformation about the pandemic.
In May 2020, for the first time, Twitter began labeling some of President Trump’s tweets as “containing potentially misleading information,” after allowing him to tweet propaganda and lies for years. Raicu also believes that internal efforts by employees with a social conscience began to have an effect on overall corporate behavior.
In response, Trump and other Republican politicians began calling for the removal of the legal immunity that shields companies from repercussions for falsehoods, hate speech and violent behavior on their platforms. Trump signed an executive order in May 2020 to limit companies’ legal protections under Section 230, but it had no teeth to amend existing law and was described as political theater by some, such as Eric Goldman, a law professor at Santa Clara University Law School.
“I think people appreciate that the wild wild west is not working,” said Christa Ramey, a Los Angeles-based attorney who includes cyberbullying as one of the focuses of her practice. “If these companies are held responsible, their content will change.” But therein lies the problem.
The problem with rescinding Section 230
Social-media traffic thrives on acrimony, arguments and rage, which drive increased interactions, so for years the companies had little incentive to moderate much of the content on their sites: it was extremely profitable, and they had no liability for hate speech or other scary behavior.
But as these companies have seemingly developed more of a conscience, and in anticipation of possible regulation that might take their immunity away, more moderation has become preferable to potentially never-ending litigation.
If that immunity is removed by rescinding or substantially revising Section 230, and social-media companies become as responsible for their content as newspapers, broadcasters and other media companies, the internet could be in grave danger of becoming a series of walled content gardens.
“Think of all the ways internet services allow us to talk to each other,” said SCU professor Goldman, who is also a co-director of its High Tech Law Institute. “That’s why I keep saying we could lose the internet that we love. People want to talk to each other on the internet. Section 230 has been the engine that permits those conversations. Without Section 230, those conversations are gone. What we love most about the internet could go away.”
“People want to burn down Section 230 but whatever problem they are hoping to solve, they will be making it worse,” he added. “For many of the problems [of the internet] people complain about, Section 230 is the solution, not the problem.”
See also: Biden inherits a tech Cold War with China
Twitter was able to remove Trump and his incendiary comments about the 2020 presidential election because of its ability to moderate content under Section 230, he added.
Those actions, though, raised the question of whether Twitter, Facebook and other social-media companies are in fact publishers, which are subject to libel and defamation litigation, rather than internet platforms, which have immunity from the ramifications of third-party content.
That debate is an old one, but it has never looked more relevant than in the past two weeks, after what looked like editorial decisions by Twitter and Facebook.
Goldman said that the platform-versus-publisher debate, which also came up during last year’s congressional grillings of the chief executives of Facebook, Twitter and Alphabet, is largely irrelevant. He noted that when media companies publish the work of third parties over the internet, they too enjoy legal immunity.
As the Electronic Frontier Foundation explained in a blog post last month: “There is no legal significance to labeling an online service a ‘platform’ as opposed to a ‘publisher.’ Yes. That’s right. There is no legal significance to labeling an online service a ‘platform.’ Nor does the law treat online services differently based on their ideological ‘neutrality’ or lack thereof.”
David Greene, the EFF’s civil liberties director, continued, “Section 230 explicitly grants immunity to all intermediaries, both the ‘neutral’ and the proudly biased. It treats them exactly the same, and does so on purpose.”
What is the best way to reform Section 230?
If the current tide does lead to legal reform, getting rid of Section 230 completely would be a disastrous way to do it.
Goldman also argues that it is the special immunity under Section 230 that let companies like Zoom Video Inc. become the de facto host for online education, allowed online marketplaces for e-commerce to thrive, and enabled other digital services that have helped many during the pandemic. “Section 230 literally helped save lives—and our country,” Goldman wrote in a published letter to President-elect Biden.
If Section 230 cannot be rescinded without completely destroying the core of what consumers love and need about the internet, what can be done?
In the last year, more than two dozen bills have been introduced in Congress to reform or repeal Section 230. But each bill must be studied carefully, and Congress, as well as the Biden Administration, needs to proceed cautiously.
Read: Can Intel’s ‘boy wonder’ pull a Steve Jobs?
Since the creation of the CDA in 1996, there has been one major revision to Section 230: FOSTA-SESTA, passed in 2018 to stop the facilitation of sex trafficking over the internet. Many believe the law has been ineffective and has made sex workers more vulnerable, because they are back on the streets rather than finding work via the internet.
“It’s clear that Section 230 is going to be subject to some reform, not because it should be, but because so many people believe there is a problem,” Goldman said.
One of the more interesting bills, sponsored by Congresswoman Anna Eshoo, D-Calif., and Congressman Tom Malinowski, D-N.J., seeks to hold companies liable if their algorithms amplify harmful, radicalizing content. They described their bill as a “narrow amendment” aimed at creating the first legal incentive for companies to fix the underlying architecture of their services.
Many are also watching the European Union, which is debating a proposed Digital Services Act. In its current form, the act would require companies to describe how their algorithms work, disclose decisions about how content is moderated or removed, and disclose how advertisers target users, while maintaining their legal immunity unless they actually know content is illegal. That, too, could be a minefield, leading to quick content removals simply because someone claims a post is illegal.
“I do feel there is a need for some reform,” Raicu said. “People have been calling for some form of reform for years, feeling the protection is too broad.”
But be careful what you wish for. Goldman fears the internet of the future “will look a lot like Quibi,” with a huge library of short, professionally produced videos behind paywalls. This would also create a massive digital divide, where all content would have to be paid for by consumers.
Everyone is watching what the U.S., home to the biggest tech behemoths and to a deep tradition of free speech, will do about this major conundrum. With a new Congress and administration coming into office, let’s hope that any reform is handled with precision and care; otherwise, it could have dire consequences beyond political fallout.