Cybersecurity researchers warn that misinformation campaigns from convincing bots are fueling much of the social media debate about lifting coronavirus lockdowns and reopening the economy.
A Carnegie Mellon University team analyzed more than 200 million tweets discussing coronavirus or COVID-19 since January. And they found that between 45% and 60% of them came from bots, aka automated user accounts that mimic human interactions on Twitter, such as tweeting content and retweeting anything posted by a specific set of users or featuring a specific hashtag.
What’s more, 82% of the top 50 most influential coronavirus/COVID-19 retweeters were bots, as were 62% of the top 1,000 retweeters, according to the report.
“We’re seeing up to two times as much bot activity as we’d predicted based on previous natural disasters, crises and elections,” wrote Kathleen Carley, a professor in the School of Computer Science’s Institute for Software Research and director of the Center for Informed Democracy & Social Cybersecurity (IDeaS).
The research team used artificial intelligence and network analysis techniques to identify which of those hundreds of millions of tweets came from likely bot accounts, examining signals such as follower counts, tweeting frequency and an account’s mentions network.
“Tweeting more frequently than is humanly possible, or appearing to be in one country and then another a few hours later, is indicative of a bot,” Carley explained.
“When we see a whole bunch of tweets at the same time or back to back, it’s like they’re timed,” she added. “We also look for use of the same exact hashtag, or messaging that appears to be copied and pasted from one bot to the next.”
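The kinds of signals Carley describes (implausibly fast tweet timing, copy-pasted messaging, a skewed follower graph) can be sketched as simple scoring rules. The Python below is a hypothetical illustration of such heuristics, not the CMU team's actual model; the function name, inputs and every threshold are assumptions:

```python
from collections import Counter

def score_account(tweets, followers, following):
    """Toy bot score combining signals like those described in the article.
    `tweets` is a list of (timestamp_seconds, text) pairs.
    All thresholds are illustrative guesses, not CMU's real parameters."""
    score = 0

    # Signal 1: tweeting faster than is humanly plausible
    times = sorted(t for t, _ in tweets)
    gaps = [b - a for a, b in zip(times, times[1:])]
    if gaps and sum(g < 2 for g in gaps) / len(gaps) > 0.5:
        score += 1  # majority of tweets are less than 2 seconds apart

    # Signal 2: copy-and-pasted messaging repeated across posts
    counts = Counter(text for _, text in tweets)
    if tweets and counts.most_common(1)[0][1] / len(tweets) > 0.3:
        score += 1  # one identical message makes up >30% of the timeline

    # Signal 3: skewed follower graph (mass-following, few followers back)
    if following > 1000 and followers < following / 10:
        score += 1

    return score  # higher = more bot-like
```

A real classifier would combine many more features (account age, mentions network, geolocation jumps), but even this toy version shows how mechanical behavior separates from human behavior.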
The researchers also found that many of the bot accounts tweeting about the coronavirus were created in February, and have been spreading more than 100 types of inaccurate pandemic stories, including phony medical advice and potential cures.
These accounts have also amplified conspiracy theories, such as speculation about the origins of the virus, claims that hospitals are being filled with mannequins instead of actual patients, and claims linking the coronavirus to 5G towers.
And these conspiracy theories are referenced in many of the tweets pushing to reopen America ASAP.
Last month, Twitter announced it was taking down posts that spread coronavirus conspiracy theories that could “incite people to engage in harmful activity,” which followed reports of attacks on 5G telecom masts in the U.K. as conspiracy theorists linked the coronavirus with 5G cellular service — a myth that has been widely debunked.
The social media site claimed to have removed more than 2,230 tweets containing “misleading and potentially harmful content” between March 18 and April 22 of this year, and added that its automated systems have challenged more than 3.4 million accounts “targeting manipulative discussions around COVID-19.”
Fake news travels fast. A 2018 MIT report found that false news stories are 70% more likely to be retweeted than true ones. And it takes true stories about six times as long to reach 1,500 people as it takes for false stories to reach the same number of people.
While the report cannot identify a definitive motive, Carley noted that “conspiracy theories increase polarization in groups. It’s what many misinformation campaigns aim to do. People have real concerns about health and the economy, and people are preying on that to create divides.”
The researchers have also begun to analyze Facebook, Reddit and YouTube posts to better understand how misinformation spreads between the platforms.
Unfortunately, the CMU report cannot identify who or what entity is behind such “orchestrated attempts” to influence the online conversation about the pandemic and reopening the economy. “We do know that it looks like it’s a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that,” said Carley.
There’s also depressingly little that can be done to stop this, the report admits, as blocking bot accounts doesn’t stop new ones from popping up in their place. Carley told the MIT Technology Review that an oversight group made up of the government, corporations and researchers would be needed to come up with policies to combat the constant barrage of bots. “No one group can do it alone,” she said.
But individuals can still protect themselves from being influenced by bots online by knowing what to watch out for while scrolling. That includes looking for many tweets coming out quickly from one account, or a username and a profile photo that don’t match, or tweets that share links with subtle typos.
Also, bots tend to just retweet and post links rather than create any original content. And their profiles are often impersonal to the point of having no photo, along with usernames that are just a string of random numbers and letters.
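One of those telltale signs, a username that is just a string of random numbers and letters, can itself be approximated with a rough check. The sketch below is purely illustrative; the function name and thresholds are assumptions, not any platform's actual detection logic:

```python
import re

def username_looks_generated(username: str) -> bool:
    """Rough heuristic for auto-generated usernames: a heavy digit ratio
    or a long run of consonants/digits suggests a random string."""
    digits = sum(c.isdigit() for c in username)
    # Long names that are mostly digits look machine-made
    if len(username) >= 8 and digits / len(username) > 0.4:
        return True
    # A run of 6+ word characters with no vowels (consonants/digits) does too
    if re.search(r"[^aeiouAEIOU_\W]{6,}", username):
        return True
    return False
```

Such a check would produce plenty of false positives on its own, which is why the researchers combine many signals rather than relying on any single one.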
“Even if someone appears to be from your community, if you don’t know them personally, take a closer look, and always go to authoritative or trusted sources for information,” Carley said. “Just be very vigilant.”