It’s hard to ignore the warning signs: retweets on the edge of spamming, nonsensical Twitter handles and profiles missing personal information.
Nearly one-fifth of election-related tweets came from bots, according to a new study by University of Southern California researchers Alessandro Bessi and Emilio Ferrara. The study ran from September 16 to October 21, a period covering all three presidential debates.
The researchers used hashtags to aggregate 20 million tweets related to the presidential election and to identify Hillary Clinton or Donald Trump supporters. Bessi and Ferrara then used a machine-learning framework called BotOrNot to distinguish between bots and humans. Social bots made up nearly 15 percent of the accounts in the sample, according to the study.
“There are millions of people if not hundreds of millions who log in to social networks,” Ferrara told Heavy. “These people are exposed to hundreds of sources. Some are reliable, but some may be malicious.”
Using sentiment analysis software, they were also able to gauge the attitudes of bots toward either candidate. The pro-Trump discussion, driven by both bots and humans, was significantly more positive than the pro-Clinton discussion, which contained a roughly even mix of positive and negative tweets.
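Sentiment analysis software of the kind the researchers used typically scores the emotional tone of each tweet. As a rough illustration only (this is not the tool from the study, and the word lists here are hypothetical), a minimal lexicon-based scorer might look like this:

```python
import re

# Hypothetical sentiment lexicons -- real tools use far larger, weighted lists.
POSITIVE = {"great", "win", "love", "best", "support"}
NEGATIVE = {"crooked", "lies", "worst", "hate", "fail"}

def sentiment(tweet: str) -> int:
    """Crude score: count of positive words minus count of negative words."""
    words = re.findall(r"[a-z']+", tweet.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Averaging such scores over millions of tweets is how a discussion can be characterized as leaning positive or negative overall.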
The skewed nature of bot discussions could create misperceptions on social media, said the study’s authors. While Twitter users were less likely to respond to bots than to other humans, they retweeted both bots and humans at the same rate.
“It seems like there’s a strong bias built in by design,” said Ferrara, who noted how supportive pro-Trump bots were of the candidate. The study, published in the peer-reviewed journal First Monday, highlighted the dangers of biased bots:
> The fact that bots produce systematically more positive content in support of a candidate can bias the perception of the individuals exposed to it, suggesting that there exists an organic, grassroots support for a given candidate, while in reality it’s all artificially generated.
How to Spot a Bot
It’s very difficult to identify the ‘bot masters’ because they go out of their way to hide their identity. Key indicators of a bot account include a default profile setup and missing location metadata and activity statistics.
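These indicators can be combined into a simple checklist. The sketch below is an illustrative heuristic only, with made-up field names and thresholds; it is not how BotOrNot actually scores accounts, which relies on a trained machine-learning model over hundreds of features:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    has_default_avatar: bool  # never replaced the default profile image
    has_location: bool        # location metadata filled in
    tweets_per_day: float     # average activity level

def warning_signs(p: Profile) -> int:
    """Count how many of the three indicators above a profile shows (0-3)."""
    score = 0
    if p.has_default_avatar:
        score += 1
    if not p.has_location:
        score += 1
    if p.tweets_per_day > 50:  # implausibly high activity; threshold is a guess
        score += 1
    return score
```

A profile showing all three signs is not proof of automation, but it is the kind of account a classifier would flag for closer inspection.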
Here are some of the behaviors Twitter bots engage in, according to the study.
- Searching Twitter for hashtags and retweeting matching tweets
- Automatically replying to tweets that meet certain criteria
- Automatically following back users
- Adding users who tweet about a specific topic to a list
- Searching Google for news to post
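The first behavior on the list, searching for a hashtag and retweeting the results, is the simplest to automate. A sketch of the pattern, written against a hypothetical client interface rather than the real Twitter API (a real bot would wire this to Twitter through a library and API credentials):

```python
def retweet_hashtag(client, hashtag: str, limit: int = 10) -> int:
    """Retweet up to `limit` tweets matching `hashtag`; return how many were sent.

    `client` is assumed to expose two methods (hypothetical interface):
      search(query)     -> iterable of tweet dicts with an "id" key
      retweet(tweet_id) -> rebroadcasts that tweet from the bot's account
    """
    count = 0
    for tweet in client.search(hashtag):
        if count >= limit:
            break
        client.retweet(tweet["id"])
        count += 1
    return count
```

Run on a schedule, a loop like this lets one operator flood a hashtag around the clock, which is why volume alone is a poor signal of genuine support.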
The pervasiveness of automated tweeting comes as advancements in machine learning make it easy for anyone to set up a Twitter bot. A quick Google search brings up Twitter bot tutorials for users with little or no programming experience.
While Twitter’s rules prohibit spamming and spreading misinformation, Ferrara said he personally doesn’t think Twitter has cracked down hard enough on bot accounts. He added that unless regulation is enacted to counter the rise of malicious bots, social media may continue to foster misinformation.
“We need as a society to come together and come up with rules to prevent spreading rumors and false claims,” Ferrara said.