By Tauhid Zaman
Even a few bots can shift public opinion in big ways, according to new research from Tauhid Zaman.
Nearly two-thirds of the social media bots with political activity on Twitter before the 2016 U.S. presidential election supported Donald Trump. But all those Trump bots were far less effective at shifting people’s opinions than the smaller proportion of bots backing Hillary Clinton. As my recent research shows, a small number of highly active bots can significantly change people’s political opinions. The main factor was not how many bots there were – but rather, how many tweets each set of bots issued.
My work focuses on military and national security aspects of social networks, so naturally I was intrigued by concerns that bots might affect the outcome of the upcoming 2018 midterm elections. I began investigating what exactly bots did in 2016. There was plenty of rhetoric – but only one basic factual principle: If information warfare efforts using bots had succeeded, then voters’ opinions would have shifted.
I wanted to measure how much bots were – or weren’t – responsible for changes in humans’ political views. I had to find a way to identify social media bots and evaluate their activity. Then I needed to measure the opinions of social media users. Lastly, I had to find a way to estimate what those people’s opinions would have been if the bots had never existed.
Finding Tweeters and bots
To narrow the research a bit, my students and I focused our analysis on the Twitter discussion around one event in the lead-up to the election: the second debate between Clinton and Trump. We collected 2.3 million tweets that contained keywords and hashtags related to the debate.
Then we made a list of the roughly 78,000 Twitter users who posted those tweets and constructed the network of who followed whom among those users. To identify the bots among them, we used an algorithm based on our observation that bots often retweeted humans but were not themselves frequently retweeted.
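To make that heuristic concrete, here is a minimal sketch in Python. It is not the study’s actual detection algorithm, and the thresholds are purely illustrative; it simply flags accounts that retweet frequently but are almost never retweeted themselves.

```python
from collections import Counter

def flag_likely_bots(retweets, min_retweets_made=50, max_times_retweeted=2):
    """retweets: iterable of (retweeter, original_author) pairs.
    Flags accounts that retweet a lot but are rarely retweeted back."""
    made = Counter(retweeter for retweeter, _ in retweets)   # retweets each account sends
    received = Counter(author for _, author in retweets)     # retweets each account receives
    return {user for user, n in made.items()
            if n >= min_retweets_made and received.get(user, 0) <= max_times_retweeted}

# Toy example: 'acct_a' retweets constantly and is never retweeted itself.
sample = [("acct_a", "candidate_x")] * 60 + [("human_b", "acct_c")] * 3
print(flag_likely_bots(sample))  # {'acct_a'}
```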
This method found 396 bots – or less than 1 percent of the active Twitter users. And just 10 percent of the accounts followed them. I felt good about that: It seemed unlikely that such a small number of relatively disconnected bots could have a major effect on people’s opinions.
A closer look at the people
Next we set out to measure the opinions of the people in our data set. We did this with a type of machine learning algorithm called a neural network, which in this case we set up to evaluate the content of each tweet, determining the extent to which it supported Clinton or Trump. Individuals’ opinions were calculated as the average of their tweets’ opinions.
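As an illustration of that pipeline (not the actual model from the study), the sketch below uses a small feed-forward neural network over TF-IDF features as a stand-in tweet classifier, then averages each user’s tweet scores into an opinion score. The training tweets, labels, and usernames are invented for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented training tweets with labels: 1 = pro-Clinton, 0 = pro-Trump.
train_texts = [
    "I'm with her, great debate performance",
    "Hillary clearly won tonight",
    "Make America great again",
    "Trump crushed that debate",
]
train_labels = [1, 1, 0, 0]

# A small neural network stands in for the tweet-level opinion model.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(train_texts, train_labels)

# Each user's opinion is the average pro-Clinton probability across their tweets.
user_tweets = {
    "user_1": ["I'm with her", "great night for Hillary"],
    "user_2": ["Trump won this one easily"],
}
for user, tweets in user_tweets.items():
    pro_clinton = model.predict_proba(tweets)[:, 1].mean()
    print(user, round(float(pro_clinton), 2))
```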
Once we had assigned each human Twitter user in our data a score representing how strong a Clinton or Trump backer they were, the challenge was to measure how much the bots shifted people’s opinions – which meant calculating what their opinions would have been if the bots hadn’t existed.
Fortunately, a model from as far back as the 1970s had established a way to gauge people’s sentiments in a social network based on connections between them. In this network-based model, individuals’ opinions tend to align with the people connected to them. After slightly modifying the model to apply it to Twitter, we used it to calculate people’s opinions based on who followed whom on Twitter – rather than looking at their tweets. We found that the opinions we calculated from the network model matched well with opinions measured from the content of their tweets.
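Here is a minimal sketch of that kind of model, assuming a simple DeGroot-style averaging rule: each user’s opinion is repeatedly replaced by the average opinion of the accounts they follow, while accounts with fixed views (standing in for bots and other stubborn users) never update. The toy network and scores are invented for illustration.

```python
def equilibrium_opinions(follows, fixed, n_iter=500):
    """follows: {user: list of accounts they follow}.
    fixed: {account: opinion on a 0 (pro-Trump) to 100 (pro-Clinton) scale}."""
    opinions = {u: 50.0 for u in follows}   # everyone starts neutral
    opinions.update(fixed)
    for _ in range(n_iter):                 # iterate the averaging toward equilibrium
        for user, followed in follows.items():
            if user in fixed or not followed:
                continue
            opinions[user] = sum(opinions.get(v, 50.0) for v in followed) / len(followed)
    return opinions

# Toy network: two humans follow a mix of a pro-Clinton bot and a pro-Trump account.
follows = {"human_1": ["clinton_bot", "human_2"],
           "human_2": ["trump_account", "human_1"],
           "clinton_bot": [], "trump_account": []}
fixed = {"clinton_bot": 100.0, "trump_account": 0.0}
print(equilibrium_opinions(follows, fixed))
```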
Life without the bots
So far we had shown that the follower network structure in Twitter could accurately predict people’s opinions. This allowed us to ask questions such as: What would their opinions have been if the network were different? The different network we were interested in was one that contained no bots. So for our last step, we removed the bots from the network and recalculated the network model, to see what real people’s opinions would have been without bots. Sure enough, bots had shifted human users’ opinions – but in a surprising way.
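Continuing the toy sketch above, the bot-free counterfactual can be approximated by deleting the bot accounts from every follow list and re-running the same averaging model; the gap between the two human averages is the estimated shift attributable to the bots. The names below (`equilibrium_opinions`, `follows`, `fixed`) come from the earlier illustrative sketch, not from the study’s code.

```python
# Build the counterfactual network with the bot accounts removed.
bots = {"clinton_bot"}
follows_no_bots = {u: [v for v in followed if v not in bots]
                   for u, followed in follows.items() if u not in bots}
fixed_no_bots = {u: s for u, s in fixed.items() if u not in bots}

with_bots = equilibrium_opinions(follows, fixed)
without_bots = equilibrium_opinions(follows_no_bots, fixed_no_bots)

# Average the human users' scores in each scenario and compare.
humans = [u for u in follows_no_bots if u not in fixed]
shift = (sum(with_bots[h] for h in humans) / len(humans)
         - sum(without_bots[h] for h in humans) / len(humans))
print(f"estimated shift toward Clinton attributable to bots: {shift:.1f} points")
```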
Given much of the news reporting, we were expecting the bots to help Trump – but they didn’t. In a network without bots, the average human user had a pro-Clinton score of 42 out of 100. With the bots included, the average human’s pro-Clinton score rose to 58. That was a far larger effect than we had anticipated, given how few and how disconnected the bots were. The network structure had amplified the bots’ power.
We wondered what had made the Clinton bots more effective than the Trump bots. Closer inspection showed that the 260 bots supporting Trump posted a combined 113,498 tweets, or about 437 tweets per bot. The 136 bots supporting Clinton, however, posted 96,298 tweets, or about 708 tweets per bot. It appeared that the power of the Clinton bots came not from their numbers, but from how often they tweeted. We found that most of what the bots posted were retweets of the candidates or other influential individuals. So they were not really crafting original tweets, but sharing existing ones.
It’s worth noting that our analysis looked at a relatively small number of users compared with the voting population, and only during a short period around a single event in the campaign. These findings therefore don’t suggest anything about the overall election result. But they do show the potential effect bots can have on people’s opinions.
A small number of very active bots can actually significantly shift public opinion – and despite social media companies’ efforts, there are still large numbers of bots out there, constantly tweeting and retweeting, trying to influence real people who vote.
It’s a reminder to be careful about what you read – and what you believe – on social media. We recommend double-checking that you are following people you know and trust – and keeping an eye on who is tweeting what on your favorite hashtags.
This article was originally published on The Conversation and has been republished under a Creative Commons license.
Tauhid Zaman is an Associate Professor of Operations Management at MIT Sloan School of Management.
Disclaimer: The ideas expressed in this article reflect the author’s views and not necessarily the views of The Big Q.