Far-right political groups have been using fake accounts and co-ordinated behaviour on Twitter to amplify pro-Brexit views, according to new research.
The findings were part of a study released Tuesday by Finnish cybersecurity company F-Secure, which analysed 24 million Brexit-related tweets from 1.65 million users between Dec. 4, 2018, and Feb. 13, 2019.
F-Secure analysed tweets which included the word “Brexit” for “evidence of botnets, disinformation campaigns, astroturfing, or other inorganic phenomenon”.
This research revealed that many of the 18 million retweets came from a very small number of sources. Those sources exhibited a range of suspicious behaviours, and many were not UK-based accounts.
One thing that aroused suspicion was the rate at which some accounts were tweeting. Some of this output was original content, but a significant amount consisted of retweets of a core set of other accounts.
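This kind of concentration check can be sketched in a few lines: count how large a share of retweets trace back to a handful of source accounts. The records and account names below are invented for illustration, not drawn from the F-Secure dataset, and the threshold of "a few top sources" is an assumption.

```python
from collections import Counter

# Hypothetical retweet records: (retweeting_user, original_author).
# Names and counts are illustrative only.
retweets = [
    ("user_a", "core_account_1"), ("user_b", "core_account_1"),
    ("user_c", "core_account_2"), ("user_d", "core_account_1"),
    ("user_e", "core_account_2"), ("user_f", "fringe_account"),
]

def top_source_share(records, k=2):
    """Fraction of all retweets whose original author is among the top-k sources."""
    counts = Counter(author for _, author in records)
    top = counts.most_common(k)
    return sum(n for _, n in top) / len(records)

# In this toy sample, 5 of 6 retweets trace back to just two accounts.
print(top_source_share(retweets, k=2))
```

A high share concentrated in a few source accounts is exactly the pattern the researchers describe: a large retweet volume fanning out from a small core.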
‘Far more prominent in Leave conversations’
The firm found that the manipulation of social media debates on Twitter was “far more prominent in Leave conversations” than on Remain’s side.
They found this by examining tweets made between 4 December 2018 and 13 February 2019, well after the referendum, but during a critical time of parliamentary debate.
Users who shared tweets from pro-Brexit accounts displayed suspicious behavior that signaled to the researchers that the “inorganic” Twitter activity might be coming from fake accounts or bots.
The diagram above shows the spread of a single tweet over a 24-hour period. 1,027 users interacted with this tweet, but four stood out to researchers: all are high-volume accounts associated with pro-Leave and right-wing messages. One, Ivnancy, is a US-based alt-right account. A number of these accounts have since been suspended by Twitter for their activity.
Amplification or Astroturfing?
Researchers stopped short of concluding that the activity was part of an ‘astroturfing’ campaign (the practice of manipulating online debates with fake comments so that the social media activity appears to come from an organic grassroots movement).
“Inorganic activity, in relation to political movements and events, can sometimes be indicative of astroturfing, or the spread of disinformation,” Andy Patel, a senior researcher with F-Secure’s Artificial Intelligence Center of Excellence, said in a statement. “At the very least, our research shows there’s a global effort amongst the far-right to amplify the ‘leave’ side of the debate.”
Tech companies, including Twitter, Facebook and Google, have been under mounting pressure to do more to stop the spread of disinformation, especially in the wake of the 2016 US presidential election. Twitter saw its monthly active users fall from 326 million to 321 million in the fourth quarter and partly blamed the drop on its crackdown on automated accounts. Last week, Facebook pulled down 137 accounts, pages and groups from the UK that misrepresented their identities and were used to spread hate speech and divisive political comments.
While researchers also spotted suspicious activity tied to Twitter accounts that were against Britain’s withdrawal from the European Union, they found the behavior was “more pronounced” among Brexit supporters.
Accounts that post a high volume of tweets, develop a huge following in a short time or have a similar number of followers and friends may be automated or part of a disinformation campaign, but it’s hard to know for sure.
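The warning signs listed above can be expressed as a simple flagging function. The sketch below is a hypothetical illustration: the thresholds (tweets per day, follower growth rate, follower/friend similarity) are assumptions for the example, not figures from the F-Secure study.

```python
def suspicion_flags(tweets_per_day, followers, friends, account_age_days,
                    rate_limit=150.0, growth_limit=500.0, ratio_tolerance=0.1):
    """Return heuristic warning flags for a Twitter account.

    Thresholds are illustrative assumptions. Flags are signals, not proof:
    as the article notes, it's hard to know for sure.
    """
    flags = []
    # Posting at an unusually high rate.
    if tweets_per_day > rate_limit:
        flags.append("high tweet volume")
    # Gaining a huge following in a short time.
    if account_age_days > 0 and followers / account_age_days > growth_limit:
        flags.append("rapid follower growth")
    # Follower and friend counts that are suspiciously close.
    if followers and abs(followers - friends) / max(followers, friends) < ratio_tolerance:
        flags.append("followers ~= friends")
    return flags

# A hypothetical account posting 300 times a day with near-equal counts:
print(suspicion_flags(300, 900, 880, 30))
```

An account triggering several flags at once would warrant the kind of closer manual inspection the researchers applied to the high-volume amplifiers they identified.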
“It’s clear that an internationally coordinated collective of far-right activists are promoting content on Twitter (and likely other social networks) in order to steer discussions and amplify sentiment and opinion towards their own goals, Brexit being one of them,” the study stated.