Fast track: how to check if a story is worth investigating with WeVerify tools?

Posted on November 6th, 2020 in Blog Posts

A common difficulty faced by journalists and fact-checkers – beyond analytics and forensics – is assessing as early as possible the newsworthiness of a potential story, to judge whether further investigation is worthwhile. In this blogpost, we use an example of 5G conspiracy content from the UK to show how the Twitter SNA module of WeVerify can be a useful tool to make these assessments more rapidly.

COVID-19 and 5G conspiracy theories

In the past months, many narratives linking 5G development and the spread of COVID-19 have flourished online. These narratives have been so widespread that platforms have announced multiple policies to curb the reach of these pieces of disinformation.

As a journalist or disinformation investigator, you may want to check whether these narratives still circulate. One of the first steps you could take would be to search the content published on Twitter with keywords such as “5G” and “COVID”, in tweets published from 1 May 2020 until 20 October 2020.
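As a rough sketch, the keyword and date-range filter described above corresponds to Twitter's standard search operators (`since:`/`until:`); the Twitter SNA module accepts these filters through its interface, but the equivalent query string can be assembled like this:

```python
# Illustrative sketch only: assembling the keyword/date-range query
# described above using Twitter's standard search operators.
keywords = ["5G", "COVID"]
date_from, date_to = "2020-05-01", "2020-10-20"

query = " ".join(keywords) + f" since:{date_from} until:{date_to}"
print(query)  # 5G COVID since:2020-05-01 until:2020-10-20
```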

This query returned 778 tweets and 400 retweets in the Twitter SNA tool – a remarkably low number of occurrences. We attribute this low number to the automated filters Twitter has applied to curb the impact of 5G and COVID-19 conspiracy theories.

Looking further in the tool, we could see that one specific URL was posted 565 times during the studied period.

Visiting this URL, we find that it points to a website called Brighteon, hosting a video of a tree in England allegedly made sick by 5G radiation.

In order to investigate this specific piece of disinformation, we launched a new query in the Social Network Analysis component of the WeVerify plugin. Over the same period, this URL was posted 565 times on Twitter, yet gathered only 19 retweets and 91 likes – a remarkably small audience.

Investigating further, we noticed a pattern in the timing of these tweets, which were mostly posted every day between 13:00 and 15:00. This regularity suggests the posting might be automated, and thus signals potential inauthentic behaviour.
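The kind of check involved can be sketched as follows. This is a minimal illustration with invented timestamps, not the plugin's actual implementation (the Twitter SNA module surfaces this pattern visually): it flags a set of tweets as potentially automated when most of them fall inside a narrow daily window.

```python
from collections import Counter
from datetime import datetime

def hourly_histogram(timestamps):
    """Count tweets per hour of day from ISO-8601 timestamps."""
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

def looks_automated(timestamps, window=3, threshold=0.8):
    """Flag posting as potentially automated when some `window`-hour
    daily span holds at least `threshold` of all tweets."""
    hist = hourly_histogram(timestamps)
    total = sum(hist.values())
    best = max(sum(hist.get((h + i) % 24, 0) for i in range(window))
               for h in range(24))
    return best / total >= threshold

# Invented example: tweets clustered between 13:00 and 15:00,
# mirroring the pattern observed above.
sample = [
    "2020-06-01T13:05:00", "2020-06-02T13:40:00",
    "2020-06-03T14:10:00", "2020-06-04T14:55:00",
    "2020-06-05T13:20:00", "2020-07-01T09:00:00",
]
print(looks_automated(sample))  # True
```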

Looking more closely into Twitter accounts that are actively engaged with this content, we can see that only one account has been active in tweeting or retweeting this URL.

At this stage, our hypothesis would be that this specific Twitter account has been repeatedly pushing this URL to many other Twitter accounts. To verify this hypothesis, from this query, we’re going to launch the Social Network visualisation tool on two scales:

  • a user network, to see how different users are indeed interacting with each other;
  • a socio-semantic graph, which will display how users and content, such as URLs or hashtags, are linked together.
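These two views can be sketched with a toy example. All account names and edges below are invented for illustration; the plugin builds the real graphs from the query results.

```python
from collections import defaultdict

# Hypothetical (source, target) pairs: who retweets/mentions whom.
interactions = [
    ("spreader_acct", "user_a"),
    ("spreader_acct", "user_b"),
    ("spreader_acct", "user_c"),
    ("user_a", "user_b"),
]

# User network: a directed adjacency list of account interactions.
out_edges = defaultdict(set)
for src, dst in interactions:
    out_edges[src].add(dst)

# The account with the highest out-degree is the candidate spreader.
central = max(out_edges, key=lambda u: len(out_edges[u]))
print(central)  # spreader_acct

# Socio-semantic graph: bipartite edges linking users to content items
# (URLs, hashtags), again with invented labels.
socio_edges = [
    ("spreader_acct", "URL:brighteon-video"),
    ("spreader_acct", "#5G"),
    ("user_a", "#5G"),
]
```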
User-graph of Twitter interactions

The user graph generated by the Twitter SNA tool shows the central role of the Twitter account we previously noticed. We also see that this piece of disinformation does not circulate outside this community.

The socio-semantic graph of this query also shows that, beyond the already identified actor spreading this disinformation item, the URL does not circulate more widely to other communities. Indeed, the micro-communities detected by the Social Network visualisation tool appear only because other Twitter accounts were mentioned in specific Twitter replies, meaning they did not actually engage with the content.

Moreover, a meta-analysis of the communities detected in this graph shows that Community 0 is the only one with relationships to other communities, confirming our finding that this specific piece of disinformation did not circulate.


At first glance, it could seem newsworthy to report on the inauthentic behaviour noticeable around the posting of this specific media linking 5G to sick trees in the UK. However, the WeVerify Social Network Analysis tool helped us gather first-hand forensic elements around this coordinated inauthentic behaviour (CIB): its very low impact, the fact that the behaviour could be traced back to a single Twitter account, and the very low virality and spread of this disinformation should convince fact-checkers and journalists not to report on this specific case, to avoid giving this piece of disinformation a wider audience.

In this case, within 5-7 minutes, a fact-checker or journalist had enough elements to decide whether or not to allocate more time to this specific story.
