In late February 2019, I (Jochen Spangenberg) conducted an exclusive interview for WeVerify with Sam Dubberley, an acknowledged expert in the verification sphere. Sam works as a manager of Amnesty International’s Digital Verification Corps, which is part of Amnesty’s Crisis Response Team / Evidence Lab. He is furthermore a research consultant at the Human Rights, Big Data and Technology Project, based at Essex University’s Human Rights Centre, which researches the human rights implications of big data, AI and associated technologies. Here’s what Sam had to say about challenges and approaches when it comes to verifying digital content and eyewitness media.
Jochen Spangenberg (JS): What are your difficulties with the current set of verification tools?
Sam Dubberley (SD): The fact that we rely upon the biggest internet companies for our tools – which can be changed at any moment on a whim. This is totally frustrating, compounded by the fact that the companies say they are there to help, but in reality don’t really help much at all. There is nobody to talk to, no real interface, and so on.
JS: What are the most time-consuming tasks for you in your debunking process?
SD: Geolocation is always tricky – satellite imagery from conflict areas is never the highest resolution, or data about an event is simply not available. When we need to be sure that a piece of content came from a precise location, this always takes longer – a few hours – and I wish there were better tools.
JS: What tasks are still cumbersome today in your debunking process?
SD: Not that many tasks are cumbersome – thanks in great part to InVID! I think that, when doing a large investigation, connecting pieces of content is still hard – i.e. creating timelines or narratives around a lot of content.
JS: What are your wishes for the future of debunking – sort of realistic ones?
SD: Better databases of landmarks, and a real conversation with the internet giants. Oh, and a tool to help when you have 100 tabs open and are no longer sure where you started or finished.
JS: What are your dreams for the future of debunking – please also include your wildest dreams here :)?
SD: Deep fake spotters – a tool to tell us when content has been synthetically manufactured. This is a wild dream, however.
JS: What is a recent piece of content you have debunked? What was the workflow? Which problems did you face?
SD: Attacks in Venezuela this weekend [23-24 Feb 2019]. It started with one video that had been geolocated to the wrong location. My first step was to find more content and then establish at a minimum that it was from the right town (mostly via the mountain range visible in the videos). We were attempting to create a quick narrative around events.
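Geolocation checks like the one Sam describes often come down to comparing a location claimed in a post with the location an investigator has independently established from landmarks in the footage. As a hedged illustration (not a tool Sam mentions), the great-circle distance between two coordinate pairs can be computed with the haversine formula; the coordinates and threshold below are made-up placeholders:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical example: coordinates claimed in a post vs. the point
# an investigator geolocated from landmarks visible in the footage.
claimed = (10.4806, -66.9036)   # made-up coordinates
verified = (10.6427, -71.6125)  # made-up coordinates
distance = haversine_km(*claimed, *verified)
if distance > 20:  # threshold is an arbitrary choice for illustration
    print(f"Claimed location is ~{distance:.0f} km from the geolocated spot")
```

In practice this is only a sanity check: the hard part, as Sam notes, is establishing the verified coordinates in the first place.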
JS: Thank you very much!
Here’s another interview with Sam Dubberley, conducted in September 2018. Focus: video verification
A few months earlier, in late September 2018, I had conducted another interview with Sam. It was part of my research for a publication dealing with video verification, carried out in the context of the InVID project. As none of the material was used anywhere directly, I feel this is a good opportunity to share the contents of that interview with an interested audience as well. As stated, the focus of this interview is the verification of videos.
Jochen Spangenberg (JS): What are, in your view, the biggest challenges when it comes to the verification of video (primarily video from unknown sources, such as eyewitness media, user-generated content etc.)?
Sam Dubberley (SD): For a single video, the challenges are to make sure you have gathered all the possible information that is contained in the video. This includes asking – and ideally answering – questions such as:
a) Is metadata available? What can I derive from it?
b) What visual information is contained in the video to assess the place, time, location etc.?
c) Is there enough information to geolocate it?
When it comes to video verification, the challenges are not uniform or identical. Depending on the type of video, challenges vary, sometimes significantly. So there is no “one size fits all” approach when it comes to verification. It really depends on the properties of the video, where it has been shot, what visual information is contained therein etc. One can almost say that every case is different.
JS: What are, in your view, the biggest obstacles when it comes to the verification of video (primarily video from unknown sources, such as eyewitness media, user-generated content etc.)?
SD: There are a number of obstacles:
a) To start with, there is the issue of finding videos in the first place. YouTube is really bad for searching and finding the videos one is after.
b) It is very important to balance ethical needs and requirements. On the one hand, one wants as much information as possible about who took the video, where, how etc. On the other hand, one aims to preserve the person’s privacy.
c) Then there is the pressure governments put on social media platforms (like Facebook, YouTube etc.). They often ask them to remove / delete material from e.g. terrorist organisations, or images depicting violence or abuse, which makes it harder for those aiming to find and document atrocities and the like (e.g. crimes against humanity).
d) Then there is scale again. Sometimes there is simply too much material. And at other times, there is none at all.
e) A final point with regard to obstacles: poor quality, compression issues, and low resolution.
JS: What, in your opinion, are the most useful tools or platforms for the verification of user-generated video?
SD: These are, in my view and for what I do:
a) Google Earth Pro,
b) the InVID verification plug-in,
c) Annotation tools (e.g. tools with which one can take screenshots and add annotations, such as “Snag It”),
d) Video Editing Software (professional software that allows for things like frame-by-frame analysis, slow motion, reverse play etc.). Professional versions because they are more powerful than free versions.
JS: What keeps you awake at night and worries you in the context of verification of user-generated video?
SD: Deep fakes. And the fact that the people who build these tools are doing so without an ethical framework.
JS: If you could have a wish (or a whole wish list) of things you would like fulfilled in the context of verification of user-generated video, what would be on that list? And why?
SD: Here’s my list:
a) a “deep fake spotter”,
b) tools that help with geolocation, e.g. artificial intelligence that spots and identifies geographic areas (e.g. a mountain range),
c) AI tools that spot, analyse and provide information about other visual elements contained in a video, e.g. in which country or region particular plants, trees or bushes grow, and the like,
d) a kind of “magic metadata machine”, meaning something that allows only “good people” to get certain bits of information from the metadata (this is impossible though – but after all, this is a wish list!)
JS: What are the differences between working as a journalist and working in the human rights sector when it comes to the verification of user-generated video?
SD: The big difference is: how to decide what to publish. I think people working in the human rights sector are more careful, or reluctant, before they publish anything. Also, human rights people usually need far more information before they go out with anything than, for example, evidence from a video only. Overall, I think the human rights sector is generally more cautious before anything goes out, and there is a higher threshold than in journalism, partly because of the possible consequences.
JS: Sam, thanks ever so much for taking the time to answer my questions. Much appreciated!
Disclaimer: Sam Dubberley was also one of three InVID project reviewers, selected by the European Commission, to assess and evaluate the project work and its achievements.
Further reading / topically related texts:
Sam Dubberley: Facebook just blindfolded war crimes investigators. Newsweek, 17 June 2019.
Ron Synovitz: Amnesty International Toils To Tell Real Videos From Fakes. Radio Free Europe / Radio Liberty, 19 September 2018.
Sam Dubberley: In the Firing Line: How Amnesty’s Digital Verification Corps changed official narratives through open source investigation. Medium, 18 May 2017.
Video interviews with members of Amnesty International’s Digital Verification Corps. Published on 23 August 2017.
Author: Jochen Spangenberg
Date of publication / Last updated: 3 July 2019 / 5 July 2019