Public release of InVID datasets

We are happy to announce the public release of three InVID datasets.

The first one, called “InVID Fake Video Corpus”, is a small collection of verified fake videos. It was developed in the context of the InVID project with the aim of gaining a perspective on the types of fake video encountered in the real world. Currently the corpus consists of 59 videos. For each video, information is provided describing the fake, its original source, and the evidence proving it is a fake. As we do not own the videos, the dataset only provides the video URLs and metadata, in the form of a tab-separated values (TSV) file.
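Since the corpus ships as a TSV file of URLs and metadata, it can be loaded with a few lines of standard Python. The column names in this sketch are hypothetical and for illustration only; check the actual file header on Zenodo.

```python
import csv
import io

# Hypothetical excerpt of the corpus TSV; the real column names may differ.
sample_tsv = (
    "video_url\tdescription\tdebunk_source\n"
    "https://www.youtube.com/watch?v=XXXX\tStaged eagle video\thttps://example.org/debunk\n"
)

# Read the tab-separated file into a list of dicts, one per fake video.
with io.StringIO(sample_tsv) as f:
    reader = csv.DictReader(f, delimiter="\t")
    videos = list(reader)

print(len(videos))             # number of corpus entries in this excerpt
print(videos[0]["video_url"])  # the URL pointing to the (fake) video
```

In practice one would pass the downloaded file to `open(...)` instead of the inline `io.StringIO` sample.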

The second one is the first version of the “InVID TV Logo Dataset”, created with the purpose of providing a training and evaluation benchmark for TV logo detection in videos. It contains the results of the segmentation and annotation of 2,749 YouTube videos originating from a large number of news TV channels. The videos have been annotated with respect to the TV channel logos they contain (specifically, with the name of the organization to which each logo belongs) and with shot boundary information. Furthermore, a set of logo templates has been extracted from the videos and organized alongside the corresponding channel information. As we do not own the rights to the videos, the dataset only contains the YouTube video IDs alongside the corresponding annotations. It further contains 503 logo template files and the corresponding metadata (channel name, Wikipedia link).
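As a rough sketch of how the templates-plus-metadata organization might be consumed, the snippet below indexes logo templates by channel name, so a detector could match an observed logo region against all templates of a given channel. The file names and records here are invented for illustration; the actual dataset layout may differ.

```python
from collections import defaultdict

# Hypothetical (template_file, channel_name, wikipedia_link) records;
# the real metadata files in the dataset may use a different structure.
templates = [
    ("bbc_news_01.png", "BBC News", "https://en.wikipedia.org/wiki/BBC_News"),
    ("bbc_news_02.png", "BBC News", "https://en.wikipedia.org/wiki/BBC_News"),
    ("cnn_01.png", "CNN", "https://en.wikipedia.org/wiki/CNN"),
]

# Group the logo template files by the channel they belong to.
by_channel = defaultdict(list)
for filename, channel, wiki in templates:
    by_channel[channel].append(filename)

print(sorted(by_channel))  # ['BBC News', 'CNN']
```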

The third one, termed “Concept detection scores for the IACC.3 dataset (TRECVID AVS Task)”, contains the concept detection scores for the IACC.3 dataset (600 hours of internet archive videos), which is used in the TRECVID Ad-hoc Video Search (AVS) task.
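A typical way such per-shot concept scores are used in an Ad-hoc Video Search setting is to rank shots by the score of the concept matching a query and return the top results. The sketch below illustrates this with invented shot IDs and scores; the real score files for IACC.3 use their own format.

```python
# Hypothetical scores of one concept (e.g. "outdoor") for a few video shots;
# shot IDs and values are made up for illustration.
scores = {
    "shot1001_1": 0.92,
    "shot1001_2": 0.15,
    "shot2044_7": 0.78,
}

# Rank shots by descending concept score and keep the top results,
# as an AVS system would do when answering a query about this concept.
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:2]
print([shot for shot, _ in top])  # ['shot1001_1', 'shot2044_7']
```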

Further details about the specifications and use of these datasets can be found on the InVID community on Zenodo.

First issue of the InVID Newsletter

InVID Newsletter, First Issue, November 2016

We are pleased to inform you that the first issue of the InVID Newsletter has been published online! This issue introduces the project’s vision and goals, and informs the community, our readers and supporters, of what has been achieved and produced in the project so far. Readers of this issue will find key articles reporting on the tools and services developed for the InVID platform, and will learn about the activities already carried out to disseminate the project’s aims and results. Finally, details about the project consortium and the online presence of the InVID project are also included!

Please find the InVID Newsletter online at:

InVID announced as NEM exhibitor

We are glad to announce that InVID will be one of the exhibitors at the upcoming NEM Summit 2016. Dr. Lyndon Nixon from MODUL Technology, a member of the InVID consortium, will participate in the event in Porto, Portugal, on 23-24 November 2016. There, Dr. Nixon will exhibit the InVID technologies for finding user-generated videos about news events online and verifying their authenticity to the attendees of this annual meeting of the New European Media (NEM) European Technology Platform (ETP), which attracts media professionals and researchers from across Europe. This presentation will promote the main concept of the InVID project: detecting emerging stories and assessing the reliability of newsworthy video files and content spread via social media, for use in any media organization or creative industry.

InVID technologies at TEDx talk organized by MODUL University

Arno Scharl from webLyzard technology, a member of the InVID consortium, gave a presentation at the TEDx Modul University talk on October 6th, 2016. His presentation, entitled “Analyzing the Digital Talk: Visual Tools for Exploring Global Communication Flows”, discussed how recent technologies can assist professionals in analysing, understanding and exploiting information extracted from big data repositories. His talk also featured novel InVID visualisation components that reveal emerging stories in the collected data and can support professionals in monitoring the latest trends and exploring global communication flows (please note that TEDx talks generally do not include any project logos or URLs).

The video of the TEDx talk can be seen below; the slides can be downloaded from SlideShare.

InVID and First Draft News partnership started

We are thrilled to announce the participation of the InVID project in the partner network of First Draft News, which aims to tackle issues of trust and truth in reporting information that emerges online. The members of the InVID consortium will join efforts in developing technologies for video verification with a group of over thirty major news and technology organizations including (but not restricted to) Facebook, Twitter, YouTube, The New York Times, The Washington Post, BuzzFeed News, CNN, ABC News (Australia), AJ+, ProPublica, Agence France-Presse, Channel 4 News and The Telegraph.

Check the First Draft News public announcement for further details about this collaboration!

InVID video demo on similarity search

Check out the new InVID video demo on similarity search for video verification, made by Denis Teyssou of AFP.

Starting from a video uploaded to YouTube on March 7th, 2016, Denis demonstrates a verification process based on reverse image search using keyframes and the YouTube API. The outcome of this process can assist journalists in deciding whether a video is original. Other InVID technologies for visual analysis that are currently under development, such as sub-shot segmentation and logo detection, can enhance the efficiency and robustness of video similarity search.
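Keyframe-based reverse image search can also be scripted, for example by constructing a Google reverse-image-search query URL for a keyframe image. This is a generic sketch with Python's standard library, not necessarily the exact method used in the demo, and the keyframe URL is a placeholder.

```python
from urllib.parse import urlencode

def reverse_search_url(keyframe_url: str) -> str:
    """Build a Google reverse-image-search URL for a keyframe image.

    A generic sketch of keyframe-based reverse search; the keyframe
    URL passed in below is a hypothetical example.
    """
    return "https://www.google.com/searchbyimage?" + urlencode(
        {"image_url": keyframe_url}
    )

url = reverse_search_url("https://example.org/keyframes/frame_0001.jpg")
print(url)
```

Opening the resulting URL in a browser shows visually similar images, which helps judge whether the keyframe (and hence the video) appeared online earlier.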

For further news, stay tuned on the InVID YouTube channel!

InVID at REVEAL workshop in Athens

A one-day workshop on user-generated content verification in news is organized by the REVEAL project in Athens, Greece, on September 16th, 2016.

The InVID project will be there, represented by the project coordinator Vasileios Mezaris (CERTH-ITI), Arno Scharl (webLyzard), Denis Teyssou (AFP) and Rolf Fricke (Condat). The project work, the achievements to date and the steps ahead will be presented during the InVID session of the workshop (see the full agenda of the workshop here).

More specifically, Vasileios will give an introductory presentation about the InVID project, describing its motivation through a number of use cases for UGV verification, explaining the envisaged approach, and outlining the expected outcomes. Subsequently, Arno will present the developed InVID multimodal visual analytics dashboard that can be used for finding UGV, Denis will discuss various aspects of hands-on video verification and Rolf will introduce the InVID verification application and its UI.

Demos related to the InVID multimodal visual analytics dashboard and the hands-on video verification process will be presented during the demo sessions of the workshop.

Further details regarding the workshop can be found here.

Video Verification Workflow Analysis

In the InVID project we are working hard on solutions to make verification of User Generated Video (UGV) easier and more trustworthy. Right now there are only a few tools to help journalists verify the authenticity of video material from social networks. InVID aims to fill that gap, providing the means by which verification of UGV becomes easier, thereby allowing journalists to use (more) trustworthy UGV in their reporting.

Before we can start developing UGV verification tools we need requirements. Those requirements are derived from the challenges journalists face in UGV verification today. In order to learn about the existing workflows for the verification of video, the InVID project organised an observatory study in collaboration with researchers from the REVEAL project. In this study we closely monitored two Deutsche Welle journalists and one researcher in their daily UGV verification work.

After analysing the verification activities performed by the three participants, Stefanie Wiegand from The IT Innovation Centre created a schematic overview of the verification activities she identified, which you can see below. Does it also look chaotic to you? The complexity of the activities shows that verification of UGV is a nonlinear process that differs from person to person.

Verification activities schema

More insights from the research will be provided in the full report on the observatory study that will be published on the REVEAL website soon, so stay tuned.

Now that we understand the workflows of video verification a bit better, the InVID consortium will be able to assess whether verification of UGV could be (partly) automated and how innovative technologies can help journalists with their tasks.

Note: if you have any questions about this research please contact Ruben Bouwmeester, innovation manager at Deutsche Welle.