News

InVID project at Futur en Seine digital festival in Paris

The early prototypes of InVID will be showcased at the Futur en Seine digital festival in Paris, at the Grande halle de la Villette (practical information available here), from the 8th to the 10th of June, where professionals and the public will be able to see and test how journalists can debunk fake videos on social networks, with examples taken from recent breaking news and emerging social media stories.

Partner AFP will demo the InVID Discovery platform (a.k.a. the InVID Multimodal Analytics Dashboard), the InVID Verification Application and an InVID browser plugin: a verification toolbox soon to be released as open source.

The browser plugin, tested over the last few weeks by the video and social media team at AFP, allows journalists to quickly debunk a fake video by extracting thumbnails from the corresponding web platform, or by fragmenting the video into keyframes (see screenshot below), and then searching those images on a reverse image search engine such as Google Images to retrieve previous copies of the same video, if any are available. This works for Facebook, YouTube, Twitter or any video file the journalist chooses to upload to the InVID platform.

InVID-keyframes
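The keyframe-fragmentation idea described above can be sketched very simply: keep a frame whenever it differs strongly from the last frame kept. The snippet below is a minimal illustration of that principle using synthetic grayscale frames (flat lists of pixel values); it is not the InVID plugin's actual algorithm, which operates on real decoded video.

```python
# Minimal sketch of keyframe selection by frame differencing.
# Frames are synthetic grayscale images represented as lists of ints;
# a real tool would decode an actual video stream.

def mean_abs_diff(a, b):
    """Average absolute pixel difference between two same-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30.0):
    """Keep the first frame, then every frame that differs from the
    last kept frame by more than `threshold` on average."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[keyframes[-1]]) > threshold:
            keyframes.append(i)
    return keyframes

# Three simulated "shots": dark frames, bright frames, mid-gray frames.
frames = [[10] * 64] * 3 + [[200] * 64] * 3 + [[100] * 64] * 3
print(select_keyframes(frames))  # → [0, 3, 6], one keyframe per shot
```

Each selected keyframe can then be submitted to a reverse image search engine to look for earlier copies of the same footage.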

Recently, when news broke of the attack on a Manila resort casino in the Philippines on the evening of 1st of June 2017, a fake video (first screenshot below) started to circulate on Twitter, claiming to be raw footage of the attack from a CCTV camera. As debunked by an AFP social media journalist, it was a copy of earlier videos of another attack, perpetrated at a hotel in Suriname at the end of December 2011 (second screenshot below).

Fake video claiming to show a robbery at a Manila resort casino on 1st of June 2017.

The original video showing an attack at the Savanah hotel in Suriname on 27th-28th of December 2011.

InVID organizes the 1st International Workshop on Multimedia Verification

We are pleased to announce that InVID organizes the 1st International Workshop on Multimedia Verification (MuVer2017) at the ACM Multimedia Conference, which will take place on October 23-27, 2017 in Mountain View, CA, USA. The tentative paper submission deadline is 19 July 2017.

For further details about the topics of the workshop, the submission of scientific papers and the program committee, please visit the webpage of the MuVer2017 workshop.

InVID project at ICMR2017

The InVID project will have a strong presence at the ACM International Conference on Multimedia Retrieval (ICMR) that will take place in Bucharest, Romania on June 6-9, 2017. The scientific results and developments of CERTH (a technology provider and the coordinating partner of the InVID consortium) are reported in four scientific papers that have been accepted for publication, and will be presented and disseminated to the attendees of this well-known and highly regarded conference during its oral, demo and poster sessions. The list of accepted papers is the following:

  • C. Boididou, S. Papadopoulos, L. Apostolidis, Y. Kompatsiaris, “Learning to Detect Misleading Content on Twitter” (oral session)
  • C. Collyda, E. Apostolidis, A. Pournaras, F. Markatopoulou, V. Mezaris, I. Patras, “VideoAnalysis4ALL: An on-line tool for the automatic fragmentation and concept-based annotation, and the interactive exploration of videos” (demo session)
  • F. Markatopoulou, D. Galanopoulos, V. Mezaris, I. Patras, “Query and Keyframe Representations for Ad-hoc Video Search” (poster session)
  • D. Galanopoulos, F. Markatopoulou, V. Mezaris, I. Patras, “Concept Language Models and Event-based Concept Number Selection for Zero-example Event Detection” (poster session)

Furthermore, the InVID project is among the supporters of the 2nd International Workshop on Multimedia Forensics and Security (MFSec 2017), which will be held in conjunction with ICMR 2017. At this workshop, the work of CERTH on a method for verifying Web videos by analyzing their online context will be reported through the following paper, accepted for publication:

  • O. Papadopoulou, M. Zampoglou, S. Papadopoulos, Y. Kompatsiaris, “Web Video Verification using Contextual Cues”

We look forward to meeting you at ICMR 2017!

TUNGSTÈNE Technology is used by Middlebury Institute of International Studies at Monterey

TUNGSTENE technology, a core component of the InVID Verification Application, has been used since 2016 by the Middlebury Institute of International Studies at Monterey (MIIS). In particular, this technology is used by scientists who aim to assess digital photographs related to weapons of mass destruction, such as the missiles and nuclear weapons that North Korea is working on. The capabilities of TUNGSTENE technology are utilized to evaluate the authenticity of the photographs provided by North Korea and to extract physical information about the missiles themselves from the digital images. On March 6th 2017, North Korea carried out a test launch of several missiles. The scientists from MIIS analyzed the content and concluded that, very likely, the photographs were authentic and the launch was real. Further details about this investigation can be found here.

Public release of InVID datasets

We are happy to announce the public release of three InVID datasets.

The first one, called the “InVID Fake Video Corpus”, is a small collection of verified fake videos. It was developed in the context of the InVID project with the aim of gaining a perspective on the types of fake video that can be encountered in the real world. Currently the corpus consists of 59 videos. For each video, information is provided describing the fake, its original source, and the evidence proving it is a fake. As we do not own the videos, the dataset only provides the video URLs and metadata, in the form of a tab-separated values (TSV) file.
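A TSV file like this can be loaded with a few lines of standard Python. The sketch below is purely illustrative: the column names are assumptions for the example, not the corpus's actual schema.

```python
# Illustrative loading of a TSV dataset of annotated videos.
# NOTE: the column names below are hypothetical, not the real
# schema of the InVID Fake Video Corpus.
import csv
import io

sample_tsv = (
    "video_url\tdescription\toriginal_source\tevidence\n"
    "https://example.com/v1\tFake CCTV clip\t"
    "https://example.com/orig\tReverse image search hit\n"
)

# csv.DictReader with delimiter='\t' maps each row to a dict
# keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
print(len(rows), rows[0]["video_url"])
```

In practice one would open the downloaded corpus file instead of the in-memory sample string.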

The second one is the first version of the “InVID TV Logo Dataset” and was created with the purpose of providing a training and evaluation benchmark for TV logo detection in videos. It contains the results from the segmentation and annotation of 2,749 YouTube videos originating from a large number of news TV channels. The videos have been annotated with respect to the TV channel logos they contain (specifically, by the name of the organization to which the logo belongs) and with shot boundary information. Furthermore, a set of logo templates has been extracted from the videos and organized alongside the corresponding channel information. As we do not own the rights to the videos, the dataset only contains the YouTube video IDs alongside the corresponding annotations. It further contains 503 logo template files and the corresponding metadata information (channel name, Wikipedia link).
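To give an idea of what logo templates enable, the sketch below shows the simplest form of template-based logo localization: sliding a small template over a grayscale frame and returning the position with the smallest sum of squared differences. This is only a toy illustration on synthetic 2D arrays; real logo detection systems trained on this dataset would use far more robust methods.

```python
# Toy template matching: find where a small "logo" template best
# matches inside a grayscale frame (both plain 2D lists of ints).

def ssd(frame, template, top, left):
    """Sum of squared differences between template and the frame
    region whose top-left corner is (top, left)."""
    th, tw = len(template), len(template[0])
    return sum(
        (frame[top + r][left + c] - template[r][c]) ** 2
        for r in range(th) for c in range(tw)
    )

def find_logo(frame, template):
    """Return (row, col) of the best-matching template position."""
    th, tw = len(template), len(template[0])
    positions = [
        (r, c)
        for r in range(len(frame) - th + 1)
        for c in range(len(frame[0]) - tw + 1)
    ]
    return min(positions, key=lambda p: ssd(frame, template, *p))

# A 6x6 blank frame with a bright 2x2 "logo" placed at row 2, col 3.
frame = [[0] * 6 for _ in range(6)]
frame[2][3] = frame[2][4] = frame[3][3] = frame[3][4] = 9
template = [[9, 9], [9, 9]]
print(find_logo(frame, template))  # → (2, 3)
```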

The third one, termed “Concept detection scores for the IACC.3 dataset (TRECVID AVS Task)”, contains the concept detection scores for the IACC.3 dataset (600 hours of internet archive videos), which is used in the TRECVID Ad-hoc Video Search (AVS) task.

Further details about the specifications and use of these datasets can be found on the InVID community on Zenodo.

First issue of the InVID Newsletter

InVID Newsletter, First Issue, November 2016

We are pleased to inform you that the first issue of the InVID Newsletter has been published online! This issue introduces the project’s vision and goals, and informs the community, our readers and supporters, of what has been achieved and produced in the project so far! The readers of this issue will find some key articles reporting on the developed tools and services of the InVID platform, and will be informed about the already performed activities for disseminating the project’s aims and results. Finally, details about the project consortium and the online presence of the InVID project are also included!

Please find the InVID Newsletter online at: http://www.invid-project.eu/newsletters/

InVID announced as NEM exhibitor

We are glad to announce that InVID will be one of the exhibitors at the upcoming NEM Summit 2016. Dr. Lyndon Nixon from MODUL Technology, a member of the InVID consortium, will participate in the event in Porto, Portugal, on 23-24 November 2016. During these days, Dr. Nixon will exhibit the InVID technologies for finding user-generated videos about news events online and verifying their authenticity to the attendees of this annual meeting of the New European Media (NEM) European Technology Platform (ETP), which attracts media professionals and researchers from across Europe. Through the presentation of these technologies, the main concept of the InVID project, namely detecting emerging stories and assessing the reliability of newsworthy video files and content spread via social media for use in any media organization or creative industry, will be promoted.