News

InVID announced as NEM exhibitor

We are glad to announce that InVID will be one of the exhibitors at the upcoming NEM Summit 2016. Dr. Lyndon Nixon from MODUL Technology, a member of the InVID consortium, will participate in the event in Porto, Portugal, on 23-24 November 2016. There he will exhibit the InVID technologies for finding user-generated videos about news events online and verifying their authenticity to the attendees of this annual meeting of the New European Media (NEM) European Technology Platform (ETP), which attracts media professionals and researchers from across Europe. The presentation will promote the core concept of the InVID project: detecting emerging stories and assessing the reliability of newsworthy video files and content spread via social media, for use in any media organization or creative industry.

InVID technologies at TEDx talk organized by MODUL University

Arno Scharl from webLyzard technology, a member of the InVID consortium, gave a presentation at the TEDx Modul University talk on October 6th, 2016. His presentation, entitled “Analyzing the Digital Talk: Visual Tools for Exploring Global Communication Flows”, discussed how recent technologies can assist professionals in analysing, understanding and exploiting information extracted from big data repositories. His talk also featured novel InVID visualisation components that reveal emerging stories in the collected data and can help professionals monitor the latest trends and explore global communication flows (please note that TEDx talks generally do not include any project logos or URLs).

The video of the TEDx talk can be seen below; the slides can be downloaded from SlideShare.

InVID and First Draft News partnership started

We are thrilled to announce that the InVID project has joined the partner network of First Draft News, which aims to tackle issues of trust and truth in reporting information that emerges online. The members of the InVID consortium will join efforts on developing technologies for video verification with a group of over thirty major news and technology organizations, including (but not restricted to) Facebook, Twitter, YouTube, The New York Times, The Washington Post, BuzzFeed News, CNN, ABC News (Australia), AJ+, ProPublica, Agence France-Presse, Channel 4 News and The Telegraph.

Check the First Draft News public announcement for further details about this collaboration!

InVID video demo on similarity search

Check out the new InVID video demo on similarity search for video verification, made by Denis Teyssou of AFP.

Starting from a video uploaded to YouTube on March 7th, 2016, Denis demonstrates a verification process based on reverse image search using keyframes and the YouTube API. The outcome of this process can help journalists decide whether the video is original. Other InVID technologies for visual analysis that are currently under development, such as sub-shot segmentation and logo detection, can enhance the efficiency and robustness of video similarity search.
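For readers who want to experiment with the underlying idea, below is a minimal Python sketch (not the InVID tools themselves) of two building blocks the demo combines: sampling candidate keyframes from a local copy of a video, and retrieving the claimed upload date through the YouTube Data API v3. The fixed-interval frame sampling and the Google reverse-image-search URL are illustrative simplifications standing in for InVID’s own keyframe extraction, and the API key and file names are placeholders.

# Minimal sketch, assuming a locally downloaded copy of the video and a
# YouTube Data API v3 key. Not InVID's implementation.
import cv2        # pip install opencv-python
import requests   # pip install requests

def sample_keyframes(video_path, every_n_seconds=5, out_prefix="keyframe"):
    # Naive sampling: keep one frame every N seconds as a candidate keyframe.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            path = f"{out_prefix}_{index:06d}.jpg"
            cv2.imwrite(path, frame)
            saved.append(path)
        index += 1
    cap.release()
    return saved

def reverse_search_url(public_image_url):
    # Build a Google reverse-image-search URL for a keyframe hosted online.
    return ("https://www.google.com/searchbyimage?image_url="
            + requests.utils.quote(public_image_url, safe=""))

def youtube_upload_date(video_id, api_key):
    # Ask the YouTube Data API v3 (videos.list, part=snippet) for the upload date.
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet", "id": video_id, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    return items[0]["snippet"]["publishedAt"] if items else None

If reverse searching the keyframes surfaces visually identical copies that were published online before the date returned by youtube_upload_date, the video under scrutiny is unlikely to be the original upload.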

For further news, stay tuned on the InVID YouTube channel!

InVID at REVEAL workshop in Athens

A one-day workshop on user-generated content verification in news is being organized by the REVEAL project in Athens, Greece, on September 16th, 2016.

The InVID project will be there, represented by the project coordinator Vasileios Mezaris (CERTH-ITI), Arno Scharl (webLyzard), Denis Teyssou (AFP) and Rolf Fricke (Condat). The project work, the achievements to date and the steps ahead will be presented during the InVID session of the workshop (see the full agenda of the workshop here).

More specifically, Vasileios will give an introductory presentation about the InVID project, describing its motivation through a number of use cases for UGV verification, explaining the envisaged approach, and outlining the expected outcomes. Subsequently, Arno will present the developed InVID multimodal visual analytics dashboard that can be used for finding UGV, Denis will discuss various aspects of hands-on video verification and Rolf will introduce the InVID verification application and its UI.

Demos related to the InVID multimodal visual analytics dashboard and the hands-on video verification process will be presented during the demo sessions of the workshop.

Further details regarding the workshop can be found here.

Video Verification Workflow Analysis

In the InVID project we are working hard on solutions to make verification of User Generated Video (UGV) easier and more trustworthy. Right now there are only a few tools to help journalists verify the authenticity of video material from social networks. InVID aims to fill that gap by providing the means to make UGV verification easier, so that journalists can use (more) trustworthy UGV in their reporting.

Before we can start developing UGV verification tools we need requirements. Those requirements are derived from the challenges journalists face in UGC verification today. In order to learn about the existing workflows for video verification, the InVID project organised an observatory study in collaboration with researchers from the REVEAL project. In this study we closely monitored two Deutsche Welle journalists and one researcher in their daily UGV verification work. Here is what they had to say about video verification:

After analysing the verification activities performed by the three participants, Stefanie Wiegand from the IT Innovation Centre created a schematic overview of the activities she identified, which you can see below. Does it also look chaotic to you? The complexity of these activities shows that verification of UGC is a nonlinear process that differs from person to person.

Verification activities schema

More insights from the research will be provided in the full report on the observatory study that will be published on the REVEAL website soon, so stay tuned.

Now that we understand the workflows of video verification a bit better, the InVID consortium will be able to assess whether verification of UGV could be (partly) automated and how innovative technologies can help journalists with their tasks.

Note: if you have any questions about this research please contact Ruben Bouwmeester, innovation manager at Deutsche Welle.

Dealing with UGC and its ownership – Interview with DW’s Head of Social Media News

How do news providers deal with user-generated content? What are current challenges, especially with regards to content ownership and copyright? Deutsche Welle’s Head of Social Media News, Kristin Zeier, tells us about current practices and respective issues.

Question: What are the major copyright challenges you have when thinking about using content sourced from social media?

Answer: The major issue with copyright is definitely video. We have come to rely on social media – and in particular User Generated Content – in breaking news situations when it is the fastest source for eyewitness accounts. In this day and age, hardly an event takes place without someone recording it on their smartphones and posting it to one of their social networks. Still photos from news events are good too, but video – even vertical video – is preferable as it works best when integrated into our TV news report formats.

However, using UGC video from eyewitnesses poses several challenges, the first of which is verification (which I won’t go into at this point, as it has been discussed more thoroughly in other places on this site). Once we have verified that content is legitimate, we need to contact the original owner of the video for final confirmation and permission to publish / broadcast it. The copyright owner is not always the uploader or the person who shared the video, so we need to contact the person who actually shot the video in the first place. This is the most time-consuming part of the process because we have to wait for the owner to respond to our initial contact request. Sometimes the owner may be in a different time zone or may not have the technical capabilities to respond to us quickly. That’s time we don’t necessarily have in a breaking news situation. It’s also a frustrating part of the process because while we wait for a response, a couple of things may happen at the same time:

- The person who originally shot and shared the UGC content may have already agreed with another media outlet or a licensing agency to provide content exclusively to them. All other requests must then go through the new owners, which slows the process even more and could ultimately end in a rebroadcasting fee.
- The original owner may have agreed to let several other media outlets publish the content free of charge, which then means the content is no longer exclusive and the “scoop” diminishes.
- It’s also possible that a user may retroactively remove initially posted content out of fear of attracting too much attention or because the user is tired of being hounded by press requests. Particularly with regard to sensitive issues, increased press attention can lead to negative consequences for the eyewitness who posted exclusive content.

Another factor we may face once we establish contact with the copyright holder is that they want to be paid for allowing us to use their content. As a public broadcaster, we at DW don’t pay for UGC that a user uploads to their personal accounts. In this regard we are following recommendations established within the ARD verification network.

An additional concern for an international broadcaster like DW is to make sure the owner of the copyright material understands that his/her content will be broadcast globally on TV, embedded in online articles and shared across social media platforms, possibly also in various languages. The owner of the material needs to agree to this. Particularly in cases of sensitive material, the owner must be made aware of these possible distribution platforms. Examples of this could be photos or video recordings of controversial issues in repressive media markets where it is clear that the eyewitness was at or near the events and possibly partaking in them (e.g. demonstrations). Owners of the copyright material should also be asked how they want to be identified as a source. Many in repressive media markets wish to remain anonymous.

Question: Do you employ the argument of fair use or fair dealing when using content sourced from social media? How does this work in your jurisdiction?

Answer: When social media content is directly related to a breaking news story and is central to understanding a story’s development, then we assume it falls under fair use / fair dealing as our legal department has defined it with us. Examples of this could be smartphone video recordings of a police shooting or footage of tanks rolling during a coup – content that is intended to help the public better understand and appreciate the dimensions of a news story. The fair use argument also applies to any content published by official public institutes, government agencies or NGOs on social media with the intention of being consumed by the public. When it is clear that the user who uploaded content is interested in having the message spread, we also consider it to fall under the terms of fair use – provided that it is directly connected to a breaking news event.

By applying the fair use terms, we are able to temporarily forgo obtaining explicit permission to use social media content. This speeds up the process in a breaking news situation, but it does not eliminate the need to identify the owner of the material, a step that is still crucial in the verification process.

All content that is not related to an immediate breaking news situation and was not intended for public dissemination does not fall under the fair use terms as we have interpreted them. In these cases we need to seek permission from the copyright holder to publish / broadcast the content.

Question: What is your recommendation for news organisations struggling with social media content usage and copyright?

Answer: The first step is getting faster and more accurate in identifying the original owner of the video material. The verification process can be quite time-consuming, but forgoing it can also lead down the wrong path and cost additional time in contacting the correct owner.

It’s really crucial for media outlets to establish a list of basic copyright guidelines with their legal departments so they can act quickly and efficiently whenever the need arises; otherwise, seeking legal approval case by case can cost too much time. Developing a good working relationship with the legal department is also key, because it allows the legal department to better understand the needs of the journalists and their program decisions.

Note: the interview was conducted in writing by Jochen Spangenberg with Kristin Zeier and it was first published on the REVEAL project website (http://revealproject.eu/dealing-with-ugc-and-its-ownership-interview-with-dws-head-of-social-media-news/). Parts of this interview will feature in a forthcoming study by Sam Dubberley who initiated the exchange.

TUNGSTÈNE technology used in the MH17 crash investigation

The American blog www.armscontrolwonk.com published a full analysis made by a team from the Middlebury Institute’s James Martin Center for Nonproliferation Studies (CNS). The researchers determined that images the Russian government published as part of its investigation into the downing of Malaysia Airlines Flight 17 over Ukraine two years ago had been “significantly modified or altered”. Jeffrey Lewis, Melissa Hanham, Catherine Dill and Dave Schmerler of CNS analyzed the images using Tungstène, a suite of forensic software for detecting alterations to images that is also used in the InVID project, and that was provided to MIIS by an anonymous donor. The full review is available here. Furthermore, this published analysis was also covered by The New York Times.
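Tungstène itself is proprietary, but a rough impression of the kind of signal image-forensics tools look at can be given with a generic technique such as error level analysis (ELA). The Python sketch below illustrates only that generic idea, not Tungstène’s methods or the CNS analysis; the file names are placeholders.

# Generic error level analysis (ELA) sketch; NOT how Tungstène works.
# Regions that were pasted in or re-saved at a different JPEG quality often
# show a different error level than the rest of the image.
from PIL import Image, ImageChops   # pip install Pillow
import io

def error_level_analysis(image_path, quality=90):
    original = Image.open(image_path).convert("RGB")
    # Re-save the image as JPEG at a known quality...
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # ...and compute the per-pixel difference between original and re-saved copy.
    diff = ImageChops.difference(original, resaved)
    # Stretch the residual so it becomes visible to the eye.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, int(value * 255.0 / max_diff)))

# Example usage (hypothetical file names):
# error_level_analysis("published_image.jpg").save("published_image_ela.png")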