WeVerify at ICCV 2019 – participation in one of the world’s leading conferences on computer vision

By Giorgos Kordopatis-Zilos on November 21st, 2019 in News

From 27 October to 2 November 2019, WeVerify project partner CERTH participated in the International Conference on Computer Vision (ICCV) 2019 in Seoul, Korea. ICCV is one of the top computer vision conferences, along with the Conference on Computer Vision and Pattern Recognition (CVPR) and the European Conference on Computer Vision (ECCV). The conference was hosted at the COEX Convention Center, located in the Gangnam-gu district of Seoul. The main conference was held from 29 October to 1 November 2019, while numerous co-located workshops and tutorials took place on 27 and 28 October and on 2 November 2019.

Welcome banner from the ICCV 2019. Photo by Giorgos Kordopatis-Zilos

ICCV attracts a large number of scientists and researchers from all over the world, who present their recent advances in fields related to computer vision. It also draws major companies, such as Google, Facebook, and Microsoft, as well as renowned universities, such as Stanford University, MIT, and the University of Oxford.

This year’s event was chaired by Kyoung Mu Lee, Prof. at Seoul National University, David Forsyth, Prof. at the University of Illinois at Urbana-Champaign, Marc Pollefeys, Prof. at ETH Zurich, and Tang Xiaoou, Prof. at the Chinese University of Hong Kong. In total, ICCV 2019 counted 7,501 attendees, a 2.4-fold increase over the previous edition, ICCV 2017. It received more than 4,300 paper submissions (double the number received by the previous edition), of which 1,075 were accepted for publication (an acceptance rate of ~25%). Of these, 200 papers were selected for oral presentation (just 4.6% of all submissions).

CERTH researcher Giorgos Kordopatis-Zilos gave an oral and poster presentation of his team’s work, titled ‘ViSiL: Fine-grained Spatio-Temporal Video Similarity Learning’, which was developed within the WeVerify project in the context of Near-Duplicate Detection. The main objective of this work is to devise a function that, given two arbitrary videos, generates a similarity score based on their visual content while respecting their spatio-temporal relations. This is the core functionality of a reverse video search system designed to retrieve close-to-identical videos. For more information, you can read the paper or a short blog post presenting the proposed method. The code and slides are available online on GitHub and SlideShare.
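ViSiL itself learns this similarity function with a trained neural network over fine-grained region-level features; as a rough illustration of the underlying idea only (not the actual ViSiL model), the sketch below scores a pair of videos from per-frame descriptors using the Chamfer similarity that the paper builds on. The feature dimensions and function names here are illustrative assumptions.

```python
import numpy as np

def frame_similarity_matrix(a, b):
    """Pairwise cosine similarity between the frame descriptors of two
    videos. `a` and `b` are (num_frames, dim) arrays whose rows are
    assumed to be L2-normalized frame-level features."""
    return a @ b.T  # shape: (frames_a, frames_b)

def chamfer_similarity(sim):
    """Aggregate a frame-to-frame similarity matrix into one video-level
    score: for each frame of the first video, take its best match in the
    second, then average over frames (Chamfer similarity)."""
    return float(sim.max(axis=1).mean())

def video_similarity(a, b):
    """Video-level similarity in [-1, 1] from raw frame features."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return chamfer_similarity(frame_similarity_matrix(a, b))

# Toy check: a video compared against itself scores exactly 1.0.
rng = np.random.default_rng(0)
video = rng.normal(size=(8, 128))  # 8 frames, 128-d descriptors
print(round(video_similarity(video, video), 3))  # 1.0
```

In a reverse video search setting, such a score would be computed between a query video and each candidate in an index, ranking near-duplicates highest; ViSiL sharpens this baseline by comparing region-level rather than whole-frame features and by learning the temporal aggregation instead of averaging.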

All in all, a very worthwhile and highly interesting event.

Author: Giorgos Kordopatis-Zilos (CERTH)
Editor: Jochen Spangenberg (Deutsche Welle)

Image credits: respective persons named. Usage rights have been obtained by the authors named above for publication in this article. Copyright / IPR remains with the respective originators. Note: This post is an adaptation of the Mever@ICCV2019 blog post, which was originally prepared for the CERTH Media Verification team (MeVer) website.
