Keynote Speakers


Kostas Daniilidis, University of Pennsylvania, USA

Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013, Associate Dean for Graduate Education from 2012 to 2016, and Faculty Director of Online Learning from 2012 to 2017. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992. He is a co-recipient of the Best Conference Paper Award at ICRA 2017 and a Best Student Paper finalist at RSS 2018. His most cited works are on visual odometry, omnidirectional vision, 3D pose estimation, 3D registration, hand-eye calibration, structure from motion, and image matching. Kostas’ main interests today are in geometric deep learning, data association, and event-based cameras, as applied to vision-based manipulation and navigation.

Speech Title: Learning geometry-aware representations: 3D object and human pose inference

Abstract: Traditional convolutional networks exhibit unprecedented robustness to intra-class nuisances when trained on big data. However, such data have to be augmented to cover geometric transformations. Several recent approaches have shown that data augmentation can be avoided if networks are structured such that feature representations are transformed the same way as the input, a desirable property called equivariance. In this talk, we show that global equivariance can be achieved for 2D scaling, rotation, and translation, as well as for 3D rotations. We show state-of-the-art results using an order of magnitude lower capacity than competing approaches. Moreover, we show how such geometric embeddings can recover the 3D pose of objects without keypoints or ground-truth pose supervision for regression. We finish by showing how graph convolutions enable the recovery of human pose and shape without any 2D annotations.
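To make the equivariance property concrete, here is a minimal numerical sketch (our illustration, not material from the talk) of the most familiar case, translation equivariance of convolution: filtering a shifted image gives the same result as shifting the filtered image. The talk extends this commutation idea to richer groups such as 2D scaling and rotation and 3D rotations.

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal check of translation equivariance for circular convolution:
# conv(shift(x)) == shift(conv(x)).
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))   # toy "image"
k = rng.standard_normal((3, 3))     # convolution kernel

# Transform first, then filter ...
lhs = convolve(np.roll(x, (2, 3), axis=(0, 1)), k, mode='wrap')
# ... versus filter first, then transform.
rhs = np.roll(convolve(x, k, mode='wrap'), (2, 3), axis=(0, 1))

print(np.allclose(lhs, rhs))  # True: the feature map shifts exactly as the input does
```

A network built only from layers that commute with a transformation group in this way never has to see augmented copies of the data to generalize across that group.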



Antonis Argyros, University of Crete, Greece

Antonis Argyros is a Professor of Computer Science at the Computer Science Department (CSD), University of Crete (UoC), and a researcher at the Institute of Computer Science (ICS), Foundation for Research and Technology-Hellas (FORTH) in Heraklion, Crete, Greece. Since 1999, as a member of the Computational Vision and Robotics Laboratory (CVRL) of ICS-FORTH, he has been involved in several European and national RTD projects on computer vision, pattern recognition, image analysis, and robotics. His current research interests fall within the areas of computer vision and pattern recognition, with emphasis on the analysis of humans in images and videos, human pose analysis, recognition of human activities and gestures, 3D computer vision, and image motion and tracking. He is also interested in applications of computer vision in the fields of robotics and smart environments. In these areas, he has published numerous research papers in scientific journals and refereed conference proceedings and has delivered invited talks at international events, universities, and research centers. Antonis Argyros has served on the organizing and program committees of several international vision, graphics, and robotics conferences and on the editorial boards of computer vision, image analysis, and robotics journals.

Speech Title: Computer vision methods for capturing and interpreting human motion

Abstract: In this talk, we provide an overview of our work on computational methods for tracking human motion and for the semantic interpretation of human activities, based on unobtrusive computer vision techniques that rely on the processing and analysis of markerless visual data. We focus on tracking the 3D position, orientation, and full articulation of the human body and its parts, and we show how this is employed to solve problems of varying complexity, ranging from 3D tracking of a hand (possibly in interaction with objects) up to action recognition, gesture interpretation, and intention prediction. Finally, we show how our work can support the development of vision systems aimed at intuitive human-robot interaction and human-robot collaboration, as well as the development of interactive exhibits in the context of smart environments.