
Mixed Reality, crowd-mapping and privacy - Marc Pollefeys (ETH, Microsoft)

Event Details

Monday, November 23, 2020
10 a.m.


Mixed Reality (MR) makes it possible to blend virtual information with the real world. After a short presentation of how computer vision is a key enabling technology for MR experiences, we will focus on the need to build 3D maps of the world so that people can share and persist information precisely aligned to specific locations in the real world. These maps are sometimes referred to as the AR cloud. The key technology for this is Simultaneous Localization and Mapping (SLAM). Ideally, every device that localizes itself against this map can also contribute to improving and extending it. These devices can be MR headsets like the HoloLens, but also mobile phones or even robots. While privacy concerns are limited in most current scenarios, it is important to consider the privacy implications of MR, as the use of MR devices is likely to become more pervasive in the years to come. In this context, I will describe some of our recent work on privacy-preserving image-based localization and mapping.
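One published idea from this line of work (Speciale et al., "Privacy Preserving Image-Based Localization") is to replace each 3D map point with a randomly oriented 3D line passing through it: the line still constrains camera pose estimation, but the exact point position cannot be read off the stored map. The sketch below, a toy illustration in pure Python rather than the actual system, shows the lifting step and verifies that the original point lies on the stored line while the stored anchor does not reveal it.

```python
import math
import random

random.seed(0)  # deterministic for reproducibility

def lift_point_to_line(p):
    """Replace a 3D map point with a randomly oriented 3D line through it.

    The line is stored as (anchor, direction), where the anchor is the
    point on the line closest to the origin, so the original point p is
    not recoverable from the stored representation alone.
    (Illustrative sketch only; the real system constrains line geometry.)
    """
    # Draw a random unit direction for the line.
    d = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in d))
    d = [x / norm for x in d]
    # Anchor = point on the line closest to the origin.
    t = sum(pi * di for pi, di in zip(p, d))
    anchor = [pi - t * di for pi, di in zip(p, d)]
    return anchor, d

p = [1.0, 2.0, 3.0]          # hypothetical 3D map point
anchor, d = lift_point_to_line(p)

# The original point still lies on the stored line ...
t = sum((pi - ai) * di for pi, ai, di in zip(p, anchor, d))
recon = [ai + t * di for ai, di in zip(anchor, d)]
err = math.sqrt(sum((ri - pi) ** 2 for ri, pi in zip(recon, p)))

# ... but the stored anchor is (with probability 1) a different point.
offset = math.sqrt(sum((ai - pi) ** 2 for ai, pi in zip(anchor, p)))
print(err, offset)
```

Because infinitely many points satisfy the line constraint, an attacker holding only the lifted map cannot reconstruct the scene geometry point-by-point, yet a client with a live image can still localize against the line cloud.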


Marc Pollefeys is a Professor of Computer Science at ETH Zurich and the Director of the Microsoft Mixed Reality and AI Lab in Zurich, where he works with a team of scientists and engineers to develop advanced perception capabilities for HoloLens and Mixed Reality. He was elected Fellow of the IEEE in 2012. He obtained his PhD from KU Leuven in 1999 and was a professor at UNC Chapel Hill before joining ETH Zurich.

He is best known for his work in 3D computer vision, having been the first to develop a software pipeline that automatically turns photographs into 3D models, but he also works on robotics, graphics and machine learning problems. Other noteworthy projects he has worked on include real-time 3D scanning with mobile devices, a real-time pipeline for 3D reconstruction of cities from vehicle-mounted cameras, camera-based self-driving cars and the first fully autonomous vision-based drone. Most recently his academic research has focused on combining 3D reconstruction with semantic scene understanding.