Period
January 2025 – January 2027
Objective
This project aims to develop a collaborative spatial perception framework that constructs multiple levels of abstract representation of a city-scale area, fusing LiDAR point clouds, RGBD images, and remote sensing imagery collected by the agents of a collaborative autonomous system.
Background
The concept of digital twins, the creation of virtual representations or models that accurately mirror physical entities or systems, has attracted growing research attention in the realm of smart cities. However, a critical challenge in realizing digital twins lies in efficiently collecting data and recreating the real world, a task that typically demands substantial human effort. To address this gap, autonomous robots, originally designed to reduce human workload, hold immense potential for shaping the future of digital twinning. Such robots could assume a pivotal role in autonomously creating and updating a complete mirror of the physical world, paving the way for the next generation of digital twinning.
About the Digital Futures Postdoc Fellow
Yixi Cai completed his PhD in Robotics at the Mechatronics and Robotic Systems (MaRS) Laboratory, Department of Mechanical Engineering, University of Hong Kong. His research focuses on efficient LiDAR-based mapping with applications in robotics. During his PhD, he explored the potential of LiDAR technology to enhance the autonomous capabilities of mobile robots, particularly unmanned aerial vehicles (UAVs). He developed ikd-Tree, FAST-LIO2, and D-Map, which have been widely adopted in the LiDAR community. He is deeply interested in exploring elegant representations of the world, which he believes can unlock new possibilities in robotics.
More information is available on his personal website: yixicai.com
Main supervisor
Patric Jensfelt, Professor, Head of Division of Robotics, Perception, and Learning at KTH Royal Institute of Technology, Digital Futures Faculty
Co-supervisor
Olov Andersson, Assistant Professor at the Division of Robotics, Perception, and Learning at KTH Royal Institute of Technology, Digital Futures Faculty