
Updating Dynamic 3D Scene Graphs from Egocentric Observations

1ETH Zürich, 2Microsoft, 3Uni Bonn    *Equal supervision

Abstract

Recent approaches have successfully focused on the segmentation of static reconstructions, thereby equipping downstream applications with semantic 3D understanding. However, the world in which we live is dynamic, characterized by numerous interactions between the environment and humans or robotic agents. Static semantic maps are unable to capture this information, and the naive solution of rescanning the environment after every change is both costly and ineffective at tracking, e.g., objects being stored away in drawers. With Lost & Found we present an approach that addresses this limitation. Based solely on egocentric recordings with corresponding hand position and camera pose estimates, we are able to track the 6DoF poses of the moving object within the detected interaction interval. These changes are applied online to a transformable scene graph that captures object-level relations. Compared to state-of-the-art object pose trackers, our approach is more reliable in handling the challenging egocentric viewpoint and the lack of depth information. It outperforms the second-best approach by 34% and 56% in translational and orientational error, respectively, and produces visibly smoother 6DoF object trajectories. In addition, we illustrate how the interaction information captured in the dynamic scene graph can be employed in robotic applications that would otherwise be infeasible: we show how our method allows a mobile manipulator to be commanded through teach & repeat, and how knowledge of prior interactions allows a mobile manipulator to retrieve an object hidden in a drawer.
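To make the idea of a transformable scene graph concrete, the following is a minimal sketch of how object-level relations and online pose updates could be represented. All names and the data structure are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed representation, not the authors' code): nodes store
# 6DoF object poses, and a relation map encodes object-level relations such as
# "on" or "inside". An interaction updates the moved node's pose and relation online.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectNode:
    name: str
    pose: np.ndarray          # 4x4 homogeneous transform in the world frame

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)      # object name -> ObjectNode
    relations: dict = field(default_factory=dict)  # object name -> (relation, parent name)

    def add(self, node, relation=None, parent=None):
        self.nodes[node.name] = node
        if relation and parent:
            self.relations[node.name] = (relation, parent)

    def apply_interaction(self, name, new_pose, relation, parent):
        """Apply the tracked 6DoF pose at the end of an interaction and
        rewire the object's relation to its new supporting parent."""
        self.nodes[name].pose = new_pose
        self.relations[name] = (relation, parent)

# Usage: an object is carried from one piece of furniture to another.
graph = SceneGraph()
graph.add(ObjectNode("rack", np.eye(4)))
graph.add(ObjectNode("tall_shelf", np.eye(4)))
graph.add(ObjectNode("picture_frame", np.eye(4)), relation="on", parent="rack")
graph.apply_interaction("picture_frame", np.eye(4), relation="on", parent="tall_shelf")
```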



Featured Video

Lost & Found



Method Overview: We build a static scene graph that captures object-level relationships, given our initial 3D scan with its semantic instance segmentation. Each Aria glasses recording provides hand positions and device poses. With Lost & Found, we identify object interactions by locating the hand positions in our 3D prior and simultaneously querying a 2D hand-object tracker. At the beginning of such an interaction, we project the 3D points of the object instance onto the image plane. A point tracking method then follows these 2D feature points through subsequent observations. While the 3D hand location yields an anchor for the object translation, we apply a robust perspective-n-point algorithm to the known 2D-3D correspondences of each RGB image to recover the 6DoF pose of the object. The scene graph is updated accordingly to reflect the current state of the environment. In the example above, the picture frame (red) is carried from the rack on the right to the top of the tall shelf on the left.
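The per-frame pose estimation step described above can be sketched with a standard robust PnP solver. The snippet below uses OpenCV's solvePnPRansac on tracked 2D points and their corresponding 3D instance points; the function name, inputs, and thresholds are illustrative assumptions rather than the exact pipeline.

```python
# Illustrative sketch of robust per-frame 6DoF estimation from 2D-3D
# correspondences (assumed inputs): points_3d are the object's instance points
# from the 3D prior, points_2d are their tracked locations in the current RGB
# frame, K is the 3x3 camera intrinsics matrix.
import cv2
import numpy as np

def estimate_object_pose(points_3d, points_2d, K, dist_coeffs=None):
    """Robust PnP on known 2D-3D correspondences; returns a 4x4 object-to-camera
    transform, or None if too few correspondences survive tracking."""
    if len(points_2d) < 6:
        return None                        # too few correspondences for a stable solve
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        K, dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=4.0,             # pixel threshold for RANSAC inliers (assumed)
    )
    if not ok or inliers is None:
        return None
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T                               # pose in the camera frame; compose with the
                                           # device pose to express it in the world frame
```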



Interactive Trajectory Visualization

Visualize the 6DoF trajectories for interactions with different objects.

BibTeX

@misc{behrens2024lostfoundupdating,
      title={Lost \& Found: Updating Dynamic 3D Scene Graphs from Egocentric Observations},
      author={Tjark Behrens and René Zurbrügg and Marc Pollefeys and Zuria Bauer and Hermann Blum},
      year={2024},
      eprint={2411.19162},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2411.19162}, 
}