A Semi-Automatic Multimodal Annotation Environment for Robot Sensor Data

Conference Proceedings (fully refereed)
Konstantinos Tsiakas, Theodoros Giannakopoulos, Stasinos Konstantopoulos
In this paper, we present RoboMAE, a multimodal sensor data annotation environment that lets humans concentrate on high-level decisions while full frame-by-frame annotations are produced automatically. Existing multimodal annotation tools focus on interpreting a scene by annotating each modality separately. In this work, we focus instead on cross-linking recognitions of the same object across the different modalities. Our approach is based on exploiting spatio-temporal co-occurrence to link the different projections of the same object in the various supported modalities, and on automatically interpolating annotations between explicitly annotated frames. The backend automations interact with the visual environment in real time, providing annotators with immediate feedback on their actions. Our approach is demonstrated and evaluated on a dataset collected for the recognition and localization of conversing humans, an important task in human-robot interaction applications. Both the annotation environment and the conversation dataset are made publicly available.
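The interpolation step mentioned in the abstract can be sketched as follows. This is a minimal illustration, assuming bounding-box annotations represented as `(x, y, w, h)` tuples and simple linear interpolation between keyframes; the function names and box representation are illustrative assumptions, not RoboMAE's actual API.

```python
def interpolate_box(key_a, key_b, frame_a, frame_b, frame):
    """Linearly interpolate a bounding box between two annotated keyframes.

    key_a, key_b: (x, y, w, h) boxes annotated at frames frame_a and frame_b.
    frame: the in-between frame to produce an annotation for.
    """
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(key_a, key_b))

def fill_frames(key_a, key_b, frame_a, frame_b):
    """Produce a full frame-by-frame annotation between two keyframes."""
    return {f: interpolate_box(key_a, key_b, frame_a, frame_b, f)
            for f in range(frame_a, frame_b + 1)}
```

For example, with keyframe boxes `(0, 0, 10, 10)` at frame 0 and `(10, 10, 10, 10)` at frame 10, the midpoint frame 5 receives the box `(5.0, 5.0, 10.0, 10.0)`. The annotator only touches the two keyframes; every intermediate frame is filled in by the tool.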
Software and Knowledge Engineering Laboratory (SKEL)
Conference Short Name: 
MMEDIA 2014
Conference Full Name: 
6th International Conference on Advances in Multimedia
Conference Country: 
France
Conference City: 
Nice
Conference Date(s): 
Sun, 23/02/2014 - Thu, 27/02/2014
Conference Level: 
International
Publisher: 
IARIA
Publication Series: 
2308-4448
Page Start: 
122
Page End: 
125
ISBN Code: 
978-1-61208-320-9

© 2018 - Institute of Informatics and Telecommunications | National Centre for Scientific Research "Demokritos"
