Leveraging explainability for understanding object descriptions in ambiguous 3D environments

Doğan, Fethiye Irmak and Melsión, Gaspar I. and Leite, Iolanda (2023) Leveraging explainability for understanding object descriptions in ambiguous 3D environments. Frontiers in Robotics and AI, 9. ISSN 2296-9144

pubmed-zip/versions/3/package-entries/frobt-09-937772-r2/frobt-09-937772.pdf - Published Version

Download (31MB)

Abstract

For effective human-robot collaboration, robots must understand requests from users who perceive the three-dimensional space and ask reasonable follow-up questions when those requests are ambiguous. Existing studies on comprehending users' object descriptions have focused on a limited set of object categories that can be detected or localized with off-the-shelf object detection and localization modules, and they have mostly relied on flat RGB images without considering the depth dimension. In the wild, however, it is impossible to restrict the object categories that may be encountered during an interaction, and perceiving the three-dimensional space, including depth information, is fundamental to successful task completion. To understand described objects and resolve ambiguities in the wild, we propose, for the first time, a method that leverages explainability. Our method focuses on the active areas of an RGB scene to find the described objects without imposing the previous constraints on object categories or natural language instructions. We further extend our method to identify the described objects using the depth dimension. We evaluate our method on varied real-world images and observe that the regions it suggests can help resolve ambiguities. Compared with a state-of-the-art baseline, our method performs better in scenes containing ambiguous objects that existing object detectors cannot recognize. We also show that using depth features significantly improves performance, both in scenes where depth data is critical for disambiguating the objects and across our full evaluation dataset, which contains objects that can be specified with and without the depth dimension.
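To make the "active areas" idea concrete, the sketch below shows a generic Grad-CAM-style explainability heatmap over an RGB image using a pretrained torchvision classifier. This is a minimal, hypothetical illustration of the kind of explainability signal the abstract refers to, not the authors' implementation: the model, layer choice, and target class are assumptions, and the paper's actual method additionally handles free-form object descriptions and depth features.

```python
# Hypothetical Grad-CAM sketch: highlight the image regions most responsible
# for a target prediction. Illustrative only; not the paper's code.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Capture feature maps of the last convolutional block.
    activations["feat"] = output

def bwd_hook(module, grad_input, grad_output):
    # Capture gradients flowing back into those feature maps.
    gradients["feat"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """image: (1, 3, H, W) normalized tensor; returns an (H, W) heatmap in [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()

    feats = activations["feat"]                      # (1, C, h, w)
    grads = gradients["feat"]                        # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage (assumed preprocessing): heatmap = grad_cam(preprocessed_image, target_class=954)
```

In such a setup, regions with high heatmap values would be treated as candidate locations for the described object, and overlapping high-activation regions could trigger a follow-up disambiguation question.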

Item Type: Article
Subjects: STM Repository > Mathematical Science
Depositing User: Managing Editor
Date Deposited: 30 Jun 2023 04:30
Last Modified: 18 Nov 2023 05:32
URI: http://classical.goforpromo.com/id/eprint/3523
