Title:
From Percepts to Semantics: A Multi-modal Saliency Map to Support Social Robots' Attention.
Source:
ACM Transactions on Human-Robot Interaction; Dec2025, Vol. 14 Issue 4, p1-19, 19p
Database:
Complementary Index

In social robots, visual attention expresses awareness of the scenario components and dynamics. As in humans, their attention should be driven by a combination of different attention mechanisms. In this article, we introduce multi-modal saliency maps, i.e., spatial representations of saliency that dynamically integrate multiple attention sources depending on the context. We provide the mathematical formulation of the model and an open source software implementation. Finally, we present an initial exploration of its potential in social interaction scenarios with humans and evaluate its implementation. [ABSTRACT FROM AUTHOR]
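To illustrate the general idea described in the abstract, the sketch below shows one plausible way a multi-modal saliency map could be formed: per-source saliency maps (e.g., visual motion, sound localization) are combined with context-dependent weights and the most salient location is selected as the attention target. This is a minimal illustrative sketch only, not the paper's mathematical formulation or its open source implementation; all names, map sources, and weights here are hypothetical.

```python
import numpy as np

def fuse_saliency_maps(maps, weights):
    """Fuse per-source saliency maps into one multi-modal map.

    maps    : dict of name -> 2D array of the same shape, values in [0, 1]
    weights : dict of name -> non-negative float set by the current context
    """
    total = sum(weights.values())
    fused = None
    for name, saliency in maps.items():
        w = weights.get(name, 0.0) / total  # context-dependent contribution
        fused = w * saliency if fused is None else fused + w * saliency
    if fused.max() > 0:                      # keep the fused map in [0, 1]
        fused = fused / fused.max()
    return fused

def attention_target(fused):
    """Return the (row, col) of the most salient location."""
    return np.unravel_index(np.argmax(fused), fused.shape)

# Hypothetical example: a 48x64 spatial grid where the context
# (someone starts speaking) weights the auditory source more heavily.
h, w = 48, 64
maps = {
    "visual_motion": np.random.rand(h, w),
    "sound_source": np.random.rand(h, w),
}
weights = {"visual_motion": 0.3, "sound_source": 0.7}
fused = fuse_saliency_maps(maps, weights)
print("Attend to cell:", attention_target(fused))
```

Under this reading, "dynamically integrating multiple attention sources depending on the context" amounts to updating the weights over time as the interaction unfolds; the actual integration scheme used by the authors is given in the article itself.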

Copyright of ACM Transactions on Human-Robot Interaction is the property of Association for Computing Machinery and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)