Result: Explainable agents adapt to human behaviour

Title:
Explainable agents adapt to human behaviour
Contributors:
Universitat Politècnica de Catalunya. Doctorat en Intel·ligència Artificial, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, Barcelona Supercomputing Center, Universitat Politècnica de Catalunya. IDEAI-UPC - Intelligent Data sciEnce and Artificial Intelligence Research Group
Publication Year:
2023
Collection:
Universitat Politècnica de Catalunya, BarcelonaTech: UPCommons - Global access to UPC knowledge
Document Type:
Conference object
File Description:
7 p.; application/pdf
Language:
English
Relation:
info:eu-repo/grantAgreement/EC/H2020/101017142/EU/Stairway to AI: Ease the Engagement of Low-Tech users to the AI-on-Demand platform through AI/StairwAI; http://hdl.handle.net/2117/390757
Rights:
Attribution 4.0 International ; http://creativecommons.org/licenses/by/4.0/ ; Open Access
Accession Number:
edsbas.3F9AE45C
Database:
BASE

Further Information

Abstract:
When integrating artificial agents into physical or digital environments shared with humans, agents are often equipped with opaque Machine Learning methods so that they can adapt their behaviour to dynamic human needs and a changing environment. This results in agents that are themselves opaque and therefore hard to explain. In previous work, we showed that an opaque agent can be reduced to an explainable Policy Graph (PG) that performs accurately in multi-agent environments. Policy Graphs are based on a discretisation of the world into propositional logic to identify states, and the choice of discretiser is key to the performance of the reduced agent. In this work, we explore this further by 1) reducing a single agent to an explainable PG, and 2) enforcing collaboration between this agent and an agent trained from human behaviour. The human agent is obtained by applying GAIL to a series of human-played episodes and is kept unchanged. By comparing the reward obtained by the agent and by its PG, we show that an opaque agent created and trained to collaborate with the human agent can be reduced to an explainable, non-opaque PG, so long as predicates regarding collaboration are included in the state representation. Code is available at https://github.com/HPAI-BSC/explainable-agents-with-humans

Funding:
This work has been partially supported by EU Horizon 2020 Project StairwAI (grant agreement No. 101017142).

Notes:
Peer Reviewed ; Postprint (published version)
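To make the Policy Graph idea concrete, the following is a minimal sketch of how an opaque agent's behaviour can be summarised into a PG: observations are discretised into propositional-logic states, and the graph records which actions the agent took in each state. The predicate names, observation keys, and greedy action-selection rule here are all hypothetical illustrations, not the paper's actual discretiser or construction algorithm.

```python
from collections import defaultdict

def discretise(obs):
    """Map a raw observation to a propositional-logic state.

    `obs` is a dict with hypothetical keys; each predicate becomes a
    (name, truth-value) pair, and the frozenset of pairs is the state.
    """
    predicates = {
        "held(onion)": obs["holding"] == "onion",
        "pot_ready()": obs["pot_contents"] >= 3,
        "near(human)": obs["dist_to_human"] <= 1,  # a collaboration predicate
    }
    return frozenset(predicates.items())

class PolicyGraph:
    """Nodes are discretised states; each node stores counts of the actions
    the opaque agent was observed taking in that state."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, obs, action):
        """Observe one (observation, action) pair from the opaque agent."""
        self.counts[discretise(obs)][action] += 1

    def act(self, obs):
        """Surrogate policy: most frequent recorded action in this state."""
        actions = self.counts[discretise(obs)]
        return max(actions, key=actions.get) if actions else None

    def explain(self, obs):
        """Human-readable rationale: which predicates hold, and what follows."""
        true_preds = sorted(p for p, v in discretise(obs) if v)
        return f"When {true_preds} hold, the agent usually takes {self.act(obs)!r}"
```

Under this sketch, comparing the reward collected by the original agent against the reward collected by `PolicyGraph.act` is what quantifies how faithful the reduction is; including collaboration predicates such as `near(human)` in `discretise` is what lets the PG distinguish collaborative states.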