Towards Interpretable Reinforcement Learning with Constrained Normalizing Flow Policies

Finn Rietz, Erik Schaffernicht, Stefan Heinrich, Johannes A. Stork

Publication: Conference article in proceedings · Research · Peer-reviewed

Abstract

Reinforcement learning policies are typically represented by black-box neural networks, which are non-interpretable and not well-suited for safety-critical domains. To address both of these issues, we propose constrained normalizing flow policies as interpretable and safe-by-construction policy models. We achieve safety for reinforcement learning problems with instantaneous safety constraints, for which we can exploit domain knowledge by analytically constructing a normalizing flow that ensures constraint satisfaction. The normalizing flow corresponds to an interpretable sequence of transformations on action samples, each ensuring alignment with a particular constraint. Our experiments reveal benefits beyond interpretability: an easier learning objective and constraint satisfaction maintained throughout the entire learning process. Our approach favors constraints over reward engineering, offering enhanced interpretability, safety, and a direct means of providing domain knowledge to the agent without relying on complex reward functions.
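The construction behind the abstract is not spelled out on this page, but the core idea of chaining constraint-enforcing invertible transformations can be made concrete with a small sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a 1-D continuous action, an actuator-limit (box) constraint, and a state-dependent speed cap near an obstacle; the function names squash_to_box and cap_speed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def squash_to_box(u, low=-1.0, high=1.0):
    # Invertible tanh squash mapping all of R into (low, high), so the
    # actuator-limit constraint holds by construction for any base sample.
    return low + 0.5 * (high - low) * (np.tanh(u) + 1.0)

def cap_speed(a, dist_to_obstacle, gain=1.0):
    # State-dependent rescaling by a positive factor in (0, 1): the action
    # shrinks as the agent nears an obstacle. Positive scaling is invertible,
    # so the overall composition remains a valid normalizing flow.
    scale = np.tanh(gain * dist_to_obstacle)
    return scale * a

# Sample from an unconstrained Gaussian base distribution, then apply the
# constraint-aligned transformations in sequence. Each stage can be read and
# checked in isolation, which is the interpretability the abstract points to.
u = rng.normal()                                      # unconstrained base sample
a_bounded = squash_to_box(u)                          # stage 1: actuator limits
a_safe = cap_speed(a_bounded, dist_to_obstacle=0.3)   # stage 2: slow near obstacle
print(f"u = {u:+.3f} -> bounded = {a_bounded:+.3f} -> safe = {a_safe:+.3f}")
```

Because every transformation is invertible with a tractable Jacobian, the log-probability of the final action remains computable, which is what lets such a flow serve as a trainable policy rather than a mere post-hoc action filter.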
Original language: English
Title of host publication: Proceedings of the ICRA2024 Workshop on Human-aligned Reinforcement Learning for Autonomous Agents and Robots
Number of pages: 7
Place of publication: Vienna, AT
Publication date: 17 May 2024
DOI:
Status: Published - 17 May 2024
