Towards Interpretable Reinforcement Learning with Constrained Normalizing Flow Policies

Finn Rietz, Erik Schaffernicht, Stefan Heinrich, Johannes A. Stork

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

Reinforcement learning policies are typically represented by black-box neural networks, which are non-interpretable and not well-suited for safety-critical domains. To address both of these issues, we propose constrained normalizing flow policies as interpretable and safe-by-construction policy models. We achieve safety for reinforcement learning problems with instantaneous safety constraints, for which we can exploit domain knowledge by analytically constructing a normalizing flow that ensures constraint satisfaction. The normalizing flow corresponds to an interpretable sequence of transformations on action samples, each ensuring alignment with respect to a particular constraint. Our experiments reveal benefits beyond interpretability: an easier learning objective and constraint satisfaction maintained throughout the entire learning process. Our approach leverages constraints instead of reward engineering to supply domain knowledge to the agent, offering enhanced interpretability and safety without relying on complex reward functions.
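
The abstract describes the policy as an interpretable sequence of transformations on action samples, each enforcing a particular constraint. The following is a minimal, hypothetical sketch of that idea in Python/NumPy, not the authors' implementation: a base Gaussian sample is passed through a constraint-enforcing squashing layer so that the resulting action lies in a bounded region by construction. All class and parameter names (BoxConstraintLayer, ConstrainedFlowPolicy) are illustrative assumptions.

```python
# Minimal sketch (assumption, not the authors' code): a normalizing-flow-style
# policy in which each layer maps action samples into a region satisfying one
# instantaneous constraint, so the final action is safe by construction.
import numpy as np

class BoxConstraintLayer:
    """Squashes unconstrained samples into box action bounds [low, high]."""
    def __init__(self, low, high):
        self.low, self.high = np.asarray(low, float), np.asarray(high, float)

    def forward(self, z):
        # tanh maps R -> (-1, 1); the affine map rescales into [low, high],
        # so the box constraint holds for any input z.
        u = np.tanh(z)
        return self.low + 0.5 * (u + 1.0) * (self.high - self.low)

class ConstrainedFlowPolicy:
    """Base Gaussian followed by a sequence of constraint-enforcing layers."""
    def __init__(self, action_dim, layers):
        self.action_dim = action_dim
        self.layers = layers

    def sample(self, rng):
        z = rng.standard_normal(self.action_dim)  # sample from base distribution
        for layer in self.layers:                 # one interpretable step per constraint
            z = layer.forward(z)
        return z

# Usage: a 2-D action constrained to a box; the assertion checks satisfaction.
rng = np.random.default_rng(0)
policy = ConstrainedFlowPolicy(2, [BoxConstraintLayer(low=[-0.5, 0.0], high=[0.5, 1.0])])
action = policy.sample(rng)
assert np.all(action >= [-0.5, 0.0]) and np.all(action <= [0.5, 1.0])
print(action)
```

In the paper's setting, such layers would be constructed analytically from the instantaneous safety constraints; the sketch only illustrates the composition-of-transformations structure.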
Original language: English
Title of host publication: Proceedings of the ICRA2024 Workshop on Human-aligned Reinforcement Learning for Autonomous Agents and Robots
Number of pages: 7
Place of publication: Vienna, AT
Publication date: 17 May 2024
DOIs
Publication status: Published - 17 May 2024
