TY - GEN
T1 - Neuro-symbolic hierarchical rule induction
AU - Glanois, Claire
AU - Jiang, Zhaohui
AU - Feng, Xuening
AU - Weng, Paul
AU - Zimmer, Matthieu
PY - 2022/6/28
Y1 - 2022/6/28
AB - We propose Neuro-Symbolic Hierarchical Rule Induction (HRI), an efficient and interpretable neuro-symbolic model for solving Inductive Logic Programming (ILP) problems. In this model, which is built from a pre-defined set of meta-rules organized in a hierarchical structure, first-order rules are invented by learning embeddings to match facts to the body predicates of a meta-rule. To instantiate the model, we design an expressive set of generic meta-rules and demonstrate that they generate a consequential fragment of Horn clauses. As a differentiable model, HRI can be trained both via supervised learning and via reinforcement learning. To converge to interpretable rules, we inject controlled noise to avoid local optima and employ an interpretability-regularization term. We empirically validate our model on various tasks (ILP, Visual Genome, reinforcement learning) against relevant state-of-the-art methods, including traditional ILP methods and neuro-symbolic models.
KW - Neuro-Symbolic Modeling
KW - Inductive Logic Programming
KW - Hierarchical Rule Induction
KW - Interpretability-Regularization
KW - Reinforcement Learning
UR - https://proceedings.mlr.press/v162/glanois22a/glanois22a.pdf
UR - https://arxiv.org/abs/2112.13418
M3 - Conference article
SN - 2640-3498
VL - 162
SP - 7583
EP - 7615
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
IS - 39
ER -