In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values and for finding patterns in data. In particular, we consider Bayesian nonnegative variants of matrix factorisation and tri-factorisation, and compare non-probabilistic inference, Gibbs sampling, variational Bayesian inference, and a maximum-a-posteriori approach. The variational approach is new for the Bayesian nonnegative models. We compare their convergence, and their robustness to noise and sparsity of the data, on both synthetic and real-world datasets. Furthermore, we extend the models with the Bayesian automatic relevance determination prior, allowing the models to perform automatic model selection, and demonstrate its efficiency. Code and data related to this chapter are available at: https://github.com/ThomasBrouwer/BNMTF_ARD.
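To make the missing-value prediction setting concrete, here is a minimal sketch of (non-probabilistic) nonnegative matrix factorisation with a binary mask over observed entries, using Lee–Seung-style multiplicative updates. This is an illustrative example only, not the paper's Bayesian models or their inference algorithms; the function name `nmf_predict` and all parameters are invented for this sketch.

```python
import numpy as np

def nmf_predict(R, M, K=2, iterations=200, seed=0):
    """Factorise R ~ U @ V.T over observed entries (M == 1) and
    return the dense reconstruction, which fills in missing entries.

    Illustrative sketch only: multiplicative updates restricted to
    the observed entries, not the paper's Bayesian inference.
    """
    rng = np.random.default_rng(seed)
    I, J = R.shape
    U = rng.random((I, K)) + 0.1   # nonnegative factor matrices,
    V = rng.random((J, K)) + 0.1   # initialised away from zero
    for _ in range(iterations):
        pred = U @ V.T
        # Update U using only observed entries; the small floor in the
        # denominator avoids division by zero.
        U *= ((M * R) @ V) / np.maximum((M * pred) @ V, 1e-12)
        pred = U @ V.T
        V *= ((M * R).T @ U) / np.maximum((M * pred).T @ U, 1e-12)
    return U @ V.T

# Example: a rank-1 matrix with one entry held out as "missing".
R = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])
M = np.ones_like(R)
M[0, 2] = 0.0                      # mark this entry as unobserved
R_hat = nmf_predict(R, M, K=1)
print(R_hat[0, 2])                 # in practice close to the held-out value 3.0
```

The Bayesian variants studied in the paper replace this point estimate with priors over U and V and infer posteriors (via Gibbs sampling, variational inference, or MAP estimation), which is what gives them their robustness to noise and sparsity.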
|Title of host publication
|The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2017
|Print ISBN 978-3-319-71248-2
|Published - 2017
|Lecture Notes in Computer Science