TY - JOUR
T1 - Preference Learning for Cognitive Modeling: A Case Study on Entertainment Preferences
AU - Yannakakis, Georgios N.
AU - Maragoudakis, Manolis
AU - Hallam, John
PY - 2009
Y1 - 2009
N2 - Learning from preferences, which provide a means of expressing a subject's desires, constitutes an important topic in machine learning research. This paper presents a comparative study of four alternative instance preference learning algorithms (both linear and nonlinear). The case study investigated is learning to predict the expressed entertainment preferences of children playing physical games, built on their personalized playing features (entertainment modeling). Two of the approaches are derived from the literature, the large-margin algorithm (LMA) and preference learning with Gaussian processes, while the remaining two are custom-designed approaches for the problem under investigation: meta-LMA and neuroevolution. Preference learning techniques are combined with feature set selection methods, permitting the construction of effective preference models given suitable individual playing features. The underlying preference model that best reflects children's preferences is obtained through neuroevolution: 82.22% cross-validation accuracy in predicting reported entertainment in the main set of game survey experimentation. The model correctly matches expressed preferences in 66.66% of cases on previously unseen data (p-value = 0.0136) from a second physical activity control experiment. Results indicate the benefit of neuroevolution and sequential forward selection for the investigated complex case study of cognitive modeling in physical games.
KW - Augmented-reality games
KW - Bayesian learning (BL)
KW - entertainment modeling
KW - large-margin classifiers
KW - neuroevolution
KW - preference learning
DO - 10.1109/TSMCA.2009.2028152
M3 - Journal article
SN - 1083-4427
VL - 39
SP - 1165
EP - 1175
JO - IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans
JF - IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans
IS - 6
ER -