Replicating replicability modeling of psychology papers

Aske Mottelson, Dimosthenis Kontogiorgos

    Research output: Journal article · Research · peer-review

    Abstract

    Youyou et al. (1) estimated the replicability of more than 14,000 psychology papers using a machine learning model trained on the main texts of 388 replicated studies. The authors identified mean replicability scores for psychological subfields. They also sought to verify that the model predictions were causally grounded, reporting correlations between the predictions and study details the model was not trained on (i.e., P value and sample size).

    In attempting replication, we identified important shortcomings of the approach and findings. First, the training data contain duplicated paper entries. Second, our analysis shows that the model predictions also correlate with variables that are not causal to replicability (e.g., language style). These issues undermine the validity of the model output and thereby paint an erroneous picture of the replication rates of psychological science. In this letter, we attempt to mitigate these issues and nuance the findings of the original paper.
    Original language: English
    Journal: Proceedings of the National Academy of Sciences of the United States of America
    Volume: 120
    Issue number: 33
    ISSN: 0027-8424
    Publication status: Published - 7 Aug 2023

    Keywords

    • Replicability
    • Machine learning
    • Psychology
    • Causal inference
    • Model validation
