Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.

dc.contributor.author: Justin F. Landy
dc.contributor.author: Miaolei Jia
dc.contributor.author: Isabel L. Ding
dc.contributor.author: Domenico Viganola
dc.contributor.author: Warren Tierney
dc.contributor.author: Anna Dreber
dc.contributor.author: Magnus Johannesson
dc.contributor.author: Thomas Pfeiffer
dc.contributor.author: Charles R. Ebersole
dc.contributor.author: Quentin F. Gronau
dc.date.accessioned: 2026-03-22T13:52:11Z
dc.date.available: 2026-03-22T13:52:11Z
dc.date.issued: 2020
dc.description: Citations: 178
dc.description.abstract: To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
dc.identifier.doi: 10.1037/bul0000220
dc.identifier.uri: https://doi.org/10.1037/bul0000220
dc.identifier.uri: https://andeanlibrary.org/handle/123456789/43195
dc.language.iso: en
dc.publisher: American Psychological Association
dc.relation.ispartof: Psychological Bulletin
dc.source: Nova Southeastern University
dc.subject: PsycINFO
dc.subject: Psychology
dc.subject: Statistical hypothesis testing
dc.subject: Empirical research
dc.subject: Crowdsourcing
dc.subject: Statistical power
dc.subject: Research design
dc.subject: Social psychology
dc.title: Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.
dc.type: article