Many transparency advocates call for pre-registration of planned research online, but in some instances we can already examine the difference between an initial research proposal and the published outcome.
In the study "Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results," Annie Franco, Neil Malhotra, and Gabor Simonovits of Stanford University examined how completely published articles report the survey experiments described in their original questionnaires.
Franco et al. collected pre-analysis plans and questionnaires for 249 studies conducted between 2002 and 2012; of these, 53 were published in peer-reviewed political science journals such as the American Journal of Political Science and Public Opinion Quarterly.
When they compared the planned design features against what was reported in published articles, they found:
- 30% of papers report fewer experimental conditions in the published paper than in the questionnaire;
- 60% of papers report fewer outcome variables than are listed in the questionnaire;
- 80% of papers fail to report all experimental conditions and outcomes.
Read more at the Political Science Replication blog.
On the one hand, this is a terrific service to the profession. On the other, there may well be mitigating factors: theory and common sense, among other things, help drive what is reported in a paper. Nonetheless, the authors highlight a big problem.
Pre-analysis plans will be very useful for keeping people from cherry-picking results. But if applied too stringently (as I suspect they will be), they are also very useful for generating dull papers: papers whose authors do not feel free to pursue more speculative, suggestive analysis and theory generation. You could say "fine, as long as they label it as such," and I agree, but I still think this kind of inductive work will become even rarer than it already is.
It's easy to forget that inductive work (theory-building, more speculative analysis based on finding interesting patterns) is a crucial part of scientific discovery. Sadly, referees and journals seem to have little appetite for it. People who take inductive work and rewrite it to look like deductive work are just responding to incentives.