Experimental evaluations of policies or programs are prospective. As such they typically require deep engagement between researchers and implementers in processes of policy formulation, beneficiary selection, and site selection. Compare this to an ex post analysis. In an ex post analysis, such details are often lost. It is for good reason then that you often hear from practitioners that ex post evaluators did not understand "what really went on" in the program. They weren't there from the beginning. In my experience, this is much less the case for experimental studies. Working prospectively, the researcher is there operating alongside implementation.
That is Cyrus Samii on the under-appreciated benefits of field experiments, which have come under a lot of criticism for over-selling the weight of their evidence and for taking over the profession.
I'm going to go even further than Cyrus. At the end of the day, the great benefit of field experiments to economists and political scientists is that they've forced some of the best social scientists to try to get complicated things done in unfamiliar places, and to deal with all the constraints, bureaucrats, logistics, and impediments to reform you can imagine.
Arguably, the tacit knowledge these academics have developed about development and reform will be more influential on their long-run work and world view than the experiments themselves.
But first, a step back. Not all experiments get people into unfamiliar places. I can think of a lot of field experiments where the lead researcher has never even been to the country where it's all happening. And of course not all observational work is context-free. My years in northern Uganda studying a war (mostly) after the fact deeply shaped my world view.
So it's the investment in "what really went on" that matters for research quality. But I'll accept that, on average, field experiments force more of this. For now.
But like I said, the big benefit to social science is not the quality of the final research paper that results. Take a so-called randomista. I'm willing to bet that maybe now, but certainly in ten years, how they explain how the world works, how they train their students, what they write about, what work they promote, the big books they write for the public: all of these things will be more influenced by their experiences trying to get things done than the causal estimates themselves.
This is even true of the political scientists, who are veterans of field work but seldom actually try to implement something (at least before tenure). Even so, I expect the biggest effect on development economics.
Won't it be ironic if the biggest effect of the experimental revolution in development economics is to grow the number of economists doing comparative politics?
We are all Albert Hirschman now.