The Miller Prize is awarded for the best work appearing in Political Analysis during the preceding year.
2024 Winner | |
Recipients | JBrandon Duck-Mayr and Jacob Montgomery |
Work | "Ends Against the Middle: Measuring Latent Traits When Opposites Respond the Same Way for Antithetical Reasons" |
Citation | This paper identifies a glaring weakness of standard ideal-point models: their inability to account for ends-against-the-middle coalitions (think of The Squad and the Freedom Caucus both voting against a bipartisan compromise bill). In addition to fitting such roll calls poorly, standard ideal-point models also provide misleadingly moderate ideal-point estimates of extreme members. "Ends Against the Middle" proposes an elegant and intuitive model to address this problem, describes a novel Bayesian estimation procedure, and makes it accessible to applied users through a freely available R package (`bggum`). All in all, the paper is a model of clarity, creativity, and utility. |
Selection committee | Yiqing Xu (Stanford), Libby Jenke (Duke), Cassy Dorff (Vanderbilt), Devin Caughey (MIT), and Jeff Gill (ex officio, American) |
2023 Winner | |
Recipients | Blair Read, Lukas Wolters, and Adam Berinsky |
Work | "Racing the Clock: Using Response Time as a Proxy for Attentiveness on Self-Administered Surveys" |
Citation | This paper offers an unobtrusive way to classify respondents to online surveys as quick and inattentive survey-takers, baseline/attentive survey-takers, or slow and inattentive survey-takers. The authors provide a method for estimating latent attentiveness based on respondents' response times across all survey questions. This is an improvement on the prior literature, which did not consider slow and inattentive respondents and typically measured attention with a few screener questions. The technique has broad potential for application across the large literature that relies on online surveys. |
Selection committee | Yiqing Xu (Stanford), Libby Jenke (Duke), Cassy Dorff (Vanderbilt), Devin Caughey (MIT), and Jeff Gill (ex officio, American) |
Past Recipients
2022 Winner | |
Recipients | Sherry Zaks (USC) |
Work | "Updating Bayesian(s): A Critical Evaluation of Bayesian Process Tracing" |
Citation | We like this piece because it carefully reveals a set of deep issues with Bayesian qualitative inference that previous works in this area did not appreciate and that readers of that literature would not normally notice. Dr. Zaks deftly takes on an established literature from a methodological perspective and reveals both strengths and weaknesses that are important to users and readers of the Bayesian process tracing literature. The abstract is pasted below. Given the increasing quantity and impressive placement of work on Bayesian process tracing, this approach has quickly become a frontier of qualitative research methods. Moreover, it has dominated the process-tracing modules at the Institute for Qualitative and Multi-Method Research (IQMR) and the American Political Science Association (APSA) meetings for over five years, rendering its impact even greater. Proponents of qualitative Bayesianism make a series of strong claims about its contributions and scope of inferential validity. Four claims stand out: (1) it enables causal inference from iterative research, (2) the sequence in which we evaluate evidence is irrelevant to inference, (3) it enables scholars to fully engage rival explanations, and (4) it prevents ad hoc hypothesizing and confirmation bias. Notwithstanding the stakes of these claims and the breadth of traction this method has received, no one has systematically evaluated the promises, trade-offs, and limitations that accompany Bayesian process tracing. This article evaluates the extent to which the method lives up to the mission. Despite offering a useful framework for conducting iterative research, the current state of the method introduces more bias than it corrects for on numerous dimensions. The article concludes with an examination of the opportunity costs of learning Bayesian process tracing and a set of recommendations about how to push the field forward. |
Selection committee | Yiqing Xu (Stanford), Libby Jenke (Duke), Cassy Dorff (Vanderbilt), Devin Caughey (MIT), and Jeff Gill (ex officio, American) |
2021 Winner | |
Recipients | Reagan Mozer (Bentley University), Luke Miratrix (Harvard), Aaron Russell Kaufman (NYU Abu Dhabi), and L. Jason Anastasopoulos (University of Georgia) |
Work | "Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality" |
Citation | On behalf of this year's Miller Prize committee (myself, Alexander Theodoridis, Patrick Brandt, and Jeff Gill), I'm delighted to announce the winner of the Society for Political Methodology's 2021 Miller Prize for the best paper published in Political Analysis. This year the prize goes to the article "Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality," by Reagan Mozer, Luke Miratrix, Aaron Russell Kaufman, and L. Jason Anastasopoulos. The paper represents a significant advance in the important area of incorporating text data into a causal-inference framework. Please join us in congratulating the authors for this excellent piece of scholarship. The abstract is pasted below. Matching for causal inference is a well-studied problem, but standard methods fail when the units to match are text documents: the high-dimensional and rich nature of the data renders exact matching infeasible, causes propensity scores to produce incomparable matches, and makes assessing match quality difficult. In this paper, we characterize a framework for matching text documents that decomposes existing methods into (1) the choice of text representation and (2) the choice of distance metric. We investigate how different choices within this framework affect both the quantity and quality of matches identified through a systematic multifactor evaluation experiment using human subjects. Altogether, we evaluate over 100 unique text-matching methods along with 5 comparison methods taken from the literature. Our experimental results identify methods that generate matches with higher subjective match quality than current state-of-the-art techniques. We enhance the precision of these results by developing a predictive model to estimate the match quality of pairs of text documents as a function of our various distance scores. This model, which we find successfully mimics human judgment, also allows for approximate and unsupervised evaluation of new procedures in our context. We then employ the identified best method to illustrate the utility of text matching in two applications. First, we engage with a substantive debate in the study of media bias by using text matching to control for topic selection when comparing news articles from thirteen news sources. We then show how conditioning on text data leads to more precise causal inferences in an observational study examining the effects of a medical intervention. |
Selection committee | Bear Braumoeller (Ohio State), Alexander Theodoridis (UC Merced), Patrick Brandt (UT Dallas), and Jeff Gill (ex officio, American) |
Past Selection Committees
Year | Committee |
2020 | Bear Braumoeller (Ohio State), Alexander Theodoridis (UC Merced), Patrick Brandt (UT Dallas), and Jeff Gill (ex officio, American) |
2019 | Pablo Barberá (LSE), Jennifer Pan (Stanford), and Jeff Gill (American University) |
2018 | Jennifer Pan (Stanford), Pablo Barberá (LSE), and Jonathan Katz (Caltech) |
2017 | Patrick Brandt (UT Dallas, chair), Devin Caughey (MIT), Sunshine Hillygus (Duke), and Michael Alvarez (Caltech, ex officio) |
2016 | Neil Malhotra (Stanford, chair), Megan Shannon (Colorado), Arthur Spirling (NYU) and Thad Dunning (UC Berkeley) |
2015 | Neil Malhotra (Chair), Thad Dunning, Meg Shannon, Arthur Spirling |
2014 | David Nickerson (Chair), Devin Caughey, Justin Grimmer, Brad Jones |
2013 | David Nickerson (Chair), Devin Caughey, Justin Grimmer, Brad Jones |
2012 | Burt Monroe (Chair), Justin Grimmer, David Nickerson, Greg Wawro |
2011 | Dan Wood (Chair), Kosuke Imai, Greg Wawro, Burt Monroe |
2010 | Dan Wood (Chair), Kosuke Imai, Greg Wawro, Burt Monroe |
2009 | Dan Wood (Chair), Kosuke Imai, Greg Wawro, Burt Monroe |
2008 | Tobin Grant (Chair), David Darmofal (winner from previous year), Michael Hanmer, Orit Kedar, Drew Linzer |
2007 | Brian Pollins (Chair), Robert Franzese (winner from previous year), William Berry |
2006 | Brian Pollins (Chair), David Nickerson (winner from previous year), Stanley Feldman |