# Chapter 1 Introduction

We describe the book’s general approach, preview our argument for the utility of causal models as a framework for choosing research strategies and drawing causal inferences, and provide a roadmap for the rest of the book.

Here is the key idea of this book.

Quantitative social scientists spend a lot of time trying to understand causal relations between variables by looking across large numbers of cases to see how outcomes differ when postulated causes differ. This strategy relies on variation in causal conditions across units of analysis, and the quality of the resulting inferences depends in large part on what forces give rise to that variation.

Qualitative social scientists, like historians, spend a lot of time looking at a smaller set of cases and seek to learn about causal relations by examining evidence of causal processes in operation within these cases. Qualitative scholars rely on theories of how things work, theories that specify what should be observable within a case if indeed an outcome were generated by a particular cause.

These two approaches seem to differ in what they seek to explain—individual-level or population-level outcomes; in the forms of evidence they require—cross-case variation or within-case detail; and in what they need to assume in order to draw inferences—knowledge of assignment processes or knowledge of causal processes.

The central theme of this book is that this distinction, though culturally real, is neither epistemologically deep nor analytically helpful. Social scientists can work with causal models that simultaneously exploit cross-case variation and within-case detail, that address both case-level and population-level questions, and that both depend on, and contribute to developing, theories of how things work.1 In a word, with well-specified causal models, researchers can make integrated inferences.

In this book, we describe an approach to doing this in which researchers form causal models, update those models using data, and then query the models to get answers to particular causal questions. This framework is very different from standard statistical approaches in which researchers focus on selecting the best estimator to estimate a particular estimand of interest. In a causal models framework, the model itself gets updated: we begin by learning about processes, and only then draw inferences about particular causal relations of interest, either at the case level or at the population level.

We do not claim that a causal-model-based approach is the best or only strategy suited to addressing causal questions. There are plenty of settings in which other approaches would likely work better. For instance, it is hard to beat a difference-in-means if you have easy access to large amounts of experimental data and are interested in sample average treatment effects. But we do think that the approach holds great promise—allowing researchers to combine disparate data in a principled way to ask a vast range of sample- and population-level causal questions, helping integrate theory and empirics, and providing coherent guidance on research design. It should, we think, sit prominently in the applied researcher’s toolkit.

Our goals in this book are to motivate this approach; provide an introduction to the theory of structural causal models; provide practical guidance for setting up, updating, and querying causal models; and show how the approach can inform key research-design choices, especially case-selection and data-collection strategies.

## 1.1 The Case for Causal Models

There are three closely related motivations for embracing a causal models approach. One is a concern with the limits of design-based inference. A second is an interest in integrating qualitative knowledge with quantitative approaches. A third is an interest in better connecting empirical strategies to theory.

### 1.1.1 The limits to design-based inference

To caricature positions a bit, consider the difference between an engineer and a skeptic. The engineer tackles problems of causal inference using models: theories of how the world works, generated from past experiences and applied to the situation at hand. They come with prior beliefs about a set of mechanisms operating in the world and, in a given situation, will ask whether the conditions are in place for a known mechanism to operate effectively. The skeptic, on the other hand, maintains a critical position, resisting basing conclusions on beliefs that are not supported by evidence in the context at hand.

The engineer’s approach echoes what was until recently a dominant orientation among social scientists. At the turn of the current century, much analysis—both empirical and theoretical—took the form of modelling processes (“data-generating processes”) and then interrogating those models.

Over the last two decades, however, skeptics have raised a set of compelling concerns about the assumption-laden nature of this kind of analysis, while also clarifying how valid inferences can be made with limited resort to models. The result has been a growth in the use of design-based inference techniques that, in principle, allow for model-free estimation of causal effects (see Dunning (2012), Gerber, Green, and Kaplan (2004), Druckman et al. (2011), Palfrey (2009) among others). These include lab, survey, and field experiments and natural-experimental methods exploiting either true or “as-if” randomization by nature. With the turn to experimental and natural-experimental methods has come a broader conceptual shift, with a growing reliance on the “potential outcomes” framework, which provides a clear language for articulating causal relations (see Rubin (1974), Splawa-Neyman et al. (1990) among others) without having to invoke fully specified models of data-generating processes. See Aronow and Miller (2019) for a thorough treatment of “agnostic statistics,” which shows how much can be done without recourse to commitments to models of data-generating processes.

The ability to estimate average effects and to characterize uncertainty—for instance, calculating $$p$$-values and standard errors—without resort to models is an extraordinary development. In Fisher’s (2017) terms, with these tools, randomization processes provide a “reasoned basis for inference,” placing empirical claims on a powerful footing.
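To see how far one can get without a model of the data-generating process, consider a minimal randomization-inference sketch. The data and the three-treated, three-control design below are hypothetical: under the sharp null of no effect for any unit, outcomes are fixed, and an exact $$p$$-value comes simply from re-randomizing the treatment assignment.

```python
import itertools

# Hypothetical mini-experiment: 3 treated and 3 control units (illustrative data).
y = [5.0, 7.0, 6.0, 2.0, 3.0, 4.0]   # observed outcomes
z = [1, 1, 1, 0, 0, 0]               # realized treatment assignment

def diff_in_means(y, z):
    treated = [yi for yi, zi in zip(y, z) if zi == 1]
    control = [yi for yi, zi in zip(y, z) if zi == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

observed = diff_in_means(y, z)

# Under the sharp null, outcomes would be unchanged under any assignment;
# enumerate all possible allocations of three treated units and recompute.
assignments = set(itertools.permutations(z))
stats = [diff_in_means(y, a) for a in assignments]
p_value = sum(abs(s) >= abs(observed) for s in stats) / len(stats)
```

With 20 possible assignments, only the two most extreme ones produce a difference at least as large as the observed difference of 3, so the exact two-sided $$p$$-value is 0.1. Nothing in the calculation invokes a model of how outcomes are generated; the randomization alone does the work.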

At the same time, excitement about the strengths of these approaches has been mixed with concerns about how the approach shapes inquiry. We highlight two.

The first concern—raised by many in recent years (e.g., Thelen and Mahoney (2015))—is about design-based inference’s scope of application. While experimentation and natural experiments represent powerful tools, the range of research situations in which model-free inference is possible is limited. For a wide range of questions of interest both to social scientists and to society, controlled experimentation is impossible, whether for practical or ethical reasons, and claims of “as-if” randomization are not plausible.2 Thus, limiting our focus to those questions for which, or situations in which, exogeneity can be established “by design” would represent a dramatic narrowing of social science’s ken.

To be clear, this is not an argument against experimentation or design-based inference when these can be used; rather it is an argument that social science needs a broader set of tools.

The second concern is more subtle. The great advantage of design-based inference is that it liberates researchers from the need to rely on models to make claims about causal effects. The risk is that, in operating model-free, researchers end up learning about effect sizes but not about models. Yet often the model is the thing we want to learn about. Our goal as social scientists is to come to grips with how the world works, not simply to collect propositions about the effects that different causes have had on different outcomes in different times and places. It is through models that we derive an understanding of how things might work in contexts and for processes and variables that we have not yet studied. Thus, our interest in models is intrinsic, not instrumental. By taking models out of the equation, as it were, we risk throwing out the baby with the bathwater.

We note, however, that although we return to models, lessons from the credibility revolution permeate this book. Though the approach we use relies on models, we also highlight the importance of being skeptical towards models, checking their performance, and—to the extent possible—basing them on weaker, more defensible models. In practice, we sometimes find that progress, even for qualitative methods, relies on the kind of background knowledge that randomization provides. See our discussions in sections 11.2 and especially 15.1.1.

### 1.1.2 Qualitative and mixed-method inference

Recent years have seen the elucidation of the inferential logic behind “process tracing” procedures used in qualitative political science and other disciplines. On our reading of this literature, the logic of process tracing in these accounts depends on a particular form of model-based inference.3 While process tracing as a method has been around for more than three decades (e.g., George and McKeown (1985)), its logic has been most fully laid out by qualitative methodologists in political science and sociology over the last twenty years (e.g., Bennett and Checkel (2015b), George and Bennett (2005), Brady and Collier (2010), P. A. Hall (2003), Mahoney (2010)). Whereas King, Keohane, and Verba (1994) sought to derive qualitative principles of causal inference within a correlational framework, qualitative methodologists writing in the wake of “KKV” have emphasized and clarified process-tracing’s “within-case” inferential logic: in process tracing, explanatory hypotheses are tested principally based on observations of what happened within a case, rather than on observation of covariation of causes and effects across cases.

The process-tracing literature has also advanced increasingly elaborate conceptualizations of the different kinds of “probative value” that within-case evidence can yield. For instance, qualitative methodologists have explicated the logic of different test types (“hoop tests”, “smoking gun tests”, etc.) involving varying degrees of specificity and sensitivity (Van Evera (1997), Collier (2011), Mahoney (2012)).4 Other scholars have described the leverage provided by process-tracing evidence in Bayesian terms, moving from a set of discrete test types to a more continuous notion of probative value (Fairfield and Charman (2017), Bennett (2015), Humphreys and Jacobs (2015)).5
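The Bayesian reading of these test types can be made concrete with a short sketch. The numbers below—the prior, and the probabilities of observing the clue under the hypothesis and under its negation—are hypothetical, chosen to mimic a “hoop test”: the clue is very likely to be present if the hypothesis is true, but fairly common even if it is false, so failing the test damages the hypothesis more than passing it helps.

```python
def posterior(prior, p_clue_if_h, p_clue_if_not_h, clue_seen):
    """Update belief in hypothesis H (e.g., 'X caused Y in this case')
    after searching for a clue K, via Bayes' rule."""
    if clue_seen:
        like_h, like_not = p_clue_if_h, p_clue_if_not_h
    else:
        like_h, like_not = 1 - p_clue_if_h, 1 - p_clue_if_not_h
    return prior * like_h / (prior * like_h + (1 - prior) * like_not)

# Hypothetical hoop test: P(K|H) = 0.95, P(K|not H) = 0.60, prior = 0.5.
prior = 0.5
hoop_pass = posterior(prior, 0.95, 0.60, clue_seen=True)   # modest boost
hoop_fail = posterior(prior, 0.95, 0.60, clue_seen=False)  # sharp drop
```

Passing the test moves the posterior only modestly above the prior (to about 0.61), while failing it drags the posterior down to about 0.11—the asymmetry that the discrete test typology captures, here expressed on a continuous scale of probative value.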

Yet, conceptualizing the different ways in which probative value might operate leaves a fundamental question unanswered: what gives within-case evidence its probative value with respect to causal relations? We do not see a clear answer to this question in the current process-tracing literature. Implicitly—but worth rendering explicit—the probative value of a given piece of process-tracing evidence always depends on researcher beliefs that come from outside the case in question. We enter a research situation with a model of how the world works, and we use this model to make inferences given observed patterns in the data—while at the same time updating those models based on the data.

A key aim of this book is to demonstrate the role that models can—and, in our view, must—play in drawing case-level causal inferences and to clarify conditions under which these models can be defended. To do so we draw on an approach to specifying causal models developed originally in computer science and that predates most of the process-tracing literature. The broad approach, described in Cowell et al. (1999) and Pearl (2009), is consistent with the potential outcomes framework, and provides rules for updating on population- and case-level causal queries from different types of data.

In addition to clarifying the logic of qualitative inference, we will argue that such causal models can also enable the systematic integration of qualitative and quantitative forms of evidence. Social scientists are increasingly developing mixed-method research designs, research strategies that combine quantitative with qualitative forms of evidence. A typical mixed-methods study includes the estimation of causal effects using data from many cases as well as a detailed examination of the processes taking place in a few cases. Now-classic examples of this approach include Lieberman’s study of racial and regional dynamics in tax policy; Swank’s analysis of globalization and the welfare state (Swank (2002)); and Stokes’ study of neoliberal reform in Latin America. Major recent methodological texts provide intellectual justification for this trend toward mixing, characterizing small-$$n$$ and large-$$n$$ analysis as drawing on a single logic of inference (King, Keohane, and Verba (1994)) and/or as serving complementary functions (Collier, Brady, and Seawright (2010)). The American Political Science Association now has an organized section devoted in part to the promotion of multi-method investigations, and an emphasis on multiple strategies of inference is now embedded in guidelines from many research funding agencies.

However, while scholars frequently point to the benefits of mixing correlational and process-based inquiry (e.g., Collier, Brady, and Seawright (2010), p. 181), and have sometimes mapped out broad strategies of multi-method research design (Evan S. Lieberman (2005), Seawright and Gerring (2008), Seawright (2016)), they have rarely provided specific guidance on how the integration of inferential leverage should unfold. In particular, the literature has not supplied specific, formal procedures for aggregating findings—whether mutually reinforcing or contradictory—across different modes of analysis.6 As we aim to demonstrate in this book, however, grounding inference in causal models provides a very natural way of combining information of the $$X,Y$$ variety with information about the causal processes connecting $$X$$ and $$Y$$. The approach that we develop here can be readily addressed both to the case-oriented questions that tend to be of interest to qualitative scholars and to the population-oriented questions that tend to motivate quantitative inquiry.

As will become clear, when we structure our inquiry in terms of causal models, the conceptual distinction between qualitative and quantitative inference becomes hard to sustain. Notably, this is not because all causal inference depends fundamentally on covariation but because in a causal-model-based inference, what matters for the informativeness of a piece of evidence is how that evidence alters beliefs about a model, and in turn, a query. While the apparatus that we present is formal, the approach—in asking how pieces of evidence drawn from different parts of a process map on to a base of theoretical knowledge—is arguably most closely connected to process tracing in its core logic.

### 1.1.3 Connecting theory and empirics

The relationship between theory and empirics has been a surprisingly uncomfortable one in political science. In a prominent intervention, for instance, Clarke and Primo (2012) draw attention to and critique political scientists’ widespread reliance on the “hypothetico-deductive” (H-D) framework, in which a theory or model is elaborated, empirical predictions derived, and data sought to test these predictions and the model from which they derive. Clarke and Primo draw on decades of scholarship in the philosophy of science pointing to deep problems with the H-D framework, including with the idea that the truth of a model logically derived from first principles can be tested against evidence.

In fact, the relationship between theory and evidence in social inquiry is often surprisingly unclear both in qualitative and quantitative work. We can perhaps illustrate it best, however, by reference to qualitative work, where the centrality of theory to inference has been most emphasized. In process tracing, theory is what justifies inferences. In their classic text on case study approaches, George and Bennett (2005) describe process tracing as the search for evidence of “the causal process that a theory hypothesizes or implies” (6). Similarly, P. A. Hall (2003) conceptualizes the approach as testing for the causal-process-related observable implications of a theory; Mahoney (2010) indicates that the events for which process tracers go looking are those posited by theory (128); and Gerring (2006) describes theory as a source of predictions that the case-study analyst tests (116). Theory, in these accounts, is supposed to help us figure out where to look for discriminating evidence.

What is not clear, however, is how researchers can derive within-case empirical predictions from theory and how exactly doing so provides leverage on a causal question. From what elements of a theory can scholars derive informative within-case observations? How do the evidentiary requisites for drawing a causal inference, given a theory, depend on the particular causal question of interest—on whether, for instance, we are interested in identifying the cause of an outcome in a case, estimating an average causal effect, or identifying the pathway through which an effect is generated? Perhaps most confusingly, if the theory tells us what to look for to draw an inference, can the inferences be about the theory itself or are we constrained to making theory-dependent inferences?7 In short, how exactly can we ground causal inferences from within-case evidence in background knowledge about how the world works?

Much quantitative work in political science features a similarly weak integration between theory and research design. The modal inferential approach in quantitative work, both observational and experimental, involves looking for correlations between causes and outcomes, with less regard for intervening or surrounding causal relationships.8 If a theory suggests a set of relations, it is common to examine these separately—does $$A$$ cause $$B$$?; does $$B$$ cause $$C$$?; are relations stronger or weaker here or there?—without standard procedures for bringing the disparate pieces of evidence together to form theoretical conclusions. More attention has been paid to empirical implications of theoretical models than to theoretical implications of empirical models.

In this book, we seek to show how scholars can simultaneously make fuller and more explicit use of theoretical knowledge in designing their research projects and analyzing data, and make use of data to update theoretical models. Like Clarke and Primo, we treat models not as veridical accounts of the world but as maps: maps, based on prior theoretical knowledge, about causal relations in a domain of interest. Also, as in Clarke and Primo’s approach, we do not write down a model in order to test its veracity (though, in later chapters, we do discuss ways of justifying and evaluating models). Rather, our focus is on how we can systematically use causal models—in the sense of mobilizing background knowledge of the world—to guide our empirical strategies and inform our inferences. Grounding our empirical strategy in a model allows us, in turn, to update our model as we encounter the data, letting our theory evolve in light of the evidence.

## 1.2 Key contributions

This book draws on methods developed in the study of Bayesian networks, a field pioneered by scholars in computer science, statistics, and philosophy (see especially Pearl (2009)) to represent structures of causal relations between multiple variables. Although work in this tradition has had limited traction in political science to date, the literature on Bayesian networks and their graphical counterparts, directed acyclic graphs (DAGs), addresses very directly the kinds of problems with which qualitative and quantitative scholars routinely grapple.9

Drawing on this work, we show in the chapters that follow how a theory can be formalized as a causal model represented by a causal graph and a set of structural equations. Engaging in this modest degree of formalization yields enormous benefits. It allows us to specify a wide range of causal questions clearly and to assess what inferences to make about those queries from new data.
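To give a feel for what this formalization involves, here is a minimal sketch of a binary X → Y model. The structural equation for Y is one of four potential-outcome response patterns; the type labels and population shares below are ours and purely hypothetical, not the book’s notation.

```python
# Minimal binary X -> Y causal model: Y's structural equation is one of four
# response patterns ("nodal types"); the shares are hypothetical.
types = {
    "positive": (lambda x: x,      0.3),   # Y = X: X causes Y
    "negative": (lambda x: 1 - x,  0.1),   # Y = 1 - X: X prevents Y
    "always_1": (lambda x: 1,      0.2),   # Y = 1 regardless of X
    "always_0": (lambda x: 0,      0.4),   # Y = 0 regardless of X
}

# Population-level query: the average treatment effect of X on Y.
ate = sum(share * (f(1) - f(0)) for f, share in types.values())

# Case-level query: in a case with X = 1 and Y = 1, what is the probability
# that X caused Y? Only types with f(1) = 1 are consistent with the data
# (assuming X is assigned independently of type).
consistent = {name: share for name, (f, share) in types.items() if f(1) == 1}
prob_x_caused_y = consistent["positive"] / sum(consistent.values())
```

Under these invented shares, the average treatment effect is 0.2, and the probability that X caused Y in an X = 1, Y = 1 case is 0.6. The point of the sketch is that one and the same model object answers both a population-level and a case-level query—the unification the chapters that follow develop in full.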

For students engaging in process tracing, the benefits of this approach are multiple. In particular, the framework that we describe in this book provides:

• A clear language for defining causal questions of interest, consistent with advances using the potential outcomes framework and those using graphical causal models.

• A strategy for assessing the “probative value” of evidence drawn from different parts of any causal network. The framework provides a principled and transparent answer to the question: how should the observation of a given piece of data affect my causal beliefs about a case?

• A transparent, replicable method for aggregating inferences from observations drawn from different locations in a causal network. Having collected multiple pieces of evidence from different parts of a causal process or case context, what should I end up believing about the causal question of interest?

• A common approach for assessing a wide variety of queries (estimands). We can use the same apparatus to learn simultaneously about different case-level and population-level causal questions, such as “What caused the outcome in this case?” and “Through what pathway does this cause most commonly exert its effect?”

• Guidance for research design. Given finite resources, researchers must make choices about where to look for evidence. A causal model framework can help researchers assess, a priori, the relative expected informativeness of different evidentiary and case-selection strategies, conditional on how they think the world works and the question they want to answer.

The approach also offers a range of distinctive benefits to researchers seeking to engage in mixed-method inference and to learn about general causal relations, as well as about individual cases. The framework’s central payoff for multi-method research is the systematic integration of qualitative and quantitative information to answer any given causal query. We note that the form of integration that we pursue here differs from that offered in other accounts of multi-method research. In Seawright (2016)’s approach, for instance, one form of data—quantitative or qualitative—is always used to draw causal inferences, while the other form of data is used to test assumptions or improve measures employed in that primary inferential strategy. In the approach that we develop in this book, in contrast, we are always using all information available to update on causal quantities of interest.

In fact, within the causal models framework, there is no fundamental difference between quantitative and qualitative data, as both enter as values of nodes in a causal graph. This formalization—this reductive move—may well discomfit some readers. And we acknowledge that our approach undeniably involves a loss of some of what makes qualitative research distinct and valuable. Yet, this translation of qualitative and quantitative observations into a common, causal model framework offers major advantages. Beyond the integration of different forms of information, these advantages include:

• Transparency. The framework makes manifest precisely how each form of evidence enters into the analysis and shapes conclusions.
• Learning across levels of analysis. In a causal model approach, we use case-level information to learn about populations and general theory. At the same time, we use what we have learned about populations to sharpen our inferences about causal relations within individual cases.

• Cumulation of knowledge. A causal model framework provides a straightforward, principled mechanism for building on what we have already learned. As we see data, we update our model; and then our updated model can inform the inferences we draw from the next set of observations and give guidance to what sort of future data will be most beneficial. Models can, likewise, provide an explicit framework for positing and learning about the generalizability and portability of findings across research contexts.

• Guidance for research design. With a causal model in hand, we can formally assess key multi-method design choices, including the balance we should strike between breadth (the number of cases) and depth (intensiveness of analysis in individual cases) and the choice of cases for intensive analysis.
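The cumulation point can be illustrated with the simplest possible updating scheme, a Beta-Binomial model (the “studies” and their findings below are invented for illustration): each study’s posterior becomes the prior for the next, and the end result matches what we would get by pooling all the data at once.

```python
# Sequential Bayesian updating: today's posterior is tomorrow's prior.
# Illustrative belief about the share of cases with a positive effect,
# tracked as a Beta(alpha, beta) distribution over that share.
alpha, beta = 1.0, 1.0             # flat prior

study_1 = [1, 1, 0, 1]             # hypothetical binary findings, first study
alpha += sum(study_1)
beta += len(study_1) - sum(study_1)

study_2 = [0, 1, 0, 0, 1]          # a later study starts from the updated belief
alpha += sum(study_2)
beta += len(study_2) - sum(study_2)

posterior_mean = alpha / (alpha + beta)
```

Because the Beta distribution is conjugate to binary data, updating study by study yields exactly the same posterior as analyzing all nine observations together: accumulated knowledge enters each new analysis as a prior rather than being discarded. The causal models used in this book play the same role in a richer parameter space.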

Using causal models also has substantial implications for common methodological intuitions, advice, and practice. To touch on just a few of these implications:

• Our elaboration and application of model-based process tracing shows that, given plausible causal models, process tracing’s common focus on intervening causal chains may be much less productive than other empirical strategies, such as examining moderating conditions.

• Our examination of model-based case-selection indicates that for many common purposes there is nothing particularly special about “on the regression line” cases or those in which the outcome occurred, and there is nothing necessarily damning about selecting on the dependent variable. Rather, optimal case selection depends on factors that have to date received little attention, such as the population distribution of cases and the probative value of the available evidence.

• Our analysis of clue-selection as a decision problem shows that the probative value of a given piece of evidence cannot be assessed in isolation, but hinges critically on what we have already observed.

The basic analytical apparatus that we employ here is not new. Rather, we see the book’s goals as being of three kinds. First, we aim to import insights: to introduce social scientists to an approach that has received little attention in their fields but that can be useful for addressing the sorts of causal questions with which they are commonly preoccupied. As a model-based approach, it is a framework especially well suited to fields of inquiry in which exogeneity frequently cannot be assumed by design—that is, in which we often have no choice but to be engineers.

Second, we draw connections between the Bayesian networks approach and key concerns and challenges with which the social sciences routinely grapple. Working with causal models and DAGs most naturally connects to concerns about confounding and identification that have been central to much quantitative methodological development. Yet we also show how causal models can address issues central to process tracing, such as how to select cases for examination, how to think about the probative value of causal process observations, and how to structure our search for evidence, given finite resources.

Third, we provide a set of usable tools for implementing the approach. We provide software, the CausalQueries package, that researchers can use to make research design choices and draw inferences from the data.

There are also important limits to this book’s contributions and aims. First, while we make use of Bayesian inference throughout, we do not engage here with fundamental debates over or critiques of Bayesianism itself. (For excellent treatments of some of the deeper issues and debates, see, for instance, Earman (1992) and Fairfield and Charman (2017).)

Second, this book does not address matters of data-collection (e.g., conducting interviews, searching for archival documents) or the construction of measures. For the most part, we assume that reliable data can be gathered (even if it is costly to do so), and we bracket the challenges that surround the measurement process itself.10 That said, a core concern of the book is using causal models to identify the kinds of evidence that qualitative researchers will want to collect. In Chapter 7, we show how causal models can tell us whether observing an element of a causal process is potentially informative about a causal question; and in Chapter 12 we demonstrate how we can use models to assess the likely learning that will arise from different clue-selection strategies. We also address the problem of measurement error in Chapter 9, showing how we can use causal models to learn about error from the data.

Finally, while we will often refer to the use of causal models for “qualitative” analysis, we do not seek to assimilate all forms of qualitative inquiry into a causal models framework. Our focus is on work that is squarely addressed to matters of causation; in particular, the logic that we elaborate is most closely connected to the method of process tracing. More generally, the formalization that we make use of here—the graphical representation of beliefs and the application of mathematical operations to numerically coded observations—will surely strike some readers as reductive and not particularly “qualitative.” It is almost certainly the case that, as we formalize, we leave behind many forms of information that qualitative researchers gather and make use of. Our aim in this book is not to discount the importance of those aspects of qualitative inquiry that resist formalization, but to show some of the things we can do if we are willing to formalize in a particular way.

This book has four parts.

In the first part we present the framework’s foundational concepts. In Chapter 2, following a review of the potential outcomes approach to causality, we introduce the concept and key components of a causal model. Chapter 3 illustrates how we can represent causal beliefs in the form of causal models by translating the arguments of several prominent works of political science into causal models. In Chapter 4, we set out a range of causal questions that researchers might want to address—including questions about case-level causal effects, population-level effects, and mechanisms—and define these queries within a causal model framework. Chapter 5 offers a primer on the key ideas in Bayesian inference that we will mobilize in later sections of the book. In Chapter 6, we map between causal models and theories, showing how we can think of any causal model as situated within a hierarchy of complexity: within this hierarchy, any causal model can be justified by references to a “lower level,” more detailed model that offers a theory of why things work the way they do at the higher level. This conceptualization is crucial insofar as we use more detailed (lower-level) models to generate empirical leverage on relationships represented in simpler, higher-level models.

In the second part, we show how we can use causal models to undertake process-tracing and mixed-method inference. Chapter 7 lays out the logic of case-level inference from causal models: the central idea here is that what we learn from evidence is always conditional on the prior beliefs embedded in our model. In Chapter 8, we illustrate model-based process-tracing with two applications: one on the substantive issue of economic inequality’s effects on democratization and a second on the relationship between political institutions and economic development. Chapter 9 moves to mixed-data problems: situations in which a researcher wants to use “quantitative” (broadly, $$X,Y$$) data on a large set of cases and more detailed (“qualitative”) data on some subset of these cases. We show how we can use any arbitrary mix of observations across a sample of any size (greater than 1) to update on all causal parameters in a model, and then use the updated model to address the full range of general and case-level queries of interest. In Chapter 10, we illustrate this integrative approach by revisiting the applications introduced in Chapter 8. Finally, in Chapter 11, we take the project of integration a step further by showing how we can use models to integrate findings across studies and across settings. We show, for instance, how we can learn jointly from the data generated by an observational study and an experimental study of the same causal domain and how models can help us reason in principled ways about the transportability of findings across contexts.

In the third part, we unpack what causal models can contribute to research design. In terms of the Model-Inquiry-Data strategy-Answer strategy framework (MIDA) from Blair, Coppock, and Humphreys (2023), we can think of chapters 2, 4, and 5 as corresponding to Models, Inquiries, and Answer strategies, while Data strategies are dealt with in this third part. Across Chapters 12, 13, and 14 we demonstrate how researchers can mobilize their models, as well as prior observations, to determine what kind of new evidence is likely to be most informative about the query of interest, how to strike the balance between extensiveness and intensiveness of analysis, and which cases to select for in-depth process tracing. Consistent with the principle in Blair, Coppock, and Humphreys (2023) to design holistically, we find that questions around data selection strategies cannot be answered in isolation from model and query specification.

The fourth and final part of the book steps back to call the model-based approach into question. Until this point, we will have been advocating an embrace of models to aid inference. But the dangers of doing so are demonstrably large. The key problem is that, with model-based inference, our inferences are only as good as the model we start with. In the end, while we advocate a focus on models, we know that skeptics are right to distrust them. The final part of the book approaches this problem from two perspectives. In Chapter 15, we demonstrate the possibility of justifying models from external evidence, though we do not pretend that the conditions for doing so will arise commonly. In Chapter 16, drawing on common practice in Bayesian statistics, we present a set of strategies that researchers can use to evaluate and compare the validity of models, and to investigate the degree to which findings hinge on model assumptions. The key point here is that using a model does not require a commitment to it: the model itself can provide indications that it is doing a poor job.

In the concluding chapter we summarize what we see as the main advantages of a causal-model-based approach to inference, draw out a set of key concerns and limitations of the framework, and identify what we see as the key avenues for future progress in model-based inference.

Here we go.

### References

Aronow, Peter M, and Benjamin T Miller. 2019. Foundations of Agnostic Statistics. Cambridge University Press.
Bennett, Andrew. 2015. “Appendix.” In Process Tracing: From Metaphor to Analytic Tool, edited by Andrew Bennett and Jeffrey T. Checkel. New York: Cambridge University Press.
Bennett, Andrew, and Jeffrey T. Checkel. 2015. Process Tracing: From Metaphor to Analytic Tool. New York: Cambridge University Press.
Blair, Graeme, Alexander Coppock, and Macartan Humphreys. 2023. Research Design: Declaration, Diagnosis, Redesign. Princeton University Press.
Brady, H. E., and D. Collier. 2010. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, MD: Rowman & Littlefield. http://books.google.ca/books?id=djovjEXZYccC.
Clarke, Kevin A, and David M Primo. 2012. A Model Discipline: Political Science and the Logic of Representations. New York: Oxford University Press.
Collier, David. 2010. “Sources of Leverage in Causal Inference: Toward an Alternative View of Methodology.” In Rethinking Social Inquiry: Diverse Tools, Shared Standards, edited by David Collier and Henry E. Brady, 161–99. Lanham, MD: Rowman & Littlefield.
———. 2011. “Understanding Process Tracing.” PS: Political Science & Politics 44 (4): 823–30.
Cook, Thomas D. 2018. “Twenty-Six Assumptions That Have to Be Met If Single Random Assignment Experiments Are to Warrant ‘Gold Standard’ Status: A Commentary on Deaton and Cartwright.” Social Science & Medicine.
Coppock, Alexander, and Dipin Kaur. 2022. “Qualitative Imputation of Missing Potential Outcomes.” American Journal of Political Science.
Cowell, Robert G, Philip Dawid, Steffen L Lauritzen, and David J Spiegelhalter. 1999. Probabilistic Networks and Expert Systems. Springer.
Creswell, J. W., and Amanda L. Garrett. 2008. “The ‘Movement’ of Mixed Methods Research and the Role of Educators.” South African Journal of Education 28: 321–33. http://www.scielo.org.za/scielo.php?pid=S0256-01002008000300003&script=sci_arttext&tlng=pt.
Druckman, James N, Donald P Green, James H Kuklinski, and Arthur Lupia. 2011. “Experimentation in Political Science.” In Handbook of Experimental Political Science, edited by James N Druckman, Donald P Green, James H Kuklinski, and Arthur Lupia, 3–14. New York: Cambridge University Press.
Dunning, T. 2012. Natural Experiments in the Social Sciences: A Design-Based Approach. Strategies for Social Inquiry. Cambridge University Press. http://books.google.de/books?id=ThxVBFZJp0UC.
Earman, John. 1992. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.
Fairfield, Tasha, and Andrew Charman. 2017. “Explicit Bayesian Analysis for Process Tracing: Guidelines, Opportunities, and Caveats.” Political Analysis 25 (3): 363–80.
Fisher, Ronald A. 2017. Design of Experiments. New York: Hafner.
García, Fernando Martel, and Leonard Wantchekon. 2015. “A Graphical Approximation to Generalization: Definitions and Diagrams.” Journal of Globalization and Development 6 (1): 71–86.
George, Alexander L., and Andrew A. Bennett. 2005. Case Studies and Theory Development in the Social Sciences. A BCSIA Book. MIT Press. http://books.google.ch/books?id=JEGzE6ExN-gC.
George, Alexander L., and Timothy J McKeown. 1985. “Case Studies and Theories of Organizational Decision Making.” Advances in Information Processing in Organizations 2 (1): 21–58.
Gerber, Alan S., Donald P. Green, and Edward H. Kaplan. 2004. “The Illusion of Learning from Observational Research.” In Problems and Methods in the Study of Politics, edited by Ian Shapiro, Rogers M. Smith, and Tarek E. Masoud, 251–73. Cambridge, UK: Cambridge University Press.
Gerring, John. 2006. Case Study Research: Principles and Practices. New York: Cambridge University Press.
Glynn, Adam, and Kevin Quinn. 2007. “Non-Parametric Mechanisms and Causal Modeling.” working paper.
———. 2011. “Why Process Matters for Causal Inference.” Political Analysis 19: 273–86.
Goertz, Gary, and James Mahoney. 2012. A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences. Princeton, NJ: Princeton University Press.
Gordon, Sanford C, and Alastair Smith. 2004. “Quantitative Leverage Through Qualitative Knowledge: Augmenting the Statistical Analysis of Complex Causes.” Political Analysis 12 (3): 233–55.
Hall, Peter A. 2003. “Aligning Ontology and Methodology in Comparative Research.” In Comparative Historical Analysis in the Social Sciences, edited by James Mahoney and Dietrich Rueschemeyer, 373–404. Cambridge, UK; New York: Cambridge University Press; Cambridge University Press.
Humphreys, Macartan, and Alan M Jacobs. 2015. “Mixing Methods: A Bayesian Approach.” American Political Science Review 109 (04): 653–73.
Humphreys, Macartan, and Jeremy M Weinstein. 2009. “Field Experiments and the Political Economy of Development.” Annual Review of Political Science 12: 367–78.
King, Gary. 1998. Unifying Political Methodology: The Likelihood Theory of Statistical Inference. University of Michigan Press.
King, Gary, Robert Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton University Press. http://books.google.de/books?id=A7VFF-JR3b8C.
Lieberman, Evan S. 2003. Race and Regionalism in the Politics of Taxation in Brazil and South Africa. Cambridge Studies in Comparative Politics. Cambridge University Press. http://books.google.de/books?id=S6BOgyL-KYQC.
———. 2005. “Nested Analysis as a Mixed-Method Strategy for Comparative Research.” American Political Science Review 99 (July): 435–52. https://doi.org/10.1017/S0003055405051762.
———. 2010. “Bridging the Qualitative-Quantitative Divide: Best Practices in the Development of Historically Oriented Replication Databases.” Annual Review of Political Science 13: 37–59.
Mahoney, James. 2010. “After KKV: The New Methodology of Qualitative Research.” World Politics 62 (1): 120–47.
———. 2012. “The Logic of Process Tracing Tests in the Social Sciences.” Sociological Methods and Research 41 (4): 570–97. http://EconPapers.repec.org/RePEc:sae:somere:v:41:y:2012:i:4:p:570-597.
Mosley, Layna. 2013. Interview Research in Political Science. Cornell University Press.
Palfrey, Thomas R. 2009. “Laboratory Experiments in Political Economy.” Annual Review of Political Science 12: 379–88.
Pearl, Judea. 2009. Causality: Models, Reasoning, and Inference. 2nd ed. New York: Cambridge University Press.
Rohlfing, Ingo. 2013. “Comparative Hypothesis Testing via Process Tracing.” Sociological Methods & Research 43 (04): 606–42.
Rohrer, Julia M. 2018. “Thinking Clearly about Correlations and Causation: Graphical Causal Models for Observational Data.” Advances in Methods and Practices in Psychological Science 1 (1): 27–42.
Rubin, Donald B. 1974. “Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies.” Journal of Educational Psychology 66: 688–701.
Seawright, Jason. 2016. Multi-Method Social Science: Combining Qualitative and Quantitative Tools. New York: Cambridge University Press.
Seawright, Jason, and John Gerring. 2008. “Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options.” Political Research Quarterly 61 (2): 294–308. https://doi.org/10.1177/1065912907313077.
Small, Mario Luis. 2011. “How to Conduct a Mixed Methods Study: Recent Trends in a Rapidly Growing Literature.” Annual Review of Sociology 37: 57–86.
Splawa-Neyman, Jerzy, D. M. Dabrowska, and T. P. Speed. 1990. “On the Application of Probability Theory to Agricultural Experiments. Essay on Principles. Section 9.” Statistical Science 5 (4): 465–72.
Stokes, S. C. 2001. Mandates and Democracy: Neoliberalism by Surprise in Latin America. Cambridge Studies in Comparative Politics. Cambridge University Press. http://books.google.de/books?id=-cdcSVFZRU8C.
Swank, D. 2002. Global Capital, Political Institutions, and Policy Change in Developed Welfare States. Cambridge Studies in Comparative Politics. Cambridge University Press. http://books.google.de/books?id=p3F-agj4CXcC.
Thelen, Kathleen, and James Mahoney. 2015. “Comparative-Historical Analysis in Contemporary Political Science.” In Advances in Comparative-Historical Analysis, 1–36. New York: Cambridge University Press.
Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca, NY: Cornell University Press.
Waldner, David. 2015. “What Makes Process Tracing Good? Causal Mechanisms, Causal Inference, and the Completeness Standard in Comparative Politics.” In Process Tracing: From Metaphor to Analytic Tool, edited by Andrew Bennett and Jeffrey Checkel, 126–52. New York: Cambridge University Press.
Weller, Nicholas, and Jeb Barnes. 2014. Finding Pathways: Mixed-Method Research for Studying Causal Mechanisms. New York: Cambridge University Press.
Western, Bruce, and Simon Jackman. 1994. “Bayesian Inference for Comparative Research.” American Political Science Review 88 (02): 412–23.

1. We will sometimes follow convention and refer to “within case” and “cross-case” observations. However, in the framework we present in this book, all data are data on cases and enter into analysis in the same fundamental way: we are always asking how consistent a given data pattern is with alternative sets of beliefs.↩︎

2. Of course, even when randomization is possible, the conditions needed for clean inference from an experiment can sometimes be difficult to meet.↩︎

3. As we describe in Humphreys and Jacobs (2015), the term “qualitative research” means many different things to different scholars, and there are multiple approaches to mixing qualitative and quantitative methods. There we distinguish between approaches that suggest that qualitative and quantitative approaches address distinct, if complementary, questions; those that suggest that they involve distinct measurement strategies; and those that suggest that they employ distinct inferential logics. The approach that we employ in Humphreys and Jacobs (2015) connects most with the third family of approaches. Most closely related, in political science, is work in Glynn and Quinn (2011), in which researchers use knowledge about the empirical joint distribution of the treatment variable, the outcome variable, and a post-treatment variable, alongside assumptions about how causal processes operate, to tighten estimated bounds on causal effects. In the present book, however, we move toward a position in which fundamental differences between qualitative and quantitative inference tend to dissolve, with all inference drawing on what might be considered a “qualitative” logic in which the researcher’s task is to confront a pattern of evidence with a theoretical logic.↩︎

4. A smoking-gun test is a test that seeks information that is only plausibly present if a hypothesis is true (thus generating strong evidence for the hypothesis if passed); a hoop test seeks data that should certainly be present if a proposition is true (thus generating strong evidence against the hypothesis if failed); and a doubly decisive test is both smoking-gun and hoop (for an expanded typology, see Rohlfing (2013)).↩︎

5. In Humphreys and Jacobs (2015), we use a fully Bayesian structure to generalize Van Evera’s four test types in two ways: first, by allowing the probative values of clues to be continuous; and, second, by allowing for researcher uncertainty (and, in turn, updating) over these values. In the Bayesian formulation, process-tracing information is not used to conduct tests that are simply “passed” or “failed,” but rather to update beliefs about different propositions.↩︎
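
The updating logic behind these test types can be written directly as Bayes’ rule. The following sketch is purely illustrative: the `posterior` function and the specific probative values are invented for the example, not drawn from any of the works cited.

```python
def posterior(prior, p_clue_if_true, p_clue_if_false, clue_observed=True):
    """Bayes' rule for a binary hypothesis H given a binary clue K.

    p_clue_if_true  = P(K | H)
    p_clue_if_false = P(K | not H)
    """
    if clue_observed:
        numerator = p_clue_if_true * prior
        denominator = numerator + p_clue_if_false * (1 - prior)
    else:
        numerator = (1 - p_clue_if_true) * prior
        denominator = numerator + (1 - p_clue_if_false) * (1 - prior)
    return numerator / denominator

# Smoking gun: the clue is rarely seen, but far more plausible if H is true.
# Passing lifts a 0.5 prior sharply; failing barely moves it.
print(posterior(0.5, 0.30, 0.02, clue_observed=True))   # ≈ 0.94
print(posterior(0.5, 0.30, 0.02, clue_observed=False))  # ≈ 0.42

# Hoop: the clue should almost surely be present if H is true.
# Failing cuts belief sharply; passing adds only modest support.
print(posterior(0.5, 0.95, 0.60, clue_observed=False))  # ≈ 0.11
print(posterior(0.5, 0.95, 0.60, clue_observed=True))   # ≈ 0.61
```

Note how a single formula generates both asymmetries: a smoking-gun test is informative mainly when passed, a hoop test mainly when failed, and intermediate probative values yield intermediate degrees of updating rather than a discrete “pass” or “fail.”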

6. A small number of exceptions stand out. In the approach suggested by Gordon and Smith (2004), for instance, available expert (possibly imperfect) knowledge regarding the operative causal mechanisms for a small number of cases can be used to anchor the statistical estimation procedure in a large-N study. Western and Jackman (1994) propose a Bayesian approach in which qualitative information shapes subjective priors which in turn affect inferences from quantitative data. Relatedly, in Glynn and Quinn (2011), researchers use knowledge about the empirical joint distribution of the treatment variable, the outcome variable, and a post-treatment variable, alongside assumptions about how causal processes operate, to tighten estimated bounds on causal effects. Coppock and Kaur (2022) show how bounds can be placed on causal quantities following qualitative imputation of missing potential outcomes for some or all cases. Seawright (2016) and Dunning (2012) describe approaches in which case studies are used to test the assumptions underlying statistical inferences, such as the assumption of no-confounding or the stable-unit treatment value assumption (SUTVA).↩︎
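
To make the bounds logic concrete, here is a minimal sketch, with invented data, of how qualitatively imputing a missing potential outcome tightens worst-case bounds on an average treatment effect. The `ate_bounds` helper and the eight-case dataset are hypothetical, constructed only to illustrate the idea.

```python
def ate_bounds(y1, y0):
    """Worst/best-case bounds on the ATE for a binary outcome.

    y1, y0: per-case potential outcomes under treatment and control,
    with None marking values that are unobserved (and unimputed).
    """
    n = len(y1)
    fill = lambda ys, v: [v if y is None else y for y in ys]
    lower = sum(fill(y1, 0)) / n - sum(fill(y0, 1)) / n
    upper = sum(fill(y1, 1)) / n - sum(fill(y0, 0)) / n
    return lower, upper

# Eight cases: four treated, four control; each reveals one potential outcome.
y1 = [1, 1, 0, 1, None, None, None, None]
y0 = [None, None, None, None, 0, 1, 0, 0]
print(ate_bounds(y1, y0))  # (-0.25, 0.75): width 1 with no imputation

# Suppose in-depth process tracing of case 0 convinces us that, absent
# treatment, the outcome would not have occurred: impute Y(0) = 0 there.
y0_imputed = [0] + y0[1:]
print(ate_bounds(y1, y0_imputed))  # (-0.125, 0.75): the bounds tighten
```

Each additional imputed potential outcome shrinks the width of the bounds by 1/n, which is one way to see the inferential payoff that case-level qualitative knowledge can deliver within a large-N design.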

7. More precisely it is not always clear whether the strategy is of the form: (1) “if theory $$T$$ is correct we should observe $$K$$”, with evidence on $$K$$ used to update beliefs about the theory; or (2) “According to theory $$T$$, if $$A$$ caused $$B$$ then we should observe $$K$$” in which case $$K$$ is informative about whether $$A$$ caused $$B$$ under $$T$$.↩︎

8. There are of course many exceptions, including work that uses structural equation modeling, and research that focuses specifically on understanding heterogeneity and mediation processes.↩︎

9. For application to quantitative analysis strategies in political science, Rohrer (2018) and Glynn and Quinn (2007) give clear introductions to how these methods can be used to motivate strategies for conditioning and adjusting for causal inference. García and Wantchekon (2015) demonstrate how these methods can be used to assess claims of external validity. With a focus on qualitative methods, Waldner (2015) uses causal diagrams to lay out a “completeness standard” for good process tracing. Weller and Barnes (2014) employ graphs to conceptualize the different possible pathways between causal and outcome variables among which qualitative researchers may want to distinguish. Generally, in discussions of qualitative methodology, graphs are used to capture core features of theoretical accounts, but are not developed specifically to ensure a representation of the kind of independence relations implied by structural causal models (notably, what is called in the literature the “Markov condition”). Moreover, efforts to tie these causal graphs to probative observations, as in Waldner (2015), are generally limited to identifying steps in a causal chain that the researcher should seek to observe.↩︎

10. See Mosley (2013) for a treatment of complexities around interview research in political science and Evan S. Lieberman (2010) on strategies for historically oriented research.↩︎