As a subdiscipline of political science, comparative politics aims to explain and understand the dynamics of political power as practiced throughout the world. In pursuit of this goal, comparativists have developed a range of methods to compare the large number of vastly different political systems they study. While philosophers, historians, and theologians have long crafted political theory in a systematic fashion, the establishment of modern political science departments and the rapid increase in their number during the 20th century inspired a fruitful debate about the appropriate means to carry out comparative political research. In the early 21st century, there is growing recognition of the necessity of multiple methods, and recent methodological debates have centered on the best ways to enhance dialogue between scholars from different methodological backgrounds who nevertheless share substantive concerns.
This research paper provides an overview of comparative methods as understood by their practitioners. It presents a number of alternative approaches, discusses their implications, and shows how these approaches have been used in exemplary works in the field. The paper ends with a discussion of current trends in comparative methodology and how they might impact the future of the discipline.
II. Comparative Methods
Taking the natural sciences as its model, political science has sought to create theories to explain and predict various aspects of political life. Indeed, political scientists have striven to shape their craft scientifically by putting in place and advocating systematic research processes aimed at cumulating knowledge. In this sense, the choice of method is but one step in a larger research process that usually includes a clear delineation of the research question, an examination of the existent theory related to the problem, a description of the data to be used, a method of data analysis, and discussion of the potential contribution to theory. The sum of these parts is referred to as the research design, and comparativists generally agree that it should be both logically consistent and justified by the problem it studies. Therefore, in assessing the range of comparative methods, it is important to look at how comparative methods fit with various aspects of research design.
The most influential early work on research design was Adam Przeworski and Henry Teune’s (1970) Logic of Comparative Social Inquiry. Their work aimed at designing research that would develop general social theory by confirming, through comparative research, hypothetical statements that replaced proper names of social systems with names of variables. They posited a basic distinction between what they called most similar and most different systems research designs. In most similar systems research designs, cases are chosen on the basis of assumed similarities at the systemic level (state, culture, nation, etc.), whereas in most different systems designs, the type of cases and the level of analysis emerge from the analysis of theoretically relevant factors in data that assume the homogeneity of all units. Although Przeworski and Teune did not deny that there was some value in most similar systems designs, their delineation of comparative research was particularly rigid insofar as it asserted that the most different systems design, whose units were defined through a random multistep sample of all social systems, was the only research design that could support universal generalizations. Nevertheless, their argument was enormously influential and sparked an invaluable debate within the field about the goals of research and the importance of research design.
Przeworski and Teune’s argument went well beyond the matter of choosing cases, however, and sought to emphasize the scientific quality of the comparative method. Arend Lijphart (1971) furthered this logic, depicting the comparative method as a way of achieving scientific explanation, albeit one with certain limitations. Chief among the difficulties facing comparativists, Lijphart contended, was constructing parsimonious theories based on research that inherently involved many variables but few cases, especially cross-national research. This difficulty was not seen as debilitating, however, and many of the ways that Lijphart suggested to mitigate the problem—including conceptual and statistical techniques for reducing the number of variables and increasing the number of cases—continue to be used today (see section titled “Scope” below).
More recent methodological debates, however, center less on justifying a scientific approach to political phenomena than on arguing a best fit between research question and the types of data that will be gathered, how they will be analyzed, and the relationship between data analysis and theory. Although the mainstream methodology literature in comparative politics continues to advocate a quantitative, statistical approach to studying comparative politics, there is growing recognition that the methodological landscape has become far more complex. It can be roughly divided into two categories: empirical and formal methods. Empirical methodologies are largely divided between quantitative and qualitative traditions, and the formal methods used in comparative politics are dominated by game theoretic models of rational choice theory (Laitin, 2002).
A. Research as a Mediated Encounter Between Theory and Fact
Whether perceived as a constant dialogue or as one controlled instance, comparative political research can be usefully described as the researcher’s fruitful encounter with theory and fact. Comparative methods mediate this encounter, providing researchers with systematic ways to produce knowledge based on what was previously understood about an issue and what can be observed in the world. They help the researcher explain connections, concepts, and causes that are not observable without systematic analysis. Thus, comparative methods are at the center of the systematic processes political scientists use to facilitate the creation and transmission of knowledge.
The choice of method impacts or is impacted by the decisions scholars make at every point in the research process, from choosing the research question to presenting their conclusions. Of course, there is a great deal of variation within methodological traditions, as well as some overlap in their application and potentialities. In fact, the differences presented here are not rigid, and much of the methodological innovation in the field rests on the ability of researchers to create internally consistent research designs that cannot be neatly categorized on either side of traditional methodological divisions. Nevertheless, for the purpose of this research paper, it is useful to sketch these ideal types based on their use in the discipline. What follows is a consideration of the role of comparative methods as a mediator in three aspects of research: theory generation and the goals of research, methods of analysis, and theory assessment.
B. Methods of Theory Generation and Goals of Research
Theory generation in political science can be carried out either inductively or deductively. According to Gerardo Munck and Richard Snyder (2007), the overwhelming majority of research in comparative politics is inductive. The inductive approach to theory is one in which theory flows from the analysis of observed facts. In other words, theoretical generalizations are built on the basis of specific facts, usually the data analyzed by the researcher. Although both qualitative and quantitative researchers engage in inductive analysis, game theoretic formal modelers of rational choice theory typically do not. Whichever method is used, inductive research typically contributes to generating new theories by specifying concepts and variables or by introducing new hypotheses to be tested. Inductive research is also particularly useful for studying areas of knowledge about which little is known and topics that lack a well-developed conceptual vocabulary. Comparative relationships between religion and the state are one such area of research. Jonathan Fox and Todd Sandler (2003) approach this issue area from the quantitative tradition in their article “Quantifying Religion,” which develops a series of variables for measuring religion in comparative studies. In this case, the notion of variable is roughly equivalent to the concept that would result from similarly inductive qualitative work. Such concepts and variables provide essential components for deductive theorizing.
Deductive research begins with a theoretically derived hypothesis (King, Keohane, & Verba, 1994). As with the inductive approach, deductive theorizing is used by quantitative and qualitative researchers alike; it also forms the sturdy basis on which rational choice game theorists model action. A deductive approach to theory builds on a discipline’s collective knowledge about a subject by encouraging researchers to form specific, testable hypotheses deduced from theoretical maxims and to submit those hypotheses to empirical tests. As such, the principal benefit of deductive research is its claim to produce cumulative knowledge. Another important benefit is the simple and powerful process that deductive theory generation prescribes for the conduct of research. Deductive reasoning requires researchers to deduce specific, observable implications of broad-gauged theories. In that way, it allows comparativists to address the most enduring questions in the field by using relatively little data (Geddes, 2003, offers a step-by-step procedure for formulating such questions). A potential weakness of the deductive approach is that it assumes that researchers have already amassed a great deal of coherent theoretical knowledge on a given topic. Indeed, just as inductive reasoning, in its search for ever greater detail, risks indefinitely delaying theory development, deductive reasoning assumes that much of the theorizing has already been done.
1. Goals of Research
Although comparativists are united around their aim to explain and understand political phenomena around the world, their choice of method constrains them in the types of arguments they can make. Designing Social Inquiry, by Gary King, Robert Keohane, and Sidney Verba (1994), the most influential statement of the quantitative approach in the field, sums up the goal of research in a single word: inference. Inference allows researchers to extend their findings to other situations not directly observed by the initial study. In order to improve theory, King et al. outline a systematic, scientific procedure for testing theory aimed at producing valid descriptive and, preferably, causal inferences. A related goal of the quantitative approach is to maximize the researchers’ leverage in explaining the phenomena of interest by allowing researchers to use the least amount of data to make the broadest generalization possible. While the authors of Designing Social Inquiry contend that their approach is suitable for both quantitative and qualitative work, most scholars within the qualitative tradition take a different view.
Because qualitative research has the largest, most variegated literature, as well as a plethora of distinct methodological tools, its theoretical goals are somewhat more diffuse. However, it is often said that whereas quantitative researchers are primarily concerned with explaining, qualitative researchers seek to understand. Although many qualitative methods seek causal explanations, practitioners in this tradition are more likely to be concerned with understanding how a phenomenon came about than with explaining why it did. In other words, they tend to be more concerned with process than with probability or prediction. Charles Ragin, who has developed some of the most enduring qualitative tools (see, e.g., Ragin, 1987, 2000), describes the interpretive goals of qualitative research as “making sense of cases, selected because they are substantively or theoretically important” (Ragin, 2004, p. 109). Indeed, the pursuit of historical nuance and detailed narrative explain the tendency of qualitative researchers to focus on a small number of cases.
Whereas quantitative researchers seek to explain and qualitative researchers to understand, game theoretic modelers of rational choice theory aim their analysis at simplifying complex processes in order to predict. Rational choice–driven game theory is an individual-level theory that assumes that individuals attempt to maximize their utility, that decisions are made at points of equilibrium when “players” cannot increase their utility by making an additional move, and that the rules of the game are exogenous to the game itself (see Munck, 2001). Because these three conditions are assumed to be universal aspects of individual behavior, game theory purports to be applicable to any substantive question and able, therefore, to produce cumulative knowledge (for an important critique of the use of game theory in political science, see Green & Shapiro, 1994). While game theory is not the only framework used to carry out formal work in political science, it is by far the most common. Another formal approach is network analysis, which, although not as common in comparative politics, has already contributed to some substantive areas in the field and is poised to become an increasingly important method in the coming years (see Gould, 2003).
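The equilibrium notion invoked above can be made concrete in a few lines of code. The following sketch is our own illustration (not drawn from any work cited here): it finds the pure-strategy Nash equilibria of a 2 × 2 game, i.e., the cells in which neither “player” can increase their utility by unilaterally switching strategies, using the familiar Prisoner’s Dilemma payoffs.

```python
# Illustrative sketch: pure-strategy Nash equilibria in a 2x2 game.
# An equilibrium is a strategy pair where neither player can raise
# their payoff by unilaterally deviating.

# Payoffs are (row_player, col_player); strategies: 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def pure_nash(payoffs):
    equilibria = []
    for (r, c), (u_row, u_col) in payoffs.items():
        # Payoff each player would get by switching to the other strategy
        row_deviation = payoffs[(1 - r, c)][0]
        col_deviation = payoffs[(r, 1 - c)][1]
        if u_row >= row_deviation and u_col >= col_deviation:
            equilibria.append((r, c))
    return equilibria

print(pure_nash(payoffs))  # [(1, 1)] — mutual defection is the equilibrium
```

Note how the model’s prediction follows mechanically from the payoff structure and the equilibrium assumption alone; this is the simplicity and precision that rational choice theorists claim for the approach.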
C. Methods of Analysis
Comparative methods mediate the scholarly encounter with observable facts by providing researchers with tools for analyzing data. Quantitative, qualitative, and formal methodological tools are differentiated by how they limit the scope of their research, how they measure the relevant variables or case aspects, and how they assess the theories they engage with.
Scope refers to the plausible applicability of a theory to a defined group of political situations or cases. In other words, the scope of a project informs its readers as to what precisely the research claims to create knowledge about and the relevance of its findings to other contexts and cases. Although many comparativists are concerned with the same “big questions,” they disagree about which types of evidence should be employed to theorize about such questions. Thus, scope is the aspect of theory most closely related to data collection and analysis and depends in large part on the choices that a researcher makes in this regard.
The scope of a quantitative research project involves specifying the statistical model to be used, including the independent and dependent variables, and the number and quality of cases to be studied. It should be mentioned here that statistical models, which some consider formal (King, 1989), are distinguished from game theoretic formal models of rational choice theory by the fact that variables in statistical models are typically closer representations of observable phenomena (Morton, 1991, p. 61). In terms of case selection, research norms in the quantitative tradition encourage the consideration of the entire universe of cases relevant to the phenomenon under study. What should be considered a case depends on the hypothesis and the unit for which it predicts outcomes. Thus, case may refer to a variety of units of analysis (e.g., state, party, city) or an event (e.g., civil war, policy selection, regime change). When it is not possible for a researcher to study the entire universe of cases, a sample from the universe should be taken in accordance with some substantive aspect of the theory (e.g., a given period), preferably at random, and in no case by selecting on the value of the dependent variable.
Certainly, choosing cases that have all experienced a similar dependent outcome in order to explain that very outcome leads to theoretical distortion in quantitative tests of theory. Yet resisting the temptation is not always intuitive. In fact, if one wants to explain why some states experience rapid economic growth in the wake of revolutions, it might seem logical to focus first on those cases in which such growth is known to have occurred, and only then attempt to explain what differentiates these cases from others. This would be a logical sequence for a qualitative researcher interested in developing in-depth knowledge of anomalous political processes or counterintuitive cases. However, if the researcher is more concerned with testing for the impact of theoretically relevant factors on the dependent variable, a strategy that begins with the universe of all cases would be a better fit. In fact, what distinguishes these research strategies from each other is not the absolute quality of the research involved but rather the scope of the arguments made possible by different types of research design (see Geddes, 2003, Chapter 3, for a more thorough analysis of this problem and its implications for comparative work).
Another important problem confronting quantitative researchers is the problem of indeterminacy. Indeterminacy usually springs from two sources related to specification of the model. The first is referred to as the many variables, small-N problem identified by Lijphart (1971) and others. This problem arises when the number of inferences implied by a statistical model exceeds the number of cases. In such research designs, there are simply too few cases to test all the causal factors suggested by the theory. The second most common reason for indeterminacy is multicollinearity. This problem arises when the explanatory variables of a statistical model are not independent of each other. For example, a study that seeks to explain the level of political participation by women in new democracies might include variables measuring women’s levels of education and women’s workforce participation. To the extent that variation in the value of one of these variables predicts variation in the other, it would not be possible to measure the independent impact of either of them on the level of women’s political participation in a given country. Statistically, problems of multicollinearity can be partially offset by increasing the number of observations. Such a strategy, however, runs the risks of either comparing cases that are not analytically equivalent or, if undertaken in an ad hoc fashion, altering the model without reference to theory. Despite these limitations, quantitative comparison has proven to be a useful and efficient method for testing hypotheses on large amounts of data that would be difficult to consider otherwise.
Scope is the most readily apparent difference between quantitative and qualitative work in comparative politics. While statistical work requires a relatively large number of cases, or observations, qualitative work tends to focus on a small number of cases. Part of this difference is semantic and attributable to the fact that the research questions of comparativists are often formulated at the level of the state. Even when the state is not the relevant focus of research, there is a substantial difference between the quantitative conception of a case as an analytically homogeneous unit among others and the qualitative view of a case as a “class of events” (George & Bennett, 2005, p. 17).
The scope of a qualitative research design ultimately depends on the goals of the researcher. If researchers aim to revise an existing theory or extend it, they will likely look to the literature for an anomalous case that has some potential to engage with the theoretical lacunae they seek to address. On the other hand, if researchers are interested in assessing the credibility of a theory, they might select a number of cases known to have experienced a similar outcome but whose histories they suspect involved different causal processes. This manner of case selection is starkly different from a statistical approach that warns against the analytical pitfalls of choosing cases on the value of the dependent variable. In cases of political phenomena about which there is relatively little theoretical knowledge, a qualitative research design may not be able to specify initially the cases under study. Such research designs, usually aimed at conceptual development or the construction of explanatory typologies, typically consist of a constant dialogue between theory and data aimed at understanding how to delimit the case itself and explaining what it is a case of.
The scope of a formal model rests on its assumptions and on how the model is constructed. As stated above, game theoretic models of rational choice theory assume that individuals seek to maximize their utility, that decisions are made at equilibria based on actors’ preferences, and that the rules of the game are exogenous to the game itself. Because these assumptions are generally seen as universal, formal modelers of rational choice theory must use some other criteria to explain their choice of scope. Indeed, rational choice theory does not itself stipulate any specific procedure for constructing formal models, and researchers in this tradition have not emphasized case selection as an important point of methodological reflection. Thus, during the late 1980s and 1990s, when game theory began to be used with greater frequency in studying comparative politics, the universality of rational choice assumptions became a subject of intense debate. In response, some researchers sought to limit the scope of rational choice theory either by relaxing its assumptions or by limiting its application to those cases in which its assumptions are most likely to reflect actual behavior. George Tsebelis (1990), for example, set forth the idea that rationality was a subset of human behavior more likely to describe situations in which the “actors’ identity and goals are established and the rules of the interaction are precise and known to the interacting agent” (p. 32). Yet others argued that much as regression analysis has, by necessity, an error term that provides researchers greater control in estimating causality, so formal models of rational choice theory are built on some false assumptions that facilitate hypothesis generation. Indeed, it is the simplicity of rational choice assumptions that allows the models to make clear and precise prediction. The more these assumptions are relaxed, the more difficult the model becomes to solve, and the less clear its predictions. 
In sum, the arguments that result from formal studies are relevant only to cases that fit the assumptions on which the model is based. Empirical work, on the other hand, is far more reliant on the precision of its definitions in specifying those cases to which its arguments can and cannot apply.
Another area in which methods mediate the encounter between the researcher and the data is in measuring the concepts and variables used in a study. In every methodological tradition, researchers use measurements based on the goals of the research, the theory it engages with, and the requirements of their method. Researchers working in different methodological traditions typically have distinct vocabularies to describe their endeavors, and they often use different indicators to measure a concept labeled with the same word but having different meanings. Despite these differences, all comparativists strive for, and often claim to have achieved, measurement validity (see Adcock & Collier, 2001).
Comparativists often describe measurement in terms of levels. Scholars in the quantitative tradition sometimes distinguish their tradition from the qualitative tradition by their use of ordinal- and interval-level data and argue for the superiority of such measures while discounting the value of nominal data such as those used to create typologies. The claim of superiority of higher levels of measurement is based on the ability of statistical researchers to draw fine-gauged distinctions between large numbers of cases. However, qualitative researchers would argue that such benefits are offset by the uncertainty of fit between such measurements and observed facts. Furthermore, Mahoney (2003), writing in the qualitative tradition, argues that the use of nominal and ordinal measurement is also central to the comparative historical approach and can be put to good use in determining necessary and sufficient causality in small-N studies.
While some of this disagreement is in fact substantive, part of it has to do with the relationship between measurement and the goals of research. For researchers in the quantitative traditions who seek to explain the impact of variables on an outcome, statistical models require measures that emphasize control. Furthermore, because such models usually test hypotheses on a large number of cases, researchers must use measures that can realistically be obtained in a fairly consistent manner for each case. Qualitative research designs, on the other hand, emphasize the credibility of measures for each case. Researchers in this tradition are more likely to develop highly nuanced measures of complicated variables, which accurately fit observations about the small number of cases considered. Indeed, in some qualitative research designs, the measurement of concepts may be the goal of the entire research project.

Rather than measuring specific variables, formal modelers who use game theory must specify the components of their model, which usually include the relevant actors, their preferences and strategies, the level of information available to the actors, and the possible outcomes of the game. Although game theory does not recommend any specific procedure for conceptualizing a model, it rests on a well-defined set of universal assumptions that guide researchers in deducing these specifications from theory. Nevertheless, the absence of a single method for such an important aspect of modeling means that game theorists must rely on criteria exogenous to the theory itself. Although this encourages multimethod approaches, it introduces an element of potential inconsistency in the overall research design.
3. Theory Assessment
Given the variety of methods for generating theory, disparate goals of research, and logically distinct methods of data analysis, it is no surprise that different comparative methods also entail different ways of assessing theory. Indeed, both quantitative and qualitative methods mediate the dialogue between theory and fact. But whereas quantitative researchers tend to see a research project as one controlled communication, qualitative researchers are more likely to see the dialogue as a constant back-and-forth between theory and fact. Meanwhile, formal modelers of rational choice theory seek to contribute to theory by modeling the logical implications of its assumptions. These differing views of the nature of research directly impact how scholars use different comparative methods to assess theory.
The quantitative approach usually relies on a single data set to test the observable implications of theory in order to falsify or confirm it. For this reason, quantitative researchers tend to design studies that rely on a large number of aggregated cases to observe the impact of independent variables on certain outcomes. Such large-N studies tend to assume a constant linear notion of causality. That is, they assume that the effects of independent variables on dependent variables are constant for the episode under study and that the causal impact is direct. They further assume that the outcome in one case does not impact the outcome in other cases. In sum, quantitative researchers take a counterfactual view of causality. One way to imagine counterfactual causality is by positing two parallel universes in which everything is the same except the value of a researcher’s independent variable, which alone explains the presence or absence of a given outcome. Of course, in observational studies, these universes do not exist, so causal inference must bridge the gap. By accepting a counterfactual view of causality, quantitative work strives to approximate experimental work. In the absence of the perfectly controlled parallel universe required to carry out experimental research, quantitative analysts use statistical controls to decrease bias and improve the quality of inferences made from observational data.
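The logic of statistical control can be demonstrated with a small simulation. In this sketch (entirely our own construction, with made-up numbers), a confounder drives both “treatment” and outcome while the treatment itself has no effect; the naive comparison of treated and untreated cases is badly biased, whereas comparing within levels of the confounder approximates the counterfactual comparison described above.

```python
# Illustrative sketch: statistical control approximates the counterfactual
# comparison by holding a confounder fixed across treated and untreated cases.
import random

random.seed(1)

data = []
for _ in range(10000):
    z = random.random() < 0.5                  # confounder (e.g., wealth)
    x = random.random() < (0.8 if z else 0.2)  # "treatment" depends on z
    # The true treatment effect is zero; z alone drives the outcome.
    y = (1.0 if z else 0.0) + random.gauss(0, 0.1)
    data.append((z, x, y))

def mean_y(rows):
    return sum(r[2] for r in rows) / len(rows)

# Naive comparison: biased, because z differs across treatment groups.
naive = mean_y([r for r in data if r[1]]) - mean_y([r for r in data if not r[1]])

# Controlled comparison: difference within each stratum of z, then averaged.
strata = []
for z in (True, False):
    treated = [r for r in data if r[0] == z and r[1]]
    control = [r for r in data if r[0] == z and not r[1]]
    strata.append(mean_y(treated) - mean_y(control))
controlled = sum(strata) / len(strata)

print(f"naive estimate: {naive:.2f}")            # far from the true effect of 0
print(f"controlled estimate: {controlled:.2f}")  # close to 0
```

Stratification is only the simplest form of statistical control; multivariate regression generalizes the same idea to many confounders at once.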
In the constant dialogue between theory and fact that qualitative researchers undertake, it would likely be impossible to use new data for each encounter with theory. Because qualitative researchers are not generally constrained in their research by the controls of experimental logic, they can use the same data to test and refine their hypotheses. Thus, qualitative research designs tend to favor theory assessment over testing.
One method qualitative analysts use to assess theory is what is known as the congruence method. According to Alexander George and Andrew Bennett (2005), the congruence method is one in which a researcher “begins with a theory and then attempts to assess its ability to explain or predict the outcome in a particular case” (p. 181). Thus, it assesses the degree to which there is a fit between a theory’s hypothesized causes and a case’s observable outcomes. Among the advantages of this approach is that it can assess the ability of more than one theory to explain a given outcome. This is particularly important because it addresses the problem of equifinality—that is, that a single outcome may have multiple and unrelated causal paths. But because the congruence method, like many statistical methods, cannot explain why some theories are more congruent with outcomes, this approach is most usefully combined with other qualitative approaches that are more process oriented.
Qualitative researchers have not limited themselves to theory assessment but also seek to test theories using a variety of methods. It is important to point out, however, that a qualitative approach to theory testing differs substantially from quantitative, control-based theory testing focused on falsification. Bennett (2004) describes the goal of what he calls the “mechanism model of theory testing” as “to expand or narrow the scope conditions of contending theories as the evidence demands, and to identify the conditions under which the particular causal mechanisms hypothesized by these theories interact with one another in specified ways” (p. 50). Such an approach is particularly well suited for addressing the equifinality problem and answering the “how” questions that qualitative researchers tend to ask. It also helps researchers understand why multiple theories are feasible because it can demonstrate how mechanisms from different theories interact with one another.
It should be noted, however, that the causal claims of such theory testing rest on a distinct notion of causality that has important implications for how theory is assessed. Quantitative researchers using statistics usually rely on probabilistic causation, which assumes that every observable occurrence in the world is the result of at least some random causes that the research is unable to specify. Qualitative researchers, on the other hand, tend to see causality as more deterministic, assuming that every occurrence in the world is fully explicable because it is the result of some prior occurrences. The latter view explains why many qualitative researchers focus on identifying necessary and sufficient causes by specifying the conditions under which a particular phenomenon occurs. These differing views of causality also explain why qualitative researchers may choose to examine anomalous cases, logically positing that if a general theory does not fit a specific case, then it must be revised. Although most researchers in either tradition are not likely to fully endorse either view, such assumptions about causality are implicit in the methods that researchers choose, and they limit the conclusions that researchers can reach (see Mahoney, 2003).
As mentioned above, the solution of formal models does not in itself constitute an assessment of the theory being modeled; rather, it presents a formal simplification of it. The major output of formal research, then, is not a clear assessment of theory but a set of hypotheses to be tested using a different methodology. Munck (2001) states the situation as follows:
Though models are ultimately assessed in terms of the empirically tested knowledge they generate, the exercise of modeling proper culminates in the proposal of hypotheses. Thereafter, modelers should test these hypotheses. But a formal methodology does not have direct implications for the testability of hypotheses; nor does it offer any guidelines about how to conduct the testing. (p. 200)
Indeed, game theory has been criticized as tending toward “pure theory” because its practitioners have rarely carried out the empirical evaluation their models call for. In response to these criticisms, and in the absence of a method for theory assessment internal to the method, some game theorists have made explicit efforts to lay the foundation for multimethod work. In Analytic Narratives, Robert Bates, Avner Greif, Margaret Levi, Jean-Laurent Rosenthal, and Barry Weingast (1998) set forth a method that combines formal modeling with qualitative analysis, while in Methods and Models, Rebecca Morton (1991) demonstrates how empirical statistical analysis can be used to test hypotheses derived from game theory.
The previous section outlined the ways in which methods mediate the researcher’s encounter with theory and fact. An effort was made to show how the choice of methods mediates the scholarly encounter with theory and fact in terms of theory generation, the goals of research, methods of data analysis, and theory assessment. This section discusses three exemplary works in the field to demonstrate how these principles have worked in practice.
A. Qualitative Analysis
In a standard-setting work, Ruth Berins Collier and David Collier (1991) studied the process of labor incorporation in a paired comparison of eight Latin American countries: Brazil and Chile, Mexico and Venezuela, Uruguay and Colombia, and Argentina and Peru. These pairs represent what Przeworski and Teune (1970) would call “most different” systems, chosen on the basis of similar patterns of labor incorporation. By contrasting a comparatively large number of cases, Collier and Collier highlight the significant differences among Latin American contexts while at the same time making an important theoretical and methodological contribution to comparative politics.
The Colliers situate their study in the literature on bureaucratic-authoritarian models, which explain the collapse of democracy as a result of conflicts between workers and owners that arise as countries move from early industrialization to a more advanced economy, one requiring more intensive capital accumulation to produce more sophisticated products. The Colliers critique this economically driven model by placing more emphasis on political factors. The basic argument they advance is that the process of labor incorporation in these states represents a critical juncture in the state’s history that shapes legacies both in the short-term “aftermath” and in the long-term institutional “heritage” of a political system. Ultimately, it is these processes that explain why some states experienced the breakdown of their democratic systems whereas others remained more stable.
Their analysis, firmly within the tradition of historical institutionalism (Thelen, 1999), begins with the emergence of a working class in each state. In nearly 900 pages, they develop a complex historical argument that can only be grossly simplified here. Using both within-case and between-case methods of analysis, they analyze the process of labor incorporation with a particular focus on labor groups, oligarchs, and reformers and the configuration of coalitions among them as they struggle for power. The relative strength of the oligarchy is seen as particularly important. Whereas a weaker oligarchy provides greater coalitional space for reformers and leads to the mobilization of labor, a stronger oligarchy limits the political space open to reformers, who respond by seeking to control labor. It is important to note that in none of their cases does the working class initially emerge as autonomous, able to effectuate political change on its own. Rather, the institutional configuration resulting from elite choices seemed to provide more or less space for labor activism in the aftermath and heritage phases of labor incorporation.
The main methodological contribution of this work is the concept of critical junctures. In their analysis, critical junctures are understood much as ordinary language would imply, that is, as pivotal moments that transform society and have long-term effects. Labor incorporation is hypothesized to constitute such a critical juncture, developing along two dimensions and resulting in four patterns of labor incorporation: radical populism, labor populism, electoral mobilization by a traditional party, and depoliticization and control. Collier and Collier use historical comparison to test this hypothesis and find that it can at least partially explain the breakdown of democracy in Argentina, Brazil, Chile, and Uruguay; in every case, they demonstrate that labor incorporation had an important impact on events in the post–World War II era by shaping the political arena of the states under study. Thus, Collier and Collier’s historical analysis represented an important theoretical innovation that ran contrary to most analyses of Latin American regimes. The potency of their analysis led many researchers to adopt and reuse their conceptualization of critical junctures as a way to make sense of slow-moving causal processes without reverting to a variable-oriented approach.
B. Quantitative Analysis
The relationship between economic development and democracy is one of the most contentious political issues that comparativists have consistently addressed in the past century. Przeworski and his colleagues Michael Alvarez, Jose Cheibub, and Fernando Limongi (2000) made an innovative contribution to this literature with their book Democracy and Development. The central question they address is, how do political regimes impact material well-being? To address this question, they use an inductive approach that gathers data on every country for which data were available for the period 1950 to 1990 and build an argument based on their findings at each step of the research.
First, they choose a minimalist definition of democracy suitable to their research question. Then they derive a set of rules that they use to classify the cases in their universe as dictatorships or democracies. Using these descriptive data, they then employ probit analysis to investigate the relationship between economic development, regime type, and survival. Using lagged time-series data, Przeworski et al. (2000) then consider the relationship between political regimes and economic growth. Here they mobilize their data to engage with the long-standing debate over whether democracy hinders economic growth by shifting resources from investment to consumption. After finding that political regime type does not impact economic growth, they turn to the question of political stability. From this exploration, they discover that instability means quite different things under different regime types and has a much greater impact on dictatorships than on democracies. In their final chapter, they investigate the paradox that population growth in dictatorships offsets higher rates of per capita income growth in those same states. Here their counterfactual statistical model leads to the striking conclusion that differences in a range of demographic indicators cannot be explained by exogenous factors but in fact stem from differences in regime type, particularly the political uncertainty experienced by people living under dictatorships. Thus, each chapter of this study moves from a set of observations to a new set of questions, building a sophisticated statistical analysis that is clearly outlined and explained in appendixes at the end of each chapter.
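The mechanic at the core of probit analysis is simple: a linear combination of predictors is passed through the standard normal cumulative distribution function to yield a probability. The sketch below illustrates that mechanic only; the coefficients and income values are hypothetical and are not Przeworski et al.’s estimates.

```python
import math

def probit_prob(x, intercept, slope):
    """P(outcome) = Phi(intercept + slope * x), Phi the standard normal CDF.

    Phi is built from the error function, which Python's math module provides.
    """
    z = intercept + slope * x
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical illustration: predicted probability that a democracy
# survives at different per capita income levels (thousands of dollars).
# The coefficients are invented for demonstration, not estimated values.
for income in (1, 4, 8):
    print(income, round(probit_prob(income, intercept=-1.0, slope=0.5), 3))
```

In an actual analysis the intercept and slope would be estimated by maximum likelihood from the country-year data; the point here is only how the probit link turns a linear predictor into a survival probability bounded between 0 and 1.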
The inductive approach used by Przeworski et al. (2000), however, should not be seen as theory neutral. On the contrary, it is deeply engaged with existing theory, using previous analyses to guide the search. But their primary innovation is methodological. They suggest that most work done on the relationship between democracy and development is inconclusive because it is based on a counterfactual notion of causality but is not tested as such. By deliberately acknowledging the need for a counterfactual approach to causality in their statistical analysis, Przeworski et al. are able to arrive at new conclusions using data largely similar to those of other researchers before them. Among their most important findings is that democracies tend to have higher levels of economic development, not because development causes democracy, but because democracies are more likely to survive if the society is affluent. They also found that although democracies were particularly sensitive to economic crises, they were virtually assured of survival once they had reached a threshold level of per capita income. These theoretical contributions flow largely from the logical, explicit research design employed by the research team. In many ways, their study is not typical of quantitative studies in comparative politics. To begin with, they take an inductive approach to a question that had previously been addressed by many other scholars. Furthermore, they use a series of statistical tests to assess hypotheses derived from an ongoing dialogue between theory and the data being analyzed in the study. Their innovative approach, lucid writing style, and transparency of method have all contributed to this work’s endurance in the field.
C. Formal Modeling
Josep Colomer’s Strategic Transitions (2000) opens with a powerful and revealing statement: “Transition from a nondemocratic regime by agreement between different political actors is a rational game” (p. 1). It is clear throughout his analysis that the model he creates is not meant as a metaphor for what happened when the Soviet Union dissolved but as an accurate, descriptive explanation. He does not say that transitions are like games but that they are games. The question his work addresses is, how is it possible for rationally motivated, self-interested actors to agree on transition? This is an important question, not only because the transition was historically surprising and unpredicted, but also because it is rare for such dramatic transformations to take place in so short a time with relatively little violence. After sketching the historical background and the circumstances leading up to the fall of the Soviet Union, Colomer deduces the relevant actors and their strategies and preferences. The starting point of Colomer’s analysis is that when an authoritarian regime is challenged, there are two possible outcomes: civil war or an agreed-on transition to democracy. In order to model this transition, Colomer uses the prisoner’s-dilemma game as well as “mugging” games to identify equilibria. Most methodologists contend that game theory is best applied in highly institutionalized settings, such as parliaments, or to individual voting behavior. One of the innovations of Colomer’s approach is that he applies game theory to a situation in which rules and institutional constraints are in flux. He justifies this approach by arguing that the outcomes are well defined and that in such situations, individuals are likely to make an important difference in the outcomes selected.
Colomer contends that because the outcomes are known to the actors and because the actors are able to calculate that their choices would lead to suboptimal outcomes, they agree to some binding rules before engaging in the game.
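The strategic structure Colomer invokes can be illustrated with a minimal prisoner’s-dilemma sketch. The payoffs and actor labels below are hypothetical ordinal values chosen only to show why mutual defection is the unique equilibrium of the unmodified game, and hence why rational actors would want binding rules agreed in advance.

```python
# Two hypothetical actors (say, regime softliners and opposition moderates)
# each choose to cooperate (accept an agreed transition) or defect
# (fight for full control). Payoffs are illustrative, not Colomer's.
COOPERATE, DEFECT = 0, 1

# payoffs[row_choice][col_choice] = (row_payoff, col_payoff)
payoffs = [
    [(3, 3), (0, 4)],   # row cooperates
    [(4, 0), (1, 1)],   # row defects
]

def pure_nash_equilibria(payoffs):
    """Return profiles where neither player gains by deviating alone."""
    equilibria = []
    for r in (COOPERATE, DEFECT):
        for c in (COOPERATE, DEFECT):
            row_ok = all(payoffs[r][c][0] >= payoffs[alt][c][0] for alt in (0, 1))
            col_ok = all(payoffs[r][c][1] >= payoffs[r][alt][1] for alt in (0, 1))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# The only equilibrium is mutual defection (continued conflict), even
# though mutual cooperation is better for both players.
print(pure_nash_equilibria(payoffs))   # [(1, 1)]
```

Because the one-shot game traps both actors in the inferior outcome, escaping it requires changing the game itself, which is exactly the role that Colomer assigns to binding pre-agreed rules.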
In assessing the implications of this argument, Colomer’s analysis draws heavily on the empirical record, but it does so primarily to buttress the argument rather than to test it systematically. He finds that transition by agreement is possible when (a) maximalist actors are weak, (b) the relevant actors are sufficiently strategically distant from one another, and (c) actors are farsighted enough to avoid strategies that result in myopic equilibria. The formal models analyzed are used to identify three models of transition, which he labels transaction, negotiation, and collapse. He then uses these models to explain the separation of the Soviet Union and the Polish Roundtable. In a final chapter, Colomer extends his analysis to show how the different models of transition impact institutional choice in the new post-Soviet states. Colomer’s innovative methodology clearly achieves the objective of simplifying a complex set of strategic interactions. The delineation of the actors’ preferences and strategies is valuable in itself, and the analytical exercise he presents, even if one is not convinced by the strong claim of descriptive explanation that he promises, adds enormously to the literature on democratization and remains an exemplary work of formal methodology.
IV. Future Directions
This research paper has contrasted quantitative, qualitative, and formal approaches and has shown how they mediate the researcher’s encounter with theory and fact. Empirical and formal comparative methods were presented, as well as the implications for research design of the three main methodological traditions.
Any such summary will nevertheless pass over the many ways in which researchers working in each of these traditions push and pull the field in different directions. To the extent that the choice of method flows from personal intuition or a well-reasoned belief about what exists in the world and how we learn about it, a researcher may be less flexible or less accepting of approaches that contradict a certain set of principles. Others may be driven by a particular political problem, making them more open to a variety of methodologies but less likely to value the generalizations that comparativists often make. Many researchers may also find themselves constrained by their own methodological training, unwilling or unable to invest in learning other methods; as a consequence, they advocate certain traditions over others even when those traditions’ shortcomings are clear. Thus, some researchers push for greater methodological pluralism, while others work within a single tradition, seeking hegemony over the research agenda of comparative politics as a field. Both positions can be fruitful and innovative, producing useful methodological syntheses or greater technical specificity, but ultimately they have little to do with the ability of comparative research to explain or understand political problems.
Indeed, the community of comparative researchers is methodologically diverse, but its reasons for being so may have as much to do with theory and method as with larger social changes such as research funding, the structure of universities, the overall economic situation, and the quality and character of graduate education. Therefore, changes in these factors will have a great impact on the future direction of comparative politics. For example, publicly funded research projects may be more problem focused and require multidisciplinary team research. If universities rely more on such research grants, they may be more apt at some point to dismantle the traditional divisions of departments based on disciplines such as political science and sociology and replace them with a more research-center-based model in order to more effectively compete for funding. Such a move would dramatically change the character of graduate education and the methods that comparativists rely on to address political problems.
More substantively, one of the reasons that debates about methodology can be so intransigent is that the methods a scholar chooses reflect assumptions about both ontology (what exists in the world) and epistemology (how people learn about what exists; see Hall, 2003). Quantitative, qualitative, and formal approaches all assume a positivist epistemology, which holds that researchers are capable of discovering political realities that exist independently of whether or how they are studied. Yet the positivist underpinnings of these methods have been sharply criticized, particularly by constructivists and other critical theorists widely influential in other disciplines. Such approaches, often grouped together under the label of postmodernism or postpositivism, tend to be more reflexive about the role of the researcher and tend to blur the lines between research, theory, and practice. Nevertheless, while the positivist consensus in comparative research does not seem vulnerable to total collapse anytime soon, the postpositivist challenge may be one exciting avenue for methodological innovation.
The lack of consensus regarding how to address the substantive questions relevant to the field leads some to question whether the field is maximizing its potential to contribute to the cumulative knowledge about politics across the globe in a systematic way. Some believe that greater consensus regarding methodological choices would lead to faster accumulation of knowledge and improved quality of research, whereas others believe that productive tensions among competing approaches lead to a best possible, if not ideal, outcome. This disagreement springs from questions regarding the purpose of the field and the goals of research.
It has not been possible in this short research paper to discuss the entire range of techniques, models, and games that quantitative, qualitative, and formal modelers use to carry out comparative work. Some of these techniques are dealt with in other research papers on political science, and many more are described in the additional readings listed below. Nevertheless, an effort has been made here to describe what is at stake when researchers choose their methodology and to provide references to some of the more important methodological works in the field.
References
- Adcock, R., & Collier, D. (2001). Measurement validity: A shared standard for qualitative and quantitative research. American Political Science Review, 95(3), 529-546.
- Alker, H. R., Jr. (1975). Polimetrics: Its descriptive foundations. In F. Greenstein & N. Polsby (Eds.), Handbook of political science (Political science: Scope and theory). Menlo Park, CA: Addison Wesley.
- Bates, R., Greif, A., Levi, M., Rosenthal, J., & Weingast, B. (1998). Analytic narratives. Princeton, NJ: Princeton University Press.
- Bennett, A. (2004). Testing theories and explaining cases. In C. Ragin, J. Nagel, & P. White (Eds.), Workshop on scientific foundations of qualitative research (pp. 49-52). Washington, DC: National Science Foundation.
- Brady, H. E., & Collier, D. (Eds.). (2004). Rethinking social inquiry: Diverse tools, shared standards. Lanham, MD: Rowman & Littlefield.
- Collier, D. (1993). The comparative method. In A. W. Finifter (Ed.), Political science: The state of the discipline II (pp. 105-120). Washington, DC: American Political Science Association.
- Collier, R. B., & Collier, D. (1991). Shaping the political arena: Critical junctures, the labor movement, and regime dynamics in Latin America. Princeton, NJ: Princeton University Press.
- Colomer, J. (2000). Strategic transitions: Game theory and democratization. Baltimore: Johns Hopkins University Press.
- Eckstein, H. (1975). Case study and theory in political science. In F. Greenstein & N. Polsby (Eds.), Handbook of political science (Vol. 7, pp. 79-138). Reading, MA: Addison Wesley.
- Fox, J., & Sandler, S. (2003). Quantifying religion: Toward building more effective ways of measuring religious influence on state level behavior. Journal of Church and State, 45(3), 559-588.
- Gates, S., & Humes, B. (1997). Games, information, and politics: Applying game theoretic models to political science. Ann Arbor: University of Michigan Press.
- Geddes, B. (2003). Paradigms and sand castles: Theory building and research design in comparative politics. Ann Arbor: University of Michigan Press.
- George, A., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge: MIT Press.
- Gould, R. (2003). Uses of network tools in comparative historical research. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 241-269). New York: Cambridge University Press.
- Green, D., & Shapiro, I. (1994). The pathologies of rational choice. New Haven, CT: Yale University Press.
- Hall, P. (2003). Aligning ontology and methodology in comparative politics. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 373-406). New York: Cambridge University Press.
- King, G. (1989). Unifying political methodology. New York: Cambridge University Press.
- King, G., Keohane, R., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.
- Laitin, D. (2002). Comparative politics: The state of the subdiscipline. In I. Katznelson & H. Milner (Eds.), Political science: The state of the discipline (pp. 630-659). New York: W. W. Norton.
- Lijphart, A. (1971). Comparative politics and the comparative method. American Political Science Review, 65(3), 682-693.
- Mahoney, J. (2003). Strategies of causal assessment in comparative historical analysis. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 337-372). New York: Cambridge University Press.
- Mahoney, J., & Rueschemeyer, D. (Eds.). (2003). Comparative historical analysis in the social sciences. New York: Cambridge University Press.
- Morrow, J. (1994). Game theory for political scientists. Princeton, NJ: Princeton University Press.
- Morton, R. (1991). Methods and models: A guide to the empirical analysis of formal models in political science. New York: Cambridge University Press.
- Munck, G. (2001). Game theory and comparative politics. World Politics, 53, 173-204.
- Munck, G., & Snyder, R. (2007). Debating the direction of comparative politics: An analysis of leading journals. Comparative Political Studies, 40(1), 5-31.
- Przeworski, A., Alvarez, M., Cheibub, J. A., & Limongi, F. (2000). Democracy and development: Political institutions and well-being in the world, 1950-1990. New York: Cambridge University Press.
- Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. New York: Wiley Interscience.
- Ragin, C. (1987). The comparative method: Moving beyond qualitative and quantitative strategies. Berkeley: University of California Press.
- Ragin, C. (2000). Fuzzy set social science. Chicago: Chicago University Press.
- Ragin, C. (2004). Combining qualitative and quantitative research. In C. Ragin, J. Nagel, & P. White (Eds.), Workshop on scientific foundations of qualitative research (pp. 49-52). Washington, DC: National Science Foundation.
- Sartori, G. (1970). Concept misformation in comparative politics. American Political Science Review, 64(4), 1033-1053.
- Thelen, K. (1999). Historical institutionalism in comparative politics. Annual Review of Political Science, 2, 369-404.
- Tsebelis, G. (1990). Nested games: Rational choice in comparative politics. Berkeley: University of California Press.