
### Outline

I. Introduction

II. What Is Formal Theory?

III. Formal Theory, Quantitative Methods, and Empirical Inquiry

IV. Rational Choice and Its Foundational Assumptions

V. Solving and Testing Formal Models

VI. Public Choice: Democratic Theory, Institutions, and Voting Paradoxes

VII. The Spatial Theory of Voting

VIII. Conclusion

## I. Introduction

In Greek mythology, Hercules is tasked with 12 impossible labors to regain his honor and thus ascend to Mount Olympus as a god. The job of explaining formal theory and spatial theory in a brief, nontechnical essay is a labor of sufficient difficulty to make the search for the Golden Fleece pale in comparison. Given that this author has no transcendental gifts (though Hippolyta’s belt may be around here somewhere), aspirations, or pretensions, this research paper eschews the impossible task of summarizing and explaining the entirety of formal and spatial theory. Instead, it settles for the daunting yet mortal goal of a thorough yet concise introduction to some of the classical and contemporary works of formal and spatial theory on politics and the concepts, definitions, and models on which those works rest (for a more complete treatment of formal theory and its contribution as a field of political inquiry, see Morton, 1999; Ordeshook, 1992; and Shepsle & Bonchek, 1997). Although Duncan Black (1958) may have understated the mathematical underpinnings of spatial theory as “simple arithmetic,” it is as true today as it was then that the fundamental assumptions, intuitions, and predictions of formal and spatial theory can be grasped with a relatively basic foundation in mathematics, such as algebra and geometry. Formal theorists employ a range of advanced mathematical concepts (e.g., integral calculus and matrix algebra) in their models, but one does not need these to understand what formal theory is, what its foundational principles are, or the gamut of its predictions and conclusions regarding political institutions and behavior. To the extent possible without compromising the material, this research paper keeps the discussion broad and descriptive and thus accessible to the undergraduate reader.

## II. What Is Formal Theory?

Formal theory is a field of inquiry that uses mathematical techniques to explicitly and precisely define theoretical concepts and the relationships between those concepts. Formal mathematics permits the systematizing of theory, enabling precise deduction and synthesis and enhancing the decidability of scientific propositions. Although the term formal theory is common parlance, the field is also known as rational choice theory, public choice, positive political theory, political economy, the economic theory of politics, and a variety of other synonyms. Two of the primary branches of formal theory used frequently in political science are game theory and spatial theory. Game theory is concerned primarily with the strategic interaction of utility-maximizing actors in competitive and cooperative settings. Spatial theory, as the reader will see, examines the behavior of actors by representing beliefs, positions, choices, and institutional contexts in terms of spatial distance (most commonly as points on the Cartesian plane). Although formal theory shares a foundation in mathematics and logic with quantitative methods, it is distinct from the traditional empirical inquiry of standard statistical methods. Formal theory seeks to explicitly define political concepts and derive logical implications from their interrelations; traditional empirical methods assess the relationships between political concepts through direct statistical analysis. Nonformal theory underpins much of the empirical work in political science. A nonformal model suggests relationships between actors and institutions in the real world of politics using general and sometimes ambiguous terminology. This is not to suggest that traditional theorizing is necessarily bad or unrelated to formal theorizing. Indeed, there may be an underlying formal model in a generally stated theory of politics that, as Arrow (1963) notes, has not been expressed formally because of mathematical or linguistic limitations.
A model is formalized when we use abstract and symbolic representations to explicitly state the assumptions of the model, from which equilibrium and comparative statics predictions can be derived (Binmore, 1990; Elster, 1986; Morton, 1999).

For example, a nonformal voting model might entail this proposition: Voters vote for viable candidates that share their beliefs and positions on issues. This seems to be a reasonable statement of the voting process. Yet there is a great deal of ambiguity in this statement. What does it mean for a candidate to be viable? To what extent must a candidate share the voter’s beliefs and positions relative to the other candidates? How do voters assess candidate positions, and how do they relate them to their own beliefs? Furthermore, how important to the voter is the prospect that his or her vote will be decisive in the election? The nonformal model is silent or ambiguous on these questions. A formal model aims at explicitly defining the processes at work (in this case, the act of voting), the actors participating in the process (voters and candidates), and the gamut of alternative outcomes based on those choices (whether the citizen votes). Riker and Ordeshook (1968), operationalizing a spatial model of voting based on the classic median voter theorem developed by Downs (1957), give just such a formal model of voting.

According to Riker and Ordeshook (1968), an individual will decide to vote if and only if this equation holds true:

P × NCD + D ≥ C,

where, for each voter, P = the probability that this person’s vote will affect the outcome of the election, NCD = perceived net benefits of one candidate over another (net candidate differential), D = the individual’s sense of civic duty, and C = costs associated with the act of voting (opportunity costs, driving time, gas, etc.).
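
The calculus can be sketched in a few lines of code. This is a minimal illustration; the numeric values below are hypothetical, not estimates from Riker and Ordeshook:

```python
def will_vote(p, ncd, d, c):
    """Riker-Ordeshook calculus of voting: vote iff P * NCD + D >= C."""
    return p * ncd + d >= c

# Hypothetical values: in a large electorate P is minuscule, so the
# instrumental term P * NCD contributes almost nothing to the decision.
print(will_vote(p=1e-8, ncd=1000.0, d=0.0, c=0.25))  # False: costs outweigh benefits
print(will_vote(p=1e-8, ncd=1000.0, d=0.5, c=0.25))  # True: civic duty carries the decision
```

With P near zero, the decision effectively reduces to D ≥ C: only the voter's sense of civic duty can rationalize turnout.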

This modeled cost–benefit analysis hinges the act of voting on the perceived net difference between the candidates (the spatial distance between each candidate’s positions and the potential voter’s own preferences), weighted by the probability that the individual’s vote will be decisive in the election. The relevance of the difference between candidates depends on the probability of a decisive vote: if the voter’s ballot is not decisive, then the candidate differential is essentially irrelevant to the outcome of the election from the perspective of the potential voter. This formal theory of voting uses mathematical notation to precisely relate the costs of voting to the benefits the voter receives from voting, and in so doing, it provides a nonobvious expected outcome that tells something interesting about the rational voter. As the probability of a decisive vote goes to zero, differences between the candidates on issues are eliminated from the calculus of the vote decision. This result led scholars to predict that citizens would not collect costly information on politics, such as the policy positions of specific candidates or parties, a prediction confirmed by the significant political ignorance of voters found in surveys. Also, although many observers have decried the problem of low voter turnout in the United States, the Downsian and Riker–Ordeshook voting model suggests the real puzzle is that anyone votes at all.

## III. Formal Theory, Quantitative Methods, and Empirical Inquiry

One way to think about the difference between formal theory and quantitative methods employed for empirical inquiry, given that both use the language of mathematics, is to put the distinction in terms of the scientific method. In the social sciences, the scientific method involves stating a research question of some importance to our understanding of social phenomena; developing theories as to the processes, actors, and interactions within the social context; using hypotheses derived from these theories for empirical testing; using techniques to test these hypotheses against real-world data; and publicly reporting the results of those tests. Formal theory in political science is oriented toward developing precise theories with specifically defined assumptions and the derivation of their implications (the so-called front end of scientific inquiry) while statistical methodology applies mathematical rigor to the testing of theories and hypotheses (the so-called back end of scientific inquiry). This dichotomy, although useful, isn’t without its problems. Although it is true that the foci of formal theory and quantitative methods are distinct and have been historically pursued separately in political science, it is incorrect to assert that empiricists are unconcerned with precise theorizing and formal theorists are indifferent to empirical testing. Both formal theory and quantitative methods are effective tools to employ in the study of political phenomena and, in combination, can produce significant contributions to the knowledge of politics (Barry, 1978).

The increasing role of formal theory in political science is not without its critics. The behavioral revolution that drew the discipline away from informal normative theories and descriptive analysis inspired ever greater attention to developing strong empirical measures of political phenomena. Many see formal theory as a distraction from real, and hence important, empirical analysis of politics. A remark often attributed to Albert Einstein holds that not everything that is important can be measured, and not everything that can be measured is important. To its critics, this sums up the problem with formal modeling: Its important theories cannot be measured, and what it measures is not important. Yet the formal theories of politics address many of the most important questions in politics: why citizens vote, how organized interests form, why democracies emerge, and why nations go to war. Furthermore, there is merit in assessing pure theory in its own right. Formal theory can reveal surprising and counterintuitive behavioral expectations and provide insights into political processes in important areas suffering from a scarcity of available data. Pure theory is often a precursor to the development of empirically testable measures. Though we lack—and may forever lack—a complete model of political behavior, both theory and empirics have a role in filling and bridging the gaps. Many of the first principles from which formal theories are derived are either undiscovered or only partially described and understood.

There is room in the discipline for both forms of inquiry. The ambition of political science is to provide pieces of the puzzles of politics with increasingly better developed and more rigorously tested models of behavior. Both formal theorists and empiricists have contributions to make to our understanding of politics. Where possible, it is best to precisely define both our theoretical expectations and our empirical tests of those expectations. It is difficult to test theories that lack precision or clear implications, and the ambiguity of these nonformal theories can result in conflicting and mutually exclusive tests. The practical usefulness of precise theories is lessened without ways to test them against reality. Theories that wander too far away from the real world of politics make the discipline less relevant to both policymakers and students of practical politics. The combination of the two approaches, where we use quantitative methodology to assess the predictions and comparative statics of formal models against empirical data (empirical implications of theoretical models, often referred to as EITM), is one of the more significant modern trends in political science and is an active field of inquiry in the discipline, coexisting alongside the more traditional behavioral and pure theoretic fields of inquiry.

Whether through the investigation of the empirical implications of formal models or the thought experiments of pure formal theory, formal models have much to contribute to the study of politics today. Why do two parties form in plurality electoral systems? How do two major parties in first-past-the-post electoral systems respond to the threat of entry by third parties? Why do voters turn out? How many seats should a party seek to control in a legislature? Can we get irrational aggregate social outcomes when society is composed of rational individuals? Why and how do procedural rules in institutions such as legislatures matter? Why do individuals choose to join interest groups? These questions and more lend themselves to formal analysis (Downs, 1957; Olson, 1965; Ordeshook, 1992; Palfrey, 1989; Riker & Ordeshook, 1968).

## IV. Rational Choice and Its Foundational Assumptions

Formal theory is a deductive form of inquiry, deriving implications and relationships from established first principles. One such assumption is rational choice. Rationality, as it is generally conceived in formal modeling, is the assumption that individuals have a set of preferences and act intentionally, constrained and motivated by real-world contexts, in a manner consistent with those preferences. Individuals are instrumentally rational. Thus, rationality does not mean a person doing what you think he or she should do if you were in his or her shoes, such as staying home and studying (you) rather than going out to a party before the big test (them). Just because you would value getting a good grade on the exam more than having fun on a Friday night does not make the other person’s decision to party irrational. Nor does rationality mean having superhuman knowledge or being a brilliant decision maker. Individuals order their complete preferences as they see fit, and they make choices aimed at getting the best possible outcome according to those preferences from their own perspective, however imperfect that may be.

There are three important principles that undergird rational choice. The first is completeness, or comparability: if one is to choose among possible alternatives, one has to know what all the alternatives are and be capable of comparing them to one another. The second is the mathematical principle of transitivity (if A > B and B > C, then A > C): to make a rational choice, one has to be able to order preferences consistently. The transitive principle permits a rational choice because the interrelations among all of a person’s choices make sense. If an individual prefers pizza to waffles and waffles to apples, then it is not rational to prefer apples to pizza. Third, rational choice models assume that individual actors are self-interested, in that they attempt to get the best outcome possible for themselves. This is also called utility maximizing, where utility is simply a quantifying term for a benefit to the individual and maximizing means that the individual seeks the largest benefit possible. This is not to say that all potential choices meet the comparability, transitivity, and maximizing requirements. Indeed, people’s choices are often uninformed, hurried, inconsistent, and emotional. However, behavior that is best modeled as intentional, self-interested, and maximizing across comparable and transitive preference orderings—true of many political choices and decisions—lends itself to rational choice analysis (Riker & Ordeshook, 1973).
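
These requirements are easy to state in code. The sketch below (the function and food alternatives are illustrative, not drawn from the literature) tests a strict pairwise preference relation for transitivity:

```python
from itertools import permutations

def is_transitive(prefers, alternatives):
    """prefers(a, b) -> True if a is strictly preferred to b.
    Returns False if any triple of alternatives violates transitivity."""
    for a, b, c in permutations(alternatives, 3):
        if prefers(a, b) and prefers(b, c) and not prefers(a, c):
            return False
    return True

items = ["pizza", "waffles", "apples"]
# A rational ordering: pizza > waffles > apples (and so pizza > apples).
rational = {("pizza", "waffles"), ("waffles", "apples"), ("pizza", "apples")}
# A cyclic ordering: pizza > waffles > apples > pizza.
cyclic = {("pizza", "waffles"), ("waffles", "apples"), ("apples", "pizza")}

print(is_transitive(lambda a, b: (a, b) in rational, items))  # True
print(is_transitive(lambda a, b: (a, b) in cyclic, items))    # False
```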

This definition of rationality reveals another fundamental assumption of formal theory: methodological individualism. Most formal theories employ the individual as the fundamental unit of analysis. An individual can have preferences and beliefs while groups, firms, and states cannot. Consider again the Riker–Ordeshook model of voting. It assumes methodological individualism. Note that the model defines the individual citizen’s calculus in deciding whether to vote. The Riker–Ordeshook model is also a good example of an application of the rationality assumption. They presume that the voter will assess both the internal factors (preferences ordered across candidates) and external factors (the probability that the voter’s vote will be decisive) in making a rational cost–benefit decision whether to vote.

The assumption of rationality is one of the more controversial aspects of formal theory. Many critics argue that human beings lack the capacity, the evolutionary development, and the necessary information to make rational decisions as conceived by formal models. Although this may be the case, it does not necessarily mean that rationality is a useless or even pernicious assumption in formal theory. Assumptions can be both unrealistic and useful in terms of either identifying puzzles (if X is the rational choice, why do most individuals choose Y?) or by reflecting an important aspect of decision making, even if it does not accurately represent many individual decision-making processes. All models are inaccurate to some degree. Models are crude, stylized approximations of the real world, intended to reflect some important aspect of politics rather than every aspect. Model airplanes fall short of the realism ideal in terms of material composition, scale, and functionality. They are made of plastic rather than steel and fiberglass. Key components of real airplanes are missing or misrepresented. Few models use jet fuel or have afterburners. Yet model airplanes have fundamentally contributed to our understanding of flight and the design of aircraft. Indeed, real flight would have been impossible without creative modelers like Leonardo da Vinci informing and inspiring practical developers such as the Wright brothers. The measure of a model of politics is not whether it perfectly approximates the real world, but rather its usefulness and parsimony in contributing to our understanding of politics (Morton, 1999; Ordeshook, 1992; Shepsle & Bonchek, 1997).

That said, there have been significant innovations that incorporate more realistic assumptions regarding individual behavior in formal models. One important modification is a move away from deterministic models to probabilistic models of choice. This research paper has noted that utility maximization is a key component of rational choice models, where people assign utilities to the outcomes of choices, and the rational individual chooses the highest utility outcome. When an individual is highly confident that X action will lead to Y outcome, we say that individual is operating under the condition of certainty. However, in many contexts, an individual is uncertain as to what actions lead to what outcome. Rather, individuals make choices that may or may not lead to a particular outcome. In such instances, the individual is uncertain about what happens when he or she makes a particular choice. When the individual has a good sense of the likelihood of certain outcomes (say, a 75% chance of Y and a 25% chance of Z), we say that individual is operating under the condition of risk. When an individual has no idea what will happen or what is likely to happen, he or she is faced with the condition of uncertainty (Dawes, 1988). Under probabilistic conditions, it is particularly useful to assign numbers to outcomes. Formal theory defines these as utility. Quantifying the outcomes permits the incorporation of probabilistic decisions into models of behavior. Now, rather than choosing acts that necessarily produce a particular outcome, the individual chooses among lotteries where the utility from outcomes is conditioned on the probability of that outcome occurring. This expected utility theory is an important innovation in modeling behavior. An individual may value being crowned king of the world very highly, and thus, we would assign that outcome a high score in utility. 
However, given that the probability of that outcome approaches zero, the individual’s expected utility from choosing the actions that might lead to ascension to world ruler is actually quite low. This is why a lottery jackpot in the millions of dollars does not make buying a lottery ticket the utility-maximizing choice. Indeed, buying a lottery ticket may yield a lower expected utility than spending that money on a soda or a hamburger (Shepsle & Bonchek, 1997).
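
The lottery arithmetic is easy to make concrete. In this sketch, the probabilities and utility numbers are invented purely for illustration:

```python
def expected_utility(lottery):
    """Expected utility of a lottery: sum of probability * utility over outcomes."""
    return sum(p * u for p, u in lottery)

# Hypothetical numbers: a $2 ticket with a 1-in-300-million chance at a
# jackpot the buyer values at 100 million utils, versus a sure-thing soda.
ticket = [(1 / 300_000_000, 100_000_000), (1 - 1 / 300_000_000, -2)]
soda = [(1.0, 1.5)]

print(expected_utility(ticket))  # negative: a losing bet in expectation
print(expected_utility(soda))    # 1.5
```

Even with an enormous prize, the near-zero probability drags the ticket's expected utility below that of the soda.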

Although probabilistic models may be more realistic, they still assume that individuals are rational utility maximizers. Other theorists have relaxed the assumption of rationality itself. Although this research paper cannot give them full treatment, nonlinear expected utility, prospect theory, bounded rationality, learning, and evolutionary models use near- or quasi-rational models of behavior (Morton, 1999). Bounded rationality incorporates decision makers who have incomplete information, cognitive limitations, and emotional responses that prevent or complicate utility maximization, given the limited information they do have and the complexity inherent in decision making (Jones, 2001; Simon, 1957). Herbert Simon, an early developer of boundedly rational models, says individuals “satisfice” rather than optimize over a preference ordering. An individual who satisfices doesn’t consider all possible alternatives, but rather uses a heuristic to search among a limited number of choices at hand that need not contain the optimal available decision when all preferences are considered (Simon, 1957). Prospect theory is a psychological theory of decision making in which individuals evaluate losses and gains differently. Loss aversion, where individuals fear losses more than they value gains, is a concept from prospect theory, and it generates different behavioral predictions than traditional expected utility theory (Kahneman & Tversky, 1979). Most formal models in political science, however, employ the traditional assumptions of rationality and methodological individualism in constructing their models of politics.

## V. Solving and Testing Formal Models

After a formal model has been developed, the model is solved for predictions presented as theorems or results. The implications (or solution) of the model are deduced axiomatically from the assumptions and structure of the model itself. In most formal models relevant to political science, the researcher seeks to solve the model analytically. This involves the search for equilibria (stable outcomes). Where analytical solutions are not feasible or possible, obtaining numerical solutions through computer simulation is an option. If the formal model is game theoretic, then the interactions between the players are strategic. A common solution concept in this form of model is the Nash equilibrium, in which each player’s choice is optimal given the choices of the other players, and thus no player has an incentive to change strategies within the game. Solving for the decision of the potential voter in the Riker–Ordeshook model of turnout in a two-candidate election yields the instrumental solution that the voter should vote for his or her preferred candidate only if the probability that the vote will be decisive exceeds twice the cost of voting. It furthermore yields the nontrivial result that, even if the cost of voting is very small, the voter should vote only if the probability of his or her breaking a tie exceeds 2 in 1,000. Given an election with a large number of voters (e.g., a presidential election), the Riker–Ordeshook equation yields a prediction: no vote.
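
The force behind the no-vote prediction is how quickly the probability of a decisive vote vanishes as the electorate grows. A toy calculation makes the point; it assumes every other voter flips a fair coin between two candidates, which is an illustrative simplification, not Riker and Ordeshook's own estimator:

```python
from math import exp, lgamma, log

def tie_probability(n):
    """Probability that n other voters split exactly evenly between two
    candidates when each votes 50/50. Uses log-gamma so that the huge
    binomial coefficient never overflows a float for large n."""
    if n % 2:
        return 0.0  # an odd number of other voters cannot tie
    return exp(lgamma(n + 1) - 2 * lgamma(n / 2 + 1) + n * log(0.5))

print(tie_probability(1_000))      # about 0.025: plausible in a tiny election
print(tie_probability(1_000_000))  # about 0.0008: effectively never decisive
```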

In evaluating a formal model empirically, by relating the model to data from the real world (e.g., election results, legislative votes, presidential vetoes), one can evaluate assumptions, predictions, and alternative models. The evaluation of assumptions is a validation of the relevance of the formal model: if an assumption of a model is violated frequently in the real world, then the scope of the model’s applicability is smaller. In evaluating predictions, one can look at point estimates, the values of the variables in the model when in equilibrium (models can predict one or multiple equilibria). Another method of evaluation is comparative statics, which examines how the equilibrium values of the model’s endogenous variables (dependent variables) change with the values of an exogenous variable (independent variable). Finally, one can assess models by looking at them in competition with other, contrary formulations of the political phenomenon (Morton, 1999).

## VI. Public Choice: Democratic Theory, Institutions, and Voting Paradoxes

Between 1950 and 1965, the seminal and foundational works in formal and spatial theory were published. Among them are Arrow’s (1963) Social Choice and Individual Values, Black’s (1958) The Theory of Committees and Elections, Buchanan and Tullock’s (1962) The Calculus of Consent: Logical Foundations of Constitutional Democracy, Riker’s (1962) The Theory of Political Coalitions, Olson’s (1965) The Logic of Collective Action, and Anthony Downs’s (1957) An Economic Theory of Democracy. Each represents an important contribution to formal modeling and identifies important paradoxes or puzzles of logical political behavior, collective action, choice mechanisms, and democratic theory that continue to be the subject of innovative research today.

However, the study of choice mechanisms using mathematics actually began in the 18th century, when procedural problems in electoral systems led Condorcet and Borda to investigate voting rules analytically. In the 1920s, Pareto would use mathematics to understand social phenomena (Pareto, 1927; Shepsle & Bonchek, 1997). It is Pareto’s efficiency concept that underlies Buchanan and Tullock’s (1962) calculus of consent. These thinkers paved the way for the explosion of formal and spatial political theory in the 1960s. The works of these political economists formed the pillars on which modern public choice theory was built. Public choice theory (also called social choice) focuses on macroinstitutional factors and how the structure of government interacts and often conflicts with the aggregate preferences of the public. Olson’s work on the collective action problems inherent to group formation, Downs’s conclusions regarding the rational ignorance of voters, Riker’s theory on minimum winning coalitions, and Buchanan and Tullock’s treatise on the political organization of a free society are all significant contributions worthy of attention, but this research paper focuses on only one aspect of public choice theory as an illustration: voting behavior and electoral competition.

Condorcet was one of the first to apply mathematical modeling to the problem of making a collective decision among a group of individuals with preference diversity (not everyone wants the same thing). One of the common themes in public choice theory is the normative principle that choice mechanisms should reflect democratic values. One such mechanism intended to reflect a democratic choice is first-preference majority rule, where the top preference of the largest number of individuals is given effect as the decision on behalf of the collective. But there can be multiple majorities, and which choice is made is dependent on the order in which the alternatives are presented, particularly in pairwise comparisons. If there is one choice that defeats all others in a pairwise vote, it is said to be a Condorcet winner. Condorcet identified a problem with majority rule when group preferences are intransitive. Although rational choice requires transitive individual preference orderings, this does not require that group preferences be transitive in the aggregate. When group preferences are intransitive (Condorcet’s paradox), cycling can occur. When A defeats B and B defeats C and C defeats A, there is not one majority rule election that will produce the group or democratic preference, since no such preference exists (McLean & Urken, 1993). There is no stable outcome that a majoritarian procedure can produce under these conditions.
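
Condorcet's paradox can be verified directly. In the sketch below, three voters hold the classic cyclic profile, and pairwise majority votes chase one another in a circle:

```python
def majority_winner(profiles, x, y):
    """Return the alternative preferred by a majority in a pairwise vote.
    Each profile is a ranking of alternatives, best first."""
    x_votes = sum(1 for r in profiles if r.index(x) < r.index(y))
    return x if x_votes > len(profiles) / 2 else y

# Three voters, each with a transitive individual ranking.
voters = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

print(majority_winner(voters, "A", "B"))  # A beats B (voters 1 and 3)
print(majority_winner(voters, "B", "C"))  # B beats C (voters 1 and 2)
print(majority_winner(voters, "C", "A"))  # C beats A (voters 2 and 3)
```

Every individual ranking is transitive, yet the group preference cycles: no alternative is a Condorcet winner, so whoever controls the order of pairwise votes controls the outcome.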

Condorcet examined just the special case of majority rule. Arrow (1963) looked at the problem more generally by making a few minimal assumptions about what a democratic process would entail: all preference orderings are possible (including those with indifference), Pareto optimality, the independence of irrelevant alternatives, and nondictatorship. He asked whether it was possible to construct a method that would aggregate those preferences in such a way as to satisfy these conditions. Arrow’s theorem asserts that no such choice mechanism is possible. Rational democracy is not merely impractical; it is impossible. What this means practically is that there is a trade-off between having a rational system that translates preferences into policy and the concentration of political power. Put another way, dictators are good for consistency. This is not to say that all social aggregation is unfair or irrational; rather, no mechanism can guarantee such an outcome in any given context. Arrow’s result shows that a democratic process yielding socially coherent policy is a much more difficult proposition than had been thought.

## VII. The Spatial Theory of Voting

As noted earlier, one of the major innovations of formal theory was the use of geometric space to represent political choices. Let us set out some of the basics of spatial theory. The standard spatial model depicts voting with Euclidean preferences in a one- or two-dimensional space. This means that political choice is represented as the choice of some point on a line or plane over which all the actors have preferences. Specifically, each actor, j, has an ideal point (top preference) on the line or space, prefers a point closer to this ideal point to one more distant from it, and is indifferent between two equally distant points. In the two-dimensional case, an actor’s indifference curves are concentric circles centered on his or her ideal point. Actor j’s preference set Pj(x) is the set of points j prefers to x. Furthermore, in most spatial models, preference orderings are assumed to be single peaked: utility falls monotonically with distance from the ideal point.

Although single-peaked preferences are helpful in producing social consensus in the absence of unanimity, Duncan Black (1958) demonstrates that they are also an important aspect of the spatial representation of politics. If one takes a group of individuals (voters in an election, legislators on a committee) who are considering a policy along one dimension (say, candidates on an ideological dimension or the amount of tax dollars to budget for defense spending), and their utility functions are single peaked, then the outcome of this process is determined by the median voter—specifically, the member located at the center of the group on the relevant dimension determines the outcome. Geometrically speaking, Black’s median voter theorem shows that the ideal point of the median voter has an empty win set. A win set W(x) is the set of all points that beat (are collectively preferred to) x under a decision rule. If the ideal point of the median voter has an empty win set, then the median voter’s preference commands a majority against all other possible points on the policy dimension.
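
Black's result can be checked by brute force. The sketch below uses made-up ideal points for a five-member committee and pits the median member's ideal point against every integer alternative on the dimension:

```python
def pairwise_winner(ideal_points, x, y):
    """With Euclidean (single-peaked) preferences on a line, each voter
    backs whichever of x and y is strictly closer to his or her ideal point."""
    x_votes = sum(1 for i in ideal_points if abs(i - x) < abs(i - y))
    return x if x_votes > len(ideal_points) / 2 else y

ideals = [10, 30, 50, 70, 90]              # five committee members' ideal points
median = sorted(ideals)[len(ideals) // 2]  # the median member sits at 50

# The median ideal point defeats every challenger in a pairwise vote,
# so its win set is empty.
print(all(pairwise_winner(ideals, median, z) == median
          for z in range(0, 101) if z != median))  # True
```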

The commanding stature of the median voter was first suggested by Harold Hotelling (1929) in predicting the geographic congregation of firms at one location, such as hot dog vendors on a street. Although Black (1958) studied committees, Downs (1957) adopted Hotelling’s proximity model in his now famous median voter theorem (MVT) for elections. The theorem states that the position of the median voter in a single dimension cannot be defeated in a pairwise vote, given full turnout and sincere voting (individuals vote according to their true preference ordering rather than trying to game the vote by strategically voting for a less preferred alternative).

Both Downs’s and Black’s theorems suggest there is a centripetal force at work in politics. Downs predicted that party (or candidate) platforms would converge to the median voter’s policy preference. It is called a proximity model because Downs assumed that voters used the spatial distance between themselves and candidates to determine whom they should vote for. Rational voters in this model vote for the candidate or party with a platform closest to their most preferred policies. Parties converge because that’s where the votes are (Downs, 1957). But does this model accurately depict how parties behave in real elections?
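
The convergence logic can be illustrated with a toy two-candidate contest under sincere proximity voting (the electorate and the rival platforms below are hypothetical):

```python
def votes_for(voters, a, b):
    """Number of sincere proximity voters who back platform a over platform b."""
    return sum(abs(v - a) < abs(v - b) for v in voters)

voters = [1, 2, 4, 5, 6, 8, 9]             # ideal points of seven voters
median = sorted(voters)[len(voters) // 2]  # 5

# A candidate standing at the median beats every rival platform tested here,
# so a vote-seeking rival's best reply is to move toward the median.
for rival in [0, 2, 4.5, 7, 10]:
    print(rival, votes_for(voters, median, rival), votes_for(voters, rival, median))
```

Because the candidate at the median always wins these pairwise contests, the only equilibrium under the model's assumptions is both platforms at the median, which is the Downsian convergence prediction.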

There are a variety of complications that can prevent Downsian convergence of parties. Empirically speaking, evidence from a plethora of elections at home and abroad shows parties and candidates failing to converge to a single policy point, or even to a narrow band of points on a policy continuum. Divergence appears to be the norm rather than the exception (Morton, 1993). Multiple dimensions are also a complication for the MVT (Riker, 1980). The Downsian model assumes party competition occurs in a unidimensional space, but political competition can occur along multiple dimensions. After all, the cost of a policy is only one consideration when it comes to deciding how to authoritatively allocate resources. Fairness, effectiveness, efficiency, and other considerations can come into play. Ideology is one way to rate candidates, but what about affect (likeability), trust, and performance considerations? Nonpolicy attributes, or valence, may influence election outcomes (Groseclose, 2001).

Empirical and theoretical issues with the MVT have led scholars to develop more sophisticated formal models of party competition. Scholars have extended the spatial model of voting developed by Downs into multiple dimensions using Euclidean utility functions (Enelow & Hinich, 1984; Hinich & Pollard, 1981). Plott’s (1967) theorem shows that a median voter result is possible in multiple dimensions but that the conditions for it are severe: it depends on radial symmetry in the distribution of voters’ ideal points. McKelvey’s (1976, 1979) chaos theorem asserts that, outside Plott’s special case, no point in a multidimensional spatial setting has an empty win set under majority rule. In other words, starting from an arbitrary point in the space, a majority can move us to any other point in the space. With no Condorcet winner, policy cycles endlessly (McKelvey, 1976). However, is this chaos ephemeral? Consider that policy cycling is rarely observed in legislatures. As Gordon Tullock (1981) famously asks, “Why so much stability?” It remains a point of contention, though Shepsle (1979) suggests that institutions impose policy stability through restrictive rules. Institutions may impose stability, but this merely relocates the choice problem (to rules instead of policy). Ultimately, it is an open question requiring further theory and study.
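
A concrete majority cycle shows how quickly transitivity fails in two dimensions. In this hypothetical three-voter configuration (the ideal points are this author's illustration, chosen only to break Plott's symmetry condition), alternative a beats b, b beats c, and c beats a:

```python
import math

def majority_prefers(voters, x, y):
    """True if a strict majority of voters is closer to x than to y."""
    return sum(math.dist(v, x) < math.dist(v, y) for v in voters) > len(voters) / 2

voters = [(0.5, -0.5), (3.5, 0.5), (1.5, 2.5)]   # asymmetric ideal points
a, b, c = (0, 0), (4, 0), (2, 3)

# Majority rule cycles: a beats b, b beats c, and c beats a,
# so no Condorcet winner exists among these three alternatives.
print(majority_prefers(voters, a, b),
      majority_prefers(voters, b, c),
      majority_prefers(voters, c, a))   # True True True
```

Each pairwise vote is won 2-1 by a different majority coalition, which is exactly the intransitivity McKelvey exploits to show that an agenda setter can steer the outcome anywhere in the space.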

Theories of candidate divergence suggest alternative specifications such as nonnormal voter distributions, directional logic, third-party entry, valence, and turnout variance as bases for moving away from the stylized median voter model. Can the distribution of voters produce platform divergence on its own? Intuitively, where multiple modes exist, candidate divergence may seem optimal, and Downs (1957) himself anticipated this implication, arguing that the location of electoral equilibria would depend on the shape of the distribution of citizen preferences. On this point, at least, Downs was wrong. The pure MVT with complete turnout and sincere voting predicts that the median voter is, in fact, a Condorcet winner: No position defeats the ideal point of the median voter in a pairwise vote, irrespective of the shape of the distribution (Black, 1958).
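
The distributional claim is easy to check directly. With a sharply bimodal (hypothetical) electorate and full sincere turnout, the median ideal point still defeats every challenger on a grid of alternatives:

```python
def majority_prefers(voters, a, b):
    """True if a strict majority of voters is closer to a than to b."""
    return sum(abs(v - a) < abs(v - b) for v in voters) > len(voters) / 2

# Two clusters at the poles plus one centrist: a bimodal distribution.
voters = [0, 0.5, 1, 1.5, 2] * 2 + [8, 8.5, 9, 9.5, 10] * 2 + [5]
median = sorted(voters)[len(voters) // 2]   # 5

# Even here the median is a Condorcet winner: no challenger commands
# a majority against it, despite the hollowed-out center.
challengers = [x / 2 for x in range(0, 21) if x / 2 != median]
print(any(majority_prefers(voters, c, median) for c in challengers))  # False
```

Any challenger draws at most one of the two poles, never a majority, which is why distributional shape alone cannot overturn the pure MVT under full turnout and sincere voting.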

Other scholars have taken issue with the proximity calculus in which voters choose the candidate or party closest to them. Rabinowitz (1989) argues that platform space cannot be represented as an ordered and continuous set of policy alternatives. Rather than a policy continuum, the directional theory of voting treats policy alternatives as dichotomous, and thus candidates are judged by their policy direction and intensity. Finally, the prospect of entry by a third party may cause parties to diverge from the median to discourage a third-party challenge on their extremes in a polarized electorate (Fiorina, 1999; Palfrey, 1984). Hinich and Munger (1994) develop a theory of ideology that permits party divergence. Incorporating previous modifications to the MVT, such as incomplete information and uncertainty about voter and candidate policy locations, Hinich and Munger argue that the creation and maintenance of an ideology by parties is a necessary component of political competition. In a political environment where Republicans have become much more consistently and strongly conservative (and Democrats likewise more consistently liberal), vote-seeking parties rationally diverge to create a credible ideology that they can sell to their constituents. Establishing an ideological flag at one of the poles of a bimodal distribution can account for platform divergence.
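
The contrast between the proximity and directional calculi can be seen in a one-dimensional sketch (the positions below are illustrative; the directional score follows the Rabinowitz-style form of multiplying voter and candidate positions measured relative to a neutral point):

```python
def proximity_score(voter, cand):
    """Proximity utility: higher when the candidate is closer to the voter."""
    return -abs(voter - cand)

def directional_score(voter, cand, neutral=0.0):
    """Directional utility: rewards shared direction and intensity
    relative to a neutral point."""
    return (voter - neutral) * (cand - neutral)

# A mildly conservative voter (+1) rates a moderate (+0.5) and an
# intense (+3) candidate, both on the voter's side of the neutral point.
voter = 1.0
print(proximity_score(voter, 0.5), proximity_score(voter, 3.0))      # -0.5 -2.0  (proximity favors the moderate)
print(directional_score(voter, 0.5), directional_score(voter, 3.0))  # 0.5 3.0    (direction favors the intense candidate)
```

The two theories thus make opposite predictions for this voter: proximity rewards moderation, while directional logic rewards candidates who take the voter's side of an issue intensely.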

The MVT has received a great deal of attention in political science and, as the reader has seen, a great deal of criticism. Although the MVT’s point predictions on turnout and convergence have been falsified, the comparative statics of the model have been validated in observed elections. The fact that most parties in most elections do not converge to a single point on the policy dimension is not a failure of the MVT as a model of politics, let alone a failure of formal theory. There is a centripetal force drawing parties to the center in American politics, and Downs (1957) gives a parsimonious explanation of why that is. Furthermore, the plethora of theoretical modifications and empirical tests extending and critiquing the MVT has greatly advanced political scientists’ understanding of party behavior in elections. This is a key point: the efficacy of formal modeling is not dependent on the success or failure of one model. Indeed, one can have competing formal models with polar opposite predictions about the same phenomenon. Formal theory is a deductive method of social scientific analysis, and analyses using formal theory stand or fall on their own merits.

## VIII. Conclusion

The major paradigms of formal and spatial theory in social choice, voting, institutions, and political behavior have spawned decades of empirical and theoretical research as well as countless additional, alternative, and contrary models of political decision making. Many of the early formal models discussed here have been either falsified or significantly modified to account for empirical deficiencies. Riker’s (1962) theory of minimum winning coalitions doesn’t describe many legislative contexts, and the behavior of legislators often violates his theoretical expectations. Downs’s (1957) turnout prediction has been falsified, and his prediction of party convergence at the median has been serially violated in actual elections. As Plott (1967) observes, Downs’s and Black’s median voter theorems are problematic in multiple dimensions. The cycling and instability of social choice identified by McKelvey (1976, 1979) is not a consistent characteristic of government institutions, causing some political scientists to puzzle over the apparent stability in these institutions and their incorporated choice mechanisms.

These empirical failures and the ad hoc modifications aimed at rescuing them have led some scholars to suggest that political behavior is inherently irrational or, at minimum, that there is a poverty of realistic and empirically supported rational choice models, rendering them of little use or relevance (Green & Shapiro, 1994). This is a mistake. It ignores important empirical validations of formal models, fetishizes point predictions over comparative statics, and sets up a straw man of rational choice theory when there is not one but a multitude of formal, spatial, and rational choice theories. The failure of one or more formal models does not prove that formal theory has little utility in empirical investigations of politics. Those failures spur puzzle solving, the development of better and alternative models, and the exploration of new and innovative empirical tests of model predictions.

One can see this in the variety of formal models, evidence, and arguments directly responding to the Downs–Hotelling proximity model of party platform convergence. The Riker–Ordeshook model of turnout considered at the beginning of this research paper modified the traditional Downsian turnout model by incorporating a new variable: a psychic benefit from participation. Political scientists have learned quite a bit from the so-called failure of the Downsian turnout and proximity models. They now know that instrumental calculation (the cost of voting weighed against the probability of affecting the outcome) is insufficient to spur a voter to participate. Rather, the experiential benefit characterized as a psychic civic-duty benefit by Riker and Ordeshook (1973) is the decisive consideration. Formal modelers have incorporated alienation, abstention, and other modifications to account for positive turnout in elections. Thus, an apparent formal model failure has actually yielded numerous and significant contributions to our understanding of voting behavior. These and many other formal treatments of politics are real and important contributions to social scientific knowledge. For political scientists, formal and spatial models are essential tools for understanding and predicting political behavior and phenomena.

### Bibliography:

- Arrow, K. J. (1963). Social choice and individual values (2nd ed.). New Haven, CT: Yale University Press.
- Barry, B. (1978). Sociologists, economists and democracy. Chicago: University of Chicago Press.
- Binmore, K. (1990). Essays on the foundations of game theory. Oxford, UK: Blackwell.
- Black, D. (1958). The theory of committees and elections. Cambridge, UK: Cambridge University Press.
- Buchanan, J. M., & Tullock, G. (1962). The calculus of consent: Logical foundations of constitutional democracy. Ann Arbor: University of Michigan Press.
- Dawes, R. M. (1988). Rational choice in an uncertain world. Orlando, FL: Harcourt Brace.
- Downs, A. (1957). An economic theory of democracy. New York: Harper & Row.
- Elster, J. (1986). Rational choice. New York: New York University Press.
- Enelow, J. M., & Hinich, M. J. (1984). The spatial theory of voting: An introduction. Cambridge, UK: Cambridge University Press.
- Fiorina, M. P. (1999, October). Whatever happened to the median voter? Paper presented at the Massachusetts Institute of Technology Conference on Parties and Congress, Cambridge.
- Green, D. P., & Shapiro, I. (1994). Pathologies of rational choice theory: A critique of applications in political science. New Haven, CT: Yale University Press.
- Groseclose, T. (2001). A model of candidate location when one candidate has a valence advantage. American Journal of Political Science, 45(4), 862-886.
- Hinich, M. J., & Munger, M. C. (1994). Ideology and the theory of political choice. Ann Arbor: University of Michigan Press.
- Hinich, M. J., & Pollard, W. (1981). A new approach to the spatial theory of elections. American Journal of Political Science, 25, 323-341.
- Hotelling, H. (1929). Stability in competition. Economic Journal, 39, 41-57.
- Jones, B. D. (2001). Politics and the architecture of choice: Bounded rationality and governance. Chicago: University of Chicago Press.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
- McKelvey, R. D. (1976). Intransitivities in multidimensional voting models and some implications for agenda control. Journal of Economic Theory, 12, 472-482.
- McKelvey, R. D. (1979). General conditions for global intransitivities in formal voting models. Econometrica, 47(5), 1085-1112.
- McLean, I., & Urken, A. B. (1993). Classics of social choice. Ann Arbor: University of Michigan Press.
- Morton, R. B. (1993). Incomplete information and ideological explanations of platform divergence. American Political Science Review, 87, 382-392.
- Morton, R. B. (1999). Methods & models: A guide to the empirical analysis of formal models in political science. Cambridge, UK: Cambridge University Press.
- Olson, M. (1965). The logic of collective action. Cambridge, MA: Harvard University Press.
- Ordeshook, P. C. (1992). A political theory primer. New York: Routledge.
- Palfrey, T. R. (1984). Spatial equilibrium with entry. Review of Economic Studies, 51, 139-156.
- Palfrey, T. R. (1989). A mathematical proof of Duverger’s law. In P. C. Ordeshook (Ed.), Models of strategic choice in politics (pp. 69-92). Ann Arbor: University of Michigan Press.
- Pareto, V. (1927). Manuel d’économie politique [Manual of political economics]. Paris: Marcel Giard.
- Plott, C. R. (1967). A notion of equilibrium and its possibility under majority rule. American Economic Review, 57, 787-806.
- Rabinowitz, G., & Macdonald, S. E. (1989). A directional theory of issue voting. American Political Science Review, 83(1), 93-121.
- Riker, W. H. (1962). The theory of political coalitions. New Haven, CT: Yale University Press.
- Riker, W. H. (1980). Implications from the disequilibrium of majority rule for the study of institutions. American Political Science Review, 74, 432-446.
- Riker, W. H., & Ordeshook, P. C. (1968). A theory of the calculus of voting. American Political Science Review, 62, 25-42.
- Riker, W. H., & Ordeshook, P. C. (1973). An introduction to positive political theory. Englewood Cliffs, NJ: Prentice Hall.
- Shepsle, K. A. (1979). Institutional arrangements and equilibrium in multidimensional voting models. American Journal of Political Science, 23(1), 27-59.
- Shepsle, K. A., & Bonchek, M. S. (1997). Analyzing politics: Rationality, behavior, and institutions. New York: W. W. Norton.
- Simon, H. (1957). Models of man: Social and rational; mathematical essays on rational human behavior in a social setting (3rd ed.). New York: Free Press.
- Tullock, G. (1981). Why so much stability? Public Choice, 37(2), 189-204.