Comparative Studies Research Paper


1. Definition

Comparisons are essential in any science to establish systematic similarities and differences between observed phenomena and, possibly, to develop and test hypotheses and theories about their causal relationships. Whereas this broad definition of comparative inquiry comprises several distinct methods, including case studies, statistical analysis, and experimental research, the comparative method in the narrower sense of the term characteristically involves a limited number of cases (‘small N’), but usually very complex phenomena and relationships often at the ‘macro’-level of entire societies or political systems. In this latter sense this method and its respective designs will be discussed here.



2. Intellectual Context

Comparative procedures in the historical and political sciences date back to antiquity, but their methodological foundations were only developed systematically in the natural sciences of the eighteenth and nineteenth centuries, based on studies, for example, by Linné in botany, Cuvier in anatomy, or Bernard in experimental medicine. The logical foundations of the method were laid by Hume and, in particular, by J. S. Mill’s (1843) ‘canons.’ The first of these, the ‘Method of Agreement,’ eliminates all circumstances except the single one the positive instances share: ‘If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree is the cause (or effect) of the given phenomenon’ (p. 390). By contrast, the ‘Method of Difference’ contrasts an instance in which the phenomenon occurs with one in which it does not, all other circumstances being identical: ‘If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon’ (p. 391).

Both methods thus are concerned with the systematic matching and contrasting of cases in order to establish common causal relationships by eliminating all other possibilities. Both procedures are, however, somewhat extreme in the sense that they attempt to establish a single common cause, or its absence, by controlling all other possibilities and the entire environment. The method of difference can, to some extent, be approximated in ‘pure’ laboratory situations where such experiments may be conducted. The method of agreement is more applicable in ‘field research’ situations, but then the question of how to control the ‘environment’ becomes crucial.
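
The elimination logic of these two canons can be made concrete with a small illustration. The following sketch (in Python, with entirely invented cases and circumstance labels) is a toy rendering of the logic, not a research tool: the Method of Agreement retains only the circumstance shared by all positive instances, and the Method of Difference isolates the single circumstance separating an otherwise identical positive and negative instance.

```python
# A minimal sketch of Mill's Methods of Agreement and Difference applied to
# hypothetical, dichotomized observations. Each case records which candidate
# circumstances (A, B, C, ...) were present and whether the phenomenon occurred.

cases = {
    # case name: ({circumstances present}, phenomenon observed?)
    "case1": ({"A", "B", "C"}, True),
    "case2": ({"A", "D", "E"}, True),
    "case3": ({"A", "C", "E"}, True),
}

def method_of_agreement(cases):
    """Circumstances shared by all instances in which the phenomenon occurs."""
    positives = [circ for circ, outcome in cases.values() if outcome]
    # everything not common to all positive instances is eliminated
    return set.intersection(*positives) if positives else set()

def method_of_difference(positive, negative):
    """Circumstances that distinguish a positive from an otherwise identical
    negative instance; ideally a single item remains."""
    return (positive - negative) | (negative - positive)

print(method_of_agreement(cases))                         # {'A'}
print(method_of_difference({"A", "B", "C"}, {"B", "C"}))  # {'A'}
```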

Mill, therefore, devised a combination of the two which he called the ‘Joint Method of Agreement and Difference’ or the ‘Indirect Method of Difference’ which consists of a double employment of the Method of Agreement, once before and once after a certain event: ‘If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance, the circumstance in which alone the two sets of instances differ, is the effect, or the cause, or an indispensable part of the cause, of the phenomenon’ (Mill 1974, p. 396). This ‘quasi-experimental’ design is, however, as Mill himself realized, less cogent than the pure Method of Difference.

A fourth method, which is a derivation of the Method of Difference, is the ‘Method of Residues’. When cause and effect of other factors have been established by previous experiments, remaining variation in the outcome can be attributed to the ‘residues’ in the original experiments: ‘Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents’ (Mill 1974, p. 398).

Finally, for instances where a certain common external factor cannot be entirely eliminated by any of these procedures, its impact may at least be systematically varied from one situation to another (e.g., altitude above sea level, or heat or air pressure in certain experiments in physics) and then its varying influence assessed. This is the ‘Method of Concomitant Variation’: ‘Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation’ (Mill 1974, p. 401).
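
In modern terms, the Method of Concomitant Variation corresponds roughly to observing covariation between the non-removable factor and the outcome. A minimal sketch, with invented measurements, might simply compute a correlation coefficient:

```python
# A rough numerical reading of the Method of Concomitant Variation: if a factor
# cannot be removed, observe whether the outcome co-varies with it.
# The numbers below are invented for illustration.
import numpy as np

altitude = np.array([0, 500, 1000, 1500, 2000, 2500])           # external factor
boiling_point = np.array([100.0, 98.3, 96.7, 95.0, 93.4, 91.8])  # observed outcome

r = np.corrcoef(altitude, boiling_point)[0, 1]
print(f"correlation: {r:.3f}")  # strongly negative -> concomitant variation
```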

3. Changes In Emphasis

Mill’s ‘canons’ imply rather rigid ‘positivist’ assumptions about the relationships of cause and effect and the state of valid theory in any given area of research. These are: (a) contiguity between the presumed cause and effect; (b) temporal precedence, in that the cause had to precede the effect in time; and (c) constant conjunction, in that the cause had to be present whenever the effect was obtained (cf. Cook and Campbell 1979, p. 10). On the whole, such relatively mechanical and deterministic relationships can only rarely be established even in the ‘hard’ sciences.

By themselves, therefore, these methods do not produce any new discoveries unless some truly relevant factors have been included. Similarly, they cannot prove any causal relationship unless a clear and complete (preconceived) model of such links has been tested and other factors have been sufficiently ‘controlled.’ But even such a model may eventually be falsified and integrated into a more comprehensive theory. These methods, then, may not produce any ‘positive’ results. They constitute, however, a valuable step towards eliminating many irrelevant factors and approximating the causal conditions in the ‘real’ world. In this sense they correspond to Popper’s (1934) famous principle of ‘falsification.’ Or, as it was expressed in another ‘classic’ of this period, Mill’s methods are nevertheless ‘… of undoubted value in the process of attaining truth. For in eliminating false hypotheses they narrow the field in which true ones may be found. And even where these methods may fail to eliminate all irrelevant circumstances, they enable us with some degree of approximation to so establish the conditions for the occurrence of a phenomenon, that we can say one hypothesis is logically preferable to its rivals’ (Cohen and Nagel 1934, p. 267).

If these qualifications have to be made from a general epistemological point of view, they apply even more to the social and behavioral sciences, where ‘objective’ facts and their ‘subjective’ perceptions are closely interlinked and may change in the course of time. Mill himself was aware of the much higher complexity in the social sciences and the inapplicability of his methods in a very strict sense: in ‘politics and history … Plurality of Causes exists in almost boundless excess, and effects are, for the most part, inextricably interwoven with one another. To add to the embarrassment, most of the inquiries in political science relate to the production of effects of a most comprehensive description, such as the public wealth, public security, public morality, and the like; results liable to be affected directly or indirectly either in plus or in minus by nearly every fact which exists, or event which occurs, in human society’ (Mill 1974, p. 452, emphases in the original). He, therefore, found the Method of Difference inapplicable and the Methods of Agreement and Concomitant Variation inconclusive in the social sciences (pp. 881 ff.). Similarly, the Method of Residues requires a much more developed state of theory than is usually available in the social sciences (pp. 884 ff.).

However, in the realm of the human and social sciences, Mill was willing to adopt a less deterministic and more probabilistic perspective:

Inasmuch as many of those effects which it is of most importance to render amenable to human foresight and control are determined, like the tides, in an incomparably greater degree by general causes, than by all partial causes taken together; depending in the main on those circumstances and qualities which are common to all mankind, or at least to large bodies of them, and only in a small degree on the idiosyncracies of organisation or the peculiar History of individuals; it is evidently possible with regard to all such effects, to make predictions which will almost always be verified, and general propositions which are almost true. And whenever it is sufficient to know how the great majority of the human race, or of some nation or class of persons, will think, feel, and act, these propositions are equivalent to universal ones. For the purpose of political and social science this is sufficient (Mill 1974, p. 847, emphases in the original).

4. Emphases In Current Theory And Research

Two basic problems remain in any research of this kind, namely the questions of what have been called the ‘internal’ and ‘external’ validity of any scientific findings. The first refers to the ‘approximate validity with which we infer a relationship between two variables is causal,’ the second to the ‘approximate validity with which we can infer that the presumed causal relationship can be generalised to and across alternate measures of the cause and effect as well as across different types of persons, settings and times’ (Cook and Campbell 1979, p. 37). Even though both kinds of validity can only be approximated and remain subject to the possibility of further falsification, they can be established with a reasonable degree of confidence with the help of procedures of randomization.

As far as internal validity is concerned, Mill’s original procedures can be enhanced by first randomly selecting two groups of subjects and then administering a certain stimulus or treatment to one of them. If a significant effect can be observed in this group, then the stimulus may be considered to have been the ‘cause.’ This corresponds to Mill’s Method of Difference, except that all kinds of external influences can now be controlled, within certain limits, by the ‘equalizing’ procedure of random selection. This is the realm of ‘experimental research’ proper. Similarly, if a random sample has been drawn from a larger universe of persons, as for example in survey research, the relationships which can be observed in the sample may be generalized to the larger universe from which it was drawn. This is the realm of ‘statistical’ analysis.
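
The logic of such a randomized comparison can be sketched as follows. The data are invented, and the permutation test used here is just one simple way of assessing whether the observed group difference could plausibly have arisen by chance under random assignment alone.

```python
# A minimal sketch of the randomization logic: random assignment to treatment
# and control, then a permutation test of the observed mean difference.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# random assignment: shuffle subject indices and split them into two groups
subjects = rng.permutation(40)
treatment_ids, control_ids = subjects[:20], subjects[20:]

# hypothetical outcome measurements after the stimulus was given to one group
treatment = rng.normal(loc=1.0, scale=1.0, size=20)
control = rng.normal(loc=0.0, scale=1.0, size=20)

observed = treatment.mean() - control.mean()

# permutation test: how often does random relabelling produce a difference this large?
pooled = np.concatenate([treatment, control])
perm_diffs = []
for _ in range(10_000):
    shuffled = rng.permutation(pooled)
    perm_diffs.append(shuffled[:20].mean() - shuffled[20:].mean())
p_value = np.mean(np.abs(perm_diffs) >= abs(observed))
print(f"mean difference: {observed:.2f}, permutation p-value: {p_value:.4f}")
```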

Unfortunately, however, neither procedure of randomization is feasible when we deal with the characteristic ‘many variables–small N’ situation of much current social research, quite apart from the fact that many ‘real’ experiments with groups of human beings are not possible for practical, political, or ethical reasons. We are thus left with Mill’s original predicaments and have to cope with them in other ways. This also depends, of course, on the kinds of cases and problems we are dealing with and on the state of theory in any given area of investigation. In social research, ‘cases’ often refer to larger distinct groups of people, e.g., certain ethnic groups or entire societies and states, which have a proper name and in which we are interested because of their intrinsic relevance and not just as anonymously selected units out of a multitude of cases.

The kinds of problems we are dealing with vary according to the respective emphasis of the different social and behavioral sciences. What they have in common is the ‘multidimensionality’ (i.e., comprising ‘objective,’ ‘subjective,’ and often also normative dimensions) and the ‘malleability’ (i.e., possible changes over time and probabilistic internal relationships) of their subject matter (Almond and Genco 1977). The ways such relationships are perceived depend on the ‘social,’ i.e., also theoretical, constructions of this reality. These, by necessity, can only be approximations to their ‘real’ (and changing!) nature, but they have concrete referents and are not, in our view, entirely arbitrary whims of imagination, as some of the more extreme ‘post-modern’ authors claim. What to compare thus differs from one discipline to the other and depends on the respective state of theory.

By contrast, here we are concerned with how to compare under particularly difficult conditions concerning problems of internal and external validity in the social sciences. From this perspective, a number of ways can be conceived to cope with the prevailing ‘small N–complex cases’ dilemma. First of all, the number of cases may be increased, including for example historical cases or investigating them at some lower level of analysis, e.g. certain districts rather than entire nation-states. But even then neither randomly selected control groups nor random sampling for statistical purposes are possible. In any case, we usually are interested in concrete societies and their problems and not just in any ‘anonymous’ units of investigation. A second strategy concerns the purposeful selection of cases and a third, the reduction of complexity by means of combining several variables or employing better theory.

At the outset of any investigation an area of homogeneity must be defined which establishes boundaries for the selection of cases. Cases must ‘parallel each other sufficiently’ and be comparable along certain specified dimensions. This is what is meant by the common saying that ‘apples and oranges’ should not be compared. In this regard, the subject matter and the problem we are interested in must first be specified for the comparison to make any sense. Thus, these fruits may well be compared concerning their sugar or water content, their nutritional value, etc., once these dimensions have been made explicit as the tertium comparationis. The specification of relevant cases at the start of an investigation thus amounts to an explicit or implicit hypothesis that the cases selected are in fact alike enough to permit comparisons. The primary consideration in delimiting cases for a small-N comparative study is the dependent variable. For example, the breakdown or survival of democratic regimes in interwar Europe presupposes the prior existence of some form of democracy in the selected cases. In addition, some limitations in time and space can also enhance the homogeneity and thus the comparability of the cases examined. For example, certain kinds of colonial or other forms of external domination or religious-cultural influences may be useful criteria for selecting a specific group of cases.

A second consideration concerns the extent of diversity within the selected universe. In this regard, a maximum of heterogeneity for a minimum number of cases should be achieved. Taking the above example again, both survivors and breakdowns of democracy can be considered, and among the latter perhaps some more specific variants such as fascist vs. more generally authoritarian outcomes.

Within this universe two opposite strategies now become possible. One is the ‘most similar,’ the other the ‘most different’ systems design. These have been explicitly formulated and discussed by Przeworski and Teune: ‘The most similar systems design is based on a belief that a number of theoretically significant differences will be found among similar systems and that these differences can be used in explanation’ (1970, p. 39). By matching these similar cases as much as possible, most of the variables can be ‘controlled.’ Mill’s ‘indirect method of difference’ where different outcomes may be attributed to the remaining factors which differentiate these cases now becomes applicable. Even though only rarely just a single factor will remain to which the effect can be attributed, at least many others can be excluded, and the remaining ones can be examined more closely in a theoretically guided qualitative manner. The ‘internal validity’ of the observed relationships can thus be greatly enhanced.

The opposite strategy is the ‘most different’ systems design: this design, ‘… which seeks maximal heterogeneity in the sample of systems, is based on a belief that in spite of intersystemic differentiation, the populations will differ with regard to only a limited number of variables or relationships’ (Przeworski and Teune 1970, p. 39). This ‘contrasting’ of cases thus eliminates all factors across the observed range which are not linked to an identical outcome. In this way, more ‘universal’ explanations are sought as far as the selected area of homogeneity is concerned. To some extent, thus, the ‘external validity’ of some hypothesized causal relationship can be extended and the range of its applicability including certain limitations in time and space can be established.

Until recently, these designs had not, however, been fully operationalized. Several problems must be addressed which follow from the necessity of measuring the proximity or remoteness of pairs of cases in the heterogeneous, multidimensional space defined by the independent variables. These distance measures provide the basis for determining the ‘most different’ and ‘most similar’ pairs or groups of cases with regard to the respective outcome. The two main issues are: (a) choosing from among a variety of different ways of measuring the distance between pairs of cases in a multidimensional space; and (b) assigning relative weights to the variables that define this space. In this way the complexity of the data can be retained as much as possible in the complexity of the proximity measure. These procedures then make it possible to identify ‘most different cases with the same outcome’ (MDSO) and the ‘most similar cases with different outcomes’ (MSDO). The causes of different outcomes may now be assumed to lie in the commonalities that remain among MDSO cases and the differences that distinguish MSDO cases (for such operationalizations see, e.g., De Meur and Berg-Schlosser 1996).
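
The following sketch illustrates the basic idea behind such distance measures; it is not the published operationalization. Cases are described by dichotomized variables with equal weights, distances are simple Hamming counts, and the case names and values are invented placeholders.

```python
# A simplified sketch of the MSDO/MDSO logic (not the De Meur/Berg-Schlosser
# operationalization): Hamming distances between dichotomized cases, equal
# variable weights assumed, data invented.
from itertools import combinations

cases = {
    # case: (tuple of dichotomized condition variables, outcome)
    "FIN": ((1, 0, 1, 1), 1),
    "SWE": ((1, 0, 1, 0), 1),
    "GER": ((0, 1, 1, 0), 0),
    "ITA": ((0, 1, 0, 0), 0),
    "AUT": ((1, 1, 1, 0), 0),
}

def hamming(a, b):
    """Number of conditions on which two cases differ."""
    return sum(x != y for x, y in zip(a, b))

msdo, mdso = [], []
for (n1, (v1, o1)), (n2, (v2, o2)) in combinations(cases.items(), 2):
    d = hamming(v1, v2)
    if o1 != o2:
        msdo.append((d, n1, n2))   # candidates: most similar, different outcomes
    else:
        mdso.append((d, n1, n2))   # candidates: most different, same outcome

print("MSDO pair:", min(msdo))  # smallest distance across outcomes
print("MDSO pair:", max(mdso))  # largest distance within an outcome
```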

Another strategy for dealing with the ‘many variables—few cases’ dilemma concerns the variables side. Here, for reasons concerning the ‘internal validity’ of Mill’s methods, one should take a rather comprehensive approach in order not to leave out any possibly relevant variables. This, of course, depends on the state of theory in any given area of investigation, but usually in the social sciences there are several contending or supplementary hypotheses which should all be considered. Furthermore, not only bivariate relationships with an independent variable and the respective outcome, but also the possibility of more complex causal problems, e.g., ‘multiple’ (several variables combined have a certain effect) or ‘conjunctural’ ones (alternative combinations of different variables may lead to the same outcome) must be taken into account.

One technique is to look for ‘constants’ across the observed cases. If one variable turns out to be consistently linked to a particular outcome, it becomes a strong candidate for inclusion in any explanation as a necessary but perhaps not sufficient condition. A related approach focuses on correlations between the independent variables and the dependent variable. Like the search for constants, the examination of correlations may provide strong hints. More sophisticated techniques include discriminant analysis and logistic regression (applied to dichotomous outcome variables). A relatively large number of variables can be simultaneously treated in this way. These techniques are typically used to assess the net, additive contribution of each independent variable to some outcome. Variables that are uncorrelated with other variables but strongly correlated with the outcome are favored by these methods. Thus, these techniques are biased toward causal factors that act independently, not conjuncturally.
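
These two screening steps, searching for constants across positive cases and inspecting bivariate correlations with the outcome, can be illustrated with a short sketch on invented, dichotomized data (the variable names are placeholders, not findings):

```python
# A sketch of two screening steps described above: (1) conditions constant across
# all positive cases (candidate necessary conditions) and (2) simple bivariate
# correlations of each condition with the outcome. Data are invented.
import numpy as np

conditions = {
    "literacy":  np.array([1, 1, 1, 0, 1, 1]),
    "feudalism": np.array([0, 0, 1, 1, 1, 0]),
    "urbanized": np.array([1, 0, 1, 0, 0, 1]),
}
outcome = np.array([1, 1, 0, 0, 0, 1])  # e.g., survival of democracy

for name, values in conditions.items():
    constant = bool(np.all(values[outcome == 1] == 1))  # present in every positive case?
    r = np.corrcoef(values, outcome)[0, 1]
    print(f"{name:10s} necessary-candidate={constant}  r={r:+.2f}")
```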

These statistical approaches are useful for identifying broad patterns, but they are of limited utility in situations of causal complexity. Most complex causal combinations are hidden from analyses based on correlations. For example, if a variable must be present in some conjunctures for an outcome but absent in others, the correlation between this cause and the outcome may be 0. ‘Qualitative Comparative Analysis’ (QCA), a recent comparative technique based on Boolean algebra (cf. Ragin 1987), can be used in situations of causal complexity. In contrast to the linear techniques just described, QCA focuses on configurations of variables. In QCA, an independent variable can be eliminated from an analysis if it does not uniquely distinguish any case that manifests an outcome from at least one case lacking the outcome.
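
The minimization idea underlying QCA can be conveyed with a toy example. The sketch below is not Ragin's procedure or any QCA software; it merely merges positive configurations that differ in exactly one condition, replacing that condition with a 'don't care', and repeats until nothing more can be reduced (prime-implicant selection and the treatment of contradictory or unobserved configurations are omitted).

```python
# A toy illustration of the Boolean minimization behind QCA (not Ragin's software):
# configurations with a positive outcome that differ in exactly one condition are
# merged, the differing condition being replaced by a 'don't care' ('-').
def merge(a, b):
    """Merge two configurations differing in exactly one position, else None."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1:
        i = diffs[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def minimize(configs):
    """Iteratively merge implicants until no further reduction is possible."""
    current = set(configs)
    while True:
        merged, used = set(), set()
        for a in current:
            for b in current:
                m = merge(a, b)
                if m:
                    merged.add(m)
                    used.update({a, b})
        nxt = merged | (current - used)
        if nxt == current:
            return current
        current = nxt

# hypothetical positive configurations over three conditions A, B, C (1/0 = present/absent)
positives = ["110", "111", "011"]
print(minimize(positives))   # {'11-', '-11'}, i.e. A*B + B*C cover the outcome
```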

Other techniques for reducing the number of variables focus on patterns among the variables. In the light of relevant theoretical and empirical knowledge, investigators may select a more limited number of major variables or reconstruct them in different ways. Such reductions may proceed statistically. For example, researchers often use factor analysis to combine correlated variables into single indices.
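
As an illustration of such a statistical reduction, the sketch below collapses three correlated (and entirely invented) indicators into a single index; the first principal component is used here as a simple stand-in for a factor score, and the indicator names in the comments are hypothetical.

```python
# A sketch of statistically combining correlated indicators into one index.
# The first principal component stands in for a factor score; a dedicated
# factor-analysis routine would follow the same pattern. Data are invented.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=50)                     # unobserved 'development' factor
indicators = np.column_stack([
    latent + rng.normal(scale=0.3, size=50),     # e.g., GDP per capita (standardized)
    latent + rng.normal(scale=0.3, size=50),     # e.g., literacy rate
    latent + rng.normal(scale=0.3, size=50),     # e.g., urbanization
])

z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]                                # scores on the first component
print(np.corrcoef(index, latent)[0, 1])          # close to +/-1: one index suffices
```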

Qualitative techniques for constructing more encompassing causal variables are also possible. For example, in the interwar study the existence of a rural proletariat and of large-scale landlords has been combined into the more encompassing concept of ‘feudalism.’ Similarly, a Boolean ‘addition’ of factors, constituting alternative constellations in combination with another variable, may be possible. For example, the presence of ethnic, religious, or regional social cleavages in the absence of any overarching (‘verzuiling’) structures was combined into the variable ‘social heterogeneity’ in the study of democratic breakdowns and survivals (see also Berg-Schlosser and De Meur 1997). The use of ideal-typical constructs can also be conceived of as a reduction of complexity, emphasizing some characteristics and de-emphasizing others. In this way, the overall number of variables may be reduced considerably while still retaining much of the original information. With the help of such strategies, both from the cases and the variables sides, the relationship between their respective numbers may be reduced to manageable proportions, thus avoiding the problem of ‘over-determination’ when the latter exceed the former.

QCA can also be employed for other purposes. It is based on dichotomized variables (which, however, can be constructed as ‘dummies’ from multichotomous or interval ones) and reduces, by employing certain algorithms, the observed complexity of possible independent factors with regard to a particular outcome to a minimal logical formula (or several alternative ones) taking account of the possible interactions of these factors. In this way it can: (a) examine a universe of cases with regard to a given output in an exhaustive manner; this description will be the most concise, including all possible alternate minimal expressions which allow for the most parsimonious interpretations; (b) test hypotheses by showing whether they are consistent with the combinations described; if many contradictions occur, the hypothesis fails; (c) deduce the shortest possible formulas, describing the simplest subset of actual, potential, or counterfactual cases not contradicting the respective outcome; in this way, more general expressions can be deduced by using ‘simplifying assumptions.’ QCA thus serves some of the most important aims of any science, namely systematic description and the falsification and construction of theories.

QCA shares some of the problems and limitations concerning the loss of information and a certain amount of arbitrariness in setting cutting-points for dichotomized variables. In contrast with common statistical procedures, however, it retains the full complexity of all cases and all variables considered. Nonunique formulas may emerge, but this must be considered a strength and not a weakness. It deals explicitly with ‘outliers’ and covers them fully. It can arrive at several ‘conjunctural’ constellations which explain the same outcome. When several different formulas cover a single case, the technique forces the researcher (as does the MSDO technique) to take a closer look at the results and to interpret them in the light of historical knowledge of the case. In comparative inquiry, this approach is preferable to a purely mechanical procedure which, in many statistical analyses, entirely obscures the fate of particular cases. Here, in fact, the real qualitative work begins, depending very much on the training and quality of researchers, their in-depth knowledge of the cases, but also their sensitivity and understanding.

5. Methodological Issues And Problems

Even though the ‘small N’ comparative and the statistical methods have their own realm of possible applications, each with its own advantages and problems, different ‘schools’ have formed around these approaches fraught with controversies. In part, these reflect certain biases; to some extent they are based, however, on real problems which cannot easily be resolved. Some of these issues have been stated as quantitative vs. qualitative, and ‘case-oriented’ vs. ‘variable-oriented,’ holistic vs. analytic, or deterministic vs. probabilistic research.

These distinctions pose some false alternatives and represent differences in emphasis rather than in kind. Thus, statistical procedures are, by their very nature, based on larger quantities, but ‘small N’ analyses similarly may deal with numbers and certain quantitative data. ‘Qualitative’ aspects here refer to the presence or the absence of a factor, as in QCA, but should not be confounded with ‘qualitative’ methods as, for example, hermeneutics or participant observation in a different sense of the word.

In the same way, ‘case-oriented’ means that emphasis is laid on a number of known and theoretically or historically particularly interesting and relevant cases, as opposed to, for example, the anonymous ones in random sampling, but all scientific inquiry is, of course, based on distinct units (cases) and variables and a number of relevant observations (King et al. 1994). ‘Holistic’ is also a misleading term referring to the usual higher complexity of case-oriented research but it does not mean in this context that cases can only meaningfully be understood in their ‘entirety’ (as for example in Confucian philosophy) as opposed to ‘critical-rational’ analysis.

Finally, ‘deterministic’ is a misnomer for procedures based on Millian methods or logical distinctions as in Boolean algebra. Here, only the presence or absence of a factor is referred to and not its ‘probabilistic’ nature with respect to its effect on the internal or external validity of any particular finding. ‘Logically minimized’ relationships found by Boolean procedures may be as ‘spurious’ as correlational analysis. The revealed patterns indicate different more complex ‘qualitative’ constellations, but they are not based on their relative frequency or ‘probabilistic’ inferences.

As to the ‘real’ problems, however, these persist and only certain approximations or tradeoffs between certain emphases are possible. Millian methods and similar ones can only serve to eliminate false hypotheses and to delineate the ‘conditions for the occurrence’ of a phenomenon (Cohen and Nagel 1934). Within this realm the truly ‘qualitative’ and interpretative skills of knowledgeable experts etc. are called for to provide meaningful assessments of the factors involved. These may be more tedious and less universalistic, but they are often more meaningful and less superficial than many of the ‘macroquantitative’ studies.

Most ‘small-N’ comparative studies, but also the majority of statistical analyses, have been ‘cross-sectional’ and ‘static.’ There have been some advances in the statistical field in recent years as far as ‘pooled time series’ and ‘event history’ modeling are concerned, but these share some of the limitations and the relative ‘superficiality’ of macroquantitative procedures. There are, as yet, no truly ‘dynamic’ procedures comparable to the MSDO or QCA techniques combining both cross-sectional and longitudinal aspects in a ‘macro-qualitative’ way. These still rely more on well-informed ‘thick narratives,’ but some specific ‘path-dependent’ techniques have been developed more recently.

A further limitation refers to the ‘structure’ (‘macro’-) and ‘actor’ (‘micro’-) dimensions of any social and, by implication, comparative analysis. Most comparative studies are concerned with the broader and presumably more enduring ‘structures’ of social processes both in time and space. The final outcome, nevertheless, may very well depend on any particular (group or individual) actor in the analyzed chain of events. Again, only certain approximations can be arrived at in this regard, and certain tradeoffs between more ‘universalizing’ and more ‘individualizing’ approaches (Tilly 1984) have to be made. No ‘macro’–‘macro’ shortcuts should be made in sociological explanations, because concrete subjects with their respective perceptions and preferences are always involved, which only in a later step may be aggregated at a ‘meso-’ or ‘macro’-level. Whether these actors should always be conceived of as ‘rational’ or as ‘Resourceful, Restricted, Expecting, Evaluating, Maximizing Men’ (RREEMM) or women (Esser 1993, Chaps. 6 and 14) need not concern us here. But at least the ‘expecting’ and ‘evaluating’ aspects point to the possibility that different things may be perceived, valued, and maximized in different cultures. In any case, the broader structural (and also larger contingent) aspects can be conceived as the ‘opportunity set’ within the boundaries of which actual actors operate and respective choices are made. In this regard, the ‘scale’ and detail of any particular analysis may vary, as with maps for different purposes. Thus, a hiker’s map may serve for close orientation on the ground, whereas a pilot’s map may only include the more important contours and ‘structural’ features, and even a global and ‘universal’ one may be devised, but it is important that all scales and features remain consistent with each other!

A final point must again be mentioned: the problem of inference. Another important and relatively recent work has stressed, once more, the ‘unified logic of inference’ of both quantitative and qualitative research (King et al. 1994). In spite of the many important issues which are raised and clarified in this book, the experiences and recommendations of the authors lean somewhat too strongly towards the ‘quantitative’ side of research. This relates both to the ‘internal’ and the ‘external’ validity of the inferences to be drawn. In terms of the ‘internal’ causal chains which should be established, King et al. look at them, as has also been noted by Scharpf (1997, pp. 22 ff.), from the side of the independent variables in a ‘forward’ sense towards their eventual effect on the outcome.

Much of small-N comparative research, as for example in policy studies, is, however, concerned with disentangling the different influences on an outcome from a ‘backward’ perspective in order to identify the different factors in their complex interactions over time involving the major ‘structures’ and ‘actors,’ but also many contingent and unique aspects. This resembles more a detective’s investigation of a crime or an accident than any more ‘nomothetic’ generalizing scientific efforts and remains ‘idiographic.’

The more complex pattern of causality which has been established in this way in terms of its internal validity, can then, however, serve to devise new more complex hypotheses which can be tested across more similar (comparable!) cases and, possibly, establish a certain range of their external validity through ‘process tracing’ and similar procedures. A much stronger theory may thus emerge. We concur, therefore, with Goldstone’s conclusion that the comparative case study method does ‘… not stand on flimsy methodological foundations, but on a stronger combination of skill and insight in framing and testing hypotheses than many large-N statistical studies’ (Goldstone 1997, p. 119).

6. Probable Future Directions Of Theory And Research

The strength of the comparative method lies, therefore, in its critical applications to theory testing and development. The state of theory building in the social sciences remains relatively weak and is subject to specific problems and restrictions and to the general ‘malleability’ of its substance. Even if only certain approximations can be reached, the relevance of many macrosocial and political problems, for example in policy research, institution-building, ‘constitutional engineering’, etc., for potentially millions of people makes it worth every effort. Better theory, in the longer run, may then also contribute to better practice.

It is difficult, if not impossible, to predict any more distinct developments in this regard, but at least some of the desiderata which reflect some of the existing deficiencies can be outlined. First of all, many more existing propositions across the various fields of social inquiry based on other methods of data collection and theory building should be subjected to systematic testing by comparative methods as far as they refer to ‘small-N’ environments. Counterfactual examples and ‘most different’ cases should be sought to establish their respective range of external validity. Second, from this body of research, more coherent propositions may emerge delineating their respective range of application, both in space and time.

It seems likely that such ‘medium range theories’ will increasingly be based on regional and historical commonalities of certain countries and cultures. Some ‘deep structures,’ such as the social cleavages in Western Europe, and ‘deep cultures,’ such as certain characteristic submilieus in Germany or Italy, have persisted over centuries and can be linked to present developments (see also Flora 1999). What is needed are still better qualitative and quantitative methods to assess their continuities and changes over time and dynamic concepts and tools to establish their ‘self-referential’ nature, not just in abstract theory, but also in empirical reality. Such a combination of theoretical and historical concerns may also employ better ‘analytic narratives’ and lead to a new focus on the interactions between structures and actors.

Bibliography:

  1. Almond G A, Genco S J 1977 Clocks, clouds and the study of politics. World Politics 29: 489–522
  2. Berg-Schlosser D, De Meur G 1997 Reduction of complexity for a small-N analysis. Comparative Social Research 16: 133–62
  3. Cohen M, Nagel E 1934 An Introduction to Logic and Scientific Method. Harcourt, Brace, New York
  4. Cook T D, Campbell D T 1979 Quasi-Experimentation: Design and Analysis Issues for Field Settings. Houghton Mifflin, Boston
  5. De Meur G, Berg-Schlosser D 1996 Conditions of authoritarianism, fascism and democracy in inter-war Europe: systematic matching and contrasting of cases for ‘small N’ analysis. Comparative Political Studies 29(4): 423–68
  6. Durkheim E 1988 Les Règles de la Méthode Sociologique. Flammarion, Paris [originally published in 1894]
  7. Esser H 1993 Soziologie. Campus, Frankfurt
  8. Flora P (ed.) 1999 State Formation, Nation-Building and Mass Politics in Europe: The Theory of Stein Rokkan. Oxford University Press, Oxford, UK
  9. Goldstone J A 1997 Methodological issues in comparative macrosociology. Comparative Social Research 16: 107–20
  10. King G, Keohane R O, Verba S 1994 Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton University Press, Princeton, NJ
  11. Mill J S 1974/75 A System of Logic. Collected Works of J. S. Mill, Vols. VII and VIII. Routledge and Kegan Paul, London [originally published in 1843]
  12. Popper K R 1934/1968 The Logic of Scientific Discovery. American edn. Harper and Row, New York
  13. Przeworski A, Teune H 1970 The Logic of Comparative Social Inquiry. Wiley, New York
  14. Ragin C C 1987 The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. University of California Press, Berkeley, CA
  15. Scharpf F W 1997 Games Real Actors Play: Actor-Centered Institutionalism in Policy Research. Westview Press, Boulder, CO
  16. Tilly C 1984 Big Structures, Large Processes, Huge Comparisons. Russell Sage Foundation, New York
  17. Weber M 1949 The Methodology of the Social Sciences [trans. and eds. Shils E A, Finch H A]. Free Press, Glencoe, IL