Quantification In The Social Sciences
Quantiﬁcation means, very broadly, the use of numbers to comprehend objects and events. While quantiﬁcation is often identiﬁed with science, it has been equally signiﬁcant for business, administration, and the professions. Quantitative methods had a crucial role in shaping the social and behavioral sciences, and remain central to them. The purpose of this research paper is not to supply a practical guide to the use of quantitative tools in social science, but to examine their backgrounds and assess their cultural signiﬁcance from the standpoint of science studies and the history of science. In modern times, quantitative methods and the numbers they produce have assumed a powerful role in most societies as intersecting discourses of science, administration, and the public sphere. This research paper emphasizes the more public uses of quantiﬁcation.
The broad category of quantification refers to several allied but distinct human activities. Mathematization means the characterization of objects and events in terms of mathematical relationships. While it is not the main topic of this research paper, it is of course relevant. The expression of scientific theories in mathematical terms has provided one important reason for relying on quantitative or numerical data. There have been other reasons of a technical, administrative, or moral kind. Among the more empirical forms of quantification, measurement grew out of such practical activities as surveying land or buying and selling. Counting, the basic tool for assigning numbers to sets of discrete objects, was deployed for such tasks as inventorying one’s possessions or recording human populations. Calculation refers loosely to methods for combining and manipulating numerical information into more readily usable forms. Statistics, a body of methods for analyzing quantitative information, is discussed historically in other essays in these volumes, and also receives some attention here.
1. Theories And Measures
As early as the 1660s, and more systematically from about 1830, social and economic investigators held up quantiﬁcation as the proper method of science. The growth of university-based social disciplines in the twentieth century, and especially after about 1930, was attended by a widespread faith that through numbers, social research could be made genuinely scientiﬁc. A generation or more of positivistically-inclined researchers looked to systematic measurement as the proper foundation for solid, mathematical theory. Some compared its progress with the development of physics in the seventeenth century, and anticipated a breakthrough in the near future.
The most influential historical and philosophical writers on science in the postwar period took almost the opposite view. Koyré (1968), whose philosophical studies of seventeenth-century science inspired the first generation of professional historians of science in the anglophone world, seriously doubted that mere measurements could offer much to a new or unformed science. Science could never have become mathematical, he argued, through the mere processing of empirical measurements. He doubted that Galileo had learned much from rolling balls down inclined planes. The new kinematics applied not to real objects, but to abstract bodies in geometrical space, and Galileo must have derived his law of falling bodies mainly by working out logically the consequences of what he already knew. This mathematical and metaphysical work was, for Koyré, the most important achievement of the new science. Indeed, only through theory had the measures attained scientific meaning. Kuhn (1961, 1976) reaffirmed Koyré’s broad argument and provided a historical perspective that reached forward to the nineteenth century. The most successful sciences of the seventeenth-century ‘scientific revolution,’ including mechanics, astronomy, and geometrical optics, had already been mathematical in ancient times. The more Baconian, experimental studies of the early modern period were comparatively ineffective until a ‘second scientific revolution,’ beginning around 1800, when new scientific objects such as electricity and heat began to be comprehended mathematically.
Scholars in science studies since about 1980 have tended to dethrone theory in favor of a more respectful attention to scientiﬁc practices. Accordingly, they take more seriously the quantiﬁcation of phenomena that have not been grasped by exact laws. Just as the life of experiment has never been strictly subordinated to theory, so numbers have often been independent of higher mathematics. They are linked not just to theoretical science, but also, and perhaps even more consequentially, to economic and administrative changes. The history of ideas alone cannot convey their full signiﬁcance.
2. Quantities And Standards
Quantity is not ﬁrst of all the language of science, imported as something alien into everyday aﬀairs, but an indispensable part of the fabric of life. Yet the aspiration to science has gone a long way toward reshaping the role of numbers and measures in the world. In this—as Max Weber suggested—we ﬁnd much in common between bureaucratic and scientiﬁc rationality. Mathematics has often, and plausibly, been held up as an exemplar of universalized knowledge. The rules of arithmetic, or constants like π, are the same everywhere, at least in principle. Yet the universalism of measures and numbers, particularly when these have empirical content, has been achieved only in a limited way, by overcoming great obstacles. Even now, even in the industrialized West, quantities and prices are managed in everyday activities such as shopping by techniques quite diﬀerent from those taught in schools (Lave 1986). Before the rise of modern science, in the pre-industrial world, quantiﬁcation had very diﬀerent meanings. Numbers were pervasive, but for most purposes their use and meanings were thoroughly local. The process through which they gained general validity is one of the crucial transformations of modernity.
2.1 A Plethora Of Units
As Kula (1986) has shown, measures remained immensely variable in Europe as late as the eighteenth century. This was partly by design, since small cities and lesser states often claimed the right to deﬁne their own weights and measures as a mark of autonomy and an obstacle to rule by higher authorities. A diversity of measures, however, is the natural state of things wherever the pressures of commerce and administration have been insuﬃcient to bring about uniﬁcation. At a local level, authorities took some pains to reduce the ambiguity of measurement. A bushel measure, valid for the region, would often be ﬁxed to the town hall, and could be used to settle disputes. In the next town, however, measures might well be diﬀerent, and over greater distances they varied considerably. The subtle geography of measures, moreover, was not deﬁned exclusively by distance. Milk, oil, and wine came in diﬀerent units; measures of cloth depended on the width of looms, and also on the costliness of fabric and weave. The practices of measurement, too, were complexly nuanced. A bushel measure, even if the volume was ﬁxed, could be tall and narrow or short and wide; the wheat might or might not be compacted by being poured in from a considerable height; and a complex set of conditions governed whether to heap the bushel, and by how much. Parties to a transaction often negotiated about measures rather than about prices. Add to this the consideration that almost nothing was decimalized, not even money, and it is evident that arithmetical calculation provided no ﬁxed routine for managing aﬀairs or settling disputes (Kula 1986, Frangsmyr et al. 1990). The conversions required by trade provided an important source of employment for mathematicians.
The simplification and standardization of measures was encouraged by the expansion of long-distance trade, but was achieved primarily by governments in alliance with scientists. Among Europeans, it was the British who came nearest to creating uniformity of measures in the eighteenth century. A more systematic, almost utopian, scheme, the metric system, was initiated in the 1790s under the French Revolution. In its most ambitious form this included a new calendar, with the declaration of the Republic defining the Year One, using months of 30 days divided into weeks of 10 days, and even a few decimalized clocks, beating out 100 seconds per minute. The meter was set so that 10 million of them would reach from pole to equator, in the vain hope of integrating geodetic and astronomical measures with those on a human scale (Gillispie 1997). Hence, bold expeditions were sent out into a disordered countryside to survey and measure a meridian. Although the meter was justified in part as a response to pleas in the cahiers de doléances for fixed measures, against the social and economic power implied by seigniorial discretion, the highly rationalized metric system of decimalized conversions and strange Greek names—of milliliters, kilometers, and centigrams—went well beyond this. Not until the July Monarchy, after 1830, did the metric system begin to succeed even in France (Alder 1995). In time, however, it has come to provide a common set of measures for most of the world, with the United States a notable, though partial, holdout.
A precise and maximally rigorous deﬁnition of units is a basic element in the infrastructure of modern life. The role of standardized measures in systems of manufacturing based on interchangeable parts is widely recognized. They are indispensable also for large-scale networks such as electric power distribution, telephony, and, by now, computers and the Internet. A collaboration of scientists and bureaucrats was required to create them. In some cases, as with the electrical units deﬁned in the latter nineteenth century, eminent scientists such as James Clerk Maxwell, William Thomson (Kelvin), and Hermann von Helmholtz were deeply involved in the ﬁxing of standards.
Other units, far from the domain of scientiﬁc theory, play an indispensable role in the regulation of modern life. Measures of eﬄuent concentrations, for example, are deployed by environmental agencies to regulate pollutants. They depend on meticulous speciﬁcation not only of instruments, but also of sampling methods, training of technicians, custody of materials, and so on. Since penalties and taxes often depend on them, they are always vulnerable to challenge. In this domain, as in many others, an immense apparatus of instrumentation and control lies behind what scientists and citizens have the luxury (usually) of regarding as ‘raw,’ or merely ‘descriptive,’ data.
2.2 Producing Uniformity
Units and instruments are not by themselves suﬃcient to quantify eﬀectively. A quantiﬁable world contains reasonably homogeneous objects, objects whose measures can be compared, or that can be grouped together and counted. Nature and unregulated human labor produce a few of these, though real uniformity occurs mainly on a scale—for example, the molecular one—that is not readily accessible to the human senses. Physics was quantiﬁed partly in terms of abstractly ‘mathematical’ entities, such as forces, and partly through increasingly rigorous experimental controls, involving an expanding apparatus of instruments.
In the social world, where quantiﬁcation is also a tool of administration, these same regulatory processes have helped to produce new quantiﬁable entities. The investigation and compensation of unemployment, for example, solidiﬁed the category of ‘the unemployed.’ To collect unemployment insurance requires that a person meet certain standards, which are more or less closely monitored, and to which persons who lack work are encouraged to conform. They also must enroll themselves, thereby providing basic data for the statistics. The point here is not that government oﬃces cause people to be without work, though mechanisms of compensation will certainly aﬀect the rates. It is that oﬃcial categories sharpen deﬁnitions, and so help to create countable entities. Most human and social types are variable, or at least fuzzy at the margins (Hacking 1995). Quantiﬁcation is allied to processes that make them more uniform, and hence countable. A similar story can be told about ethnicity or mental illness. Economic categories such as proﬁts, investment, and trade, are similarly dependent on the processes through which they are recorded and regulated.
3. The Authority Of Calculation
Numbers and measures are not merely tools of description and analysis. They work also as guides to action, especially public action. An explicit and formal analysis, often largely quantitative in form, is by now required for a variety of public decisions involving investment or regulation, especially in the United States and Britain. Business organizations, too, rely increasingly on measures and projections in order to make decisions. These analyses are often expressed in money terms, and their modern expansion attests to a triumph of capitalism. But public reliance on economic quantification is by no means an imitation of its commercial uses. The tools of quantitative analysis were developed as much for government as for business purposes, and have become increasingly dependent on academic researchers. In the background is an evolving political and administrative culture, reflecting changes in the status and composition of elites.
3.1 Cost-Benefit Analysis
It is characteristically modern to suppose that a conscientious decision should involve explicit consideration of the available data. Decision by calculation goes beyond this, in expecting a course of action to be dictated objectively by facts. The prototype of this ideal of public reason is cost-benefit analysis. In practice, cost-benefit analysis often involves predictions in many domains, including weather patterns, health, hydrology, ecology, and commerce, but its most distinctive aspect is the attempt to convert a great diversity of considerations into a common financial metric. The difficulties of commensuration are most acute in regard to such considerations as human lives lost or saved, the disappearance of biological populations, or the degradation of a beautiful landscape. To the lay public, this appears equivalent to sacrificing a life for a sum of money, and of very doubtful morality (Sagoff 1988).
Economics provides a rationale for equations of this kind. Economists argue that people routinely accept increased risks for better wages. Neoclassical economic theories presuppose an underlying variable, utility, which individuals seek to maximize. Even if it cannot be directly measured, personal choices therefore involve some implicit commensuration of pleasures and responsibilities, acquisition and leisure, friendship and personal ambition. Cost-beneﬁt analysis did not, however, develop out of theoretical economics, but from speciﬁc political and administrative problems of the mid-twentieth century. Engineers, not economists, initially faced the immense diﬃculties of incorporating life, health, recreation, and scenic beauty along with economic development as they proﬀered measures of the costs and beneﬁts of water projects. Despite political opposition to some of these measures, a political logic of rationalized choice made them necessary none the less. The task was to devise methods that led to reasonably consistent answers, and where necessary to put aside what seemed their implausibility or even immorality. Beginning in the 1950s, economists and social scientists were recruited into agencies like the US Army Corps of Engineers and the Bureau of Reclamation to provide more consistent or more acceptable solutions to these thorny problems (Espeland 1998, Porter 1995).
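The commensuration at the heart of cost-benefit analysis can be sketched in a few lines of Python. Everything here is hypothetical: the project figures and the monetary conversion factor for a life (a stand-in for what analysts call a ‘value of a statistical life’) are invented for illustration, not drawn from any actual analysis.

```python
# Hypothetical commensuration for an imagined water project.
# All figures are invented for illustration only.
VALUE_OF_STATISTICAL_LIFE = 5_000_000  # assumed monetary conversion factor

def net_benefit(construction_cost, flood_damage_avoided,
                recreation_value, expected_lives_saved):
    """Convert heterogeneous considerations into one financial metric."""
    benefits = (flood_damage_avoided
                + recreation_value
                + expected_lives_saved * VALUE_OF_STATISTICAL_LIFE)
    return benefits - construction_cost

# The project is approved when monetized benefits exceed costs.
print(net_benefit(40_000_000, 25_000_000, 5_000_000, 3.0))  # prints 5000000.0
```

The controversial step is not the arithmetic but the conversion factor: the entire result turns on how lives and scenery are priced.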
3.2 Trust And Impersonality
Academic problems of social measurement and commensuration are, thus, embedded in social and political conditions that put a premium on scientific objectivity. In important ways, of course, science and philosophy have always been concerned with objectivity, in the sense of truth. Since the seventeenth century, the relation of fact and values has become more problematical, and for some purposes the credibility of science has been enhanced by its separation from the ‘subjective’ domain of values. The ambition to provide a scientific, possibly quantitative, basis for public decisions remained quixotic or utopian until at least the late eighteenth century. It remained unrealistic wherever an aristocratic political class retained power. Not everyone would have assented to Frédéric Le Play’s assessment that numbers have become necessary only in democracies, where leaders no longer control the wisdom of birth and experience. But the aspiration to replace judgment with objective, quantitative rules grew up in situations where elite authority had become suspect.
The move by American state engineers to an increasingly rigorous cost-benefit methodology was a response to systematic challenge, and exemplifies this point (Porter 1995). So also do other exemplary tools of ‘mechanical objectivity,’ such as the accountant’s bottom line and the test of statistical significance of the social scientist or medical researcher. Accountants, for example, did not abandon willingly what many saw as their professional responsibility to offer an expert judgment of the financial condition of firms. They did so, rather, in the context of regulatory pressure and public distrust. This must be understood in the context of a broad cultural and political history of the twentieth century, and especially of the United States. Some basic features of that history would include the growth of government, frequent interagency conflict, increasingly effective opposition to administrative secrecy, and public distrust of ‘bureaucracy.’ The unprecedented quantitative enthusiasm of postwar social science involved, along with its optimism about science, a renunciation of the personal and the subjective that was nicely adapted to contemporary political circumstances. Allied to it was the institutionalized reliance on quantification taught in management and policy schools.
4. A Historical Perspective On Social Science Quantiﬁcation
Reflective accounts of the social and behavioral sciences have often supposed that their fields have short histories, and that the pursuit of quantification has meant following the track of natural science triumphant. But the rise of a quantitative ethos in natural science can be dated no earlier than the mid-eighteenth century. Systematic measurement until then was almost confined to astronomy, for which a host of ‘mathematical instruments’ had been developed in China, India, the Near East, and Europe. Many were geometrical and not numerical. Their purposes were practical as well as scientific: to survey land, navigate ships, calculate calendars and horoscopes. Barometers and thermometers helped to extend the domain of measurement into meteorology. Measurement in early modern Europe was associated as closely (or more closely) with natural history as with experimental physics. By 1750, instruments were deployed systematically to measure mountains and other natural wonders and, thus, to lend precision to the appreciation of nature. The development of an exact science of electricity in the later eighteenth century marked the triumph of experimental quantification (Heilbron 1979). Measurement had also become routine in technological endeavors, such as mineral assaying, decades before Lavoisier and his contemporaries began to insist on its centrality to chemistry.
From this perspective, it would be difficult to argue that political and economic studies lagged behind natural science in their reliance on numbers. Among the early members of the Royal Society of London, founded in 1660, were William Petty and John Graunt. Petty announced a new science of ‘political arithmetic,’ which in his hands involved speculative estimates of population and of wealth. Graunt is noted for his more strictly empirical compilations from the London bills of mortality. Near the same time, Johan de Witt, the Dutch Grand Pensionary, used rudimentary probability theory to assign rates for the sale of annuities, and the astronomer Edmond Halley calculated the first mortality table. Political arithmetic, which entailed the compilation as well as the analysis of demographic and medical records, flourished in the eighteenth century. At the same time, probability theory was pursued as a mathematical theory of belief and credibility (Hacking 1975, Daston 1988). The philosophe and mathematician Condorcet, a leader of the French Revolution who then was hunted down by it, hoped to develop probability into the theoretical basis for elections and judicial decisions (Baker 1975).
4.1 The Nineteenth Century And The Rise Of Statistics
By the early nineteenth century, as Kuhn observed, the natural-historical investigation of heat, electricity, and the physics of light had given way to sciences that were both mathematical and quantitative. Possibly the analysis of mortality records for insurance can be regarded as comparable in certain respects. But the explosive growth of quantiﬁcation beginning in the 1820s, what Hacking (1990) has called an ‘avalanche of printed numbers,’ was not concentrated in the experimental sciences. Rather, it involved systematic surveys of stellar positions, terrestrial magnetism, weather phenomena, tides, biogeographical distributions, and, most prominently, social statistics.
The collection and investigation of social numbers had expanded gradually over the course of the eighteenth century. Few Enlightenment states had the bureaucratic means to conduct a full census, and most were disinclined to release such sensitive information to their enemies or even their subjects. In many places, too, the people would have viewed census officials as harbingers of new taxes or military conscription, and in Britain the census was blocked by a gentry class that refused to be counted. In contrast to piecemeal efforts at home, there were some systematic surveys abroad, the machinery of the information state having been turned loose first of all in colonial situations. Among European states, Sweden pioneered the regular census, in 1749; its results remained secret until 1764. The French used sampling, of a sort, and enlisted mathematicians such as Pierre Simon Laplace to estimate the population. The US census of 1790 heralded a new era of regular, public censuses, which were encouraged in Europe by French military conquest and by the pressure of wartime mobilization. The British decennial census, for example, was instituted in 1801. Only in the 1830s did bureaucratic professionals begin to take over the collection of social and economic numbers in the most advanced European states.
At the same time, an increasing pace of quantitative study by voluntary organizations made ‘statistics,’ pursued by ‘statists,’ a plausible ‘science of society.’ One model for it was developed by the Belgian Adolphe Quetelet, whose quantitative pursuits ranged from observational astronomy and meteorology to the bureaucratic organization of oﬃcial statistics. He argued for an alliance of data gathering with mathematical probability, in the quest to uncover ‘social laws.’ Apart from life insurance, however, most statistical collection was in the control of reformers and administrators with little mathematical knowledge. Their enthusiasm for counting was perhaps indiscriminate, yet it was nurtured by speciﬁc anxieties about poverty, sanitation, epidemic disease, economic dislocation, and urban unrest. Social investigation, still more than natural science, was an object of contest, and there were active eﬀorts to organize a social science to speak for working people (Yeo 1996). Karl Marx’s immense statistical labors for his Capital, drawing mainly from oﬃcial British inquiries whose results he praised as honest and reliable, testify impressively to the credibility of social numbers. Middle-class organizations had more staying power than working-class ones, and better standing to claim the mantle of ‘social science.’ In Britain, the Statistical Society of London (founded 1834) and the National Association for the Promotion of Social Science (1857) joined bureaucrats with professionals and even aristocratic political leaders. In Germany, statistics became a university discipline, which, from about 1860 to 1900, combined empirical quantiﬁcation with a historicized ‘national economics,’ in a reformist campaign for state action on behalf of workers and peasants. Some of the founders of social science in America were educated in this tradition, as was Max Weber. Emile Durkheim, too, engaged with it, as his Suicide of 1897 plainly attests (Porter 1986).
The nineteenth-century science of statistics embraced much of the subject matter of the modern social sciences. It overlapped, and sometimes competed, with political economy, political science, geography, anthropology, ethnology, sociology, and demography. These stood for genres, more or less distinct, though not for anything so definite as modern professional disciplines. Their quantitative methods were not identical, but neither were they clearly differentiated from each other or from measurement in biology. There was much uncertainty and debate from 1830 through the end of the century as to whether statistics was a science, with its own subject matter, or only a method. It was resolutely empirical, even to the point that sampling seemed too speculative. Far from being allied to any mathematical social science, statistics was more often at war with deduction and abstract theory. Some, like the economist William Stanley Jevons, claimed that political economy should be mathematical because it was inherently quantitative; yet Jevons could not attach the mathematical formulations he introduced to empirical numbers.
4.2 Quantiﬁcation In The Modern Social And Behavioral Sciences
Toward the end of the nineteenth century, a more mathematical ﬁeld of statistics began to develop. The most inﬂuential version of the new statistics was developed within a British biometric tradition by Francis Galton and Karl Pearson. At almost the same time, however, distinct social and behavioral sciences began to crystallize, in part around increasingly distinct quantitative methods. All were conceived as ways to enhance the scientiﬁc standing of their ﬁelds, and each was engaged also with issues of politics, policy, and the management of economies and populations. The typology to follow is intended to be suggestive, and certainly cannot be exhaustive.
4.2.1 Physical Anthropology And Eugenics. The measurement of humans was carried out for bureaucratic as well as scientific purposes throughout the nineteenth century. The increasing concern with race in anthropology was linked to a preoccupation with skulls, whose measures were used to demarcate distinct ‘races’ of Europeans. What Francis Galton called ‘correlation’ was developed to measure the interconnections among physical human characteristics, their perpetuation from one generation to another, and their relation to measures of merit or achievement.
4.2.2 Psychometrics And Psychology. Correlation provided also a solution to a problem of contemporary psychophysics, related to education. Gustav Theodor Fechner began applying the methods of error theory to the measurement of sensory discrimination in the mid-nineteenth century. Psychophysics was plagued, however, by the problem that a task—for example, of distinguishing two slightly diﬀerent weights— could not be repeated indeﬁnitely, because the subject learned over time to do better. Indeed, the left hand acquired some of the ﬁne discrimination developed through practice with the right hand. By the end of the century, this obstacle had become a topic of investigation in its own right, as an approach to the psychology of learning. Systematic use of experimental randomization was developed for studies of this kind (Dehue 1997). They provided also a wide ﬁeld of application for correlational methods. Did childhood study of dead languages really train the mind for speedier or more thorough learning of other subjects, such as science, history, or literature? The growth of mental testing in the early twentieth century opened another question of correlations. To what extent did students who proved themselves adept in some particular study excel also in others? Charles Spearman’s g, or general intelligence, and the ‘intelligence quotient’ or IQ, which ﬂourished in America, presupposed a basic unity of the mental faculties. Psychometricians and educational psychologists developed a body of statistical methods to measure this intelligence, or to decompose it, and to assess the relations of its components (Danziger 1990).
In the 1930s and 1940s, as psychology turned resolutely experimental, quantiﬁcation became practically mandatory. Once again, educational psychology and parapsychology took the lead in the assimilation of novel methods of experimental design and analysis. The new paradigm of a proper experiment involved a randomized design of control and experimental populations, yielding results that would, in most cases, be subjected to analysis of variance (ANOVA). A worthwhile result was deﬁned in part by a test of signiﬁcance: the ‘null hypothesis’ that the treatment had no eﬀect should be rejected at some speciﬁed level, commonly 0.05 (Gigerenzer et al. 1989).
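The null-hypothesis logic of the randomized two-group experiment can be made concrete with a permutation (randomization) test, shown here in place of ANOVA for simplicity; the scores are invented, and the 0.05 threshold is the conventional one mentioned above.

```python
import random

def permutation_test(control, treatment, n_perm=10_000, seed=0):
    """Two-group randomization test: how often does a random relabeling of
    the pooled data yield a mean difference at least as large as observed?"""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = control + treatment
    k = len(treatment)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k)
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm  # two-sided p-value

# Hypothetical scores; reject the null hypothesis when p < 0.05.
control = [52, 48, 55, 50, 47, 53]
treatment = [60, 58, 63, 57, 61, 59]
p = permutation_test(control, treatment)
print(p < 0.05)  # prints True
```

A ‘worthwhile result,’ in the paradigm described above, is one where the null hypothesis of no treatment effect can be rejected at the chosen level.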
4.2.3 Econometrics And Economics. The new mathematics of marginalism was developed in the 1870s and self-consciously reconciled to the classical tradition in the 1890s by Alfred Marshall. His work included abundant graphical illustrations, but these involved quantities that could not easily be measured. The origins of econometrics are largely distinct. From its origins in the 1870s to the establishment of a society and a journal in the 1930s, the field focused on the study of business cycles (Morgan 1990). Econometricians took their numbers where they could get them, often from public sources. They were highly dependent on national accounts, prepared by governments, and indeed participated in structuring these accounts. The paradigmatic statistical tool of econometrics, adapted from biometrics mainly after 1950, was regression analysis.
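Simple least-squares regression, the paradigmatic tool named above, can be sketched in a few lines using the closed-form solution for one regressor; both series are invented for illustration.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for a single regressor: fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical annual series: how closely does one indicator track another?
period     = [1.0, 2.0, 3.0, 4.0, 5.0]
investment = [2.1, 4.1, 5.9, 8.1, 9.8]
a, b = ols_fit(period, investment)
print(round(a, 2), round(b, 2))  # prints 0.18 1.94
```

Multivariate regression generalizes this to many regressors, but the principle, minimizing squared deviations of data from a fitted line, is the same.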
4.2.4 Surveys. From the founding of statistical societies in the 1830s, social science has relied on surveys to learn about lives and opinions. Charles Booth’s monumental studies of East London beginning in the 1880s inspired a movement of surveys of towns and regions in Britain and America. At first, researchers were expected to examine everyone, but after 1900, sampling began to gain acceptance. Arthur Bowley and then Jerzy Neyman developed sampling methods that permitted generalization using probability theory. The techniques of survey sampling have linked sociology and political science to marketing and political campaigns (Bulmer et al. 1991, Desrosieres 1998).
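The probabilistic generalization that Bowley and Neyman made possible can be sketched as follows. The ‘electorate’ is simulated, and the 95 percent margin of error uses the standard normal approximation for a proportion; none of this reproduces their actual procedures.

```python
import math
import random

def sample_proportion_estimate(population, sample_size, seed=0):
    """Estimate a population proportion from a simple random sample,
    with the conventional 95% margin of error (normal approximation)."""
    rng = random.Random(seed)
    sample = rng.sample(population, sample_size)
    p_hat = sum(sample) / sample_size
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, margin

# Hypothetical electorate: 1 = supports a measure, 0 = opposes it.
population = [1] * 6000 + [0] * 4000
p_hat, margin = sample_proportion_estimate(population, 1000)
print(f"{p_hat:.3f} +/- {margin:.3f}")
```

The striking point, which made sampling acceptable, is that a sample of 1,000 yields a margin of about three percentage points regardless of how large the population is.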
4.2.5 Modeling. By the late twentieth century, the use of models was pervasive in natural and social sciences alike. Modeling methods are far too diverse to sum up here, but the modeling outlook deserves comment. It is perhaps under this rubric that mathematical methods are most often integrated with the data of tallies and measurements. Quantitative modeling, very broadly, means working out the consequences of certain assumptions for a particular situation or problem. Often, models are used for prediction, and they can equally well be purely academic or highly pragmatic, built for very specific applications. In the twentieth century they came to pervade the sciences. Among the social disciplines, modeling is especially central to economics.
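A minimal illustration of the modeling outlook described above: assume a constant growth rate and work out its consequences over time. The figures are hypothetical, and real models add many more assumptions, but the logic, deducing a trajectory from stated premises, is the same.

```python
def project(initial, growth_rate, years):
    """Work out the consequences of an assumed constant growth rate --
    the simplest kind of quantitative model."""
    series = [initial]
    for _ in range(years):
        series.append(series[-1] * (1 + growth_rate))
    return series

# Hypothetical assumption: a population of 1,000,000 growing 2% per year.
trajectory = project(1_000_000, 0.02, 10)
print(round(trajectory[-1]))  # prints 1218994
```

Changing the assumed rate changes the whole trajectory, which is why debates over models so often concern their premises rather than their arithmetic.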
Quantification by now is virtually as important for social as for natural science. Its expansion since the 1700s reveals some key characteristics of the social and behavioral sciences: among them, the drive for the status of science, the role of applied studies in forming academic ones, and the often problematic relationship of theoretical to empirical work. The quantification of social phenomena goes back several centuries, and is by no means a mere product of the effort to attain scientific standing. Indeed, the advance of measurement and statistical analysis in social science is not distinctively academic, but has been greatly stimulated by administrative and political demands. Yet quantification in social science has, since the 1800s, been surrounded by a certain mystique, as an indication of rigor or at least of scientific maturity. It should be regarded not as the key to all mysteries, but as a powerful set of tools and concepts, to be integrated as much as possible with theoretical understanding and with other ways of comprehending social phenomena.
- Alder K 1995 A revolution to measure: The political economy of the metric system in France. In: Wise N (ed.) The Values of Precision. Princeton University Press, Princeton, NJ, pp. 39–71
- Baker K 1975 Condorcet: From Natural Philosophy to Social Mathematics. University of Chicago Press, Chicago
- Bulmer M, Bales K, Sklar K (eds.) 1991 The Social Survey in Historical Perspective, 1880–1940. Cambridge University Press, Cambridge, UK
- Daston L 1988 Classical Probability in the Enlightenment. Princeton University Press, Princeton, NJ
- Danziger K 1990 Constructing the Subject: Historical Origins of Psychological Research. Cambridge University Press, Cambridge, UK
- Dehue T 1997 Deception, efficiency, and random groups: Psychology and the gradual origination of the random group design. Isis 88: 653–73
- Desrosieres A 1998 The Politics of Large Numbers. Harvard University Press, Cambridge, MA
- Espeland W 1998 The Struggle for Water. University of Chicago Press, Chicago
- Frangsmyr T, Heilbron J, Rider R (eds.) 1990 The Quantifying Spirit in the Eighteenth Century. University of California Press, Berkeley, CA
- Gigerenzer G, Swijtink Z, Porter T, Daston L, Beatty J, Kruger L 1989 The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge University Press, Cambridge, UK
- Gillispie C 1997 Pierre Simon Laplace, 1749–1827: A Life in Exact Science. Princeton University Press, Princeton, NJ
- Hacking I 1975 The Emergence of Probability. Cambridge University Press, Cambridge, UK
- Hacking I 1990 The Taming of Chance. Cambridge University Press, Cambridge, UK
- Hacking I 1995 Rewriting the Soul. Princeton University Press, Princeton, NJ
- Heilbron J 1979 Electricity in the 17th and 18th Centuries: A Study in Early Modern Physics. University of California Press, Berkeley, CA
- Koyre A 1968 Metaphysics and Measurement. Harvard University Press, Cambridge, MA
- Kuhn T 1961 The function of measurement in modern physical science. In: Woolf H (ed.) Measurement. Bobbs-Merrill, Indianapolis, IN, pp. 31–63
- Kuhn T 1976 Mathematical versus experimental traditions in the development of physical science. Journal of Interdisciplinary History 7: 1–31 (reprinted in Kuhn T 1977 The Essential Tension. University of Chicago Press, Chicago, pp. 31–65)
- Kula W 1986 Measures and Men (Szreter R trans.). Princeton University Press, Princeton, NJ
- Lave J 1986 The values of quantification. In: Law J (ed.) Power, Action, and Belief: A New Sociology of Knowledge? Routledge, London, pp. 88–111
- Morgan M 1990 The History of Econometric Ideas. Cambridge University Press, Cambridge, UK
- Porter T 1986 The Rise of Statistical Thinking, 1820–1900. Princeton University Press, Princeton, NJ
- Porter T 1995 Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton University Press, Princeton, NJ
- Sagoff M 1988 The Economy of the Earth: Philosophy, Law, and the Environment. Cambridge University Press, Cambridge, UK
- Yeo E 1996 The Contest for Social Science. Rivers Oram Press, London