Sociological Study Of Risk Research Paper

That the world is punctuated with uncertainty, danger, hazards, disasters, and threats to what humans value has been recognized since the beginning of human life on earth. Risk is the conceptualization of that recognition—providing a means for anticipating and avoiding or adapting to untoward outcomes. By 3200 BC, ancient Babylon had the first recorded risk assessors, and in 1700 BC, the Code of Hammurabi included a classification of risk matters. Despite these ancient beginnings, however, and despite subsequent refinements accompanying the rise of the insurance industry during the age of exploration, the centrality of risk is the child of advanced industrialization, particularly since World War II. With advanced industrialization came technologies that were larger, more sophisticated, and potentially much riskier, having the capacity to affect far greater numbers of people, as well as the things they value and the areas they occupy. At the same time, scientific and technological advances have led to remarkably increased sophistication and precision, in many cases, in the ability to detect unwanted side effects or risks.

The notion of ‘risk’ has evolved over the last decades of the twentieth century to become the key analytical lens for anticipating our actions’ consequences for the environment and ourselves. It includes an analytic orientation and a suite of evaluation methodologies, but it is also a new consciousness—a way of looking at a world of technological and environmental uncertainty. Attention to risk is a consciousness born of vast uncertainties over the durability of nuclear peace, the resilience of the ozone layer protecting us, threats of global warming, the growing extinction of species and the possibility of creating new ones, and more broadly, threats of technological disasters both large and small. It is a consciousness entirely foreign to the optimistic beginning of the century just ended.

The first significant body of sociological work on risk can be traced to the 1940s, with the work on natural hazards by Gilbert White (a social geographer), gaining momentum in the 1950s with White’s collaboration with a number of sociologists, and evolving into the sociological specialty of natural disaster research. A key focus of this specialty has been the social impacts of hazard events—how social structures and individuals respond to natural disasters such as storms, floods, earthquakes, wildfires, or droughts. One of the most consistent findings from this literature was the emergence of what has become labeled a ‘therapeutic community’: contrary to the self-focused maximizing predictions of utility theory, citizens typically engaged in acts of altruistic aid and community building (for reviews, see Drabek 1986, Kreps 1984).

Particularly in the United States, a significant fraction of the ongoing interest in natural disasters was motivated by the concerns of the early atomic age, including the prospect that a nuclear attack could wipe out much of a community’s population; given that much the same could be said for natural phenomena such as floods, fires, or hurricanes, these and other ‘natural disasters’ came to be seen as useful ‘natural experiments’ for studying community responses. Particularly after the accident at the Three Mile Island nuclear power plant in 1979, however, it was a different form of nuclear activity that came to provide what is in many ways the prototypical ‘risk’ issue.

Although the point is often forgotten today, nuclear power facilities once enjoyed widespread support, even in the communities where they were constructed. Still, even before the 1979 accident, there were early signs of rising public concern. One response of nuclear power supporters (including those who controlled some of the most powerful branches of government) was to characterize the concerns as ill-founded or ‘irrational,’ doing so by seeking to demonstrate that the ‘real risks’ were quite small. The earliest such efforts, however, indicated that the consequences of a plausible ‘worst-case’ disaster could have been quite serious indeed. These early efforts received little publicity, but they did prompt a change of focus—an emphasis on what soon came to be called ‘probabilistic’ risk assessment, or PRA, a reductionistic engineering technique that estimates the risk of a plant mishap by estimating the probability of failure of each of the plant’s parts and subsystems and then aggregating these probabilities. The conclusion from the first systematic application of this technique to nuclear power plants, stated particularly forcefully in the executive summary of the multivolume effort that is still remembered as ‘the’ reactor safety study (US Atomic Energy Commission 1975), was that citizens were more likely to be killed by a falling meteorite than by a malfunction at a nuclear power plant.
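
The aggregation logic at the heart of PRA can be illustrated with a minimal sketch. The component names, the failure probabilities, and the assumption of statistical independence below are hypothetical illustrations of the general approach, not figures from any actual reactor study.

    # Minimal sketch of the aggregation logic behind probabilistic risk
    # assessment (PRA). All components and probabilities are hypothetical,
    # and statistical independence is assumed; real PRA relies on fault
    # trees, event trees, and far richer failure data.

    def series_failure(probs):
        """A chain fails if ANY of its components fails."""
        p_all_survive = 1.0
        for p in probs:
            p_all_survive *= (1.0 - p)
        return 1.0 - p_all_survive

    def redundant_failure(probs):
        """A redundant subsystem fails only if ALL of its backups fail."""
        p_fail = 1.0
        for p in probs:
            p_fail *= p
        return p_fail

    # Hypothetical annual failure probabilities, for illustration only.
    cooling = redundant_failure([1e-3, 1e-3])    # two redundant cooling pumps
    valves = series_failure([1e-4, 1e-4, 1e-4])  # three valves, any one of which can fail the plant
    controls = 1e-5                              # control system

    p_mishap = series_failure([cooling, valves, controls])
    print(f"Estimated probability of a plant mishap: {p_mishap:.2e}")

Even in so simple a sketch, the estimate depends entirely on the completeness of the component list and on the independence assumption, precisely the features that later sociological critiques of PRA would call into question.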

Four years later came the accident at Three Mile Island, near Harrisburg, the capital city of the state of Pennsylvania—an accident that, according to the technical analyses, virtually could not happen. The accident brought to public and sociological attention an incipient schism between the state, its technological experts, and citizens. European sociological interest in risk was similarly intensified by the far more serious nuclear accident at Chernobyl in April 1986. While at one level of analysis these accidents and the vast media coverage surrounding them offered dramatic signals to publics about the risks of nuclear-generated electricity, at a deeper level they revealed conflicts between expert and lay knowledge, and between ‘top-down’ and ‘bottom-up’ strategies for managing large-scale, risky technologies.

The 1979 accident, in particular, generated a number of legacies. First among them was the further institutionalization of what soon came to be known as the field of risk analysis: the Society for Risk Analysis was established in the United States (with subsequent affiliates in Europe and Asia), as was the Division of Risk and Management Science at the US National Science Foundation, along with a number of new professional journals and a substantial outburst of published literature. That outburst included a significant increase in sociological work, with much of the initial increase paralleling the three general theoretical levels in sociology—the micro, the meso, and the macro, or in simpler terms, the social psychological, the organizational/community level, and the societal.

At the micro level, much of the work was conditioned by what soon became the dominant social science research tradition in risk studies: the psychometric tradition (Slovic 1987). A growing body of work in cognitive psychology was brought to bear in examining the extent to which lay perceptions of risk differed from actuarial data and from experts’ views. Key among the findings from this work, replicated in a variety of cultures, is the idea of heuristics—cognitive rules of thumb that provide shortcuts to risk estimation. One example is the so-called ‘availability heuristic,’ the tendency to judge a risk on the basis of its cognitive ‘availability,’ that is, the ease with which examples can be imagined or recalled; such a tendency increases the perceived riskiness of a low-probability but sensational risk, such as death from botulism, relative to more probable but more prosaic risks, such as death from asthma.

In political circles, the early psychometric findings were sometimes taken as reinforcing the argument that the ‘irrational’ public should be excluded from risk decisions, but the evidence proved not to be so clear-cut. Sociological work on technological controversies, for example, had already drawn attention to the lack of unanimity among experts (see e.g., Mazur 1980), and further research showed that experts relied on similar heuristics—even in the contexts of their own expertise, and even when the experts were studied with the same protocols used to study lay perceptions. Work from a social constructivist perspective (e.g., Wynne 1992) further challenged the tendency to equate expert risk estimates with ‘real’ risks, with Clarke (1999) eventually going so far as to argue that many official proclamations were little more than ‘fantasy documents’—efforts to deal with ultimately uncontrollable risks through ritualistic proclamations of rationality. At the same time, psychometric work pointed to the importance of a number of broader issues, such as trustworthiness and fairness, that were omitted from technical risk analyses. Hence, from the psychological literature emerged a reaffirmation of sociology’s first principle: context matters.

Sociologists responded with three sustained research efforts—one that contextualized the individual actors, a second that aligned risk choices with the most powerful technological decision-makers, namely organizations, and a third that examined the consequences of technological accidents in the contexts where people live, namely communities—as well as with broader and more conceptual work at the macrosocietal level.

At the individual level, the work showed that a social actor’s perceptions of and responses to risk are shaped not just by heuristics, but also by an entire set of social, political, and institutional forces. Perhaps the most comprehensive effort to capture this wide variety of psychological and social forces is the ‘social amplification’ framework, a conceptualization adapted from formal communication theory (Kasperson et al. 1988). The framework emphasizes that, when signals are sent from a source to a receiver, they often flow through intermediate transmitters that ‘amplify’ or attenuate the message. Similarly, individuals can receive risk signals either directly, through personal experience, or indirectly, through information provided by institutions (e.g., the media) or social networks (e.g., opinion leaders). These ‘amplified’ interpretations can then produce behavioral responses, resulting in secondary impacts that, in turn, are often themselves amplified, producing tertiary impacts. Partly because of its interdisciplinary orientation and its capacity to integrate the cumulative psychological findings with other factors that have been shown to influence risk perception and response—social, organizational, institutional, political—the framework has generated a considerable body of generally supportive empirical research.
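
The signal-flow metaphor at the core of the framework can be sketched in a few lines. The choice of stations, the numerical factors, and the simple multiplicative form below are illustrative assumptions only, not elements of Kasperson et al.’s published framework.

    # Illustrative sketch of the signal-flow idea in the social amplification
    # framework: a risk signal passes through intermediate "stations" that
    # amplify (factor > 1) or attenuate (factor < 1) it before reaching the
    # wider public. The stations, factors, and multiplicative form are
    # assumptions made for illustration only.

    stations = [
        ("news media", 2.5),
        ("opinion leaders", 1.4),
        ("regulatory agency", 0.8),
    ]

    signal = 1.0  # initial signal strength from a hazard event or direct experience
    for name, factor in stations:
        signal *= factor
        print(f"after {name}: signal strength = {signal:.2f}")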

With mesolevel work on organizations, meanwhile, sociologists began to point out the inherent metamethodological bias of early PRA work—the Cartesian reductionism of assuming that one can understand risk by understanding its underlying fundamental elements. A counterpoint to this methodological individualism was provided by the sociological holism of organizational sociology, treating organizations, not individuals, as the key risk decision-makers in contemporary societies.

The landmark contribution to this orientation is Perrow’s Normal Accidents (1984), which argued that certain technological systems had become complicated beyond our understanding of them—and beyond our ability to make them safe. Key factors included complexity and ‘tight coupling’—essentially the lack of room for error—meaning that even attempts to improve safety, such as the introduction of redundant parts and backup systems, could further complicate these systems, thereby increasing risk. The book argued that some of our technological systems—nuclear power plants, nuclear weapons, and perhaps others—were prone to predictable ‘system’ accidents, or ‘normal accidents,’ waiting to happen.

A counterpoint to this argument emerged in subsequent work on ‘high reliability organizations’ (LaPorte and Consolini 1991). Building on a set of fieldwork-based case studies—e.g., an air traffic control system, a large electric utility system, aircraft carrier operations—this school of thought argues that complex systems can be made safe through proper component, system, and organizational design, through specialized and sometimes variable management structures, and through the development of a corporate culture of safety. Given that a great many of society’s complex technological systems do indeed operate daily without incident, but also that other empirical tests appear to support normal accident theory, the jury remains out on which of these two schools of thought offers the deeper purchase on this key problem of advanced industrialization. The issue is likely to be resolved only with further refinement of the conditions under which ‘normal’ accidents or ‘high reliability’ operations are more likely.

Another body of mesolevel work focused on community contexts. Particularly after Erikson’s study of a so-called ‘man-made’ or ‘non-natural’ disaster (1976), a growing number of studies found that the impacts of technological or non-natural disasters proved to be far more severe and long-lasting than those associated with natural disasters. In contrast to the ‘therapeutic community’ of natural disasters, numerous studies of technological disasters have encountered what Freudenburg and Jones (1991) call a ‘corrosive community,’ typified not by offers of help, but by efforts to avoid responsibility or affix blame. Similarly, while the long-term social, economic, psychological, and cultural impacts of natural disasters have generally been found to be surprisingly small, the long-term impacts of technological disasters have been found to be far greater than would be expected on the basis of the immediate physical destruction. At Three Mile Island, for example, most observers have concluded that only small amounts of radiation escaped, but careful studies found increased reports of ‘intense distress symptoms’—that is, levels of distress symptoms characteristic of hospitalized mental patients—with later studies finding that stress levels were actually higher some six years later than they were in the immediate postaccident period. The symptoms were not merely self-reported, but included physiological measures such as elevated catecholamine levels in the blood—a form of medical measure that is generally considered ‘real’ even by observers who question the legitimacy of people’s own reports on stress and well-being. More broadly, several studies indicate that distress levels may actually be highest among citizens afflicted not by the worst levels of disaster, but by an ambiguity of harm—cases where not even the best scientific work can clearly demonstrate whether people have been seriously endangered or not (Freudenburg and Jones 1991).

At the macro or societal level, finally, a growing body of work has responded to the call of Short (1984), in his presidential address to the American Sociological Association, to examine the importance of risks to the social fabric itself. In the early days of the twentieth century, Weber had pointed out that ‘rationality’ did not mean that the citizens of modernity would know more about their surroundings and tools than the citizens of a premodern age; instead, Weber emphasized, the modern citizen would see those tools and technologies as being ‘knowable,’ in principle, rather than as belonging to the realm of magic. A century later, Weber’s point is, if anything, more clearly true. Collectively, we know far more about our world and tools than did our great-great-great grandparents, but individually, we actually know far less about the tools and technologies on which we depend; instead, we quite literally ‘depend on’ them to work properly. In turn, that means we depend on whole armies of specialists, most of whom we will never meet, let alone be able to control. Even if we find that we can usually depend on the technologies, and on the people who are responsible for them, the rare exceptions can be genuinely troubling. The risks to the social fabric, in other words, include what Freudenburg (1993) has termed ‘recreancy’—cases where an organization or an individual entrusted with a specialized task fails to perform in a way that fulfills the responsibility and upholds the trust.

Still more broadly, while differing in ontological and epistemological assumptions, as well as in orientation and research method, the various perspectives share the view that risk is an identifiable feature of the world, whether real or constructed, that risks are consequential in many facets of social life, and that, accordingly, the topic is worthy of sociological examination as well as policy attention (Rosa 1998). In addition, the pragmatic importance of new approaches to risk management reflects the growing recognition that the modern world has generated risk problems that demand scientific understanding, but that are too complex or too ambiguous to be ‘answered’ by science alone. Both the scientific examination and the policy making must take place in a context that is permeated by values, and also by unknowns—both those that can be recognized in advance and those that cannot. Owing to these conditions, democracies may have entered an era of postnormal risk (Funtowicz and Ravetz 1992), an era demanding better procedures for managing growing technological risks, procedures that require scientific inputs but that broaden the scope of peer evaluation and integrate citizen involvement. A variety of social experiments have been undertaken to refine and implement these analytic-deliberative procedures, potentially reshaping not only technology policy but also democracy itself.

Risk is the unavoidable companion of growing technological complexity. Managing large-scale technological risks has therefore become a central challenge for all societies—perhaps even the key basis for distinguishing contemporary from past societies (Beck 1986, Giddens 1990). It is this challenge that raises the fundamental and daunting question: are contemporary societies capable of generating technological complexity, technological interdependence, and pervasive risks at a more rapid pace than our growth in knowledge of these complexities and risks—and more rapidly than we can manage them safely? Whether a form of cultural lag, an inherent paradox of technological sophistication, or an epiphenomenal feature of what we currently call postmodernity, this question will remain a focal point of discourse and decision making in the coming era, both for sociology and for society more broadly.

Bibliography:

  1. Beck U 1986 Risk Society: Towards a New Modernity (trans. Ritter M). Sage, London
  2. Clarke L 1999 Mission Improbable: Using Fantasy Documents to Tame Disaster. University of Chicago Press, Chicago
  3. Drabek T E 1986 Human System Responses to Disaster. Springer-Verlag, New York
  4. Erikson K T 1976 Everything in its Path: The Destruction of Community in the Buffalo Creek Flood. Simon and Schuster, New York
  5. Freudenburg W R 1993 Risk and recreancy: Weber, the division of labor, and the rationality of risk perceptions. Social Forces 71(4): 909–32
  6. Freudenburg W R, Jones T R 1991 Attitudes and stress in the presence of technological risk: A test of the Supreme Court hypothesis. Social Forces 69(4): 1143–68
  7. Funtowicz S O, Ravetz J R 1992 Three types of risk assessment and the emergence of post-normal science. In: Krimsky S, Golding D (eds.) Social Theories of Risk. Praeger, Westport, CT, pp. 251–73
  8. Giddens A 1990 The Consequences of Modernity. Stanford University Press, Stanford, CA
  9. Kasperson R E, Renn O, Slovic P, Brown H S, Emel J, Goble R, Kasperson J X, Ratick S 1988 The social amplification of risk: A conceptual framework. Risk Analysis 8: 177–87
  10. Kreps G A 1984 Sociological inquiry and disaster research. Annual Review of Sociology 10: 309–30
  11. LaPorte T R, Consolini P M 1991 Working in practice but not in theory: theoretical challenges of high reliability organizations. Journal of Public Administration Research and Theory 1: 19–47
  12. Mazur A 1980 The Dynamics of Technical Controversy. Communications Press, Washington, DC
  13. Perrow C 1984 Normal Accidents: Living With High-risk Technologies. Basic Books, New York
  14. Rosa E A 1998 Metatheoretical foundations for post-normal risk. Journal of Risk Research 1: 15–44
  15. Short Jr J F 1984 The social fabric at risk: Toward the social transformation of risk analysis. American Sociological Review 49: 711–25
  16. Slovic P 1987 Perception of risk. Science 236: 280–5
  17. US Atomic Energy Commission (AEC) 1975 Reactor Safety Study: An Assessment of Accident Risks in US Commercial Nuclear Power Plants. US Nuclear Regulatory Commission (WASH-1400), Washington, DC
  18. Wynne B 1992 Risk and social learning: Reification to engagement. In: Krimsky S, Golding D (eds.) Social Theories of Risk. Praeger, Westport, CT, pp. 276–97