Sociology And Politics Of Risk Research Paper

Risk and chance have long been held to be key elements of the human predicament; but risk awareness besets whole cultures as well as individuals. Social scientists typically have been interested in risk both from the social-psychological stance of particular actors and in terms of cultural responses to collective exposures. At one level, sociologists have discerned a historical trend towards the taming of risk, both in the sense that risk and probability have become better understood, and that certain forms of exposure to risk have been reduced. Nineteenth and early twentieth century confidence in progress and control over nature suggested both that risks were diminishing—diseases could be controlled, dangers to food supply countered through scientific farm management, and so on—and that better understanding of risk and chance was possible. But this modernist confidence about the prospects for the control of risk has been moderated and to some extent undermined by subsequent developments. As an example, twentieth-century technologies were associated with new forms of risk, including risks to the whole ecosystem. It seemed that the forces of progress could lead to novel hazards as well as to enhanced security. Furthermore, the legal frameworks and intellectual tools developed for the regulation of risk made it possible to elaborate arguments about risk in ever more sophisticated ways, ironically engendering a more vivid appreciation of risk. Finally, the complexity of social development fostered distinctively social risks, related to crime and the increasing uncertainty of social life. This discussion will begin by focusing on the regulatory treatment of risk before moving on to consider the broader social experience of risk.

1. Risk Assessment: Regulators And Risks

One systematic and extensive literature on risk grew out of concerns to make policy and to legislate for risk; this literature can be categorized as concerned with ‘risk assessment.’ Given the ubiquity of risks, the question for regulators was customarily thought of as: ‘how safe is safe enough?’ Governments and official agencies recognize that, regrettably, train crashes occur, that drivers are daily involved in automobile collisions, that leaks will sometimes happen even from well-managed chemical plants, and that farm chemicals may impact the environment, agricultural workers, or even occasional consumers. Given that no complex system can be guaranteed to be perfectly safe, the leading approach was to ask about the price of additional safety. From this consequentialist world view, risk is thought of as the mathematical product of the likelihood of a hazard occurring and the costs to which that hazard gives rise. The policy maker’s art is then to minimize risk within the budget that society is willing to spend on safety.
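
Expressed schematically, this consequentialist calculus takes a familiar expected-cost form. The rendering below is a generic illustration in standard notation, not a formula drawn from the regulatory sources cited in this paper; all symbols are introduced here for exposition only.

```latex
% Schematic expected-cost formulation of the consequentialist view of risk.
% R_i is the risk posed by hazard i; the regulator chooses mitigation levels
% x_i so as to minimize aggregate residual risk within a safety budget B.
\[
  R_i(x_i) = p_i(x_i)\, C_i(x_i), \qquad
  \min_{x_1,\dots,x_n} \; \sum_{i=1}^{n} R_i(x_i)
  \quad \text{subject to} \quad \sum_{i=1}^{n} s_i(x_i) \le B,
\]
% where p_i is the likelihood of hazard i occurring, C_i the cost to which it
% gives rise, and s_i(x_i) the price of the safety measures x_i applied to it.
```

The difficulties rehearsed below bear on every term of this expression: the probabilities, the consequence valuations, and the common ‘currency’ in which both are supposed to be denominated.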

In practical terms, this task was always much more difficult than might be supposed, and certainly more intractable than economists routinely implied. In part, difficulties arose because of the inherent problems of trying to harmonize risk data across various fields. It may be possible to compare the costs and risks of various railway signaling systems, but (in the absence of a super-ministry of risk reduction) transport, industrial, medical, and agricultural risks cannot be brought within the same calculus. Worse still, both the likelihood of the problem and its consequences commonly defy exact specification. While there are good data on the medical and related risks of typical motor accidents on US and European freeways, the probability and the consequences of nuclear power station incidents can only be calculated in hypothetical ways. Similarly, the risks and costs of ‘mad-cow disease’ (bovine spongiform encephalopathy, or BSE), an unprecedented form of infection apparently spread through the food chain, could not be subjected to the risk calculus in the standard way. Further, the only way to obtain quantitative measures of many risks to humans is to extrapolate from indirect observations, such as rodent bioassays, thereby introducing additional sources of distortion into risk assessment. Without good data, all calculations will be ‘rough and ready.’ Nor is it even clear that there is a single ‘currency’ into which all sorts of harms can be converted for the purposes of cost–benefit calculations.

Despite these persistent difficulties, one line of the literature about risk has been devoted to exploring what are thought to be the public’s shortcomings in relation to risk understandings. Lay people appear to make irrational and statistically unsupported assessments of the relative risks of different hazards, and to demand high levels of risk reduction without being aware of the costs. In the extreme, the public appears unwilling even to accept the cost–benefit approach, but has no systematic alternative with which to replace it. As an added complication, despite frequent public protestations of anxieties about risk, market mechanisms seem to imply that people are not necessarily very risk averse: The automobile market has tended to sell on aesthetic and performance criteria rather than on safety; and consumers continue to eat fatty foods and to avoid exercise despite well publicized links to heart disease.

The apparent discrepancies between public and expert versions of risk have given rise to a series of psychometric studies (Slovic 1992). For example, researchers have shown that people tend systematically to undervalue the risks to which they are routinely exposed (because of their lifestyle or occupation), and to overvalue the seriousness of risks that are novel or potentially catastrophic. Moreover, there appear to be significant differences between people’s assessment of the risks to which they expose themselves and those to which they believe they are subjected by others. These perceptions translate into corresponding pressures in policy. People seem more likely to demand that automobiles be engineered to lower traffic fatalities than they are to constrain themselves to drive those automobiles more cautiously.

The resulting tension between public and expert interpretations of risk has caused a problem for policy makers. If they base regulations on expert judgments—that is, keyed only to the statistical probability of harm—policies may be unpopular or even subverted, whereas basing policies on the public’s apparent preferences threatens to make regulations arbitrary, unscientific, or too costly (Breyer 1993). If the citizenry really is more tolerant of self-imposed risks than of risks visited on them by others, then there is an argument for reflecting this in public policy, whatever the ‘actual’ exposure to risk. Similarly, if, as Slovic and colleagues report (Slovic 1992, p. 121), the public is more concerned about certain ‘dread’ risks than other risks which experts hold to be of equal hazard, then maybe the cost–benefit equation has to be widened to take into account people’s manifest preferences. Regulators find themselves confronting a tension familiar to liberal democracies: That between people’s ‘revealed’ preferences and the recommendations supported by expert opinion.
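
One way to picture what ‘widening’ the cost–benefit equation might involve is the toy calculation below. It is a minimal sketch, not a procedure taken from Slovic or from any regulatory agency: the hazards, probabilities, dread scores, and weights are invented for illustration only.

```python
# Hypothetical illustration: contrast a plain expected-harm measure with a
# 'dread-weighted' index that inflates hazards the public regards as dreaded
# or as imposed by others. All figures and weights below are invented.

def expected_harm(probability: float, consequence: float) -> float:
    """Textbook consequentialist measure: probability times consequence."""
    return probability * consequence

def weighted_risk(probability: float, consequence: float,
                  dread: float, imposed: float) -> float:
    """Scale expected harm by manifest-preference factors for 'dread' (0-1)
    and for exposure imposed by others (0-1); the weights are arbitrary."""
    preference_multiplier = 1.0 + 2.0 * dread + 1.0 * imposed
    return expected_harm(probability, consequence) * preference_multiplier

hazards = {
    # name: (annual probability, consequence in notional cost units, dread, imposed)
    "routine road travel": (1e-4, 1.0, 0.1, 0.2),
    "nuclear accident":    (1e-7, 1e4, 0.9, 0.9),
}

for name, (p, c, dread, imposed) in hazards.items():
    print(f"{name}: expected harm = {expected_harm(p, c):.2e}, "
          f"dread-weighted = {weighted_risk(p, c, dread, imposed):.2e}")
```

The point of the sketch is only that any such weighting scheme imports contestable judgments (how much should ‘dread’ count, and who decides?) into what is otherwise presented as a technical calculation; this is precisely the tension regulators face.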

2. Risk Expertise: The Reflexivity Of Risk

These problems have become amplified in particular institutional contexts in ways that social science research has sought to explicate. Risk assessments have been developed, particularly by official regulatory agencies, so as to apply to statistically representative or, in some other sense, ‘average’ people (Jasanoff 1990). Calculations of how long it takes to evacuate an aircraft, and thus of the requisite number of emergency exits, the width of aisles and so on, are supported by evacuation trials, but these in turn depend on notions of what is a typical passenger cohort. Similarly, arguments about exposure to pollutants have had to construct some notion of the average person as the unit for measuring the at-risk population (with, in many cases, the need to substitute the average risk animal for the average human). Yet this ideal type hardly exists in reality and may obscure threats to specific subpopulations. Women may be different from men; pregnant women are more clearly different. The young may differ from the elderly, the housebound from the active, and so on. These differences may prove to be far from ‘academic’ in particular contexts of exposure. For example, people who live near one of the UK’s most controversial nuclear power plant sites, Sellafield, and who happen to favor a shellfish-rich diet may be exposed to a greater nuclear hazard than the average person, because of the way in which shellfish filter material from sea water. In this case, a behavioral choice exacerbates other risk factors. In other cases, however, numerous background factors may act in synergy for the poor or socially disadvantaged, as in the case of American inner-city minority populations, who may be exposed to toxic pollutants from multiple sources (Bullard 1994). In neither the UK nor the US cases would standard risk assessment methodologies, based on average characteristics, compute adequately the risk to specially vulnerable populations. Indeed, technical risk assessment has tended to obscure the distributive implications of risk production. It is, in this sense, a social technology that reinforces existing structures of power (Winner 1986).

Ironically, the great increase in formal examinations of risk has not so much diminished risk concerns as exacerbated them (Beck 1986). Given the complexities of risk assessment and the high stakes involved, it is understandable that risk assessment methodologies have been subject to legal and other formal challenges (see Jasanoff 1990). These proceedings have subjected risk assessments to critical deconstruction, questioning the precise basis for the choice of methodologies. Where these methods have had consequences injurious to some but not all sections of the population, the procedure has been made to appear discriminatory. Environmental justice movements have formed to press the point that risk assessments of individual chemicals are not equivalent to a demonstration of the impact of a cocktail of pollutants. Communities experiencing extensive hazardous exposures—each element of which may be deemed not to be unduly risky on its own—argue that the overall impact of multiple sources is not well gauged by standard methodologies.

Furthermore, as Jasanoff (1990) has demonstrated, particularly in the US, the combination of adversarial cross-examination with the separation of powers and scope for judicial review of executive agencies, has meant that extensive financial and intellectual resources have been directed at deconstructing risk assessments. Given all the complexities outlined above (the hypothetical nature of many risks, the difficulty of identifying an average case, the impossibility of most human experimentation, the need for surrogate measures, and so on), there has been no prospect of finding an incontestable scientific basis for defending particular risk assessments.

Sociological research has focused also on the strategic constructions of certainty and uncertainty associated with risk assessments. Thus, Wynne (1992) has argued that risks and probabilities are made up of many kinds of not-knowing, only a fraction of which (e.g., uncertainties of extrapolation from rodent bioassays) may be acknowledged openly in formal risk assessment processes. In some cases, it is possible to establish a hierarchy of uncertainty, as between ‘risks’ and ‘uncertainties’: With risks one knows the odds, but with uncertainties, only the general parameters of the problem. Most practical questions which science-in-public has to face, however, involve an additional kind of noncertainty. This Wynne terms ignorance. Ignorance refers to aspects of a problem which are consciously or unconsciously bracketed off, and commonly not further investigated. These may be issues which lie outside the disciplinary paradigms of the sciences and are thus in a sense necessarily, rather than perniciously, excluded from day-to-day research. Nonetheless, ignorance in this sense is a different form of not-knowing than mere uncertainty; it is not adequately captured by being treated simply as extreme uncertainty.

In principle at least, more knowledge might assist in handling these kinds of not-knowing. Uncertainties might be turned into risks. New understandings might clarify specific areas of former ignorance, though, of course, there is no prospect of ignorance being overcome in general. But, in addition, Wynne argues that there is a fourth consideration, ‘indeterminacy,’ resulting from ‘real open-endedness in the sense that outcomes depend on how intermediate actors will behave’ (Wynne 1992, p. 117). In other words, the safety or reliability of systems with an organizational or human component is deeply dependent on how the systems are operated. By treating most systems as determinate, conventional risk-assessment practices typically adopt tacit, unexamined and untested sociological hypotheses about those social practices that are central to the risk-producing activities. Thus, evaluations of risks from potentially hazardous technologies do not depend only on the behavior of physical and biological systems (difficult to model though they may be) but every bit as much on the behavior of plant managers, operators, and even those exposed.

Several insights into the sociology of expertise follow from these claims. First, Wynne (1992) observes that experts, when faced with the need or chance to regulate in an area of noncertainty, are tempted to treat all forms of not-knowing as statistically treatable uncertainty, even though things of which they are ignorant cannot, by definition, be quantified. Second, he argues that the public may be significantly more expert than the scientists in relation to some of the matters covered by indeterminacy. His view, as expressed with Irwin, is that ‘science offers a framework which is unavoidably social as well as technical since in public domains scientific knowledge embodies implicit models or assumptions about the social world’ (Irwin and Wynne 1996, pp. 2–3). In other words, insofar as expert assessments depend on assumptions about particular social, cultural, or occupational practices of lay groups, it is likely that these publics will be more expert in these matters than technical ‘experts’ more distant from the relevant experiential insights.

A related body of research ties this manifestation of hazard within complex sociotechnical systems to features of organizational practice and culture. Thus, Perrow’s discussion of ‘normal accidents’ (1984) makes the case convincingly that many hazardous occurrences are a routine (if often unpredictable) outcome of the interaction of the manifold technical and organizational components of modern power stations and production plants. Accidents are to be expected even if no particular accident is predictable. More recently, Vaughan has applied associated insights to understanding the decision to launch the ill-fated Challenger mission (Vaughan 1996). Of course, questions still remain about the extent to which organizational arrangements give priority to concerns for production, or for safety (Clarke 1989), and many recent case studies show corporations placing the most value on production and profitability.

In sum, though risk discourse has been promoted by scientists and by scientific attitudes to the valuation of the natural world, it is clear that public experience of risk assessment has encouraged successful challenges to the official orthodoxy, as in the emergence of an environmental justice agenda, and that ‘scientific’ risk assessment has been undermined dialectically by the advance of legal–rational reflection. In response to such difficulties, official agencies commonly are left with no alternative but to demand ‘more and better’ science; yet there are few grounds for thinking that further research or rationalization will resolve the problems outlined above. Ironically, social movement campaigners find themselves confronting a similar impasse. In opposing the establishment they commonly produce counter-claims about health or environmental risks, yet find related difficulties in underpinning their risk claims. Such experiences often generate an ambivalent attitude toward scientific expertise (Yearley 1992). These phenomena serve as a good example of what Ulrich Beck (1986) has termed ‘reflexive modernization.’ The institutions of modernity, notably the traditions of scientific analysis and legal fact-finding, have been turned upon themselves with deconstructive consequences. For Beck, this reflexive modernization is just one facet of the ‘risk society’ thesis, which portrays late-millennial industrialized societies as being unusually and overwhelmingly concerned with the distribution of risks and other ‘bads.’ In this view, social scientific interest in risk consists not so much in the study of societal responses to particular hazards; rather it is the key to characterizing the dynamics of society as a whole.

3. Risks And Cultures

Beck’s thesis in its widest form asserts that the most modern risks (termed by others ‘late’ or ‘high’ modern (Giddens 1991)) are reflexive in the sense that they are self-induced. Risks in the early modern period were external to the self-conscious control of social actors. Diseases would spread, bad weather would damage harvests, fires would consume urban areas as though under the influence of external, natural forces. Even if some of these risks were exacerbated by human interventions, the contemporary perception was that they were uncontrollable. In high-modernity, by contrast, risks such as the threat of disastrous nuclear reactor incidents are plainly the consequences of human activities (Beck 1986). In this view, Victorian and early-twentieth-century confidence about the progressive diminution of risk marks, in an ironic fashion, the transition from apparently external to societally induced risk. Scientific and technological developments are caught up in this reflexive pattern because modern risks are typically the result of technological ventures (nuclear power, ozone-depleting chemicals in the earth’s atmosphere); science and technology are involved in the cause, the diagnosis, and, with luck, eventual rectification of the problem.

Beck’s overall analytical claim has been greeted with widespread enthusiasm by many social scientists, and the term ‘risk society’ has gained broad currency. However, the details of his analysis have not met with the same welcome. For one thing, it is unclear how ‘modern’ all present-day risks are. The incidence of ‘mad-cow disease’—eventually apparently transmitted to humans—is thought to have arisen from the low-technology business of producing cattle feed from animal protein, specifically in the context of energy-saving, low-temperature process innovations. More significantly, Beck’s favored examples—such as the risk of fall-out from the Chernobyl reactor explosion—have a primitively ‘democratic’ quality. On the face of it, the fall-out may descend upon the poor and wealthy alike. In that sense, the ‘risk society’ is everyone’s problem (Beck 1986). But as the environmental justice movement, particularly in the USA, has made clear, environmental ‘bads’ are still often distributed quite unequally along ethnic, gender, and class lines (Bullard 1994). The risks of the risk society may not be shared as evenly as Beck implies. Finally, Beck has rather little to say about the reasons for cultural differences in the salience of different kinds of risk, although it is clear, for example, that the framing of, and policy responses to, risks vary widely even between socially and economically comparable regions, such as Europe and North America (Jasanoff 1986).

The relationship of risk and culture has been explored by the anthropologist Mary Douglas and several of her colleagues. Developing Durkheim’s celebrated suggestion that religious cosmologies reflect social structures, Douglas proposed that cultural views of the characteristics of nature indicate as much about the reflected character of society as about the underlying features of nature itself. Carefree, individualistic, dynamic cultures tend to view nature as resilient and able to look after itself, while cultures which are precarious or worry about protecting their boundaries tend to view nature as fragile and in need of protection. Douglas subsequently developed this approach for the examination of risk in advanced industrial societies. A society’s risk anxieties, in her view, relate as much to the cultural ‘insecurities’ of that society as to the actual extent of hazards (Douglas and Wildavsky 1982). Douglas’ ‘cultural theory’ has proven difficult to test (but see Marris et al. 1998) and it raises a host of theoretical problems, such as the unit of cultural analysis and the sources of cultural change. However, her approach does serve to underline the extent to which a culture’s awareness of risks may be an imputation on to nature of more general cultural anxieties.

Finally, it is clear that not all modern risks arise from human interventions in nature. Though concerns about medical and environmental risks have increased conspicuously since the mid-1970s, worries about crime, immigration, internet security, financial markets, and global social dislocation have also characterized the networked and industrialized world. Risk anxieties can thus be inscribed not only onto nature and the human body, but also onto the self-understanding of modern societies.

Bibliography:

  1. Beck U 1986 Risikogesellschaft: Auf dem Weg in eine andere Moderne. Suhrkamp, Frankfurt am Main [Trans. 1992 as Risk Society: Towards a New Modernity. Sage, London]
  2. Breyer S G 1993 Breaking the Vicious Circle: Toward Effective Risk Regulation. Harvard University Press, Cambridge, MA
  3. Bullard R D 1994 Dumping in Dixie: Race, Class, and Environmental Quality. Westview Press, Boulder, CO
  4. Clarke L 1989 Acceptable Risk? Making Decisions in a Toxic Environment. University of California Press, Berkeley, CA
  5. Douglas M, Wildavsky A 1982 Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers. University of California Press, Berkeley, CA
  6. Giddens A 1991 Modernity and Self-Identity. Polity Press, Cambridge, UK
  7. Irwin A, Wynne B 1996 Introduction. In: Irwin A, Wynne B (eds.) Misunderstanding Science? The Public Reconstruction of Science and Technology. Cambridge University Press, Cambridge, UK
  8. Jasanoff S S 1986 Risk Management and Political Culture. Russell Sage, New York
  9. Jasanoff S S 1990 The Fifth Branch: Science Advisers as Policymakers. Harvard University Press, Cambridge, MA
  10. Marris C, Langford I, O’Riordan T 1998 A quantitative test of the cultural theory of risk perception: Comparisons with the psychometric paradigm. Risk Analysis 18: 635–48
  11. Perrow C 1984 Normal Accidents: Living with High Risk Technologies. Basic Books, New York
  12. Slovic P 1992 Perception of risk: Reflections on the psychometric paradigm. In: Krimsky S, Golding D (eds.) Social Theories of Risk. Praeger, London
  13. Vaughan D 1996 The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA. University of Chicago Press, Chicago, IL
  14. Winner L 1986 The Whale and the Reactor: A Search for Limits in an Age of High Technology. University of Chicago Press, Chicago, IL
  15. Wynne B 1992 Uncertainty and environmental learning. Global Environmental Change 2: 111–27
  16. Yearley S 1992 Green ambivalence about science: Legal–rational authority and the scientific legitimation of a social movement. British Journal of Sociology 43: 511–32