Organizational Catastrophe Research Paper


While it has always been possible for organizations to make errors, only in recent times has it been possible for them to make errors with staggering outcomes for organizations and their environments. Such errors are the normal consequences of technological development and increasing organizational interdependence, both locally and globally. In the latter part of the twentieth century, science and technology introduced and identified a new type of human-made error: low-probability events with enormous consequences. Examples of such errors are the nuclear reactor accidents at Three Mile Island and Chernobyl, the industrial tragedy at Bhopal, the Exxon Valdez oil spill, and the collapse of Barings Bank.

Only recently have social scientists focused on organizational predilections toward making errors or ensuring reliability. Some researchers look for the causes of major catastrophes, while others focus on situations that could have gone badly but have not. Both are looking for the kernels of organizational processes that produce disaster or contribute to highly reliable performance. The purpose of this research paper is to summarize their main contributions.

1. Three Approaches To The Study Of Organizational Catastrophe

There are three approaches to identifying processes that can result in organizational catastrophe. The first is human factors research, born of the man–machine interface problems identified during World War II. This approach combines the methodologies of engineering and experimental psychology and is now in its third research generation. The first generation, referred to as ‘knobs and dials’ human factors, dealt initially with man–machine interfaces in aviation (though it spread quickly to other industries). The second generation focused on the cognitive nature of work; the shift in emphasis was driven by technological innovation in the late 1960s and 1970s, in particular the development of computers. Most of the work of the first two generations focused on the design of specific jobs, work groups, and related human–machine interfaces to obtain reliable and safe performance. Neither generation was even remotely interested in organizational maladies that might affect performance. The emerging third generation, macroergonomics, concentrates on the overall organization–machine interface.

The second approach is the multidisciplinary science of risk analysis, born in the 1970s. Its foundations lie in the work of the seventeenth-century French mathematician and philosopher Blaise Pascal, who proposed a rigorous approach to thinking about the unfolding of future events: probability theory. Probability theory enables analysts to quantify the odds of two different events occurring and then compare them. Probabilistic risk assessment (PRA) models abound and are very good at predicting events that occur with some regularity in large samples or populations. While practitioners often use these models to predict infrequent events, the models are not very good at that task. Further, the scientists with engineering and statistical backgrounds who build them are typically not interested in organizational processes, and thus fail to include organizational variables.
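
As a rough illustration of the kind of calculation a PRA model supports, the sketch below combines failure probabilities through AND and OR gates in a miniature fault tree. All event names and numbers are hypothetical, invented for illustration; the example is not drawn from any actual assessment.

```python
# Minimal fault-tree sketch of a probabilistic risk assessment (PRA).
# Every event name and probability is hypothetical and assumed independent.

def and_gate(*probs: float) -> float:
    """Probability that ALL independent input events occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs: float) -> float:
    """Probability that AT LEAST ONE independent input event occurs."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

# Hypothetical per-year probabilities.
loss_of_offsite_power = 3e-2
primary_pump_trips = 1e-2
backup_pump_fails = 5e-2
operator_misses_alarm = 1e-1

initiating_event = or_gate(loss_of_offsite_power, primary_pump_trips)
core_damage = and_gate(initiating_event, backup_pump_fails, operator_misses_alarm)

print(f"Estimated core-damage frequency: {core_damage:.2e} per year")
```

The arithmetic itself is trivial; the difficulty the text points to is that the input probabilities for rare events are themselves highly uncertain, and organizational variables rarely appear in the tree at all.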

The third approach is that of behavioral researchers (psychologists, sociologists, and political scientists) who examine organizations using various methods, including observation, surveys, and archival data collection. The first people in this arena were accident researchers and analysts, who most frequently took a ‘name and blame the operator’ perspective (because they could then focus on dismissal and/or training). The researchers who followed, interested in why organizations fail, took a larger perspective and attempted to identify organizational processes that contribute to catastrophic error. It soon became clear that errors are usually caused by a combination of organizational failures.

2. Psychological Approaches To Understanding Catastrophe

Among the first of these was Turner (1978), whose conceptualizations were based on his analysis of a large number of disasters. Before Turner’s work, disaster investigations focused on the impacts of disaster; the only people interested in preconditions were engineers, who confined themselves to the technical determinants of technical failures. Dispelling the notion that disasters are ‘bolts from the blue,’ Turner introduced the idea of incubation periods leading to disaster and identified some components of incubation. These included organizational culture, the intertwining of formal and informal communication processes, hierarchy, power, and the fact that decision making suffers from bounded rationality. Turner used catastrophe theory to explain disasters as abrupt transitions, focusing particularly on discontinuities in information. This kind of discontinuous process has been noted by a number of other researchers.

At a more micro level, Helmreich and co-workers’ early research focused on commercial aviation safety and flight deck team performance; more recently they have examined teams in operating rooms. They identify individual and team factors that influence performance, and they situate their teams within the design and equipment environment of the operating room (Helmreich and Schaefer 1994). Given their social psychological perspective, it is not surprising that these researchers find interpersonal and communication issues responsible for many inefficiencies, errors, and frustrations.

3. Sociological Approaches To Understanding Catastrophe

After the Three Mile Island accident in 1979, the United States President’s Commission on the Accident decided to invite social science input into what threatened to be an entirely engineering-oriented investigation. That invitation led to the publication of Normal Accidents: Living with High Risk Technologies by Perrow (1984). Perrow added to the Three Mile Island story archival analyses of petrochemical plants, aircraft and airways, marine systems, and other systems. He introduced the term ‘normal accident’ to signal that, in systems like Three Mile Island’s, multiple and unexpected interactions of failures are inevitable. Perrow noted that as systems grow in size and in the number of diverse functions they serve, they experience more incomprehensible or unexpected interactions among components and processes. Among these are unfamiliar, unplanned, or unexpected sequences of activity that are either not visible or not immediately comprehensible. The problem is accentuated when systems are tightly coupled, that is, when they incorporate numerous time-dependent processes, invariant sequences (A must follow B), and little slack. When interactive complexity and tight coupling combine, the likelihood of a normal accident increases.
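
Perrow’s claim that interactive complexity and tight coupling jointly raise the odds of a normal accident can be caricatured in a small Monte Carlo sketch. This is purely illustrative and is not a model taken from Normal Accidents; the component counts, failure rates, and the ‘coupling’ parameter are all invented.

```python
import random

def accident_rate(n_components: int, p_fail: float, coupling: float,
                  trials: int = 20_000) -> float:
    """Fraction of trials in which two or more small failures occur and,
    with probability `coupling`, interact before anyone can intervene."""
    accidents = 0
    for _ in range(trials):
        failures = sum(random.random() < p_fail for _ in range(n_components))
        if failures >= 2 and random.random() < coupling:
            accidents += 1
    return accidents / trials

# More components (a stand-in for interactive complexity) and tighter
# coupling both push the simulated accident rate upward.
for n, coupling in [(10, 0.2), (10, 0.9), (50, 0.2), (50, 0.9)]:
    rate = accident_rate(n_components=n, p_fail=0.01, coupling=coupling)
    print(f"components={n:2d}, coupling={coupling}: rate ~ {rate:.4f}")
```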

James Short’s 1984 presidential address to the American Sociological Association was one of the first highly visible treatments of risk in sociology. Short and Clarke (1992) collected a set of authors who made and continue to make seminal arguments about organizations and the dangers they create. Over the years Clarke has stressed repeatedly the ambiguous nature of risk and danger, as has Karl Weick. Clarke’s recent book (1999) focuses attention on the symbolic representations organizations use to justify high-risk systems.

4. High Reliability Organizations

Also in 1984 a group of scholars at the University of California at Berkeley formed a loosely-knit consortium with a few scholars from other universities to examine organizations that appeared susceptible to adverse error but that seemed to operate without major incidents. They called them ‘High Reliability Organizations’ (HROs). This group does not contend, however, that these organizations cannot get into serious trouble. They began by using a behavioral observational approach, and by examining the United States Federal Aviation Administration’s air traffic control system, a well-run nuclear power plant, and the United States Navy’s carrier aviation program. The purpose of the research was to identify organizational processes that result in high reliability operations.

The group’s early findings (see Roberts 1993) underscored the point that operator errors can be reduced, and organizational contributions to error contained, if good design and management techniques are followed. An example is the simultaneous launch and recovery of aircraft aboard an aircraft carrier, where good design and management result in the two processes being physically separated from one another. This requires that ‘(a) political elites and organizational leaders place high priority on safety and reliability; (b) significant levels of redundancy exist, permitting back up or overlapping units to compensate for failures; (c) error rates are reduced through decentralization of authority, strong organizational culture, and continuous operations and training; and (d) organizational learning takes place through a trial-and-error process, supplemented by anticipation and simulation’ (Sagan 1993).
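
Point (b) in the quotation rests on simple reliability arithmetic: if backup units fail independently, the probability that every one of them fails at once shrinks geometrically with the number of units. The per-unit failure probability below is hypothetical.

```python
# Redundancy arithmetic behind point (b): probability that all k independent
# units fail when called upon. The per-unit probability is hypothetical.

def p_all_fail(p_unit: float, k_units: int) -> float:
    return p_unit ** k_units

p_unit = 0.05
for k in (1, 2, 3):
    print(f"{k} unit(s): P(no unit available) = {p_all_fail(p_unit, k):.6f}")
```

The calculation assumes the units fail independently; common-mode failures that disable several units at once can erase much of the apparent benefit.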

While members of the group no longer work together, many continue to develop theories about, and do further empirical work in, high reliability organizations and in organizations that have, sadly, broken apart. They have also examined many more successful and unsuccessful organizations, including the commercial maritime system, medical settings, the commercial airline industry, non-US nuclear power plants, community emergency services, and banking. One stream of the work examines our gradual loss of control of information systems; Rochlin (1997) calls this the computer trap and identifies its components. Another stream (Roberts et al. 1994) looks more specifically at how decisions must migrate around organizations to the people with expertise, rather than to those with formal or positional authority, and at the structural processes that unfold as the organization enters different evolutions. This work finds, contrary to Perrow, that both tightly and loosely coupled organizations are subject to massive failure if they are tightly or loosely coupled at the wrong time.

A further stream of research (Grabowski and Roberts 1999) examines systems of interdependent organizations and asks whether the current mania for virtual and other temporary organizations can deliver appropriate management in situations demanding highly reliable operations. Larger interdependent systems do not necessarily produce new problems, but they offer more opportunities for old problems to crop up simultaneously, as the sketch below illustrates. A final stream of the high reliability research attempts to integrate its theoretical development with the mainstream of organizational theory (Weick et al. 1999).
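
The ‘more opportunities for old problems’ observation is, at bottom, probability arithmetic: as the number of interdependent organizations grows, so does the chance that at least one of them, or several at once, is experiencing a familiar failure. The values of p and n below are invented purely to illustrate the scaling.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(at least k of n independent organizations have a problem in a period)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.02  # hypothetical per-organization problem probability
for n in (5, 20, 50):
    print(f"n={n:2d}: P(>=1) = {p_at_least(1, n, p):.3f}, "
          f"P(>=2 at once) = {p_at_least(2, n, p):.3f}")
```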

Sagan (1993) argued that what he calls ‘normal accidents theory’ and ‘high reliability organizations theory’ are separate theories, and in the case of nuclear armament he finds high reliability theory wanting. That comparison seems a bit odd, since normal accidents theory speaks to the inevitability of accidents while high reliability theory speaks to creating conditions that reduce the probability of accidents. The two conceptualizations should be complementary, and Sagan’s test ground for reaching his conclusion is an arena in which there have been no serious accidents. More recently, Sagan (1995) pits the political science-derived argument that more nuclear arms are better against organizational theory, which holds that if an organization can make errors it probably will. After all, nuclear weapons are controlled (and sometimes not controlled) by organizational processes, and to the extent that these break down there is no assurance that we can manage nuclear weapons safely.

5. Other Research On Organizational Catastrophes

At about the same time the high reliability project was getting under way, Shrivastava (1987) examined what was then the world’s most severe industrial accident, the 1984 disaster at Union Carbide’s chemical plant in Bhopal. He found that the Bhopal accident was caused by a complex set of factors: human (e.g., morale and labor relations), organizational (e.g., the plant’s low importance to the parent company, mistakes the parent company made in establishing the plant, and discontinuous plant top management), and technological (design, equipment, supplies, and procedures). Together he called them ‘HOT’ factors.

In her analysis of the Challenger launch decision, Vaughan (1996) uncovered an incremental descent into poor judgment supported by a culture of high-risk technology. In that culture, flying with known flaws was normal, as was deviation from industry standards; deviations were normalized until they became acceptable and nondeviant. This occurred within a condition of structural secrecy, which Vaughan defines as ‘the way patterns of information, organizational structure, processes and transactions, and the structure of regulatory relations systematically undermine the attempt to know and interpret situations in all organizations’ (1996).

One group of behavioral researchers focuses on crisis prevention and crisis management. While they make ameliorative prescriptions, they have also uncovered additional processes that contribute to crises or their prevention. In Europe this group is led by Reason and Rasmussen. Reason (1997) begins with a model of defenses, the ‘Swiss cheese’ model. Organizations build layers of defense; in an ideal world the layers are all intact, allowing no penetration by accident trajectories. In the real world each layer has weaknesses and gaps, which can be conceived of as holes in each of a number of horizontal planes of defense. These holes are in constant flux rather than fixed and static. Occasionally they line up with one another, and an accident trajectory that penetrates them all causes an accident. The holes are created by both active and latent failures. Defenses in depth have made modern organizations relatively immune to isolated failures, but such defenses are also the single feature most responsible for organizational accidents.
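
Reason’s image can be rendered as a small Monte Carlo sketch: an accident occurs only on the rare occasions when the shifting gaps in every layer happen to line up along one trajectory. The per-layer gap probabilities below are hypothetical, not parameters taken from Reason (1997).

```python
import random

def penetration_rate(layer_gap_probs, trials: int = 200_000) -> float:
    """Fraction of trials in which an accident trajectory finds a gap in
    every defensive layer at the same moment (gaps assumed independent)."""
    hits = 0
    for _ in range(trials):
        if all(random.random() < p for p in layer_gap_probs):
            hits += 1
    return hits / trials

print(penetration_rate([0.1, 0.1, 0.1]))       # three layers: roughly 1 in 1,000
print(penetration_rate([0.1, 0.1, 0.1, 0.1]))  # four layers: roughly 1 in 10,000
```

The same arithmetic shows why isolated failures so rarely get through defenses in depth, and why latent conditions that widen or correlate the holes across several layers at once are the more insidious threat.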

Reason points out that errors and rule violations committed by those at what he calls ‘the sharp end’ of the organization (i.e., the operational end) are common contributors to organizational accidents, but they are neither necessary nor sufficient causes. He notes that proceduralizing jobs is one way organizations attempt to reduce risk. The degree to which jobs can be proceduralized depends on where they sit in the organization: higher-level jobs are more difficult to proceduralize than lower-level jobs. There is usually a trade-off between procedures and training; the more training individuals have, the less likely their jobs are to be proceduralized.

Rasmussen et al. (1994) build on Reason’s thinking by focusing on three levels of human performance, labeled skill-based (SB), rule-based (RB), and knowledge-based (KB). Errors can occur at any of these levels. Reason then summarizes the varieties of rule-based behavior and how they can lead to organizational accidents. He begins by asking whether the task or situation was covered by procedures or training. If not, improvisation occurs, which is knowledge-based; in this situation there is a better than 50 percent chance of mistakes. If the procedures and/or training are appropriate and were followed as intended, there is a high probability of successful performance. If the procedures are inappropriate to the task at hand, latent failures are likely. If they were not followed as intended, mistakes are likely to occur.
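
The chain of questions in the preceding paragraph is essentially a decision tree, and writing it out makes the four outcomes explicit. The sketch below only restates the paragraph; the outcome labels paraphrase its qualitative judgments and imply no measured rates.

```python
def classify_performance(covered_by_procedures_or_training: bool,
                         procedures_appropriate: bool,
                         followed_as_intended: bool) -> str:
    """Restates the rule-based/knowledge-based logic described in the text."""
    if not covered_by_procedures_or_training:
        # Knowledge-based improvisation: better than 50 percent chance of mistakes.
        return "knowledge-based improvisation: mistakes more likely than not"
    if not procedures_appropriate:
        return "inappropriate procedures: latent failures likely"
    if not followed_as_intended:
        return "procedures not followed as intended: mistakes likely"
    return "appropriate procedures followed: high probability of success"

print(classify_performance(True, True, True))
print(classify_performance(False, True, True))
```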

Taking a different perspective, Lagadec (1993), also European, focuses on preventing chaos in crisis. He states that a crisis situation is like a kaleidoscope: touch its smallest element and the entire structure changes. Crises involve large-scale, destabilizing breakdowns that degrade exponentially; such events find echoes in their contexts and feed on all the problems and instabilities already present there. Crisis situations have the following components: initial destabilization and defeat, late or poorly processed warning, massive and shattering challenge, completely unfamiliar events, insufficient or faulty warning systems, individual inertia, sluggishness of all administrative operations, saturated operators, weak warning signals, mental stumbling blocks, and desperate struggling not to admit something could be wrong.

The organizational impediments that hamper emergency action include the absence of prior reflection (thinking about what could happen), stupefying organizational gaps (no one feels qualified to take charge), no emergency plans or imaginative capacity, a wealth of defiance, and typical mistakes and behaviors that include imprudent, uncooperative, and hostile conduct. Lagadec focuses on how to manage crises and how to develop a learning process for the individuals, teams, and organizations involved so as to prevent crises.

In the United States, Pauchant and Mitroff (1992) have investigated crises and crisis preparedness for many years. They present a four-layered onion model of crisis management. The outer layer comprises an organization’s strategies for crisis management; typically the organization has none. Level three evaluates how well the everyday operating structure of the organization contributes to or inhibits crisis, and examines how symbolic functions, such as the organization’s formal structure, reveal the perceptions of its members. Level two addresses an organization’s culture, and level one the subjective experiences of those who form the organization.

6. Summary

In sum, there are a variety of perspectives on the antecedents of organizational crisis. Many different roads are taken, and there is little movement toward consensus. The statistical and human-factors approaches are of only peripheral interest to organizational researchers. Turner’s (1978) groundbreaking work in the United Kingdom provides the springboard for organizational researchers; it was followed by a number of disparate approaches by primarily US-based researchers, complementary to work done in Europe. Finally, there is a group of researchers more interested in crisis management. All of these writers have taken the organization, or groups within it, as their units of analysis, though many have alluded to larger interdependent systems. Only recently have researchers attended to the larger systems in which organizational errors occur because of things that happen across systems of organizations. An example is the nuclear reactor accident at Chernobyl, in which reactor problems were embedded in a culture of secrecy and a system that selected top-level managers more for their political affiliations than for their expertise.

Bibliography:

  1. Clarke L 1999 Mission Improbable: Using Fantasy Documents to Tame Disaster. University of Chicago Press, Chicago
  2. Grabowski M, Roberts K H 1999 Risk mitigation in virtual organizations. Organization Science 10: 704–21
  3. Helmreich R L, Schaefer H G 1994 Team performance in the operating room. In: Bogner M S (ed.) Human Error in Medicine. Erlbaum, Hillsdale, NJ, pp. 225–54
  4. Lagadec P 1993 Preventing Chaos in a Crisis: Strategies for Prevention, Control and Damage Limitation. McGraw-Hill, Berkshire, UK
  5. Pauchant T C, Mitroff I I 1992 Transforming the Crisis-Prone Organization: Preventing Individual, Organizational, and Environmental Tragedies. Jossey-Bass, San Francisco
  6. Perrow C 1984 Normal Accidents: Living with High Risk Technologies. Basic Books, New York
  7. Rasmussen J, Pejtersen A M, Goodstein L P 1994 Cognitive Systems Engineering. Wiley, New York
  8. Reason J 1997 Managing the Risks of Organizational Accidents. Ashgate, Aldershot, UK
  9. Roberts K H (ed.) 1993 New Challenges to Understanding Organizations. Macmillan, New York
  10. Roberts K H, Stout S K, Halpern J J 1994 Decision dynamics in high-reliability military organizations. Management Science 40: 614–24
  11. Rochlin G I 1997 Trapped in the Net: The Unanticipated Consequences of Computerization. Princeton University Press, Princeton, NJ
  12. Sagan S D 1993 The Limits of Safety: Organizations, Accidents and Nuclear Weapons. Princeton University Press, Princeton, NJ
  13. Sagan S D 1995 The Spread of Nuclear Weapons: A Debate. Norton, New York
  14. Short J F, Clarke L 1992 Organizations, Uncertainties, and Risk. Westview Press, Boulder, CO
  15. Shrivastava P 1987 Bhopal: Anatomy of a Crisis. Ballinger, Cambridge, MA
  16. Turner B A 1978 Man-made Disasters. Wykeham, London
  17. Vaughan D 1996 The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press, Chicago
  18. Weick K E, Sutcliffe K M, Obstfeld D 1999 Organizing for high reliability: Processes of collective mindfulness. In: Staw B M, Sutton R (eds.) Research in Organizational Behavior. JAI Press, Greenwich, CT, pp. 81–123
