Technology Assessment Research Paper


The term ‘technology assessment,’ often abbreviated as TA, covers a variety of usages. The main meaning is the early identification and assessment of eventual impacts of technological change and applications, as a service to policy making and decision making more generally. The term TA is also used when firms, consultancies, technological institutes, and agencies in health care and other social sectors want to assess the promise and profitability of new technological options, and/or carry out a broad version of cost–benefit analysis, sometimes also including risks. In both cases, there is an element of anticipation of future developments (of technology, and its relation to markets and society) and an element of feedback of such anticipations into relevant decision arenas. This combination is the defining characteristic of technology assessment exercises.

1. Context

Practical and political concerns about uncertainties involved in developing and applying new technologies were the breeding ground for TA in the 1960s. One response was to try to manage the uncertainties through a combination of technological forecasting and the development of techniques to address and evaluate impacts. US organizations such as the Rand Corporation pioneered such TA techniques. This managerial response could be broadened to include wider societal impacts; the impacts of the US space program, for example, were explored by making comparisons with the impact of the railroad on American society.

The other response was policy-oriented and political, taking the emerging concern about the negative impacts of technology, for example, on the environment or in relation to defense and security issues, as a starting point to try to evaluate such impacts at an early stage. Ways to do so could include the techniques developed in the managerial approach, but would also build on policy analysis and on action-oriented research.

Special agencies could be made responsible for such a task, as when in 1970 in the USA a federal law established a Congressional Office of Technology Assessment (OTA), as ‘an aid in the identification of existing and possible impacts of technological application.’ Congress was interested also because such analytic support would increase its power vis-a-vis the Executive. The example of OTA was noticed in European countries and, with some delay, followed, with the specific format depending on the political tradition of each country.

By the mid-1980s, TA was a recognized activity, both professionally (cf. Porter et al. 1980) and politically. Gradually, the earlier and almost exclusive emphasis on analytic approaches and the production of reports was complemented by an interest in linking up more closely with decision making, or at least in contributing to agenda setting. Public debates about energy and environmental issues helped to make this aspect of TA more prominent.

By the 1980s, a philosophy of TA, already recognizable in the 1960s, had become widely accepted: impacts are anticipated and the anticipations fed back into decision making, in order to reduce the human and social costs of learning how to handle technology in society, compared with learning by trial and error. The experience of a number of chemical and nuclear accidents, as well as uncertainties about the new biotechnology (and the life sciences more generally) since the mid-1970s, contributed to the legitimacy of TA exercises. This philosophy is shared by what would otherwise seem to be a wide variety of approaches, ranging from technology forecasting to the consensus conferences popular as we enter the twenty-first century. This philosophy is also visible in debates about technology policy (implicit in the USA, explicit in most other countries), with the suggestion that technology policy should include a strong TA component. Some economists have argued that TA is the only justifiable component of government technology policy in a market economy (Freeman and Soete 1997).

2. Main Strands In The Variety Of TA Approaches And Activities

By the late 1990s, four established, plus one emerging, strands of TA could be distinguished, each with its own style and addressing different audiences:

(a) TA in firms and in technological institutes, oriented toward mapping future technological developments and their value to the firm or institute, and used as an input in strategy development (Hastbacka and Greenwald 1994). ‘Picking the winners’ (or ‘avoiding the losers’) used to be the overriding orientation. This strand of TA has developed relatively independently of ‘public domain’ TA, but links are emerging because of the need of firms to take possible societal impacts and public acceptance into account. The biotechnology sector became a clear example by the late 1990s. The ‘insiders’ are interacting with the ‘outsiders,’ and the effect is that prudent firms are receptive to public-interest assessments.

(b) TA in specific sectors, such as environment or health care and regulation, as an instrument to determine acceptability of a project, or a new product or technological option, in relation to the public interest in, and policies about, that sector. Environmental Impact Statements are now required in many countries before projects can be implemented (Vanclay and Bronstein 1995). Medical and health care TA is a recognized specialty with its own international society and meetings, focusing on evidence-based evaluations of concrete medical and health-care activities and options (cost–benefit analysis, risk analysis), not on wider societal impacts.

(c) TA for policy development and political decision making about projects or broad programmes with a strong technological component (for example, the electronic superhighway or modern agriculture) or important technologies (for example, genetic modification). One can call this ‘public service’ TA, and consider the now-defunct US OTA as the embodiment of this type of TA. OTA, during its lifetime, developed a robust approach to TA studies (see Wood 1997, Bimber 1996). Other TA bodies serving national parliaments and/or national governments tend to include participatory TA methods in addition to expert-based approaches.

(d) TA exercises can be oriented to the public arena more generally, and focus on articulation and building an agenda for handling new technology in society. This most recent strand takes up the increasing calls for participation (at least by so-called new stakeholders such as environmental groups). While it is particularly visible and more or less institutionalized in some European countries (Denmark, the Netherlands), participatory methods such as consensus conferences have been taken up all over the world (Guston and Bimber 1997). Agenda-building TA has a longer history, however, given that controversies over new projects or new technologies (and the studies and documents produced in the course of the controversy) induce learning about potential impacts and articulation of the value of the technology. Agenda-building TA merges into informed consultation processes to reach agreement on the value of new technology. Thus, there is overlap between TA and more general political and policy approaches for articulation and learning (e.g., hermeneutic policy making).

The contrast between private domain TA and public domain TA seems strong, because of the difference in goals and in the actors involved. The scope of private domain TA is less broad than that of public domain TA, but its assessments try to be more precise, and their outcomes are fed back into strategy development and decision making. Building on the broadening of private domain TA (strand (a)) and the attempts of agenda-building TA to include various private actors (strand (d)), a fifth strand of TA has emerged, even if it is still somewhat programmatic.

Constructive TA emphasizes that impacts of new technology are a joint product of the technology, the actors involved, and wider interactions. The experience with introducing new information and communication technologies within and between organizations was important in this respect (Orlikowski 1992), as were the attempts of private actors to broaden the scope of their assessment processes already at an early stage of new product or process development (cf. strand (a)). Societal experiments with the introduction of new technologies, such as electric cars, mix private and public actors, and are occasions for societal learning about new technologies and for feedback into further development and uptake. While still programmatic in parts, Constructive TA has also developed generic strategies such as Strategic Niche Management (Schot and Rip 1997).

3. Methodological Issues

The combination of anticipation and feedback, characteristic of all varieties of TA, entails a methodology that combines, so to speak, writing a history of the future (supported by expert judgment and by social-science insights and data) with informing action or preparation for action. Relevant social-science insights derive from the sociology and economics of technology (Schot and Rip 1997) and from organizational sociology (Orlikowski 1992), and emphasize the co-evolution of technology and society. Estimating impacts is speculative, of course, but the speculation is controlled, and there are bodies of relevant experience (Vanclay and Bronstein 1995). Uptake into action remains precarious, exactly because of the fallibility of such estimates: who could have predicted the further development and impact of the Internet in the early 1990s? On the other hand, efforts at foresight can be productive even when they turn out not to be correct, if they stimulate joint learning.

Methods of TA can emphasize data collection and analysis of present impacts and trends, with some counterfactual speculation; consultation of experts and iterative refinement of the results; and/or sociotechnical scenarios. The toolkit of TA contains a wide variety of tools, from forecasting broadened with social and sociotechnical mapping, multi-criteria analysis, and integrated assessment, up to mediation and communication techniques. The selection of appropriate tools is related to the sponsor of the exercise (private promoter, sectoral regulator, public policy, the public domain more generally), and is often eclectic. While there are recognized TA professionals, TA exercises are carried out by a wide range of consultants, academics, dedicated agencies and their committees, and volunteers.
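The multi-criteria analysis mentioned in this toolkit can be illustrated with a minimal weighted-sum sketch. The criteria, weights, and option scores below are hypothetical, invented only to show the mechanics of such a tool, not taken from any actual TA exercise:

```python
# Illustrative multi-criteria analysis (weighted-sum model) for comparing
# technological options. All criteria, weights, and scores are hypothetical.

def weighted_score(scores, weights):
    """Return the weighted sum of criterion scores for one option."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Hypothetical criterion weights (summing to 1.0), e.g. agreed in a
# stakeholder consultation, and option scores on a 0-10 scale.
weights = {"cost": 0.40, "environmental_impact": 0.35, "public_acceptance": 0.25}

options = {
    "option_A": {"cost": 7, "environmental_impact": 4, "public_acceptance": 6},
    "option_B": {"cost": 5, "environmental_impact": 8, "public_acceptance": 7},
}

# Rank options from highest to lowest weighted score.
ranking = sorted(options, key=lambda o: weighted_score(options[o], weights),
                 reverse=True)

for name in ranking:
    print(name, round(weighted_score(options[name], weights), 2))
```

In practice, the contested step is rarely the arithmetic but the choice of criteria and weights, which is why such tools are typically embedded in the expert and participatory consultations described above.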

There is a fundamental dilemma, which has been called the anticipation and control dilemma (Collingridge 1980). At an early stage of technology development, the nature of the technology and the articulation of interests are still malleable, but it is unclear what the effects and impacts will be. By the time these become clear, the technology is entrenched and vested interests make it difficult to change. By then, as with the pesticide DDT, the only possible response may be to forbid further deployment of the technology. More is possible, however, than this stark version of the control dilemma suggests.

Recent economics and sociology of technology have shown how path dependency increases over time in technological development, and how co-production of impacts occurs along those paths. The QWERTY keyboard of typewriters, and now also of computers, is a well-known example of path dependency. From the point of view of TA philosophy, increasing path dependencies are an opportunity: they allow better anticipation of future developments. Furthermore, understanding the dynamics which generate path dependency allows one to modulate them to some extent.

A generic strategy to avoid decision regret is to remain flexible as long as possible, at the risk that none of the various options will be pursued sufficiently far so as to see what they are worth. A concrete example is the 1991 French Law on nuclear waste handling which requires three options to be maintained for a period of perhaps 15 years, and with annual public evaluation of progress and prospects. The Parliamentary Office d’Evaluation des Choix Scientifiques et Techniques is involved, and over time, modifications can be made to the trajectories.

When there is an addressee or identifiable audience, feedback of TA exercises on policy and on implementation of technology will occur. The direct impact of public service TA on science and technology policy and decision making has not been large, except when there are controversial issues, and the TA exercise provides ammunition for the contending parties.

Government technology policy most often focuses on promotion of (selected) technologies, as when stimulating the electronic superhighway. The main question for such technology policy is to ‘pick the winners,’ now for society as a whole. On the other hand, other agencies of the same government may be occupied with reducing the human and social costs of the introduction of new technology, for example, through safety and environmental regulation. This dichotomy between promotion and control of new technology is part of the de facto constitution of modern societies, and is reflected not only in the division of labor between government agencies, but also in cultural and political views, as in the assumption that there will be proponents and opponents of a new technology (Rip et al. 1995). TA becomes a victim of these views when technology promoters see, and condemn, technology assessment as technology harassment or arrestment. While contestation will continue, there is now more interest in constructive approaches (cf. the fifth strand, Constructive TA).

4. Situating TA, And Future Directions

One key point, related to divisions of labor in modern society, is the asymmetry between ‘impactors’ (those at the source of impacts) and ‘impactees.’ The asymmetry can be due to a difference in power, but it also always involves a difference in timing. Initiators of technological development know more, and have invested more, at an early stage, and impactees and TA agents have to wait and, in a sense, follow their lead.

The asymmetry has another component: technology developers are insiders and do not necessarily know very much about the outside. Adoption and diffusion, however, are up to ‘outsiders’ who have other knowledge, interests, and expectations. The story of nuclear energy since the 1960s is in part one of a struggle between insiders and outsiders. On a much smaller scale, the same storyline is visible in the development of cochlear implants for deaf people, where it turned out—unexpectedly for the insiders—that the deaf community was very negative about taking deaf people out of their own culture by providing them with implants (Garud and Ahlstrom 1997).

A second key point is that there are cumulative effects of living in a technological society. Risk repertoires emerge in which the impacts of a new technological development are treated similarly to those of earlier ones. In the 1970s and early 1980s, the risks of recombinant DNA research were seen as those of a runaway organism and addressed with probabilistic risk analysis, exactly as had become accepted for runaway accidents in nuclear power plants. Other, more appropriate framings, such as loss of biodiversity, were slower to emerge.

A third key point is that anticipation and feedback can occur without dedicated TA efforts. Early warnings and controversies over new technologies are examples, and may be starting points for TA exercises. We live in a risk society and modernity has become reflexive (Adam et al. 2000). The rise of TA is then an indicator of reflexive modernity, and a conduit to enhance reflexivity—while also channeling it in certain ways. There were criticisms of the instrumentalization of reflexivity and possible technocratic tendencies of TA in the 1970s and 1980s, and of the symbolic nature of participatory TA exercises in the 1990s. Both the critiques and the concrete TA exercises actually build on the TA philosophy, but emphasize different parts of it.

Uncertainties about new technology and society are associated with risks of the unknown and a felt need to do something, anything, about them, and with the variety of views, values, and interests involved, which make decision making and the de facto acceptance of new technologies difficult, and difficult to anticipate.

A striking new development is the increasing acceptance of the so-called precautionary principle (since the Rio Declaration of 1992): action is justified even if there is not yet any conclusive evidence about risks, or negative impacts more generally. Precaution is predicated on speculative scenarios about what might happen. Quality control of such scenarios is what TA can offer.

Clearly, TA is not just a new professional and managerially oriented activity (since the late 1960s) and an approach to interactive agenda building (since the late 1980s). It is part and parcel of larger developments. The co-evolution of technology with organizations and with society is being taken more seriously than in the earlier ‘cannon-ball’ approaches to impact assessment. This is particularly visible in assessments of information and communication technologies, and their embedding in society, but also in assessments and debates about energy, transport, and infrastructure, even if vested interests and established sociotechnical regimes can be obstacles to a balanced debate. The rapid developments in the life sciences and their applications, with genetics as the most striking example, are promising and a cause for concern at the same time. There, the ‘cannon-ball’ storyline is still prominent, and concerns about traditional ethical values reinforce a proponent–opponent dichotomy. It is a major challenge for TA to overcome such limited definitions of the problematique, and to do so in productive interaction with key actors in the relevant areas.


  1. Adam B, Beck U, Van Loon J (eds.) 2000 The Risk Society and Beyond. Critical Issues for Social Theory. Sage, London
  2. Bimber B 1996 The Politics of Expertise in Congress. SUNY Press, Albany, NY
  3. Collingridge D 1980 The Social Control of Technology. Pinter, London
  4. Freeman C, Soete L 1997 The Economics of Industrial Innovation, 3rd edn. MIT Press, Cambridge, MA
  5. Garud R, Ahlstrom D 1997 Technology assessment: A sociocognitive perspective. Journal of Engineering and Technology Management 14: 25–48
  6. Guston D, Bimber B (eds.) 1997 Technological Forecasting and Social Change 54(2–3): 125–308
  7. Hastbacka M A, Greenwald C G 1994 Technology assessment—Are you doing it right? Arthur D. Little – PRISM (Fourth Quarter): 35–45
  8. Orlikowski W J 1992 The duality of technology: Rethinking the concept of technology in organizations. Organization Science 3: 398–427
  9. Porter A L, Rossini F A, Carpenter S R, Roper A T 1980 A Guidebook for Technology Assessment and Impact Analysis. North-Holland, New York
  10. Rip A, Misa Th, Schot J W (eds.) 1995 Managing Technology in Society. The Approach of Constructive Technology Assessment. Pinter, London
  11. Schot J, Rip A 1997 The past and the future of constructive technology assessment. Technological Forecasting and Social Change 54: 251–68
  12. Vanclay F, Bronstein D A (eds.) 1995 Environmental and Social Impact Assessment. Wiley, Chichester, UK
  13. Wood F B 1997 Lessons in technology assessment methodology and management at OTA. Technological Forecasting and Social Change 54: 145–62