Energy and Power Technology Research Paper Topics


This list of energy and power technology research paper topics presents 19 potential topics for research papers, followed by an overview article on the history of energy and power technologies.

1. Biomass Power Generation

Biomass fuels, or biofuels, are essentially clean fuels in that they contain no sulfur, and burning them does not increase long-term carbon dioxide (CO2) levels in the atmosphere, since they are the product of recent photosynthesis (note that peat is not a biofuel in this sense). This is a significant attribute in the context of the growing awareness across the globe of the pollution and environmental problems caused by current energy production methods, and of the demand for renewable energy technologies.



Biomass can be used to provide heat, make fuels, and generate electricity. The major sources of biomass include:

  • Standing forests
  • Wood-bark and logging residues
  • Crop residues
  • Short rotation coppice timber or plants
  • Wood-bark mill residues
  • Manures from confined livestock
  • Agricultural process residues
  • Seaweed
  • Freshwater weed
  • Algae

A few facts and figures help to put the land-based biomass sources in perspective. The first three items on the above list produce in the U.S. approximately the equivalent of 4 million barrels of oil per day in usable form. If all crop residues were collected and utilized to the full, almost 10 percent of total U.S. energy consumption could be provided for. Although the other land-based sources of biomass are perhaps not on the same scale, the combined resource represents a huge untapped reservoir of potential energy. An interesting point to note is that current practices in forestry and food-crop farming are aimed directly at optimizing the production of specific parts of a plant. Since biomass used for energy would make use of the whole plant, a significant advantage might be gained by growing specially adapted crops designed to maximize the energy yield rate. It is from this idea that the energy farm concept was born.
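As a rough cross-check on these figures, the oil-equivalent quantities can be converted to annual energy and compared with total consumption. The sketch below assumes a conversion factor of about 5.8 million Btu per barrel of oil and a late-twentieth-century U.S. consumption of roughly 100 quadrillion Btu per year; neither figure appears in the text.

```python
# Rough energy-equivalence check for the biomass figures quoted above.
# Assumptions (not from the text): 1 barrel of oil ~ 5.8 million Btu,
# 1 Btu = 1055 J, total U.S. consumption ~ 100 quadrillion Btu per year.

BTU_PER_BARREL = 5.8e6
JOULES_PER_BTU = 1055.0
US_ANNUAL_CONSUMPTION_J = 100e15 * JOULES_PER_BTU  # ~1.06e20 J per year

barrels_per_day = 4e6  # forests, logging residues, and crop residues, per the text
annual_energy_j = barrels_per_day * 365 * BTU_PER_BARREL * JOULES_PER_BTU

share = annual_energy_j / US_ANNUAL_CONSUMPTION_J
print(f"Biomass oil-equivalent: {annual_energy_j / 1e18:.1f} EJ per year")
print(f"Share of U.S. consumption: {share:.1%}")  # on the order of 10 percent
```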




2. Early Fusion Nuclear Reactors

The production of nuclear energy through the fusion of two light chemical elements is better known as a controlled thermonuclear reaction (CTR). In the 1950s, explosive, uncontrolled thermonuclear reactions were achieved with the manufacture of hydrogen bombs, but a sustained CTR was never successfully accomplished.

In order to reach the fusion point, a gaseous mixture containing deuterium and tritium must be heated to about 100,000,000°C and held at that temperature long enough to sustain the reaction. At such temperatures the gas becomes a plasma, a state in which electrons and ions are no longer bound to one another. (The term plasma was first used in 1922 by the American physical chemist Irving Langmuir because the properties of a superheated gas reminded him of blood plasma.)

Heating and confinement of the plasma are the two main features of any fusion reactor. The plasma must be kept from any contact with the walls of the vessel containing it, in order to avoid the loss of temperature and the resulting instabilities that make a controlled thermonuclear reaction impossible to achieve. Early designs of fusion reactors focused on confining the plasma using magnetic fields.
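The requirement to reach such temperatures and hold them long enough is often summarized by a Lawson-type ‘‘triple product’’ of plasma density, temperature, and energy confinement time. The sketch below is illustrative only: the threshold of roughly 3 x 10^21 keV·s per cubic meter for deuterium-tritium fuel is a commonly cited figure, and the example conditions are hypothetical, not data for any actual machine.

```python
# Illustrative Lawson-type triple-product check for deuterium-tritium fusion.
# The threshold (~3e21 keV*s/m^3) is a commonly cited figure, not from the text.

TRIPLE_PRODUCT_THRESHOLD = 3e21  # keV * s / m^3, approximate D-T ignition condition

def ignition_margin(density_m3: float, temperature_kev: float, confinement_s: float) -> float:
    """Return the ratio of n * T * tau to the assumed ignition threshold."""
    return density_m3 * temperature_kev * confinement_s / TRIPLE_PRODUCT_THRESHOLD

# Early-tokamak-like conditions: ~10 keV (~100 million degrees C), short confinement.
print(ignition_margin(density_m3=1e20, temperature_kev=10.0, confinement_s=0.02))  # << 1
```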

3. Electrical Power Distribution

The first commercial power station, in San Francisco in 1879, was used for arc lighting (using a spark jumping a gap as the source of light) for street lamps, but arc lights had limited application. Edison’s carbon filament lamp was the stimulus for the spread of electric lighting. A few of Edison’s buildings and some private residences had their own generators, but Edison also recognized the need for a generating and distribution system. Edison’s distribution system was first demonstrated in London, with a temporary installation running cables under the Holborn Viaduct in early 1882 that provided power for the surrounding district. The first permanent central electric generating station was Edison’s Pearl Street Station in New York, which went into operation in September 1882 and provided metered electricity to 85 customers in a 1 square mile (2.6 square kilometer) area. The Pearl Street Station used direct current (DC). In DC systems, the current flows in one direction at a constant voltage. The dissipation of energy limits the size of DC systems and requires the source of electric generation to be close to the customer. Alternating current (AC) systems, in which the current changes direction (in today’s public electricity supply, 50 or 60 times per second), overcame this limitation.
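The limitation described here is essentially resistive: for a given delivered power, the current, and hence the loss in the distribution cables, falls with the square of the supply voltage. The sketch below illustrates the point with assumed cable resistance and voltage figures that are not from the text.

```python
# Why low-voltage DC distribution had limited reach: for a fixed delivered power P,
# line loss = (P/V)^2 * R, so it falls as the square of the supply voltage.
# The power, voltage, and cable resistance figures are illustrative, not from the text.

def line_loss_fraction(power_w: float, voltage_v: float, line_resistance_ohm: float) -> float:
    current_a = power_w / voltage_v
    return current_a ** 2 * line_resistance_ohm / power_w

P = 10_000.0   # 10 kW delivered to a group of customers
R = 0.5        # ohms of cable resistance, out and back

print(f"110 V DC feeder: {line_loss_fraction(P, 110, R):.0%} of the power lost in the cable")
print(f"11 kV AC feeder: {line_loss_fraction(P, 11_000, R):.3%} lost over the same cable")
```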

4. Electricity Generation and the Environment

Fossil fuel thermal generating technologies were a mainstay of both twentieth century electricity generation and environmental attention. While concern with declining urban air quality, initially at the center of this attention, dated back to the nineteenth century, it was the substantial post-World War II rise in electricity consumption that gave these concerns their later prominence. The impacts of fossil fuel extraction and transportation were also a source of significant twentieth century environmental attention, but concern over atmospheric emissions dominated. Although initial concern focused on particulate emissions, attention shifted to acidic emissions from the 1970s onward, and the final decade of the century was dominated by concern with the impact of fossil fuel emissions on climate. This later concern with fossil fuel greenhouse gas emissions, primarily thermally produced carbon dioxide (CO2) but also fugitive emissions (i.e., those not caught by a capture system) such as methane from coal seams and from gas extraction and distribution systems, reinforced an increasing emphasis on alternative generating technologies. Some of these, notably macro-hydro and nuclear fission, were significant twentieth century technologies in their own right, and their environmental impacts are briefly discussed below. However, as the twentieth century closed, this emphasis was increasingly turning to renewable energy technologies and to the potential for significant further efficiencies in both electricity generation and consumption, including the drive to ‘‘decarbonize’’ electricity generation by turning away from fossil fuel technologies.

5. Fast Breeder Nuclear Reactors

The idea of a fast breeder reactor (FBR) was first conceived in 1946 by the Canadian physicist Walter H. Zinn at the Argonne National Laboratory in the U.S. On the basis of wartime developments in nuclear reactor research, Zinn thought a combination of two options within reactor technology was feasible: fast-neutron nuclear fission and the breeding principle. Fast reactors produce nuclear fission with fast neutrons rather than thermal neutrons; the fast neutrons drive reactions with a large energy release in a short time, without a moderator operating in the core. Breeder reactors have a core of fissile material (i.e., uranium-235 or plutonium-239), produced in nuclear reactors or through chemical separation, and a blanket of fertile material (i.e., uranium-238). Once in operation, breeder reactors consume the fissile material in the core, emitting neutrons along with fission products. The blanket is thus bombarded with neutrons, and the irradiated fertile material is transformed into fissile material (i.e., plutonium-239) by neutron capture and subsequent decay. In optimal operating conditions, the fissile material produced through breeding equals the fissile material consumed in the core, so that the reactor indefinitely perpetuates the production of its own fuel.
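The breeding condition described above can be expressed as a simple ratio of fissile material produced in the blanket to fissile material consumed in the core, with a value of one or more indicating a true breeder. The sketch below is schematic bookkeeping only, with invented quantities, not a model of any particular reactor.

```python
# Schematic breeding-ratio bookkeeping for a fast breeder reactor (illustrative only).

def breeding_ratio(fissile_produced_kg: float, fissile_consumed_kg: float) -> float:
    """Ratio >= 1: the blanket yields at least as much fissile material as the core burns."""
    return fissile_produced_kg / fissile_consumed_kg

core_burned = 100.0   # kg of Pu-239 fissioned in the core over some period (invented)
blanket_bred = 110.0  # kg of Pu-239 formed from U-238 by neutron capture (invented)

br = breeding_ratio(blanket_bred, core_burned)
print(f"breeding ratio = {br:.2f} -> {'breeder' if br >= 1 else 'burner'}")
```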

6. Fossil Fuel Power Stations

Until the last third of the twentieth century, fossil fuels—coal, oil, and natural gas—were the primary source of energy in the industrialized world. Large thermal power stations supplied from fossil fuel resources have capacities of up to 4000 to 5000 megawatts. Gas-fired combined-cycle power stations tend to be somewhat smaller, perhaps no larger than 1000 to 1500 megawatts in capacity.

Concerns in the 1970s over degradation of urban air quality due to particulate emissions and acid rain from sulfur dioxide emissions from fossil fuel power stations were joined from the 1990s by an awareness of the potential global warming effect of greenhouse gases such as carbon dioxide (CO2), produced from the combustion of fossil fuels (see Electricity Generation and the Environment). However, despite a move towards carbon-free electricity generation, for example from nuclear power stations and wind and solar plants, fossil fuels remain the most significant source of electrical energy generation.

7. Fuel Cells

The principle of the fuel cell (FC) is similar to that of the electrical storage battery. However, whereas the battery has a fixed stock of chemical reactants and can ‘‘run down,’’ the fuel cell is continuously supplied (from a separate tank) with a stream of oxidizer and fuel from which it generates electricity. Electrolysis—in which passage of an electrical current through water decomposes it into its constituents, H2 and O2—was still novel in 1839 when a young Welsh lawyer–scientist, William Grove, demonstrated that it could be made to run in reverse. That is, if the H2 and O2 were not driven off but rather allowed to recombine in the presence of the electrodes, the result was water— and electrical current.
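Grove's observation corresponds to the overall cell reaction 2H2 + O2 -> 2H2O, whose theoretical open-circuit voltage follows from the Gibbs free energy of the reaction. The sketch below uses the standard textbook value of about -237 kJ per mole of liquid water, which is not given in the text.

```python
# Theoretical EMF of a hydrogen-oxygen fuel cell: E = -dG / (n * F).
# Standard-condition values below are textbook figures, not from the text.

FARADAY = 96485.0        # coulombs per mole of electrons
DELTA_G = -237.1e3       # J per mole of H2O(l) formed at 25 C
ELECTRONS_PER_H2 = 2     # H2 -> 2 H+ + 2 e-

emf = -DELTA_G / (ELECTRONS_PER_H2 * FARADAY)
print(f"Ideal cell voltage: {emf:.2f} V")  # about 1.23 V per cell
```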

Over the 20th century FCs moved from laboratory curiosity to practical application in limited roles and quantities. It is very possible that the twenty-first century will see them assume a major or even dominant position as power sources in a broad array of applications. Obstacles are largely economic and the outcome will be influenced by success in development of competing systems as well as FCs themselves.

8. Gas Turbines

During the 20th century, the gas turbine was developed to fit many applications on land, at sea, and in the air. From early beginnings, the gas turbine came alongside, competed with, and often replaced the existing technologies of steam, water, and reciprocating internal combustion engines. Initial problems stemmed from a lack of knowledge and technique: the fundamentals were well enough understood, but practical design techniques were lacking. Materials also held up development, but after extensive experimentation, successful turbine designs were being constructed within the first ten years of the 20th century.

The gas turbine has the advantage over traditional engines in that its combustion process is continuous, so the equipment is less subject to cyclic heat stresses and its power is less limited: power is limited by combustion knock in spark-ignition engines, and in diesel engines by structural strength and the maximum working pressures in the fuel injection system. It also has fewer moving parts, so wear and tear is reduced. Despite these differences, the gas turbine still performs the four basic functions of the four-stroke cycle, but it does so continuously: air is admitted, compressed, and heated by burning fuel so that it expands and does work, and then the spent gases are expelled. Unlike an ordinary engine, however, each of these processes takes place in a separate part of the engine and happens continuously, whereas in the oil engine all the processes occur within the cylinder, one following the other.
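The continuous compress, heat, expand, and exhaust sequence described here corresponds to the ideal gas turbine (Brayton) cycle, whose thermal efficiency depends only on the pressure ratio and the ratio of specific heats. The sketch below treats air as a perfect gas and uses illustrative pressure ratios not taken from the text.

```python
# Ideal Brayton-cycle efficiency: eta = 1 - r^(-(gamma - 1)/gamma).
# Pressure ratios are illustrative; air is treated as a perfect gas (gamma = 1.4).

GAMMA = 1.4

def brayton_efficiency(pressure_ratio: float) -> float:
    return 1.0 - pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)

for r in (4, 10, 30):
    print(f"pressure ratio {r:>2}: ideal efficiency {brayton_efficiency(r):.1%}")
```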

9. Gas Turbines in Land Vehicles

The gas turbine has found widespread use in the aviation, marine, and stationary power areas. However, the gas turbine has only seen limited use in land transportation.

As various companies began to experiment with gas turbines in the 1920s and 1930s, some gave thought to using the turbine as a source of motive power for land vehicles. The turbine promised much higher power-to-weight ratios than conventional reciprocating engines and also had the capability of using cheaper fuels such as industrial heating oil, diesel fuel, and even powdered coal. As with gas turbines in aviation, most development has occurred since World War II.

10. Hydroelectric Power Generation

It is estimated that about 50 percent of the economically exploitable hydroelectric resources, not including tidal resources, of North America and Western Europe have already been developed. Worldwide, however, the proportion is less than 15 percent.

The size of hydroelectric power plants covers an extremely wide range, from small plants of a few megawatts to large schemes such as Kariba in Zimbabwe, which comprises eight 125 megawatt generating sets. More recently, power stations such as Itaipu on the Parana River between Brazil and Paraguay in South America were built with a capacity of 12,600 megawatts, comprising eighteen generating sets each having a rated discharge of approximately 700 cubic meters per second.
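The per-unit figures quoted for Itaipu can be cross-checked with the basic hydroelectric relation P = ρgQHη. In the sketch below only the discharge and the unit rating come from the text; the net head and overall efficiency are assumptions made for illustration.

```python
# Hydroelectric power: P = rho * g * Q * H * eta.
# Head and efficiency are assumed for illustration; the discharge is from the text.

RHO = 1000.0  # kg/m^3, water
G = 9.81      # m/s^2

def hydro_power_mw(discharge_m3s: float, head_m: float, efficiency: float) -> float:
    return RHO * G * discharge_m3s * head_m * efficiency / 1e6

# Itaipu-like unit: ~700 m^3/s per set (from the text); assumed ~115 m net head, ~0.9 efficiency.
print(f"{hydro_power_mw(700, 115, 0.9):.0f} MW per generating set")  # roughly 700 MW
```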

Hydroelectric power has traditionally been regarded as an attractive option for power generation since fuel costs are zero; operating and maintenance costs are low; and plants have a long life—an economic life of 30 to 50 years for mechanical and electrical plant and 60 to 100 years for civil works is not unusual.

11. Large Scale Electrical Energy Generation and Supply

Public supply of electricity at the close of the nineteenth century was typically confined to the larger towns and cities, where either a local entrepreneur or a far-sighted municipality established relatively small generating stations to supply local lighting loads. Many of these local power stations employed reciprocating engines to drive direct current (DC) dynamos. Overhead circuits generally carried the power no more than a kilometer or two to local businesses or the larger households in the district. Sometimes, where water-powered mills had existed previously, hydroelectric generators were established to supply the electricity consumers. As more and more people began to appreciate the convenience of electrical power and, moreover, could afford to pay for it, demand on local supplies increased and larger power stations began to be established. The invention of the electrical transformer, to step up the voltage at the generating station and step it down again to a safe level for use by consumers, meant that higher-speed alternators, often driven by steam turbines, could be employed to produce the power. High-voltage distribution reduced the losses in the circuits between the generating stations and the loads.
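The transformer's contribution here is the simple turns-ratio relation between primary and secondary voltages, with resistive line losses falling as the square of the transmission voltage (compare the distribution example in section 3). The voltages and turns in the sketch below are illustrative assumptions.

```python
# Ideal transformer: V_secondary / V_primary = N_secondary / N_primary,
# and for a fixed power, resistive line loss scales as 1/V^2.
# Voltages and turns counts are illustrative, not from the text.

def secondary_voltage(v_primary: float, n_primary: int, n_secondary: int) -> float:
    return v_primary * n_secondary / n_primary

v_gen = 2_000.0  # volts at the alternator (illustrative)
v_line = secondary_voltage(v_gen, n_primary=100, n_secondary=1_000)  # step up 10x

loss_reduction = (v_line / v_gen) ** 2
print(f"line voltage {v_line:.0f} V, resistive losses cut by a factor of {loss_reduction:.0f}")
```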

12. Later Fusion Nuclear Reactors

In the early 1950s, the Soviet physicists Andrei Sakharov and Igor Tamm proposed a reactor that combined a magnetic field generated internally by the plasma with one generated by external toroidal coils. This concept was adopted by their colleague Lev Artsimovich in his T-3 reactor, the first ‘‘tokamak’’ (the Russian acronym for toroidal chamber and magnetic coil), unveiled in 1968. The tokamak magnetic field is thus the combination of two magnetic fields: the stronger horizontal, toroidal field interacts with the weaker vertical, poloidal plasma field to produce a helical magnetic field. In confining its plasma for 0.01 to 0.02 seconds and heating it to 10,000,000°C, the T-3 produced results that suggested fusion energy was feasible.
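The combination of the two magnetic fields can be pictured as a vector sum: the strong toroidal component and the much weaker poloidal component give the field lines a shallow helical pitch around the torus. The field strengths in the sketch below are purely illustrative, not measurements from the T-3 or any other machine.

```python
# Helical pitch of a tokamak field line from its toroidal and poloidal components.
# Field strengths are purely illustrative, not data for any actual machine.
import math

b_toroidal = 3.0   # tesla, strong field from the external coils
b_poloidal = 0.3   # tesla, weaker field from the current in the plasma

pitch_deg = math.degrees(math.atan2(b_poloidal, b_toroidal))
print(f"field-line pitch: {pitch_deg:.1f} degrees from the toroidal direction")
```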

The tokamak reactor subsequently became the standard tool for fusion research. The energy crisis of the 1970s resulted in state support for major projects in a number of industrialized countries including France, Japan, the U.K., and the U.S. The largest and most notable were the American Tokamak Fusion Test Reactor (TFTR), approved by the Atomic Energy Commission in 1974 and completed in 1982 at Princeton University, and the British–European Joint European Torus (JET), which began operations in 1983 in Culham, Oxfordshire, U.K. Other important tokamaks include Japan’s JT-60 and General Atomics’ DIII-D.

13. Power Generation and Recycling

Recovering energy from wastes from municipal or industrial sources can turn the problem of waste disposal into an opportunity for generating income from heat and power sales. The safe and cost-effective disposal of these wastes is becoming increasingly important worldwide, especially with the demand for higher environmental standards of waste disposal and the pressure on municipalities to minimize the quantities of waste generated that must be disposed.

14. Primary and Secondary Batteries

The battery is a device that converts chemical energy into electrical energy and generally consists of two or more connected cells. A cell consists of two electrodes, one positive and one negative, and an electrolyte that acts chemically on the electrodes and conducts charge between them in the form of ions.

Primary cells, most often ‘‘dry cells,’’ are exhausted (i.e., one or both of the electrodes are consumed) when they convert the chemical energy into electrical energy. These battery types are widely used in flashlights and similar devices. They generally contain carbon and zinc electrodes and an electrolyte solution of ammonium chloride and zinc chloride. Another form of primary cell, often called the mercury battery, has zinc and mercuric oxide electrodes and an electrolyte of potassium hydroxide. The mercury battery is suitable for use in electronic wristwatches and similar devices.

Secondary cells convert chemical energy into electrical energy through a chemical reaction that is essentially reversible. In ‘‘charging,’’ the cell is forced to operate in reverse of its discharging operation by pushing a current through it in the direction opposite to that of normal discharge. Energy is thus ‘‘stored’’ in these cells as chemical, not electrical, energy, and they may be ‘‘recharged’’ whenever required by an electrical current passing through them in the opposite direction to their discharge. Secondary, or storage, cells are generally wet cells, which use a liquid electrolyte.
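The reversibility of a secondary cell can be sketched with simple coulomb counting: current in one direction withdraws charge from the cell, current in the other restores it. The capacity and currents below are illustrative values, not taken from the text.

```python
# Coulomb-counting sketch of a secondary (rechargeable) cell.
# Capacity and currents are illustrative; discharge current is positive, charge negative.

capacity_ah = 40.0  # ampere-hours (illustrative storage-cell size)
soc = 1.0           # state of charge, 1.0 = full

def apply_current(soc: float, current_a: float, hours: float) -> float:
    """Positive current discharges the cell; negative current recharges it."""
    soc -= current_a * hours / capacity_ah
    return max(0.0, min(1.0, soc))

soc = apply_current(soc, current_a=10.0, hours=2.0)  # discharge: 20 Ah withdrawn
soc = apply_current(soc, current_a=-5.0, hours=4.0)  # charge: 20 Ah restored
print(f"state of charge: {soc:.0%}")
```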

15. Solar Power Generation

The emergence of solar power generation is part of the overall movement toward renewable energy production. Interest in this type of energy production grew in the early 1970s with an increased public awareness of the negative impact of technological developments on the environment. The use of solar power, of course, was not new. Heat produced by the sun has been used for all sorts of purposes since the early history of humankind. In the search for renewable energy sources, the direct use of the sun’s heat has continued in the form of solar panels, in which heat from the sun is absorbed by water flowing in pipes, and the hot water can then be used for heating purposes. In the twentieth century, two types of thermal solar energy system developed: (1) active systems, which use pumps or fans to transport the heat; and (2) passive systems, which use natural heat transfer processes. In 1948 a school with a passive solar energy system was built in Tucson, Arizona, by Arthur Brown. In 1976 the Aspen-Pitkin County airport opened as the first large commercial building in the U.S. to use a passive solar energy system for heating. However, the original idea of using passive solar energy goes back to ancient times: archeologists have found houses with passive solar energy systems dating back to the fifth century AD.
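For the water-heating panels described here, the useful heat collected is roughly the collector efficiency times the panel area times the solar irradiance, and the temperature rise of the circulating water follows from its flow rate and specific heat. All of the numbers in the sketch below are illustrative assumptions.

```python
# Flat-plate solar collector sketch: useful heat and water temperature rise.
# Irradiance, efficiency, area, and flow rate are illustrative assumptions.

C_P_WATER = 4186.0   # J/(kg*K)

irradiance = 800.0   # W/m^2, bright sun
area = 4.0           # m^2 of collector
efficiency = 0.5     # fraction of sunlight delivered as useful heat
flow_kg_s = 0.03     # ~1.8 litres of water per minute through the pipes

useful_heat_w = efficiency * area * irradiance
delta_t = useful_heat_w / (flow_kg_s * C_P_WATER)
print(f"{useful_heat_w:.0f} W collected, water warmed by {delta_t:.1f} C per pass")
```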

16. Steam Turbines

The first steam turbine of which there is any record was made by Hero of Alexandria more than 2000 years ago. It simply demonstrated that a jet of steam, impinging on a paddle wheel, could convert heat energy into mechanical energy. In the late nineteenth century, significant improvements in the efficiency of conversion were made by, among others, Sir Charles Parsons on Tyneside, U.K., and Charles G. Curtis in the U.S.

Early steam turbines up to that time had involved very high rotational speeds, which were difficult to utilize for many purposes unless speed-reducing gearboxes were employed. Parsons had deduced that moderate surface velocities and speeds of rotation were essential if the ‘‘turbine motor’’ was to receive general acceptance as a prime mover. His early designs divided the fall in pressure of the steam into small fractional expansions over a large number of turbine wheels in series, so that the velocity of the steam over each wheel was not excessive.
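Parsons' multi-stage approach can be illustrated with the basic nozzle relation v = sqrt(2Δh): dividing the total enthalpy drop among many stages reduces the steam velocity at each wheel, and hence the blade speed required, in proportion to the square root of the number of stages. The total enthalpy drop used below is an illustrative figure, not from the text.

```python
# Why staging tames steam velocity: v = sqrt(2 * delta_h) per stage.
# The total enthalpy drop is an illustrative figure, not from the text.
import math

total_drop_j_per_kg = 1.0e6  # ~1000 kJ/kg across the whole turbine (illustrative)

for stages in (1, 10, 50):
    per_stage = total_drop_j_per_kg / stages
    velocity = math.sqrt(2.0 * per_stage)
    print(f"{stages:>2} stage(s): steam velocity ~ {velocity:.0f} m/s per stage")
```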

At the close of the 19th century, many local power stations employed reciprocating steam engines to drive electric generators. Steam turbines had the advantage over reciprocating steam engines, which were based on the movement of a piston in a cylinder, of being lighter and more efficient. The Curtis multiple-stage steam turbine (patented in 1896, with the rights sold to General Electric in 1901) occupied a smaller space and cost much less than contemporary reciprocating steam engine-driven generators of the same output. The Curtis turbine was also shorter than the Parsons turbine, and was thus less susceptible to distortion of the central shaft.

The work that Curtis, Parsons, and others carried out in the development of steam turbines allowed large central power stations to be developed, providing electricity for the growing demand during the early 1900s.

17. Thermal Graphite Moderated Nuclear Reactors

In a nuclear reactor, an element low on the atomic scale, such as carbon or hydrogen, is used to absorb kinetic energy from, and thus slow down, the neutrons naturally emitted by the radioactive fuel. In many power reactors the fuel is refined but unenriched natural uranium, which consists of more than 99 percent uranium-238 (238U). When the neutrons move more slowly, at a ‘‘moderated’’ speed, the chances of collision between the neutrons and other uranium nuclei, leading to fission and a chain reaction, are increased. Reactor designs are often named for the type of moderator used.
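The advantage of a light nucleus as a moderator can be quantified with the average logarithmic energy loss per collision, which depends only on the moderator's mass number: the lighter the nucleus, the fewer collisions are needed to slow a fission neutron to thermal energy. The relation and energies in the sketch below are standard textbook values rather than figures from the text.

```python
# Average logarithmic energy decrement per collision and collisions to thermalize.
# Standard textbook relation; mass numbers and energies are not from the text.
import math

def xi(mass_number: int) -> float:
    """Mean logarithmic energy loss per elastic collision with a nucleus of mass A."""
    if mass_number == 1:
        return 1.0
    a = mass_number
    return 1.0 + ((a - 1) ** 2 / (2.0 * a)) * math.log((a - 1) / (a + 1))

E_FISSION_EV, E_THERMAL_EV = 2.0e6, 0.025

for name, a in (("hydrogen", 1), ("carbon (graphite)", 12)):
    n_collisions = math.log(E_FISSION_EV / E_THERMAL_EV) / xi(a)
    print(f"{name}: ~{n_collisions:.0f} collisions to reach thermal energy")
```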

The first reactors, including the experimental pile built in 1942 at Chicago during World War II and the early production reactors built in 1943 at Hanford in Washington state, used graphite as a moderator. Later reactors used water, heavy water, sodium, or other materials as moderators. In the U.S., almost all power reactors and all submarine and ship propulsion reactors relied on pressurized water systems or boiling water systems, first installed in the late 1950s. Acronyms for all these systems have become conventional, with the most common being the boiling water reactor (BWR), pressurized water reactor (PWR), and light water-cooled graphite-moderated reactor (LWGR).

Accidents involving graphite reactors are particularly dangerous, since graphite is flammable. A release of radioactivity in 1957 at the British Windscale reactor near Sellafield, Cumbria, a graphite production reactor, was not immediately disclosed. Even accidents with water-cooled reactors, such as that at Three Mile Island in Pennsylvania on March 28, 1979, cause national and international concern. However, far more serious was the Chernobyl fire of April 26, 1986, in a 1000-megawatt RBMK graphite-moderated reactor. That fire spread radioactive contamination not only across Ukraine but also across much of eastern and northern Europe. As at Windscale, details of the Chernobyl accident were temporarily suppressed. Gas-cooled graphite reactors are prevented from burning by the fact that they are cooled with carbon dioxide. However, if oxygen-containing air leaks into the system and the cooling system fails, the graphite can ignite.

18. Thermal Water Moderated Nuclear Reactors

Nuclear reactors are usually classified by their coolant and their moderators. The moderator is a material, low in the atomic scale, whose atomic nucleus has the effect of slowing down or moderating the speed of fast neutrons emitted during nuclear fission. By slowing the speed of neutrons, the moderator increases the chance of collision of neutrons with the nuclei of fissionable nuclear fuel atoms. The original reactor designed by Enrico Fermi during the Manhattan Project at Chicago, known as Chicago Pile One, or CP-1, was a graphite-moderated, air-cooled reactor. Many British and French nuclear reactors for the generation of electrical power use carbon in the form of graphite, and they are cooled with carbon dioxide gas. These types are known as Magnox reactors. However, the common designs for power generation developed in the U.S. used water both as coolant and as a moderator.

Water-cooled reactors fall into two large families. Heavy water reactors contain water in which the hydrogen atom is replaced with the hydrogen isotope deuterium. This type of reactor is manufactured for export by Canada. The pressurized heavy water reactor (PHWR) has been exported and installed in India, Romania, and elsewhere. The U.S. built five heavy water reactors at Savannah River, South Carolina, in the 1950s to serve as production reactors for the manufacture of plutonium and tritium for nuclear weapons. By the late 1980s, all the Savannah River production reactors had been closed. After some experimentation with graphite-moderated gas-cooled designs and with heavy-water moderation during the 1950s, the U.S. followed the ‘‘light water’’ path.

19. Wind Power Generation

Wind is essentially the movement of substantial air masses from regions of high pressure to regions of low pressure induced by the differential heating of the Earth’s surface. This simplistic view belies the complexity of atmospheric weather systems but serves to indicate the origin of climatic airflow.

The first attempts to harness wind power for electricity production date back to the 1930s. In Germany, Honnef planned a monstrous five-turbine, 20-megawatt (MW) wind tower several hundred meters high, a far cry from the sleek, aerospace-inspired wind turbine generators (WTGs) of today. The design of a large-scale WTG is realistically limited to one of two formats held to have good prospects: first, the horizontal-axis type, descended from the windmills encountered by Don Quixote and until recently common in the flat lands of Europe; and second, the vertical-axis machines, of which the Darrieus rotor is perhaps the most common. Of the two, horizontal-axis machines predominate, although the vertical-axis type has many positive attributes, not least simplicity.
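Whatever the rotor format, the energy available to a wind turbine generator is set by the kinetic energy flux of the air through the rotor, P = ½ρAv³, of which an ideal rotor can extract at most the Betz fraction of roughly 59 percent. The rotor diameter and wind speeds in the sketch below are illustrative assumptions, not from the text.

```python
# Wind power available to a rotor: P = 0.5 * rho * A * v^3, capped by the Betz limit.
# Rotor diameter and wind speeds are illustrative assumptions, not from the text.
import math

RHO_AIR = 1.225           # kg/m^3 at sea level
BETZ_LIMIT = 16.0 / 27.0  # ~0.593, maximum fraction an ideal rotor can extract

def available_power_kw(diameter_m: float, wind_speed_ms: float) -> float:
    area = math.pi * (diameter_m / 2.0) ** 2
    return 0.5 * RHO_AIR * area * wind_speed_ms ** 3 / 1e3

for v in (5.0, 10.0, 15.0):
    p = available_power_kw(50.0, v)
    print(f"{v:>4.0f} m/s: {p:8.0f} kW in the wind, <= {p * BETZ_LIMIT:7.0f} kW extractable")
```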

Energy and Power Technology

At the close of the 20th century, electricity was so commonplace that it would be difficult to imagine an existence without light, heat, and music bowing to our command at the flick of a switch. Children who could barely stretch high enough to toggle a light switch now have dominion over phenomena that less than a century ago would have been considered inconceivable. The temptations of such power have proved hard to resist.

The repercussions of the rapacious appetite for control of energy among Western industrial nations have not been confined to the lot of the individual, however. As in previous eras, when the control of mechanical or biological power carried financial, geographical, and social significance, the use and abuse of electrical energy now additionally carries environmental, political, and moral implications. Developments in energy and power in the twentieth century must therefore be considered within these broader thematic areas as the generation and consumption of energy are inextricably linked with practically the whole spectrum of human existence.

At the beginning of the 20th century, despite the fact that many components of modern electrical technology, such as the battery, had already been invented 100 years earlier, body power was still the norm, especially in rural areas. Horses, carriages, tow paths, water mills, and the like were the standard means of transport and power for a large proportion of the population, despite the growth of electricity and the 130 supply companies that were operating in Britain by 1896. Even in urban settings, only lighting and telephony had advanced to the stage where their benefits were generally enjoyed, thanks to Thomas Edison’s invention of the light bulb in 1879 and Alexander Graham Bell’s first telephone transmission in 1878.

By 1900 in Britain the main features of an electricity supply industry had been established. The system was based on the generation of high-voltage alternating current (AC), with transformers stepping down voltages for local use. However, one obstacle the industry had to overcome was the lack of standardization across local areas: in some parts, direct current (DC) equipment was still installed, and local voltage levels and frequencies varied considerably. Despite the problems posed by these variations, at the start of the century most of the appliances that are now taken for granted had appeared. Space heaters, cookers, and lighting equipment were not yet in every home, but the very speed at which their use was adopted was testament to the flexibility and popularity of electricity. In 1918 electric washing machines became available, and in 1919 the first refrigerator appeared in Britain; both had already been introduced for domestic use in the U.S. in 1913. Electricity had been firmly accepted as the energy of the future.

Demand from the residential sector started to boom and spurred further research. Most importantly, perhaps, by the 1920s in Britain the domestic immersion heater began to take over the duties of coal. The use of electric trolleys and trains, which had been running since the end of the nineteenth century, also continued to expand, and underground travel developed swiftly. Electricity also made advances in communications possible, from the telegraph and the telephone to the broadcasting boom of the 1920s. In 1928 the construction of a British national grid system began, and it took less than ten years before the system was in operation. This alacrity is partly explained by the influence of World War I: the war’s heavy demands on manufacturing acted as a great incentive for the rapidly evolving electricity industry, particularly with regard to improving the efficiency of supply.

Thereafter, the rebuilding and expansion of industry across the industrialized world began. In Russia, Lenin was moved to state that ‘‘Communism equals Soviet power plus electrification’’ as part of the propaganda for industrialization. Electricity took over the driving of fans, elevators, and cranes, as well as coal-mining equipment and rolling mills in steel factories. The use of individual electric motors allowed astonishing advances in the speed control, precision, and productivity of machine tools.

With World War II came devastation. Power stations and fuel supplies were inevitably considered as strategic targets for the bombers during the destructive aerial attacks by both the Axis powers and the Allies. By 1946, the estimated deficiency of generating capacity in Europe was 10,000 megawatts. According to anecdotal evidence, the victory bells in Paris were only able to ring out in 1945 because of electricity transmitted from Germany, where more industrial capacity of all kinds, including power stations, had survived. Whatever the truth of this may be, the security of electricity supply quickly became an issue of undisputed importance throughout Europe, and the fuels used in electrical generation were valuable resources indeed.

Coal

At the start of the twentieth century, there was a new worldwide optimism about coal as a resource that seemed to be available in almost unlimited amounts. Coal consumption rose steeply in both the U.S. and Europe to reach a peak around 1914 and the outbreak of World War I. Between the world wars, consumption remained almost static, particularly in the U.S., as other fuel types started to dominate the market. Reasons for this slow-down include the rising popularity of the four-stroke ‘‘Otto’’ cycle engine, still widely used in transportation today, as well as the commercialization of the diesel engine. These two technologies swiftly shifted fuel demand from solid to liquid fuels.

Nuclear Power

Nuclear fission was discovered in the late 1930s. Considerable research followed, particularly in the U.S., the U.K., France, Canada, and the former Soviet Union, that would eventually lead to the design and construction of commercial nuclear power stations. In the early 1940s, U.S. intelligence regarding Germany’s promising nuclear research activities dramatically hastened the U.S. resolve to build a nuclear weapon. The Manhattan Project was established for this purpose in August 1942. In July 1945, Manhattan Project scientists tested the first nuclear device at Alamogordo, New Mexico, using plutonium produced in a uranium and graphite pile reactor near Richland, Washington. A month later a highly enriched uranium nuclear bomb was dropped on the Japanese city of Hiroshima, and a plutonium nuclear bomb was dropped on Nagasaki, effectively ending World War II.

The nuclear power industry suffered some notable disasters during its years of technological development. In 1979, the Three Mile Island Unit 2 (TMI-2) nuclear power plant in Pennsylvania suffered damage due to mechanical or electrical failure of parts of the cooling system. Just seven years later, on the opposite side of the Iron Curtain near an obscure city on the Pripyat River in north-central Ukraine, another disaster occurred. This accident became a metaphor not only for the horror of uncontrolled nuclear power but also for the collapsing Soviet system and its disregard for the safety and welfare of workers. On April 26, 1986, the No. 4 reactor at Chernobyl exploded and released 30 to 40 times the radioactivity of the atomic bombs dropped on Hiroshima and Nagasaki. The Western world first learned of history’s worst nuclear accident from Sweden, where abnormal radiation levels, the result of deposits carried by prevailing winds, were registered.

Ranking as one of the greatest industrial accidents of all time, the Chernobyl disaster and its impact on the course of Soviet events can scarcely be exaggerated. No one can predict the final number of human victims. Thirty-one lives were lost immediately. Hundreds of thousands of Ukrainians, Russians, and Belarusians had to abandon entire cities and settlements within the 30-kilometer zone of extreme contamination. Estimates vary, but it is likely that more than 15 years after the event some 3 million people, more than 2 million in Belarus alone, continued to live in contaminated areas.

Oil

Often accused of being one of the two great evils in the energy sector along with nuclear power, the oil industry grew over the course of the twentieth century to acquire significance and influence previously unimagined for any industrial sector. As the century opened, the U.S. was the largest oil producer in the world, but the discovery and exploitation of reserves in the Middle East, South America, and Mexico soon shifted the balance of the market away from the U.S., which by 1950 produced less than half the world’s oil. This trend continued and by the year 2000, oil production was almost equally divided between OPEC (Organization of Petroleum-Exporting Countries) and non-OPEC countries. Even in the early years of the century, the geographical spread of supply and demand quickly created the need for a system of distribution of unprecedented scale. The distances and quantities involved led to the construction of pipelines and huge ocean-going ships and tanker trucks. The capital intensive nature of these infrastructure projects, as well as the costs of exploration and exploitation of oil fields, concentrated control of resources in the hands of a few companies with vast coffers. As the reserves from easily exploitable sites dwindled, the pockets even of governments were insufficiently deep to invest in new drilling projects, and Royal Dutch Shell, Standard Oil, British Petroleum, and others were born.

Concern about fossil fuel depletion began to be voiced around the world in the 1960s, but the issue created headlines on the international political circuit in the early 1970s following the publication of the Club of Rome’s report ‘‘Limits to Growth.’’ This document warned of the impending exhaustion of the world’s 550 billion barrels of oil reserves. ‘‘We could use up all of the proven reserves of oil in the entire world by the end of the next decade,’’ said U.S. President Jimmy Carter. Yet although the world did indeed use 600 billion barrels of oil between 1970 and 1990, by which time, according to the Club of Rome, reserves should have dwindled to less than zero, the unexploited reserves in 1990 in fact amounted to 900 billion barrels, not including tar shale.

Hydroelectric Power

Not a recent development by any stretch of the imagination, water power was used extensively at the start of the twentieth century for mechanical work in mills and has a pedigree stretching back to ancient Egyptian times. Indeed, water power produces 24 percent of the world’s electricity and supplies more than 1 billion people with power. At the end of the twentieth century, hydroelectric power plants generally ranged in size from several hundred kilowatts to many hundreds of megawatts, but a few mammoth plants supplied up to 10,000 megawatts of electricity to millions of people. These leviathans, or ‘‘temples of modern India,’’ as India’s first prime minister Jawaharlal Nehru declared, were also the cause of massive discontent on social and environmental grounds. The displacement of local indigenous populations and the failure to deliver promised benefits were just two of the many complaints. By comparison, and despite hydroelectric power’s renewable credentials, the use of conventional fossil fuel technologies such as natural gas remained relatively uncontroversial.

Coal-Gas Technology

A derivative of coal as its name implies, coal-gas is produced through the carbonization of coal and played a not insignificant role in the development of power and energy in the twentieth century. It was an important and well-established industry product as the century opened, although electricity had already started to make inroads into some of the markets that coal-gas served. Coal-gas enjoyed widespread use in domestic heating and cooking and in some industrial facilities, but despite the invention of the Welsbach mantle in 1885, electricity soon started to dominate the lighting market. The Ruhrgebiet in Germany was the most active coal-gas producing area in the world. It was here that the Lurgi process, in which low-grade brown coal is gasified by a mixture of superheated steam and oxygen at high pressure, flourished for many years. However, as the coal supplies necessary for the process became increasingly expensive, and as oil fractions with similar properties became available, the coal-gas industry swiftly declined. In fact, just when the coal-gas industry seemed to have reached its pinnacle, another rival industry—natural gas—was being born.

Natural Gas

The American gas industry developed along different lines from the European market, each starting from a different basis at the dawn of the twentieth century. The U.S. had been quick to adopt the production of coal-gas, which was used for lighting as early as 1816. After fields of largely compatible natural gas were discovered in relatively shallow sites during the search for oil reserves, the natural gas industry expanded swiftly. Large-scale transmission systems were developed with alacrity; one noteworthy example came from the Trans-Continental Gas Pipeline Corporation, which completed a link from fields in Texas and Louisiana to the demand-intensive area around New York in 1951. By contrast, in Europe the exploitation of natural gas began in earnest in the years following World War II. In the Soviet Union, for example, the rich fields around Baku in Azerbaijan were connected to Eastern Bloc allies and, by 1971, to West Germany and Italy as well, through over 680,000 kilometers of pipelines.

In Western Europe developments on the geopolitical level benefited Britain, which officially acquired the mineral rights for the western section of the North Sea in 1964. Just one year later, the West Sole field was discovered. Britain had already imported some natural gas from the U.S., and within 12 years had switched almost entirely from manufactured coal-gas to natural gas. This conversion was no simple operation. The differing properties of manufactured and natural gas meant that domestic and industrial appliances numbering in the tens of millions had to be altered. The British conversion scheme, which lasted ten years, is estimated to have cost £1000 million. Other similar conversion programs were carried out in Holland, Hungary, and even in the Far East.

In October 1973, panic gripped the U.S. The crude-oil rich Middle Eastern countries had cut off exports of petroleum to Western nations as punishment for their involvement in recent Arab–Israeli conflicts. Although the oil embargo would not ordinarily have made a tremendous impact on the U.S., panicking investors and oil companies caused a gigantic surge in oil prices.

There were more oil scares throughout the next two decades. When the Shah of Iran was deposed during the 1979 revolution, petroleum exports diminished to virtually negligible levels, causing crude oil prices to soar once again. Iraq’s invasion of Kuwait in 1990 also inflated oil prices, albeit only for a short time. These events highlighted the world’s dependence on Middle Eastern oil and raised political awareness about the security of oil supplies.

The ‘‘dash for gas’’ in the U.K.—the rapid switch from coal to gas as the dominant source of power generation fuel—was no doubt partly instigated by the discovery of home reserves there. Worldwide the new application of an old technology, combined cycle gas turbines, or CCGTs, played a significant role. During the last decades of the twentieth century, the gas turbine emerged as the world’s dominant technology for electricity generation. Gas turbine power plants thrived in countries as diverse as the U.S., Thailand, Spain, and Argentina. In the U.K., the changeover began in the late 1980s and resulted in the closure of many coal mines and coal-fired power stations. As electricity industries were privatized and liberalized, the CCGT in particular became more and more attractive because of its low capital cost, high thermal efficiency, and relatively low environmental impact. Indeed, this technology contributed to the trend identified by Cesare Marchetti, which depicts the chronological shift of the world’s sources of primary power from wood to coal to oil to gas during the last century and a half. Each of these fuels is successively richer in hydrogen and poorer in carbon than its predecessor, supporting the hypothesis that we are progressing toward a pure hydrogen economy.
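Marchetti's sequence can be made concrete by comparing approximate hydrogen-to-carbon atomic ratios for the successive fuels. The values in the sketch below are commonly cited round figures, not data from the text.

```python
# Approximate hydrogen-to-carbon atomic ratios for the Marchetti fuel sequence.
# Values are commonly cited round figures, not data from the text.

h_to_c = {
    "wood":        0.5,  # effective ratio; much of wood's hydrogen is bound to oxygen
    "coal":        1.0,
    "oil":         2.0,
    "natural gas": 4.0,  # methane, CH4
}

for fuel, ratio in h_to_c.items():
    print(f"{fuel:>12}: H/C ~ {ratio}")
```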

Distributed Generation

Embedded or distributed generation refers to power plants that feed electricity into a local distribution network. By saving transmission and distribution losses, it is generally considered to be an environmentally and socially beneficial option compared with centralized generation. Technologies that contributed to the expansion of this mode of generation include wind turbines, which developed to the point where their cost of generation rivaled that of central power stations, photovoltaic cells, and combined heat and power units. These industries expanded massively in the latter years of the century, particularly in Europe where regulatory measures gave impetus and a degree of commercial security to the fledgling industries.

Hydrogen

Many industries worldwide began producing hydrogen, hydrogen-powered vehicles, hydrogen fuel cells, and other hydrogen products toward the end of the twentieth century. Hydrogen is intrinsically ‘‘cleaner’’ than any other fuel used to date because combustion of hydrogen with oxygen produces energy with only water, no greenhouse gases or particulate exhaust fumes, as a byproduct. At the close of the twentieth century, however, although prototypes and demonstration projects abounded, commercial competitiveness with conventional fuels was still only a distant prospect.
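The claim that water is the only byproduct corresponds to the overall combustion reaction shown below; the heat of reaction quoted is the standard textbook value for the formation of liquid water and is not taken from the text.

```latex
% Overall hydrogen combustion (textbook values, not from the text):
% 2 H2 + O2 -> 2 H2O(l), releasing about 286 kJ per mole of H2 (higher heating value).
2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O\,(l)},
\qquad \Delta H^{\circ} \approx -286~\mathrm{kJ\ per\ mole\ of\ H_2}
```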

From almost wholly somatic sources of power in 1900, energy and power developed at an astonishing pace through the century. As the century closed, despite support for ‘‘green’’ power, particularly in developed nations, the worldwide generation of energy was still dominated by fossil fuels. Nevertheless, unprecedented changes seemed possible, driven for the first time by environmental and social concerns rather than by technological possibilities or purely commercial considerations. Awareness of the energy-related carbon emissions issues addressed by the Kyoto protocol raised questions concerning institutional arrangements at both national and international levels, and their capacity for action in responding to public demand. After a century of development, a wide variety of institutional and regulatory regimes had evolved around electricity supply. These most often took the form of a franchised, regulated monopoly within clearly defined administrative boundaries, in functional symbiosis with government. However, each has the same basic technical model at its heart: large, central generators produce AC electricity and deliver it to consumers over a network. Responsibility for the continuing stable operation of this system, on which many millions of people rely, was once considered to rest with central governments, but this is changing. The increasing shift toward liberalization and internationalization is moving responsibility for energy supplies away from state-owned organizations, a trend compounded by the environmental and institutional implications of renewable energy technologies.

Browse other Technology Research Paper Topics.

Electronics Research Paper Topics
Military Technology Research Paper Topics
