This list of military technology research paper topics provides 28 potential topics for research papers, together with an overview article on the history of military technology development.
1. Aircraft Carriers
Three nations built fleets of aircraft carriers— Britain, Japan and the United States—and each contributed to carrier design trends. Experiments began before World War I when, in November 1910, Eugene Ely flew a Curtiss biplane from a specially built forward deck of the cruiser USS Birmingham moored off Hampton Roads, Virginia. Two months later he accomplished the more difficult task of landing on a deck built over the stern of the cruiser Pennsylvania. Sandbags were used to anchor ropes stretched across the deck to help stop the airplane, which trailed a crude hook to catch the ropes.
The Enterprise, America’s first nuclear-powered carrier, entered service in 1961 with a range of 320,000 kilometers, equivalent to some four years’ cruising. She was similar to the Forrestal carriers except for her small square island structure, which originally featured ‘‘billboard’’ radar installations. Despite a huge cost increase (about 70 percent more than the Forrestals), she became the prototype for the ultimate Nimitz class of nuclear carriers that began to enter fleet service in the mid-1970s. Displacing nearly 95,000 tons, each had a crew of some 6,500 men. Driven by concerns about the growing expense of building and operating the huge American fleet carriers, as well as their vulnerability, research into smaller carrier designs continued.
2. Air-to-Air Missiles
Interest in air-to-air missiles (AAMs, also known as air intercept missiles or AIMs) was initially prompted by the need to defend against heavy bombers in World War II. Unguided rockets were deployed for the purpose during the war, but the firing aircraft had to get dangerously close, and even so the rockets’ probability of approaching within killing range of their targets was poor. Nazi Germany developed two types of rocket-propelled missiles employing command guidance and produced some examples, but neither saw service use.
3. Air-to-Surface Missiles
Precision attack of ground targets was envisioned as a major mission of air forces from their first conception, even before the advent of practicable airplanes. Until the 1970s most air forces believed that this could best be accomplished through exact aiming of cannon, unguided rockets, or freely falling bombs, at least for most targets. But although impressive results were sometimes achieved through these methods in tests and exercises, combat performance was generally disappointing, with average miss distances on the order of scores, hundreds, or even thousands of meters.
4. Battleships
The battleship dates back to the final decade of the nineteenth century, when the term came into general use in English for the most powerfully armed and armored surface warships. Material improvements allowed the construction of ships with high freeboard and good seakeeping, capable of effectively fighting similar ships at sea like the line-of-battle ships of the sailing era. British battleships were the archetypes of the era. They displaced around 13,000 to 15,000 tons, and their most useful armament was a battery of six 6-inch quick-firing guns on each side. These stood the best chance of hitting given the primitive fire control techniques of the day, although skilled gunnery officers might use them to gain the range for accurate shooting by the slow-firing 12-inch guns, two of which were mounted in covered barbette turrets (armored structures to protect the guns) at each end.
5. Biological Warfare
In addition to the military use of natural or synthesized plant and animal toxins as poisons, biological warfare involves the use of disease-causing bacteria, viruses, rickettsia, or fungi to cause incapacitation or death in man, animals, or plants. Over the course of the twentieth century, biological weapons scientists, engineers, and physicians in various countries adopted existing technological and scientific practices, techniques, and instrumentation found in academic and industrial research to create a new weapon of mass destruction. Unlike the production of nuclear weapons, biological weapons research involves a synergistic relationship between the separate offensive and defensive components of each individual weapon system. Offensive research involves the identification, isolation, modification, and mass production of various pathogenic organisms and the creation of organismal delivery and storage systems. Offensive research is dependent in many cases upon the simultaneous success of a parallel defensive research program involving the creation of vaccines and protective health measures for researchers, military personnel, and civilians. In addition, defensive research involves the construction of accurate detection devices to indicate the existence of biological weapons whose presence can be masked during the initial phases of a natural epidemic.
6. Bomber Warplanes
Bombers apply aerospace technology to defeat an enemy through destruction of his will or ability to continue the conflict. In the twentieth century, the U.S. and the U.K. found bombing particularly attractive because they were leaders in aerospace technology and disliked mobilizing large armies and suffering heavy casualties. Bombing requires aircraft that can carry sufficient bomb loads over great distances, penetrate enemy defenses, find targets in darkness and poor weather, and bomb accurately. Effective campaigns require adequate bases, trained personnel, fuel, munitions, replacement aircraft, spare parts, and the intelligence capability to select the proper targets and assess damage to them.
7. Chemical Warfare
Popular fiction forecast the use of poison gas in warfare from the 1890s. While an effort was made to ban the wartime use of gas at The Hague International Peace Conference in 1899, military strategists and tacticians dismissed chemical weapons as a fanciful notion. The stalemate of World War I changed this mindset. Under Fritz Haber, a chemist at the Kaiser Wilhelm Institute, Germany’s chemical industry began making gas weapons. Compressed chlorine gas in 5730 cylinders was released against French Algerian and Canadian troops at Ypres, Belgium, on April 22, 1915. The gas attack resulted in approximately 3000 casualties, including some 800 deaths. Within months the British and French developed both gas agents of their own and protective gear, ensuring that chemical warfare would become a regular feature of the war. A variety of lethal and nonlethal chemical agents were developed in World War I. Lethal agents included asphyxiating gases such as chlorine, phosgene, and diphosgene, which drown their victims in mucus, choking off the supply of oxygen from the lungs. A second type were blood gases such as hydrogen cyanide, which block the body’s ability to absorb oxygen from red corpuscles. Incapacitating gases included lachrymators (tear gases) and vesicants (blistering gases). The most notorious of these is mustard gas (bis[2-chloroethyl] sulphide), a blistering agent that produces horrible burns on exposed skin, destroys mucous tissue, and persists in the soil for as long as 48 hours after its initial dispersion.
8. Defensive Missiles
Missile defenses are complex systems composed of three major components: sensors to detect the launch of missiles and track them as they advance toward their targets, weapon systems to destroy the attacking missiles, and a command and control system that interconnects sensors and weapons. As a result of technological advances, these three components have evolved over the years since World War II, producing two major periods in the history of missile defense and suggesting the advent of a third by about 2025.
9. Explosives
All chemical explosives obtain their energy from the almost instantaneous transformation of an inherently unstable chemical compound into more stable molecules. The breakthrough from the 2000-year-old ‘‘black powder’’ to the high explosive of today was achieved with the discovery of the molecular explosive nitroglycerin, produced by nitrating glycerin with a mixture of strong nitric and sulfuric acids. Nitroglycerin, because of its extreme sensitivity and instability, remained a laboratory curiosity until Alfred Nobel solved the problem of how to safely and reliably initiate it with the discovery of the detonator in 1863, a discovery that has been hailed as key to both the principle and practice of explosives. Apart from the detonator, Nobel’s major contribution was the invention of dynamite in 1865. This invention tamed nitroglycerin by simply mixing it with an absorbent material called kieselguhr (diatomaceous earth) in proportions of 75 percent nitroglycerin and 25 percent kieselguhr. These two inventions were the basis for the twentieth century explosives industry. Explosives are ideally suited to provide high energy in airless conditions. For that reason explosives have played and will continue to play a vital role in the exploration of space.
10. Fighter and Fighter Bomber Warplanes
Although new as weapons, fighters played an important role in World War I. Early in the war, reconnaissance planes and bombers were joined by fighters whose task it was to engage the enemy in aerial combat. Light machine guns were synchronized to fire through aircraft propellers. It was the German firm of Fokker that developed the first effective synchronizing device; this gave the Fokker planes, agile monoplanes, superiority over the Allies’ comparatively slow and less maneuverable biplanes. Aircraft development was then marked by a continuous catching-up process between German fighters on the one hand and French and British fighters on the other.
Research during World War II, especially in Germany, had shown that swept-back wings eased shockwave problems at high speeds. Important U.S. and Soviet aircraft developed shortly after the war, such as the North American F-86 Sabre and the MiG-15, had swept-back wings, and others adopted delta-wing layouts. Research and development in aerodynamics, structural engineering, materials science, and related fields led to fighters and fighter–bombers with improved performance characteristics.
11. Fission and Fusion Bombs
Fission weapons were developed first in the U.S., then in the Soviet Union, and later in Britain, France, China, India, and Pakistan. By the first decade of the twenty-first century, seven countries had announced that they possessed nuclear weapons, and several others were suspected of developing them.
12. High Explosive Shells and Bombs
Among the most baleful of twentieth century technological accomplishments was the vast elaboration of the means for inflicting death and destruction in war. While nuclear and chemical weapons occasioned more revulsion, conventional high-explosive weapons wrought far wider harm. A revolution began in the nineteenth century with the introduction of rifled cannon and effective explosive shells. This, in turn, brought an escalating contest between weapons and protection both for fortifications and ships. At the beginning of the twentieth century, shells were beginning to move from black powder fill to modern high explosives such as ammonium picrate and trinitrotoluene (TNT). High-explosive (HE) shells needed steel walls thick enough to withstand the shock of firing, limiting weights of bursting charges to no more than about 25 percent of the whole. Depending on the target, they might use either point-detonating or time fuses. The early time fuses continued, as they had in the nineteenth century, to depend on the time taken for a powder train of precut length to burn to its end.
13. High-Frequency and High-Power Radars
While early radar designers were held to frequencies below 1000 megahertz by the availability of high-power components, it was appreciated very early on that higher frequencies, and thus shorter wavelengths, would allow better precision. Frequency and wavelength are inversely related according to the equation
wavelength = c/frequency
where c is the velocity of light.
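Plugging in numbers makes the inverse relation concrete (a minimal sketch; the function name and figures are our illustration, not from the article): the HF band's 3 to 30 megahertz corresponds to wavelengths of 100 down to 10 meters.

```python
# Wavelength-frequency relation: wavelength = c / frequency
C = 3.0e8  # approximate speed of light in meters per second

def wavelength_m(frequency_hz: float) -> float:
    """Return the free-space wavelength in meters for a frequency in hertz."""
    return C / frequency_hz

# The HF band (3 to 30 MHz) spans wavelengths of 100 m down to 10 m.
print(wavelength_m(3e6))   # 100.0
print(wavelength_m(30e6))  # 10.0
```

Microwave radars, by contrast, operate at gigahertz frequencies and centimeter wavelengths, which is what makes their finer angular precision possible.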
Radars operating in the high-frequency (HF) band (3 to 30 megahertz) may detect targets well beyond the nominal horizon through two mechanisms: ‘‘sky wave’’ and ‘‘surface wave.’’ Early in the century, it was discovered that high-frequency radio waves were strongly refracted by the ionosphere. An HF beam aimed near the horizon would, under suitable conditions, be effectively reflected, returning to sea level some hundreds to thousands of kilometers from its transmission site. From the 1940s, interest developed in using this sky-wave transmission phenomenon to provide surveillance at great ranges. Early HF over-the-horizon radars (OTHRs) were bistatic ‘‘forward scatter’’ systems in which a widely separated transmitter and receiver detected and tracked targets lying between them. Ballistic missile tracking was a major application.
14. Long Range Ballistic Missiles
During the 1960s, the U.S. and the Soviet Union began to develop and deploy long-range ballistic missiles, both intercontinental ballistic missiles (ICBMs) and intermediate-range ballistic missiles (IRBMs). The former would have ranges over 8000 kilometers, while the latter would be limited to about 2400 kilometers. The German V-2 rocket built during World War II represented a short- or medium-range ballistic missile. The efficiency and long range of these missiles derived from the fact that they required fuel only to be launched up through the atmosphere and directed toward the target. They used virtually no fuel traveling through near outer space. They were ‘‘ballistic’’ rather than guided in that they fell toward their target along a ballistic arc, like a bullet.
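The ballistic principle can be sketched with the textbook vacuum range formula (a deliberately simplified illustration of ours: it ignores drag, powered flight, and the earth's curvature, and so greatly understates real ICBM ranges):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_range_m(speed_m_s: float, launch_angle_deg: float) -> float:
    """Flat-earth, drag-free ballistic range: R = v^2 * sin(2*theta) / g."""
    theta = math.radians(launch_angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / G

# A 3 km/s burnout speed at 45 degrees gives roughly 900 km in this toy model;
# real ICBMs achieve 8000+ km by flying much faster over a spherical earth.
print(round(vacuum_range_m(3000, 45) / 1000))  # 917
```

The key point survives the simplification: once the engines cut off, the warhead coasts on an unpowered arc determined entirely by its burnout speed and direction.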
15. Long Range Cruise Missiles
A cruise missile is an air-breathing missile that can carry a high-explosive warhead or a weapon of mass destruction such as a nuclear warhead over an intermediate range of up to several hundred kilometers. When launched from the ground, such missiles are known as ground-launched cruise missiles (GLCMs). Some historians of weapons technology regard the German V-1 or ‘‘buzz bomb’’ of World War II, propelled by a pulse-jet, air-breathing engine, as the first GLCM. The weapons do not require remote guidance but automatically home in on preassigned targets, acting autonomously.
16. Long Range Radars and Early Warning Systems
During the 1930s, Great Britain was one of several countries, including most notably Germany and the U.S., that experimented with radar for early warning of air attacks. The British ‘‘Chain Home’’ system, designed by Sir Robert Watson-Watt and established by 1939, included a string of stations along the east and south coasts. By mid-1940, most of the stations featured two 73-meter wooden towers, one holding fixed transmitter aerials and the other receivers. When it was discovered that low-flying aircraft could slip undetected beneath the original radar fence, Britain created a second string of ‘‘Chain Home Low’’ stations, beginning with Truleigh Hill. The latter sites consisted of two separate aerials, one to transmit and the other to receive, mounted on 6-meter-high gantries low enough to allow an operator inside the equipment hut beneath the gantry to manually rotate the arrays. Together, Chain Home and Chain Home Low provided a detection range of 40 to 190 kilometers, depending on an incoming aircraft’s altitude. This early warning capability contributed immeasurably to the RAF victory over the Luftwaffe in the Battle of Britain.
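The strong dependence of detection range on aircraft altitude reflects the radar horizon. A rough sketch of the geometry (the 4/3-earth-radius refraction approximation is a standard rule of thumb; the function name and numbers are our illustration, not from the article):

```python
import math

# Standard 4/3-earth model: an enlarged effective radius accounts for
# atmospheric refraction bending radio waves slightly around the earth.
EFFECTIVE_EARTH_RADIUS_M = (4 / 3) * 6_371_000

def radar_horizon_km(target_altitude_m: float) -> float:
    """Approximate line-of-sight horizon for a target at the given altitude,
    assuming the radar antenna itself is near sea level."""
    return math.sqrt(2 * EFFECTIVE_EARTH_RADIUS_M * target_altitude_m) / 1000

# A low flyer at 100 m is visible only to about 41 km; at 2000 m the horizon
# stretches to roughly 184 km, close to Chain Home's quoted 40-190 km spread.
print(round(radar_horizon_km(100)))   # 41
print(round(radar_horizon_km(2000)))  # 184
```

This is why Chain Home Low was needed: no matter how powerful the transmitter, an aircraft below the horizon simply cannot be seen by a ground station.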
17. Military Versus Civil Technologies
The exchange of technical ideas between the military world and the civilian world can be found throughout the history of technology, from the defensive machines of Archimedes in Syracuse about 250 BC, through the first application of the telescope by Galileo in military and commercial intelligence, to the application of nuclear fission to both weaponry and power production. In the twentieth century, as the military establishments of the great powers sought to harness inventive capabilities, they turned to precedents in the commercial and academic world, seeking new ways to organize research and development. By the 1960s, the phrase ‘‘technology transfer’’ described the exchange of technique and device between civilian and military cultures, as well as between one nation and another, and provided a name for the phenomenon that had always characterized the development of tools, technique, process, and application.
18. Nuclear Reactors and Weapons Material
The first successful nuclear reactor, called an ‘‘atomic pile’’ because of its structure of graphite bricks, was completed and operational on December 2, 1942, in Chicago in the U.S. Although originally built to demonstrate a controlled nuclear reaction, the reactor was later dismantled and the depleted uranium removed in order to recover minute amounts of plutonium for use in a nuclear weapon. In effect, Chicago Pile-One (CP-1) was not only the world’s first nuclear reactor but also the world’s first reactor used to produce material for a nuclear weapon.
19. Origins of Radar
Reflection was an important part of Heinrich Hertz’s 1887 demonstration of the existence of electromagnetic waves, and the idea of using that property to ‘‘see’’ in darkness or fog was developed shortly afterwards.
By the early 1930s, serious efforts were underway in the U.S., Germany, and Britain to construct radio-location devices using relatively long wavelengths. (Russian efforts were ahead in the early 1930s, but they yielded little as a result of serious organizational problems and purges that sent key engineers to the gulag.) The German company GEMA built the first device that can be called a functioning radar set in 1935, with Britain and America following only months behind. Two groups in the U.S.—the Signal Corps and the Naval Research Laboratory—proceeded independently but on lines very similar to those of the Germans in using dipole arrays. They had air-warning and searchlight-pointing prototype sets ready for production in 1939.
The British physicists Robert Watson-Watt and Arnold Wilkins proceeded along a different line using wavelengths of tens of meters with broadcast rather than ‘‘searchlight’’ transmission. This equipment, although inferior to that working on shorter wavelengths, was seen by Air Vice-Marshal Hugh Dowding as the key to the air defense of Britain from expected German attack. As commander of the newly created Fighter Command, he created a system of radar stations and ground observers linked by secure telephone lines to the fighter units. He drilled Fighter Command to use the new technique, and when the Luftwaffe came in the summer of 1940, the attacking squadrons were ambushed by defending fighters positioned by radar.
20. Radar Aboard Aircraft
As with much else in radar, airborne radar sets were first developed during World War II, and most of the modern uses for such sets were explored during that war. While airborne radar shares much in common with surface and naval sets, many factors make airborne installations very different from both.
At the beginning of the 21st century, combat aircraft continue to use radar for the same purposes as in World War II: navigation, air and surface search, and targeting. Using computers and guided munitions, they can also automatically release bombs or launch missiles at the appropriate time. The big difference between 1945 and 2018 is that most, if not all, of these functions can be done by a single aircraft carrying a single radar with a range and resolution much greater than any airborne set used during the war.
21. Radar Displays
Those who first conceived of radar early in the century often envisioned systems that would simply indicate, perhaps by sounding a buzzer or lighting a lamp, that a target had been detected and where it was located. Those who first reduced radar to practice in the 1930s, however, were radio scientists who knew that the returning radar signals would somehow have to be distinguished against a background of radio-frequency interference and noise. They were accustomed to displaying signals visually on cathode ray tube (CRT) oscilloscopes, and they naturally turned to such means for radar. This made the operator an essential part of the radar system, responsible for the final stages of the detection process and extraction of target data.
22. Radar Systems in World War II
With the onset of war in September 1939, Britain, Germany, and the U.S. had advanced radar designs, while France, Russia, The Netherlands, Italy, and Japan had little of value in comparison, although they had made research efforts along those lines. Of these endeavors, only Britain had proceeded past the prototype stage into a state of war readiness in the form of the Chain Home air defense. Germany had technically the best radar designs, but the Wehrmacht intended to wage a war of aggression and initially gave little support to a technology whose strength lay overwhelmingly in defense. In the U.S., because of the contentious battleship–bomber disputes of the 1920s, the Navy had pressed for any new technical method to defend ships against air attack, and the Army had sought to perfect its anti-aircraft artillery with methods of combating bombers at night.
23. Reconnaissance Warplanes
After World War I and during World War II, technological effort was aimed at putting the camera at higher altitudes, theoretically beyond the enemy’s ability to reach and destroy it, and at further increasing its operational effectiveness by allowing it to operate in the dark. This led to the development of electrical heating apparatus that prevented camera shutters from being adversely affected by the cold at high altitudes and to the slit camera, which adjusted the speed at which film was fed through the camera to the speed of the aircraft, an advance that improved the production of maps of enemy territory. Nighttime operations were aided by aerial flash equipment designed by Harold Edgerton of the Massachusetts Institute of Technology, which provided not only well-lit scenery but also frozen imagery of the target, an asset vital to effective bomb sighting.
With the opening of the Cold War, the mindset that kept pushing reconnaissance to increasingly high altitudes and greater speeds took on a new importance, as it kept the camera not only out of the enemy’s physical reach but out of his legal and political reach as well. The Royal Air Force’s first jet bomber, the Canberra, counted on both speed and altitude to keep it away from enemy fighters. These advantages were to prove useful to reconnaissance as well, and the Canberra still serves in the RAF inventory. The quest for speed and height led ultimately to the two best-known Cold War reconnaissance aircraft, the U-2 and the SR-71. Capable of cruising at 740 kilometers per hour (km/h), with a range of 3540 kilometers and a ceiling of 17,000 meters (21,000 meters and above in later models), the U-2 represented the cutting edge in aerial intelligence gathering until it was superseded by the faster and higher-flying SR-71. The Blackbird pushed the altitude envelope to over 26,000 meters and was able to maintain speeds of Mach 3.2.
24. Short Range and Guided Missiles
With most nations surrounding their military capabilities with considerable secrecy, different published range figures and guidance types sometimes contradicted each other. Furthermore, the distinction between an intermediate-range missile (up to about 2400 kilometers) and an intercontinental- or long-range missile was a matter of definition over which there was never complete agreement. Some publications would include submarine-launched missiles of up to intermediate range in short- and medium-range ballistic missile listings. Although the missiles listed here were generally capable of carrying a nuclear warhead, most were also loaded with conventional explosive warheads.
25. Sonar
The word ‘‘sonar’’ originated in the U.S. Navy during World War II as an acronym for ‘‘SOund NAvigation and Ranging,’’ which referred to the systematic use of sound waves, transmitted or reflected, to determine water depths as well as to detect and locate submerged objects. Until it adopted that term in 1963, the British Admiralty had used ‘‘ASDIC,’’ an abbreviation for the Anti-Submarine Detection Investigation Committee that led the effort among British, French, and American scientists during World War I to locate submarines and icebergs using acoustic echoes. American shipbuilder Lewis Nixon invented the first sonar-type device in 1906. Physicist Karl Alexander Behm in Kiel, Germany, disturbed by the Titanic disaster of April 1912, invented an echo depth sounder for iceberg detection in July 1913. Although developed and improved primarily for military purposes in World War I, sonar devices became useful in such fields as oceanography and medical practice (e.g., ultrasound).
26. Submarines
The basic technology of the submarine is quite simple and has remained constant since its inception. The boat submerges by taking on water through vents to decrease its buoyancy and surfaces by expelling the water with compressed air. The outward appearance of the military submarine has remained remarkably constant throughout its modern development—a cigar-shaped hull topped by the immediately recognizable conning tower with a periscope for viewing the surface.
We can break submarine technology into five categories:
- Hull design
- Ancillary technologies
27. Surface-to-Air and Anti-Ballistic Missiles
In World War II, when Japanese kamikaze aircraft showed the amount of damage that could be inflicted by a single explosive-laden plane, it became apparent that machine gun and antiaircraft fire were insufficient protection against current and future weapons. The answer was to combine radar detection, guided rockets, and the proximity fuse into surface-to-air missiles, or SAMs. Intensive development in the postwar years produced the Sea Sparrow in the 1950s as one of the first successful SAMs. When identifying a Warsaw Pact weapon as a surface-to-air missile, NATO forces would assign it an ‘‘SA,’’ or surface-to-air, number.
28. Tanks
Despite some curiosity on the part of a handful of other nations, the development of the tank in the twentieth century was largely a British affair. Yet even Britain did not intentionally set out to develop it. Like many things, the tank was the result of other technologies being developed as well as a response to the dangers some of those very technologies presented. Although tanks were plagued with problems of power, protection, and a lack of vision about their use at the beginning of the 20th century, and despite the massive advances in weapons of all kinds during the century, at the beginning of the 21st century the tank remained a vital instrument of warfare.
Warfare and Military Technology in the 20th Century
Twentieth-century warfare begins with World War I (1914–1918) even though this conflict had more in common with wars of the previous century than it did with those that followed. The Great War opened with maneuvers by huge field armies that culminated in frontal assaults by masses of infantry. After only a few months of mobile warfare, heavy casualties forced opposing armies to take shelter in trench systems that stretched all across France. Faced with the ensuing stalemate of the trenches, both sides adopted an attrition strategy that would defeat the opposition by bleeding his manpower and depleting his material resources. The strategy finally succeeded when an exhausted Germany surrendered in November 1918.
In spite of its similarity to nineteenth-century warfare, World War I witnessed several new developments, most notably the airplane, the tank, and the truck. Between 1919 and 1939, the implications of these new developments were worked out, producing new operational approaches that transformed warfare.
During World War II (1939–1945), European land warfare was dominated by mobile armored forces that swept back and forth across the continent. While armies fought on the ground, air forces contended for control of European skies. In this massive air war, Allied bombers devastated Germany’s industrial base and population centers.
Meanwhile, in the Pacific region, the war centered on aircraft carrier task forces that battled each other and supported amphibious operations. The war started with Japan conquering much of the western Pacific, only to be pushed back by superior Allied arms and forced to surrender when American B-29 bombers dropped the only two atomic bombs ever used in war.
By the time World War II ended in the Pacific, Japan’s military resources had been severely reduced by Allied military actions. The reduction of Japanese resources, along with the progressive weakening of Germany in the European theater, suggests that World War II, like the first, was an attrition war in which industrial capacity was as important as military forces.
Two of the most revolutionary developments of World War II were the atomic bomb and the long-range ballistic missile. When more fully developed and mated to each other during the Cold War (1946–1991), they became what is perhaps the most revolutionary weapon in military history, the nuclear-tipped, intercontinental-range ballistic missile (ICBM). In the end, the Cold War was another attrition conflict, ending with the economic exhaustion and collapse of the Soviet Empire.
The end of the Cold War reduced the tensions that had kept nuclear strike forces on hair-trigger alert since the 1950s. Although nuclear weapons still existed, relations between the U.S. and the Russian Federation that emerged from the defunct Soviet Union were no longer based on mutually assured destruction (MAD), as both sides reduced their nuclear forces and the U.S. continued developing missile defenses.
While there were a number of significant ‘‘limited’’ wars during the twentieth century, the five major episodes described above unleashed the greatest national energies. These energies were molded into major new military systems through the process of command technology that is rooted in England of the 1880s according to historian William McNeill. Before this time, weapons were either developed in government-owned arsenals or by private entrepreneur inventors. A major change began in 1886 when the British Admiralty, dissatisfied with the performance of the government arsenal at Woolwich, started contracting with private arms makers for the development of new weapons. Under this approach, the Admiralty established the specifications for a new weapon and effectively challenged the contractor to produce it. This contracting system marks the beginning of command technology. Tantamount to invention on demand, this process of state-sponsored research and development spread throughout the West, becoming the dominant paradigm for weapons acquisition by 1945.
One product of command technology during World War I was the tank, which was developed by the British to cross fire-swept terrain between the trenches and breach the German defenses. While the tank proved capable of completing its mission, its successes were limited due to technical limitations and a lack of understanding of how best to use the new weapon.
The principal enabling technology for the tank was the internal combustion engine, which also powered World War I trucks and airplanes. The former improved logistics by connecting troops in forward positions with railheads and supply depots in the rear. The latter opened an entirely new realm of warfare and, over the course of the war, suggested all the missions the airplane would perform in future wars.
Building on the lessons of World War I, air power advocates used the period between the two world wars to develop a rigorous body of air power doctrine. At the same time, the world’s leading powers developed aircraft of increasing capabilities to execute the missions defined in their doctrines.
The U.S. emphasized long-range bombers to execute daylight precision bombardment, the dominant doctrine in America's air force. England developed bombers as well, but also pursued fighters because of the threat posed by the air force of a rearming Germany. In addition to bombers, Germany developed tactical aircraft to support its new approach to ground warfare: Blitzkrieg.
The basic ideas behind Blitzkrieg had emerged by the end of World War I, as the capabilities of tanks and aircraft improved. After the war, the Germans developed these ideas further and mated them to the panzer division, which included tanks, mechanized artillery, and motorized infantry. Through radio communications, these elements were integrated into coherent units that also used their radios to coordinate supporting air attacks by Germany's tactical fighters. Using their air support, the panzers would execute deep, penetrating attacks to unbalance opponents and keep them from shoring up their defenses once these had been breached.
World War II in Europe opened with Blitzkrieg attacks that swiftly overran Poland in 1939 and France in 1940. It ended with Allied air forces supporting mechanized operations that pushed German forces out of their conquered territories prior to overrunning Germany itself.
While the Germans were perfecting Blitzkrieg, naval officers around the world were integrating aviation into naval operations. This entailed developing true aircraft carriers with landing decks that ran the full length of the vessel, allowing aircraft to both take off and land on the carrier. The advent of these carriers prompted a debate over which ship, the carrier or the battleship, would dominate the next war.
This question was settled decisively at Pearl Harbor on December 7, 1941, when Japanese carrier aircraft damaged or sank every American battleship in the harbor. Throughout the remainder of the war in the Pacific, the principal measure of naval power was the carrier task force in which battleships, cruisers, and destroyers used their firepower principally to protect their carriers from attack by enemy planes and submarines. The impact of the carrier on naval warfare is clearly illustrated by the May 1942 Battle of the Coral Sea, the first naval engagement in which the surface forces never sighted each other. Throughout the remainder of the century, the carrier task force dominated naval operations.
Three years before the Battle of the Coral Sea, physicist Albert Einstein alerted U.S. President Franklin Roosevelt to the potential of nuclear fission. After having this concept evaluated by a panel of scientists, Roosevelt launched the Manhattan District Project to develop an atomic bomb.
There were two major facets to this project: developing an industrial base to produce fissionable materials and designing the bomb itself. On July 16, 1945, less than three years after the project began, the world's first atomic bomb was detonated in New Mexico. Within a month, the U.S. had dropped two atomic bombs on Japan, forcing the Japanese to surrender.

About a decade before the U.S. launched its atomic bomb project, the Germans began work on what would become the world's first long-range ballistic missile. In 1937, this program was greatly expanded with the establishment of a vast, new rocket development center at Peenemunde. The German program employed several hundred scientists and technicians who were supported by a large budget that could be coupled to Germany's industrial base and its university research facilities through a flexible contracting system.
This rocket program is a classic example of command technology. Guided by specifications established by the army's ordnance office, the Peenemunde team made rapid progress after 1937. In October 1942, the team completed the first successful test of the V-2 rocket, which became the world's first operational long-range missile when it began hitting Allied cities in September 1944. This choice of targets, which was dictated by the missile's inaccuracy and the limited size of its warhead, meant that the V-2 was essentially a terror weapon with little real military value.
Immediately after World War II, both the U.S. and the Soviet Union absorbed German rocket developments and began working energetically to produce long-range missiles that could be used for military purposes. A major breakthrough came in the 1950s when both countries demonstrated the ability to produce thermonuclear bombs. This meant that warheads could be made that were light enough to be carried by a missile, yet powerful enough to compensate for missile inaccuracies. Moreover, the advent of the hydrogen bomb ushered in an era of ‘‘nuclear plenty,’’ since fusion fuel is plentiful and inexpensive when compared to fission fuel.
At the same time, work was progressing on inertial guidance systems that would be much more accurate than the system used in the V-2. A major breakthrough here was the development of more sensitive inertial measuring units that were based on complex mechanical structures, computer advances, and improved electro-optical technologies.
The simultaneous resolution of guidance and warhead problems made the ICBM feasible. Paradoxically, because these weapons could destroy civilization, the doctrine governing their employment, mutual assured destruction (MAD), aimed to deter their use. MAD required each side to have enough nuclear weapons to absorb a nuclear attack and still be able to inflict unacceptable losses on the attacker.
America’s first ICBM became operational in 1959. In developing this missile, the U.S. Air Force pioneered a new management discipline that was based on insights into the functioning of complex weapons.
Until well into the nineteenth century, weapons were largely simple, stand-alone devices. By World War I, however, they were often amalgams of complex components, as in the giant dreadnought-class battleships that dominated naval warfare during the first two decades of the twentieth century.
During World War II, air defenses and aircraft carriers raised the complexity of weaponry another order of magnitude. It was at this point that the pioneers of operational analysis made the point that optimizing the performance of complex weapons required a thorough understanding of how their components interacted with each other and with their operational environment. Assuring a proper ''fit'' between system components became the work of systems engineering. Bringing operational analysis and systems engineering together to create an effective weapon was the function of systems management, a discipline that was more fully developed and formalized in the U.S.'s huge ICBM program that was launched in the 1950s. The success of the ICBM program transformed systems management into the principal paradigm for managing major weapons programs, including those for self-guided and precision-guided munitions (PGMs).
A major inspiration for self-guided munitions was the airplane. Before the advent of artificial sensors, computers, and advanced servo motors, the presence of a pilot offered one means, beyond initial aiming, to guide a weapon to its target. Indeed, one of the best known early efforts to achieve precision guidance was the Japanese use of suicide pilots who attempted to fly their planes into U.S. ships during World War II. Less well known are U.S. and German efforts to develop unmanned glide bombs and vertical bombs that could be controlled from the aircraft that dropped them.
Germany's desperate efforts to down Allied bombers near the end of World War II spawned several innovative concepts in the area of precision-guided surface-to-air missiles, or SAMs. Included here was the use of a simple infrared sensor to allow SAMs to home in on hot bomber engines. Another SAM was to have been guided by commands from the ground that reached the interceptor via a thin wire that paid out as the missile flew toward its target. Fortunately for Allied bombers, these ideas came too late in the war to be implemented.
More fully developed after World War II, wire-guided missiles were used extensively in limited and regional wars such as the Vietnam War (1965–1973) in which an American wire-guided missile achieved an 80 percent hit rate. Soviet wire-guided missiles were used extensively by the Egyptians to inflict heavy losses on Israeli armor during the early phase of the 1973 Arab–Israeli War.
Infrared heat-seeking technology was widely applied in missile guidance after 1945. By 1953, the U.S. had developed the world’s first heat-seeking air-to-air missile. Widely used throughout the rest of the century, these missiles generally employed a small, nose-mounted infrared sensor to guide them to the hot engine tailpipe of enemy aircraft. Shoulder-held heat-seekers were also developed to protect soldiers against air attacks. By the end of the century, the spread of these small, portable missiles was causing concern that terrorists might use them against commercial jetliners.
Other precision-guided missiles used radar in their guidance systems. While some were designed for air-to-air combat, others were built to home in on the signal from air defense radars. Radar-guided SAMs also became central to effective air defenses.
Systematic efforts to develop defenses against aircraft began during World War I when the British tried to stop German bomber attacks on England. Twenty years later, with England facing the prospect of air attacks from a rearming Nazi Germany, Sir Robert Watson-Watt advised the British government that reflected radio waves could be used to locate attacking aircraft. This principle became the basis for a radar system that the British began deploying in the mid-1930s. By the time German planes attacked London in 1940, England had deployed an air defense system that used radar plots and radio communications to guide defensive fighters to attacking German planes.
The use of radar here is an important departure. The increasing speed and range of the airplane collapsed time and threatened to deprive the defender of adequate response time. Using instruments such as binoculars and listening devices to increase the power of human senses was no longer adequate for locating an attacking force. Radar marks the first effort in military affairs to extend human perception by using phenomena outside the normal range of man’s five senses. The British Chain Home radar system could detect aircraft approaching at an altitude of 6000 meters at a range of 145 kilometers, providing a warning time of 15 minutes for planes flying at 580 kilometers per hour.
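The warning time quoted above follows directly from the detection range and aircraft speed. A minimal sketch, using only the figures given in the text, confirms the arithmetic:

```python
# Back-of-the-envelope check of the Chain Home warning time.
# Both figures below are taken from the text; everything else is arithmetic.
detection_range_km = 145   # detection range for aircraft at 6000 m altitude
bomber_speed_kmh = 580     # approach speed of the attacking aircraft

warning_minutes = detection_range_km / bomber_speed_kmh * 60
print(f"Warning time: {warning_minutes:.0f} minutes")
```

Dividing 145 kilometers by 580 kilometers per hour gives exactly a quarter of an hour, matching the 15-minute figure in the text.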
Faced with the threat of nuclear-armed Soviet bombers in the 1950s, the U.S. developed a continent-wide air defense system with a forward-based radar system to provide the earliest possible warning of attack. Radar data were fed to computerized control centers that automated the manual process of vectoring interceptors to their targets. These centers could simultaneously track 200 attacking bombers while vectoring 200 interceptors to their intercept points.
As this system was becoming operational, both the U.S. and the Soviet Union began deploying ICBMs. Against these weapons, bomber defenses were essentially useless. The ICBM’s speed allowed it to traverse thousands of kilometers in a matter of minutes, further compressing the time for defensive actions. Some way had to be found to recapture the lost response time.
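The scale of this time compression can be illustrated with a rough calculation. The range and average speed below are illustrative assumptions, not figures from the text; real ICBM flight profiles vary considerably:

```python
# Rough illustration of why ICBMs compressed defensive response time.
# Both inputs are assumptions chosen for illustration only.
range_km = 10_000        # assumed intercontinental range
avg_speed_km_s = 6.0     # assumed average speed over the whole trajectory

flight_minutes = range_km / avg_speed_km_s / 60
print(f"Approximate flight time: {flight_minutes:.0f} minutes")
```

Under these assumptions the entire intercontinental flight takes under half an hour, compared with the many hours a bomber force would need to cover the same distance.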
Improved ground-based radars provided fifteen minutes of warning time in the case of an ICBM attack. An additional fifteen minutes were gained by deploying satellite-based infrared sensors that surveilled enemy missile fields around the clock. Still more response time was effectively recovered by high-speed computers, which divided each second into billions of discrete intervals that could be managed to optimize defensive reactions.
Later missile defense concepts pursued under the Strategic Defense Initiative, which was launched in 1984 by the U.S., sought to improve the odds for a successful defense by placing interceptor missiles in space. Furthermore, the U.S. pursued various concepts for directed energy weapons, which promised a near instantaneous kill, since beam velocities approached the speed of light. Combining orbiting lasers with space-based interceptors would produce a defense capable of destroying enemy missiles during the boost phase, before they released their multiple warheads and decoys.
As the twentieth century was ending, the U.S. was developing an airborne laser that could also destroy ballistic missiles during their boost phases. This weapon also promised to be effective against attacking aircraft.
The high-speed computer, so crucial to the prospects of missile defense, was also central to the development and proliferation of command and control systems after 1950. These systems formed an integrated ''picture'' of current situations based on information from a wide variety of sources, including battlefield sensors, overhead satellites, electronic intelligence, and units engaged in combat. This picture provided the basis for extending and tightening the control exerted by senior political and military leaders. Computerized systems also played a major role in managing military logistics, so essential to modern military forces.
Developments such as high-speed computers, lasers, radar, and infrared sensors point toward a fundamentally new departure in twentieth-century weaponry: the creation of advanced military capabilities based on esoteric scientific principles. These principles are generated through abstract, mathematical reasoning and are not readily discoverable through the traditional methods of careful observation and the manipulation of materials. Without the highly mathematical electromagnetic field theory of James Clerk Maxwell there would be no radio or radar. Without the work of scientists like J. J. Thomson, Ernest Rutherford, and Niels Bohr there would have been no atomic theory and no basis for conceiving nuclear fission.
Introducing scientists into the mix of engineers, technicians, and managers that was central to earlier forms of command technology greatly increased government’s power to invent on command. As the century was ending, this enhanced form of command technology had created in military affairs a situation similar to what historian Walter McDougall described as a perpetual technological revolution.
Change had become one of the few constants in military affairs. Making effective ‘‘transformations’’ in force structures and doctrines to ensure success in future wars was more clearly than ever a core concern for military professionals and their civilian leaders.