History of Signal Detection Theory


1. Introduction And Scope

Signal detection theory (SDT) grew out of World War II research on radar into a probability-based theory in the early 1950s. It specifies the optimal observation and decision processes for detecting electronic signals against a background of random interference, or noise. The engineering theory, culminating in the work of Wesley W. Peterson and Theodore G. Birdsall (Peterson et al. 1954), had foundations in mathematical developments for theories of statistical inference, beginning with those advanced by Jerzy Neyman and Egon S. Pearson (1933). SDT was taken into psychophysics, then a century-old branch of psychology, when psychologists came to see the human observer's detection of weak signals, or discrimination between similar signals, as a problem of inference. In psychology, SDT serves as a model of how organisms make fine discriminations, and it specifies model-based methods of data collection and analysis. Notably, through its analytical technique called the receiver operating characteristic (ROC), it separates sensory and decision factors and provides independent measures of each. SDT's approach is now used in many areas of psychology in which discrimination is studied, including cognitive as well as sensory processes. From psychology, SDT and the ROC came to be applied in a wide range of practical diagnostic tasks in which a decision is made between two confusable alternatives (Swets 1996).

2. Electronic Detection And Statistical Inference

Central to both electronics and statistics is a conception of a pair of overlapping, bell-shaped probability (density) functions arrayed along a unidimensional variable, which is the weight of evidence derived from observation. In statistical theory, the probabilities are conditional on the null hypothesis or the alternative hypothesis, while in electronic detection theory they are observational probabilities conditional on noise-alone or signal-plus-noise. The greater the positive weight of evidence, the more likely it is that a signal, or a significant experimental effect, is present. A cutpoint must be set along the variable so that observed values above it lead to rejection of the null hypothesis or acceptance of the presence of the designated signal. Errors are of commission or omission: type I and type II errors in statistics, and false alarms and misses in detection. Correct outcomes in detection are hits and correct rejections. Various decision rules specify optimal cutpoints for different decision goals. The weight of evidence is unidimensional because just two decision alternatives are considered. Optimal weights of evidence are monotone increasing with the likelihood ratio, the ratio of the two overlapping bell-shaped probability functions. It is this ratio, and not the shapes of the functions, that is important.
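The likelihood-ratio idea can be made concrete with a minimal numerical sketch. It assumes unit-variance normal densities with noise-alone mean 0 and signal-plus-noise mean d; these parameter values are illustrative, not taken from the source.

```python
import math

def normal_pdf(x, mean, sd=1.0):
    """Normal probability density at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(x, d=1.0):
    """Ratio of the signal-plus-noise density to the noise-alone density
    at observation x, under the assumed equal-variance normal model."""
    return normal_pdf(x, d) / normal_pdf(x, 0.0)

# For equal-variance normals the ratio is monotone increasing in x, so a
# cutpoint on the evidence axis is equivalent to a cutpoint on the ratio.
assert likelihood_ratio(1.5) > likelihood_ratio(0.5)
```

At the point midway between the two means the two densities are equal, so the ratio is exactly 1 there; a symmetric cutpoint corresponds to a criterial likelihood ratio of 1.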

3. Modern Signal Detection Theory

In the early 1950s, Peterson and Birdsall, then graduate students in electrical engineering and mathematics, respectively, at the University of Michigan, developed the general mathematical theory of signal detection that is still current. In the process they devised the ROC—a graphical technique permitting measurement of two independent aspects of detection performance: (a) the location of the observer's cutpoint or decision criterion and (b) the observer's sensitivity, or ability to discriminate between signal-plus-noise and noise-alone, irrespective of any chosen cutpoint. The two measures characterize performance better than the single one often used, namely, the signal-to-noise ratio (SNR), expressed in energy terms, that is necessary to provide a 50 percent hit (correct signal acceptance) probability at a fixed decision cutpoint, say, one yielding a conditional false-alarm probability of 0.05. A curve showing conditional hit probability versus SNR is familiar in detection theory and is equivalent to the power function of statistical theory. These curves can be derived as a special case of the ROCs for the SNR values of interest.

4. The ROC

The ROC is a plot of the conditional hit (true-positive) proportion against the conditional false-alarm (false-positive) proportion for all possible locations of the decision cutpoint. Peterson's and Birdsall's SDT specifies a form of ROC that begins at the lower left corner (where both proportions are zero) and rises with smoothly decreasing slope to the upper right corner (where both proportions are 1.0)—as the decision cutpoint varies from very strict (at the right end of the decision variable) to very lenient (at the left end). This is a 'proper' ROC, and specifically not a 'singular' ROC, one intersecting the axes at points between 0 and 1.0. The locus of the curve, or just the proportion of area beneath it, gives a measure of discrimination capacity. An index of a point along the curve, for example, the slope of the curve at the point, gives a measure of the decision cutpoint that yielded that point (the particular slope equals the criterial value of likelihood ratio). SDT specifies the discrimination capacity attainable by an 'ideal observer' for any SNR under various practical combinations of signal and noise parameters (e.g., signal specified exactly and signal specified statistically), and hence the human observer's efficiency can be calculated under various signal conditions. SDT also specifies the optimal cutoff as a function of the signal's prior probability and the benefits and costs of the decision outcomes.
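The shape of such a 'proper' ROC can be traced numerically. The sketch below assumes the equal-variance normal model; the signal strength and the particular cutpoints swept are illustrative choices, not values from the source.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def roc_point(cut, d=1.0):
    """Hit and false-alarm probabilities for one cutpoint on the evidence
    axis, assuming unit-variance normals with means d and 0 (illustrative)."""
    hit = 1.0 - phi(cut - d)   # P(evidence > cut | signal-plus-noise)
    fa = 1.0 - phi(cut)        # P(evidence > cut | noise-alone)
    return fa, hit

# Sweep the cutpoint from very strict to very lenient: the resulting points
# run from near (0, 0) toward (1, 1), with hit above false alarm throughout.
points = [roc_point(c) for c in (2.5, 1.5, 0.5, -0.5, -1.5)]
```

A strict cutpoint (2.5) yields few false alarms but also few hits; a lenient one (-1.5) yields many of both, which is exactly the tradeoff the ROC displays.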

5. Psychology’s Need For SDT And The ROC

Wilson P. Tanner, Jr., and John A. Swets, graduate students in psychology at Michigan in the early 1950s, became aware of Peterson’s and Birdsall’s work on the same campus as it began. Tanner became acquainted through his interest in mathematical and electronic concepts as models for psychological and neural processes and Swets was attracted because of his interest in quantifying the ‘instruction stimulus’ in psychophysical tasks, in order to control the observer’s response criterion. As they pursued studies in sensory psychology, these psychologists had become dissatisfied with psychophysical and psychological theories based on overlapping bell-shaped functions that assumed fixed decision cutpoints and incorporated measures only of discrimination capacity, which were possibly confounded by attitudinal or decision processes. Prominent among such theories were those of Gustav Theodor Fechner (1860), Louis Leon Thurstone (1927), and H. Richard Blackwell (1963); see Swets (1996, Chap. 1).

Fechner and Thurstone conceived of a symmetrical cutpoint, where the two distributions cross, because they presented two stimuli on each trial with no particular valence; that is, both stimuli were ‘signals’ (lifted weights, samples of handwriting). Fechner also studied detection of single signals, but did not compare them to a noise-alone alternative and his theory was essentially one of variable representations of a signal compared to an invariant cutpoint. This cutpoint was viewed as a physiologically determined sensory threshold (akin to all-or-nothing nerve firing) and hence the values of the sensory variable beneath it were thought to be indistinguishable from one another.

Blackwell’s task and model were explicitly of signals in noise, but the observer was thought to have a fixed cutpoint near the top of the noise function, for which the false-alarm probability was negligible, and, again, the values below that cutpoint were deemed indistinguishable. For both threshold theorists, the single measure of performance was the signal strength required to yield 50 percent correct positive responses, like the effective SNR of electronic detection theory, and it was taken as a statistical estimate of a sensory threshold. In psychology, the curve relating percentage of signals detected to signal strength is called the psychometric function. There had been some earlier concern in psychology for the effect of the observer’s attitude on the measured threshold, e.g., Graham’s (1951) call for quantification of instruction stimuli in the psychophysical equation, but no good way to deal with non-sensory factors had emerged.

In 1952, Tanner and Swets joined Peterson and Birdsall as staff in a laboratory of the electrical engineering department called the Electronic Defense Group. They were aware of then-new conceptions of neural functioning in which stimulus inputs found an already active nervous system, rather than neurons lying quiescent until fired at full force. In short, neural noise as well as environmental noise was likely to be a factor in detection, and the observer's task could then readily be conceived as a choice between statistical hypotheses. Though it was a minority idea in the history of psychophysics (see Corso 1963), the possibility that the observer deliberately sets a cutpoint on a continuous variable (weight of evidence) seemed likely to Tanner and Swets. A budding cognitive psychology (e.g., the 'new look' in perception) supported the notion of extrasensory determinants of perceptual phenomena, represented in SDT by expectancies (prior probabilities) and motivation (benefits and costs).

6. Psychophysical Experiments

The new SDT was first tested in Swets's doctoral thesis in Blackwell's vision laboratory (Tanner and Swets 1954; Swets et al. 1961; see Swets 1964, Chap. 1). It was then tested with greater control of the physical signal and noise parameters in new facilities for auditory research in the electrical engineering laboratory. Sufficiently neat empirical ROCs were obtained in the form of the curved arc specified by SDT. The ROCs predicted by Blackwell's 'high-threshold' theory—straight lines extending from some point along the left axis, dependent on signal strength, to the upper right corner—clearly did not fit the data. Other threshold theories, e.g., 'low-threshold' theories, did not fare much better in experimental tests (Swets 1961; see Swets 1964, Chap. 4). It was recognized later that linear ROCs of slope 1, intersecting the left and upper edges of the graph symmetrically, were predicted by several other measures and their implicit models (Swets 1996, Chap. 3) and also gave very poor fits to sensory data (and to other ROC data to be described; Swets 1996, Chap. 2). After a stint as an observer in Swets's thesis studies, David M. Green joined the engineering laboratory as an undergraduate research assistant. He soon became a full partner in the research and co-authored a 1956 laboratory technical report with Tanner and Swets that included the first auditory studies testing SDT. Birdsall collaborated in research with the psychologists over the following decades and commented on a draft of this research paper.

The measure of discrimination performance used first was denoted d' ('d prime') and defined as the difference between the means of two implicit, overlapping, normal (Gaussian) functions for signal-plus-noise and noise-alone, of equal variance, divided by their common standard deviation. A value of d' can be calculated for each ROC point, using the so-called single-stimulus yes–no method of data collection. Similarly, a value of d' can be calculated for an observer's performance under two other methods of data collection: the multiple-stimulus forced-choice method (one stimulus being signal-plus-noise and the rest noise-alone) and the single-stimulus confidence-rating method. Experimental tests showed that when the same signal and nonsignal stimuli were used with the three methods, essentially the same values of d' were obtained. The measures specified by threshold theories had not achieved that degree of consistency or internal validity.
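Under the equal-variance normal model just described, d' can be recovered from a single yes–no ROC point as the difference between the normal-deviate (z) transforms of the hit and false-alarm rates. A minimal sketch; the example rates are illustrative, not data from the studies cited.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' from one yes-no ROC point, assuming equal-variance normal
    evidence distributions: the z-transform of the hit rate minus the
    z-transform of the false-alarm rate."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A hit rate of 0.84 with a false-alarm rate of 0.16 implies a d' near 2,
# since z(0.84) is about +1 and z(0.16) is about -1.
```

When hit and false-alarm rates are equal, the observer is at chance and d' is zero, whatever the cutpoint.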

For many, the most conclusive evidence favoring SDT's continuous decision variable and rejecting the high-threshold theory came from an experiment suggested by Robert Z. Norman, a Michigan mathematics student. Carried out with visual stimuli, the critical finding was that a second choice made in a four-alternative forced-choice test, when the first choice was incorrect, was correct with probability greater than 1/3, and that the probability of its being correct increased with signal strength (Swets et al. 1961; see Swets 1964, Chap. 1).
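The logic of the second-choice test can be illustrated with a simulation: if the observer ranks continuous evidence values rather than registering an all-or-nothing threshold event, the signal interval tends to rank second when it fails to rank first. This Monte Carlo sketch assumes unit-variance normal evidence and an arbitrary signal strength; all parameters are illustrative, not those of the original experiment.

```python
import random

def second_choice_experiment(d=1.0, trials=20000, seed=0):
    """Estimate P(second choice correct | first choice incorrect) in a
    four-alternative forced-choice task under a continuous-evidence model."""
    rng = random.Random(seed)
    second_correct = wrong_first = 0
    for _ in range(trials):
        # Interval 0 holds signal-plus-noise; intervals 1-3 hold noise-alone.
        obs = [rng.gauss(d, 1.0)] + [rng.gauss(0.0, 1.0) for _ in range(3)]
        ranked = sorted(range(4), key=lambda i: obs[i], reverse=True)
        if ranked[0] != 0:          # first choice missed the signal
            wrong_first += 1
            if ranked[1] == 0:      # second choice lands on the signal
                second_correct += 1
    return second_correct / wrong_first
```

Under a high-threshold account the second choice would be a pure guess among the remaining three intervals (probability 1/3); the continuous model predicts, and the simulation shows, a conditional probability above that chance level.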

Validating the rating method, as mentioned above, had the important effect of providing an efficient method for obtaining empirical ROCs. Whereas under the yes–no method the observer is induced to set a different cutpoint in each of several observing sessions (each providing an ROC point), under the rating method the observer effectively maintains several decision cutpoints simultaneously (the boundaries of the rating categories) so that several empirical ROC points (enough to define a curve) can be obtained in one observing session.
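The rating-method construction of an ROC can be sketched as a cumulation over confidence categories, from strictest to most lenient, with each category boundary acting as one of the simultaneous cutpoints. The trial counts below are invented for illustration.

```python
# Hypothetical tallies of responses in five confidence categories, ordered
# from 'sure signal' down to 'sure noise', for signal and noise trials.
signal_counts = [40, 25, 15, 12, 8]
noise_counts = [5, 10, 15, 25, 45]

def rating_roc(signal_counts, noise_counts):
    """Empirical ROC points from rating data: cumulate counts from the
    strictest category, yielding one (false-alarm, hit) pair per boundary."""
    n_signal, n_noise = sum(signal_counts), sum(noise_counts)
    points, hits, fas = [], 0, 0
    for s, n in zip(signal_counts, noise_counts):
        hits += s
        fas += n
        points.append((fas / n_noise, hits / n_signal))
    return points

points = rating_roc(signal_counts, noise_counts)
```

A single rating session thus yields as many ROC points as there are category boundaries, where the yes–no method yields only one point per session.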

7. Dissemination Of SDT In Psychology

Despite the growing evidence for it, acceptance of SDT was not rapid or broad in psychophysical circles. This was partly because threshold theory had been ingrained in the field for a century (as a concept and as a collection of methods and measures), and probably also because SDT arose in an engineering context (and retained the engineers' terminology), the early visual data were noisy, and the first article (Tanner and Swets 1954) was cryptic. Dissemination was assisted when J. C. R. Licklider brought Swets and Green to the Massachusetts Institute of Technology, where they participated in Cambridge's hotbed of psychophysics and mathematical psychology, and where a special summer course they offered for postgraduates led to a published collection of approximately 35 of the articles on SDT in psychology that had appeared by then, along with tables of d' (Swets 1964). Licklider also hired both of them part-time at the research firm of Bolt Beranek and Newman Inc., where they obtained contract support from the National Aeronautics and Space Administration to write a systematic textbook (Green and Swets 1966). Meanwhile, Tanner mentored a series of exceptional graduate students at Michigan. All were introduced by Licklider to the Acoustical Society of America, where they enjoyed a feverishly intense venue for discussion of their ideas at biannual meetings and in the Society's journal. Another factor was the active collaboration with Tanner of James P. Egan and his students at Indiana University.

8. Extensions Of SDT In Psychology

Egan recognized the potential for applying SDT in psychology beyond the traditional psychophysical tasks. He extended it to the less tightly defined vigilance task, that is, the practical military and industrial observing task of ‘low-probability watch’ (Egan et al. 1961–see Swets 1964, Chap. 15), and also to speech communication (Egan and Clarke 1957–see Swets 1964, Chap. 30). He also brought SDT to purely cognitive tasks with experiments in recognition memory (see Green and Swets 1966, Sect. 12.9). Applications were then made by others to those tasks, and also to tasks of conceptual judgment, animal discrimination and learning, word recognition, attention, visual imagery, manual control, and reaction time (see Swets 1996, Chap. 1). A broad sample of empirical ROCs obtained in these areas indicates their very similar forms (Swets 1996, Chap. 2). An Annual Review chapter reviews applications in clinical psychology, for example, to predicting acts of violence (McFall and Treat 1999).

9. Applications In Diagnostics

An analysis of performance measurement in information retrieval suggested that two-by-two contingency tables for any diagnostic task were grist for SDT's mill (Swets 1996, Chap. 9). This idea was advanced considerably by Lusted's (1968) application to medical diagnosis. A standard protocol for evaluating diagnostic performance via SDT methods, with an emphasis on medical images, was sponsored by the National Cancer Institute (Swets and Pickett 1982). By 2000, 'ROC' was being specified as a key word in over 1,000 medical articles each year, ranging from radiology to blood analysis. Other diagnostic applications are being made to aptitude testing, materials testing, weather forecasting, and polygraph lie detection (Swets 1996, Chap. 4).
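How a two-by-two contingency table becomes grist for SDT's mill can be sketched as follows. The counts are invented for illustration, and the area formula A = Phi(d'/sqrt(2)) assumes the equal-variance normal model.

```python
from statistics import NormalDist

# Hypothetical diagnostic outcomes: rows are truth, columns are decisions.
true_pos, false_neg = 45, 5     # condition ('signal') present
false_pos, true_neg = 10, 40    # condition ('signal') absent

hit_rate = true_pos / (true_pos + false_neg)    # sensitivity
fa_rate = false_pos / (false_pos + true_neg)    # 1 - specificity

# The table yields one ROC point; under the equal-variance normal model
# that point implies a d' and hence an area under the fitted ROC curve.
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)
auc = NormalDist().cdf(d_prime / 2 ** 0.5)
```

The area measure summarizes discrimination capacity independently of where the diagnostician happened to set the decision cutpoint, which is what makes the ROC framing preferable to raw percent correct in such tables.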

A recent development is to use SDT to improve, as well as to evaluate, diagnostic accuracy. Observers’ ratings of the relevant dimensions of a diagnostic task—e.g., perceptual features in an X-ray—are merged in a manner specified by the theory to yield an optimal estimate of the probability that a ‘signal’ is present (Swets 1996, Chap. 8). The latest work both in psychology and diagnostics has been described didactically (Swets 1998).


References:

  1. Blackwell H R 1963 Neural theories of simple visual discriminations. Journal of the Optical Society of America 53: 129–60
  2. Corso J F 1963 A theoretico-historical review of the threshold concept. Psychological Bulletin 60: 356–70
  3. Fechner G T 1860 Elemente der Psychophysik. Breitkopf & Härtel, Leipzig, Germany [English translation of Vol. 1 by Adler H E 1966. In: Howes D H, Boring E G (eds.) Elements of Psychophysics. Holt, Rinehart and Winston, New York]
  4. Graham C H 1951 Visual perception. In: Stevens S S (ed.) Handbook of Experimental Psychology. Wiley, New York
  5. Green D M, Swets J A 1966 Signal Detection Theory and Psychophysics. Wiley, New York. Reprinted 1988 by Peninsula, Los Altos Hills, CA
  6. Lusted L B 1968 Introduction to Medical Decision Making. C. C. Thomas, Springfield, IL
  7. McFall R M, Treat T A 1999 Quantifying the information value of clinical assessments with signal detection theory. Annual Review of Psychology 50: 215–41
  8. Neyman J, Pearson E S 1933 On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London A231: 289–311
  9. Peterson W W, Birdsall T G, Fox W C 1954 The theory of signal detectability. Transactions of the Institute of Radio Engineers Professional Group on Information Theory, PGIT 4: 171–212. Also in: Luce R D, Bush R R, Galanter E (eds.) 1963 Readings in Mathematical Psychology. Wiley, New York, Vol. 1
  10. Swets J A (ed.) 1964 Signal Detection and Recognition by Human Observers. Wiley, New York. Reprinted 1988 by Peninsula, Los Altos Hills, CA
  11. Swets J A 1996 Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers. L. Erlbaum Associates, Mahwah, NJ
  12. Swets J A 1998 Separating discrimination and decision in detection, recognition, and matters of life and death. In: Osherson D (series ed.), Scarborough D, Sternberg S (vol. eds.) An Invitation to Cognitive Science: Vol. 4, Methods, Models, and Conceptual Issues. MIT Press, Cambridge, MA
  13. Swets J A, Pickett R M 1982 Evaluation of Diagnostic Systems: Methods from Signal Detection Theory. Academic Press, New York
  14. Tanner W P Jr, Swets J A 1954 A decision making theory of visual detection. Psychological Review 61: 401–9
  15. Thurstone L L 1927 A law of comparative judgment. Psychological Review 34: 273–86