Neural Basis Of Visual Perception Research Paper


Visual perception requires that patterns of light energy are encoded into neural signals in the retina of the eye, and that these signals are transformed as they pass through the neural network of the retina and are transmitted to a series of interconnected brain areas. These transformations create patterns of neural activity that can represent the visual information needed for recognizing objects, people, and events; for location in the environment; and for guiding actions.



1. Structure Of The Visual Pathway

Encoding begins in the photoreceptor cells (rods and cones) in the retina, which signal the local light intensity falling on each receptor by their levels of electrical polarization. A network of horizontal, bipolar, and amacrine cells connect these receptors to the retinal ganglion cells. The axons of these cells form the optic nerve, and so the signals that they carry constitute the information that the eye sends to the brain; the nature of this information is discussed in Sect. 3. At the optic chiasm, these axons are routed so that signals from the two eyes can be transmitted to each side of the brain, making binocular integration at a higher level possible.

The lateral geniculate nucleus (LGN) of the thalamus provides a relay station where visual signals can be modulated, but the next major reorganization occurs when the signals arrive in the occipital lobe of the cerebral cortex. The primary receiving area, at the posterior pole of the brain, is known as striate cortex, or cortical area V1. It is surrounded by a series of extra-striate visual areas. These areas can be distinguished both by specializations of function (see Sect. 2.4) and by the fact that many of them form distinct, ordered maps of the visual field. Area V2 receives much of the input from V1, and sends information on to areas V3, V4, and V5. Pathways through these areas send information to the temporal and parietal lobes of the brain.




Within each cortical area, most connections between neurons are quite local. However, information is passed between cortical areas in fiber tracts, in an orderly way. Specific layers of the cortex receive input from areas at a lower hierarchical level, while other layers either transmit to higher levels, or receive feedback from higher levels. Thus, cortical areas are commonly thought of as a set of modules, each performing particular visual computations and passing the results of these computations to other modules.

A minority of fibers in the optic nerve transmit information not to the cerebral cortex, but to midbrain structures, in particular the superior colliculus, which also incorporates a topographic map of the visual field. In lower vertebrates, the homologous structures provide the main central visual pathway, but in mammals, certainly in primates, this pathway serves the function of orienting to stimuli by eye movements rather than any complex analysis of the visual image, and probably does not contribute directly to conscious visual perception. However, there are connections running both ways between this subcortical visual pathway and the much more elaborate cortical network for visual information processing.

2. Key Concepts Of Neural Representation And Their Evolution

2.1 Specific Nerve Energies And Place Coding

In developing modern concepts of the neural basis of vision, an early key idea was that messages from the eye to the brain could convey distinct information according to which element of the nervous system carried them. Thomas Young, around 1800, proposed that different colors were represented according to the degree to which three forms of ‘sensitive filament’ in the eye were stimulated, a view that in its essentials is accepted today. Nineteenth-century developments of this idea were that each separate fiber in the optic nerve is associated with a ‘local sign’ (Lotze)—that is, we can know which point in space stimulated our eyes from which particular fiber is active—and the ‘doctrine of specific nerve energies’ (Müller)—that the quality of stimulation (e.g., sound versus light) is conveyed by which fibers carry a signal, not by any distinct quality in the signals themselves. This principle (sometimes called ‘place coding’ or ‘labeled line’) is now recognized as fundamental for understanding how visual information is represented within the brain, as well as in the sensory pathways leading to it.

2.2 Univariant Coding By Impulse Frequency

The work of Adrian in the 1920s established that sensory information is carried by a series of nerve impulses (action potentials). Since the impulses are identical all-or-none events, quantitative variations, e.g., in light contrast, have to be encoded by the rate at which impulses occur. In principle, more complex statistical properties of the train of impulses could carry rich information, but visual neurophysiology has not yet yielded convincing examples of this. Thus, the activity of a single neuron is univariant—it can encode only a single variable in the visual image. The classic example is in color vision: a particular receptor may respond relatively more strongly to green light than to red light. However, because its response is univariant, it can give indistinguishable responses to a dim green light and a bright red light. Thus, to represent even a single variable, within a multidimensional domain such as color, requires information to be combined or compared from two or more neural signals.
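The principle of univariance can be illustrated with a small numerical sketch. The receptor sensitivities below are invented for illustration (they are not measured cone data): each receptor's response is simply intensity times its sensitivity to the stimulating wavelength, so a single response confounds wavelength with intensity.

```python
# Hypothetical spectral sensitivities of two receptor types:
M_SENS = {"green": 0.8, "red": 0.2}   # "M-like" receptor: prefers green
L_SENS = {"green": 0.3, "red": 0.9}   # "L-like" receptor: prefers red

def response(sens, wavelength, intensity):
    """Univariant response: a single scalar activity level."""
    return sens[wavelength] * intensity

# A dim green light and a bright red light drive the M-like receptor equally:
dim_green = response(M_SENS, "green", 1.0)    # 0.8
bright_red = response(M_SENS, "red", 4.0)     # 0.8
assert dim_green == bright_red                # indistinguishable to one cell

# Comparing the two receptor types recovers wavelength regardless of intensity:
def hue_signal(wavelength, intensity):
    m = response(M_SENS, wavelength, intensity)
    l = response(L_SENS, wavelength, intensity)
    return m / (m + l)                        # intensity cancels in the ratio

assert hue_signal("green", 1.0) == hue_signal("green", 4.0)
assert hue_signal("green", 1.0) != hue_signal("red", 1.0)
```

Because the ratio of the two responses is independent of intensity, a comparison across receptor types recovers information that no single univariant signal can supply.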

2.3 Receptive Fields And Tuned Filters

These principles of neural coding have been particularly applied in considering how the visual system represents spatial information. A concept which developed early was that each visual neuron had a receptive field—a region of the field of view within which light could influence the activity of the neuron. Further work showed that receptive fields had internal structure, such that a particular spatial pattern of light and dark within this region was optimal for activating a given neuron. This led to the idea that neurons could be ‘feature detectors,’ signaling the presence of a specific pattern element, such as a small black moving spot that might be a fly in the visual field of a frog. However, in the mammalian brain at least, it currently seems unlikely that any neuron on its own can provide such a signal; in all cases that have been investigated, the distribution of activity across some population of neurons would be needed to determine reliably the presence of a particular feature or object. This idea of population coding should not obscure the fact that, in visual brain areas, coding is sparse. That is, for any particular visual stimulus, only a small fraction of the available neurons are active, and so it is appropriate to think of particular subsets of neurons as being involved in representing particular visual attributes.

Neurophysiological and psychophysical studies in the 1960s and 1970s concentrated on how populations of neurons in V1 encoded particular visual properties, such as contour orientation. Each neuron was found to be a ‘tuned filter,’ which responded to a limited range of orientations. This makes it possible (a) to derive orientation information from the peak of activation across a distribution of differently tuned filters; and (b) to segregate visual information according to the local orientation, which may be valuable for segmenting and grouping the scene into distinct objects and surfaces. Cortical neurons also act as arrays of tuned filters for other properties, such as direction of motion and color.
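Deriving orientation from the peak of activation across a bank of tuned filters can be sketched as follows. The Gaussian tuning curves, 20° tuning width, and 15° spacing of preferred orientations are illustrative choices, not measured V1 parameters; the readout doubles all angles because orientation wraps at 180° rather than 360°.

```python
import numpy as np

prefs = np.arange(0, 180, 15)                  # preferred orientations (deg)

def responses(stim_deg, sigma=20.0):
    """Gaussian tuning on the 180-deg-periodic orientation circle."""
    d = (stim_deg - prefs + 90) % 180 - 90     # wrapped orientation difference
    return np.exp(-d**2 / (2 * sigma**2))

def decode(r):
    """Population-vector readout: circular mean on doubled angles."""
    ang = np.deg2rad(2 * prefs)
    vec = np.sum(r * np.exp(1j * ang))
    return (np.rad2deg(np.angle(vec)) / 2) % 180

# The decoded peak of the population response recovers the stimulus
# orientation, even though no single filter signals it unambiguously:
assert abs(decode(responses(37.0)) - 37.0) < 1.0
```

No individual filter in this bank reports the stimulus orientation; it is the distribution of activity across the differently tuned units that carries the information.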

2.4 The Hierarchy Of Multiple Visual Areas

The finding that neurons in area V1 were tuned to a range of stimulus properties led to exploration of higher levels of cortical processing. Experiments with macaques, from the mid-1970s onwards, showed that there is a multiplicity of distinct visual areas beyond V1. As well as progressive transformations through a hierarchy of visual areas, there was found to be parallel processing by areas such as V4 and V5, which receive information from V1 via separate routes. These areas are specialized, so that, for example, V5 contains a high proportion of neurons tuned to specific directions of motion but few that are wavelength tuned, while V4 shows the opposite balance. This has led to a picture of functional segregation, within the cortex, of representations which emphasize different aspects of visual information and which presumably serve distinct purposes.

It was recognized that this picture of functional segregation was consistent with findings in human patients who had localized brain damage; for instance, such damage could abolish color discrimination with little effect on other aspects of visual perception. In the 1990s new methods of functional brain imaging made it possible to explore localization of function in the normal human brain. These have confirmed the multiplicity of visual areas, each responding to particular stimulus properties and activated in different visual tasks. The fact that brain activation depends on the particular task the subject is set, and on his/her attention to the stimulus, indicates that vision cannot be thought of as a purely ‘bottom-up’ neural process, in which information originating in the receptors is progressively transformed. Rather, the results of high-level processes can modulate how low-level processes operate. An anatomical basis for this is provided by ‘descending’ connections from higher to lower visual areas, which are at least as rich and complex as those transmitting information from lower to higher processing.

These concepts of neural information processing inform current research on all aspects of vision. In this research paper they are illustrated primarily with respect to the representation of pattern, shape, and spatial layout. They also underlie current understanding of other perceptual functions which, however, will not be developed here.

3. Representation Of Spatial Pattern And Shape

3.1 Receptive Field Structure And Specificity

The richness of vision as a source of information comes largely through the detailed pattern of light and dark that arises from the spatial structure of the environment. Encoding of this pattern begins in the retina. Receptive fields of retinal bipolar and ganglion cells have a center–surround opponent organization. In an ‘on-center’ cell, light in the receptive-field center activates the cell, but this is countered by inhibition from any light in the surround. Off-center cells show the opposite relationship. In either case, the activity of the cell signals the presence of spatial contrast, with little or no response to a uniform field of light covering the receptive field.
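The center–surround organization is commonly modeled as a difference of Gaussians. The sketch below, with arbitrary filter sizes (real ganglion-cell parameters vary with retinal eccentricity), shows how such an on-center unit signals local contrast while giving essentially no response to uniform illumination.

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    """On-center difference-of-Gaussians receptive field (illustrative sizes)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    k = center - surround
    return k - k.mean()            # enforce zero net response to uniform light

k = dog_kernel()

uniform = np.ones((21, 21))                          # featureless field
spot = np.zeros((21, 21)); spot[8:13, 8:13] = 1.0    # small bright central spot

resp_uniform = float((k * uniform).sum())
resp_spot = float((k * spot).sum())

assert abs(resp_uniform) < 1e-9     # no response without spatial contrast
assert resp_spot > 0.1              # strong response to a spot on the center
```

An off-center cell would correspond to the same kernel with its sign inverted, responding to a dark spot on a bright surround.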

The signals leaving the retina in the optic nerve therefore highlight the field locations where spatial structure is present, but carry little explicit information about the nature of this structure. For instance, ganglion cell receptive fields are circular, and so respond equally to the contrast in an edge crossing the field whether it is horizontal, vertical, or oblique. Neurons in area V1 also respond to local contrast, but mostly have receptive fields with a preferred orientation. For example, if a vertical contour yields the maximum response, this response falls off sharply for a contour oriented 30° either side of vertical. Neurons with different preferred orientations ensure that contours at any angle around the clock are represented by activity within V1. Orientation seems to be a key organizing principle for V1, indicated by the layout of neurons in a semi-regular array of ‘orientation stripes’ across the cortical surface; presumably this provides the initial basis for the neural representation of shapes and textures.

A second important organizing principle is that of multi-scale representation. Some cortical neurons respond to very fine-scale spatial detail; others, covering the same region of space with larger receptive fields, respond to similar features on a much coarser scale. Mathematically, this scale variation has the effect of a local analysis in terms of component spatial frequencies of the image. Computational studies show that this is an efficient way to capture the significant structure of scenes.
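The effect of scale variation can be illustrated by applying center–surround-style filters of two sizes to a compound pattern; each size then behaves as a band-pass spatial-frequency channel. The one-dimensional filters and grating frequencies below are illustrative, not physiological measurements.

```python
import numpy as np

x = np.arange(1024)
fine = np.sin(2 * np.pi * x / 8)       # high spatial-frequency component
coarse = np.sin(2 * np.pi * x / 64)    # low spatial-frequency component

def dog1d(sigma_c, n=201):
    """1-D difference of Gaussians; surround twice the center width."""
    ax = np.arange(n) - n // 2
    g = lambda s: np.exp(-ax**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    k = g(sigma_c) - g(2 * sigma_c)
    return k - k.mean()

def band_response(component, sigma_c):
    """RMS output of the filter applied to one image component."""
    r = np.convolve(component, dog1d(sigma_c), mode="valid")
    return float(np.sqrt(np.mean(r**2)))

# A small receptive field passes the fine grating; a large one, the coarse:
assert band_response(fine, 1.5) > 5 * band_response(coarse, 1.5)
assert band_response(coarse, 12.0) > 5 * band_response(fine, 12.0)
```

Scaling the same receptive-field shape thus partitions the image into spatial-frequency bands, which is the sense in which the multi-scale array performs a local frequency analysis.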

3.2 From Measurements To Spatial Primitives

The multi-scale array of orientation-selective neurons in V1 can be considered as making a comprehensive set of measurements of the image. To develop a useful representation of scenes and objects, these measurements need to define primitives—elements of an ‘alphabet’ of spatial structure through which more complex entities can be represented. Thus, a measurement of contrast in a particular orientation and scale does not itself unambiguously define an edge segment; but in conjunction with nearby measurements at a range of scales, it can provide the basis for an assertion of a local edge-segment primitive. While such primitives are important in the computational description of what the visual system is doing, it is unclear whether neurons in visual cortex explicitly represent primitives, or rather whether they are implicit in the pattern of connections amongst V1 cells and from V1 to higher levels.

3.3 Grouping Processes

To yield descriptions of object shapes and bounded surfaces, local edge primitives must be grouped together to form extended contours and areas with a common texture—effects whose importance was first recognized by the Gestalt psychologists. A plausible neural basis for grouping exists in the horizontal connections between V1 cells tuned to a similar orientation. Contours, in turn, define the boundaries of surfaces and objects. Neurons have been reported in V2 which respond to the characteristic alignments of elements where one surface occludes a background surface. Local neural activity in V1 and V2 also reflects the key perceptual distinction between regions that form a distinct figure, and those that form the ground.

3.4 Recognition

One important end point of the processing of spatial pattern is the visual recognition of objects, faces, and scenes. Object recognition can be specifically impaired (agnosia) by damage to the temporal lobe of the brain, and a specific loss of the ability to recognize faces (prosopagnosia) may also occur. Functional imaging studies show an area that is selectively activated during the processing of faces (the fusiform face area), and another that is active in visual recognition of locations (the parahippocampal place area). Human studies cannot show how individual neurons are responding, but in the temporal lobe of the monkey, single neurons are found to respond specifically to faces. The pattern of activation over a group of such cells conveys sufficient information to distinguish one individual from another. In the inferotemporal region, cells respond to features that can define different types of object. In both cases, the response shows position generalization; that is, it is selective for a particular class of stimulus, irrespective of its position within a wide receptive field.

Thus, the principles of a sparse population code, found in the representation of local pattern in V1, appear to apply also to the higher level representation of visual identity. However, the hierarchy of neural transformations which connects the former with the latter is very poorly understood.

4. Spatial Location And Layout

As well as allowing the identification of objects and scenes, vision defines the space within which objects are located and are the targets of actions. Disturbances of spatial abilities are found with damage to the parietal lobe in both human and monkey brains. These disturbances are quite distinct from the impairment of object recognition, leading to the proposal of a broad division between a ventral stream of cortical visual processing, from V1 to temporal lobe and concerned with what we are looking at, versus a dorsal stream from V1 to parietal lobe which conveys the information needed to know where it is and how we can act upon it. Different specialized visual areas contribute to the two streams; in particular the motion area V5 (MT) sends information primarily via the dorsal stream.

Spatial information requires a frame of reference. Area V1, and other early visual cortical areas, appear to be organized within a retinotopic frame—that is, the receptive fields, and their topographic arrangement across the cortex, are defined in terms of image locations on the retina. Since the eyes are constantly moving in the head, a stable representation of space requires more than retinotopic information. Eye direction must also be factored in, to establish a head-centered frame. Such combination of retinal and eye-direction information occurs in a number of parietal areas. However, the properties of cells in different parts of parietal cortex suggest that they are preparing visual information in an appropriate form for controlling different action systems—e.g., eye movements, reaching and grasping by the hands, and locomotion. Each of these may require a different frame of reference. An alternative frame of reference is allocentric, i.e., referred to the layout of the environment rather than any part of the body of the observer. It has been suggested that this is a feature of the ventral stream, which has to identify objects independently of any particular viewpoint.

Parietal areas transmit information to motor-related areas of the frontal lobe, both directly and via the cerebellum. Comparison of cells in the parietal lobe with those in the premotor frontal areas to which they project emphasizes the difficulty of defining any boundary between visual and motor processing. For instance, activity in some parietal neurons is associated both with viewing a particular three-dimensional shape (e.g., a cylinder) and with the grasping action required by that particular shape. ‘Mirror neurons’ in premotor cortex respond both in association with the hand action of the monkey, and when the monkey is passively viewing another hand executing the same action. It is best to think in terms of visuo–motor circuits which embody the spatial information needed to control and define a particular type of action. It must be recognized that the brain contains multiple circuits for different action types, each of which may work within a spatial frame of reference appropriate for its own requirements.

5. The Binding Problem

Research on visual neuroscience has emphasized the fragmentation of brain processing. Many different neurons are activated by any visual stimulus. Distinct brain areas and subsets of neurons process different attributes such as shape, color, and motion, and there are multiple spatial representations in different visuo–motor circuits. In contrast, our experience of the visual world appears unified, and the different attributes of a visual object are bound together—if a white cup and a green book are together on the table, we do not confuse which color belongs to which object. How perceptual unity arises from a fragmented neural representation is known as the binding problem (Treisman 1996).

Two, not necessarily incompatible, answers have been widely discussed. One is based on Anne Treisman’s feature integration theory of attention. This theory, supported by evidence from visual search, proposes that a ‘spotlight’ of focal attention modulates processing within distinct ‘feature maps’ for different attributes, and so multiple feature attributes are accessed only for the object that is the current focus of attention. The second idea is that cortical neurons show oscillations in their activity at frequencies around 40 Hz, and that temporal coherence between these oscillations acts as a label for different signals contributing to the representation to a common object. Such coding by temporal synchrony has attractions; it could provide a basis for understanding how features at different locations within a single map can be linked or segregated in object-based representations, as well as how binding occurs across different maps. However, the evidence for it remains quite limited.
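The synchrony idea can be caricatured in a toy simulation. Everything below—the 40 Hz oscillation, the firing threshold, the choice of phases—is purely illustrative and is not a model of real cortical dynamics; the point is only that zero-lag correlation can serve as a label grouping signals that belong to the same object.

```python
import numpy as np

t = np.linspace(0, 0.5, 2000)              # 0.5 s of activity
# Binary "spiking" driven by a 40 Hz oscillation at a given phase:
osc = lambda phase: (np.sin(2 * np.pi * 40 * t + phase) > 0.95).astype(float)

shape_A, color_A = osc(0.0), osc(0.0)      # features of object A: in phase
shape_B, color_B = osc(np.pi), osc(np.pi)  # features of object B: anti-phase

def sync(a, b):
    """Zero-lag correlation between two activity traces."""
    return float(np.corrcoef(a, b)[0, 1])

# Features of the same object are synchronous; features of different
# objects are not, so the temporal label keeps the bindings distinct:
assert sync(shape_A, color_A) > 0.9
assert sync(shape_A, color_B) < 0.1
```

In this caricature the correct shape–color pairings can be read off from temporal coherence alone, without any dedicated "conjunction" units; whether cortex actually exploits such a code is, as noted above, still an open question.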

6. Neural Basis Of Visual Sensitivity And Thresholds

Much detailed knowledge of vision comes from the analysis of psychophysical thresholds—measures of how sensitive the visual system is in making fine discriminations. If these data are to be integrated with knowledge from neurophysiology, the relation between psychophysical threshold and patterns of neural activity needs to be understood.

‘Threshold’ conveys the suggestion of a stimulus intensity below which no neural activity is elicited. However, in practice, neurons are rarely silent, and their activity, even with a fixed stimulus, shows variability or ‘noise.’ Thus, the ability to detect weak stimuli is determined by the statistical problem of distinguishing a stimulus-driven change in activity from those due to noise.

Several studies have determined the ‘thresholds’ of single neurons in monkey visual cortex, in terms of the stimulus level which achieves a statistically defined level of reliable response. These can be compared with behavioral performance in detection; in some cases behavioral and neural measures can be made simultaneously. The information carried by single neurons proves to be remarkably close to that implied by the behavioral responses. This is remarkable because the animal has the opportunity to pool the information from thousands of neurons, which would be expected to yield much more reliable information. The data can be reconciled by realizing that, although large numbers of neurons may be involved, the random variations in their activity are correlated, so the number of genuinely independent neural signals is much smaller (Newsome et al. 1995).
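The effect of correlated noise on pooling can be checked in a short simulation. The single-neuron sensitivity (d′ = 0.5) and pairwise correlation (0.2) below are illustrative values, not measured data; noise is generated as a shared component plus private components so that every pair of neurons has the stated correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_dprime(n_neurons, rho, d_single=0.5, n_trials=20000):
    """Discriminability (d') of the average of n_neurons whose trial-to-trial
    noise shares a common component giving pairwise correlation rho."""
    shared = rng.standard_normal((n_trials, 1))
    private = rng.standard_normal((n_trials, n_neurons))
    noise = np.sqrt(rho) * shared + np.sqrt(1 - rho) * private
    pooled = noise.mean(axis=1)            # average across the pool
    # Each neuron's mean response shifts by d_single (in units of its noise
    # s.d.) between the two stimuli, so the pooled mean shifts by the same
    # amount; d' is that shift divided by the pooled noise s.d.
    return d_single / pooled.std()

# Independent neurons: pooled sensitivity grows roughly as sqrt(N).
assert pooled_dprime(100, 0.0) > 3.5       # ~ 0.5 * sqrt(100) = 5
# Correlated neurons: sensitivity saturates near d_single / sqrt(rho).
assert pooled_dprime(100, 0.2) < 1.5       # ~ 0.5 / sqrt(0.2) ≈ 1.1
```

With even modest correlation, pooling a hundred neurons yields little more than pooling a handful of independent ones, which is why behavioral sensitivity can sit so close to single-neuron sensitivity.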

Whether a particular judgment depends on pooling activity in tens, hundreds, or thousands of neurons, there must be many more neurons in the same brain area whose activity is irrelevant because they do not respond selectively to the stimulus concerned. If activity of such neurons was included in the pooling process, it would introduce noise and degrade sensitivity. Thus sensitive discrimination, either in the laboratory or in real-life situations, must depend on selecting a small minority of neurons whose output is informative for the task in hand.

7. Modulation Of Neural Activity By Attention And Task Demands

It could be conceived that neural activity, relevant to a particular task, was channeled to appropriate decision and response mechanisms through a purely ‘bottom-up’ process. However, Sect. 3 has mentioned the ample evidence for ‘top-down’ control of visual processing, modulating activity at many levels according to the central decisions about what is relevant. For instance: (a) neural activity, as early in the pathway as the LGN, is strongly modulated according to the sleeping–waking state of the organism; (b) the response of single neurons in areas such as V4 and V5 in monkeys can be changed radically according to whether the preferred stimulus of the neuron is behaviorally relevant or irrelevant on a particular occasion; (c) functional brain imaging shows that the overall activation of an area such as V5 in humans is enhanced by attention to the property that it analyses (in this case, motion), even if the stimulus is unchanged; and (d) within a topographically organized area such as V1, activation of a particular region is enhanced when attention is directed to the corresponding region of visual space.

8. Plasticity And Learning In Visual Neural Organization

Section 7 discussed how the function of the visual system can be dynamically modulated from moment to moment. Visual neural function also shows long-term adaptive changes in response to the inputs it receives. For instance, if the input to visual cortex has an anomalous distribution across the retina (e.g., as a result of a congenital absence of cones in the central fovea), the usual retinotopic map in V1 is distorted to match this distribution. When the balance of input from the two eyes is disturbed (e.g., when the input of one eye is reduced during a critical period in early development), the cortex adjusts to receive less input from the less active eye. The plasticity seen in these abnormal cases must also play a role in normal development: development of the normal neural organization of visual brain areas must depend on the regularities and correlations in the visual input that these areas receive.

Visual abilities show strong learning effects; almost any visual discrimination becomes strikingly more sensitive with practice. This learning can be highly specific, for example showing improvement for stimuli only at the orientation that has been practiced, suggesting that changes occur at relatively low levels of visual processing such as V1. The discussion in Sect. 6 suggests that such learning may be based on establishing highly specific connections to decision mechanisms from the neurons most sensitive to the stimulus concerned. However, while changes with learning have been demonstrated in the responses of visual cortical neurons, the basis of these changes in modified connections is still speculative.

9. The Neural Basis Of Visual Awareness

The detailed understanding of how visual information is represented in the nervous system does not answer the question: what aspect of neural activity corresponds to the conscious awareness of the subject? Many aspects of vision (for example, which eye received a monocular stimulus) are not accessible to consciousness, even though the relevant information exists within the nervous system. Is there some subset of visual neurons whose activity contributes specifically to awareness?

Individuals with damage to V1 report that they are blind in the region of the visual field corresponding to the damage. They can nonetheless sometimes make reliable responses to stimuli they do not report seeing (a phenomenon known as ‘blindsight’). Thus there must be routes for visual information, probably through the superior colliculus in the midbrain, whose activity has no conscious correlate. However, it is not known whether V1 activity is itself a necessary correlate of conscious visual experience, or whether V1 simply serves as the only access route to higher processes that have such a correlate.

In binocular rivalry, discrepant patterns are presented to the two eyes, and alternate in perception. Neuronal activity can be monitored in monkeys who are also signaling which pattern is perceptually dominant. For some cortical neurons, activity fluctuates in correspondence with perceptual dominance, but for others there is no correlation. The proportion in the former group increases in going from lower to higher cortical areas (Logothetis 1998). These results suggest that awareness is not uniquely associated with particular cortical areas, but may emerge cumulatively as information is processed through the system. The true relation between brain and consciousness will remain speculative for many years, but visual processing is likely to provide the key test cases for this central problem of the human sciences.

Bibliography:

  1. Barlow H B 1995 The neuron doctrine in perception. In: Gazzaniga M S (ed.) The Cognitive Neurosciences. MIT Press, Cambridge, MA, pp. 415–35
  2. Churchland P S, Sejnowski T J 1992 The Computational Brain. MIT Press, Cambridge, MA
  3. De Valois R L, De Valois K K 1988 Spatial Vision. Oxford University Press, New York
  4. Gazzaniga M S, Ivry R B, Mangun G R 1998 Cognitive Neuroscience: The Biology of the Mind. W. W. Norton, New York, Chaps. 4–6
  5. Logothetis N K 1998 Object vision and visual awareness. Current Opinion in Neurobiology 8: 536–44
  6. Newsome W T, Shadlen M N, Zohary E, Britten K H, Movshon J A 1995 Visual motion: linking neuronal activity to psychophysical performance. In: Gazzaniga M S (ed.) The Cognitive Neurosciences. MIT Press, Cambridge, MA, pp. 401–3
  7. Treisman A 1996 The binding problem. Current Opinion in Neurobiology 6: 171–78