Virtual Reality Research Paper

The Experience of Virtual Reality

Virtual reality (VR) creates a sensory and psychological experience for users as an alternative to reality. More than just one technology, VR is an ever-growing set of tools and techniques that can be used to create the psychological sensation of being in an alternate space. Underpinning the techniques used to create compelling virtual environments is the basic observation that the information they deliver is ultimately processed by a human sensory and perceptual system that has evolved to interact with regularities occurring in the physical world (Gibson, 1966, 1979). The more one can provide the system with sensory inputs that simulate and effectively mimic those encountered in nature, the more convincing the resulting perceptual and cognitive experience will be for the user. The ultimate goal of designers and users of VR environments is a computer-generated simulation that is indistinguishable to the user from its real-world equivalent. Reaching toward this goal has already enabled us to realize some of VR’s potential for use in training, engineering, and scientific research and for providing uniquely gratifying entertainment experiences (Biocca, 1996; Hawkins, 1995).

Illusions for the Senses

The hardware and software used to create a VR system are designed to replicate the information available to the sensory/perceptual system in the physical world. In other words, a computer and its peripheral devices produce outputs that impinge on the body’s various senses, resulting in convincing illusions for each of these senses and thus a rich, interactive multimedia facsimile of real life. There are system components that create such illusions for each of the senses, in particular for vision, hearing, and touch (Burdea & Coiffet, 2003).

A virtual experience reproduces, to the extent that hardware and content are capable, the real-world experience of being in a space by creating the sights, sounds, and sense of touch associated with moving through and interacting with that place. Input devices determine what the user is doing and allow the VR environment to respond appropriately. The system monitors a user’s actions and updates the presented information so as to make the behaviors and outcomes in the virtual environment as realistic and congruent with the real world as possible.
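
This sense-update-render cycle can be pictured as a simple loop. The Python sketch below shows one way such a loop might be organized; the Tracker, World, and Renderer objects and their methods are hypothetical placeholders rather than any particular system’s API.

    # Minimal sketch of the core VR loop: read inputs, update the world,
    # and redraw the scene from the user's current viewpoint.
    # Tracker, World, and Renderer are hypothetical placeholder objects.

    import time

    def run_vr_loop(tracker, world, renderer, hz=90):
        frame_time = 1.0 / hz
        while renderer.is_open():
            start = time.perf_counter()

            # 1. Sense: where is the user, and what are they doing?
            head_pose = tracker.read_head_pose()   # position + orientation
            hand_pose = tracker.read_hand_pose()
            buttons = tracker.read_buttons()

            # 2. Update: make the virtual world respond congruently.
            world.step(dt=frame_time, hand=hand_pose, buttons=buttons)

            # 3. Render: draw the scene as seen from the tracked viewpoint.
            renderer.draw(world, viewpoint=head_pose)

            # Hold the loop close to the target refresh rate.
            elapsed = time.perf_counter() - start
            if elapsed < frame_time:
                time.sleep(frame_time - elapsed)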




Immersion and Presence

The most deeply compelling virtual reality experiences are associated with high levels of immersion. In the research community concerned with VR systems, immersion means the extent to which high-fidelity physical inputs (e.g., light patterns, sound waves) are provided to the different sensory modalities (vision, audition, touch) to create strong illusions of reality in each. This is in contrast to presence, the psychological sensation of being in a virtual place (Biocca, 1997). (Commentators in the video game community often use the term immersion to refer interchangeably to either physical inputs or psychological presence.)

Presence is the sensation of nonmediation while experiencing a mediated environment (Barfield & Weghorst, 1993; Lombard & Ditton, 1997; Steuer, 1995). In other words, media experiences such as cinema and VR can be so absorbing and compelling that the observer loses some sense of his or her physical surroundings (i.e., including the medium itself) and responds physically and emotionally in a way that is analogous to actually being in the mediated (i.e., represented) place.

Researchers studying presence further subdivide the concept into notions such as telepresence—the sense of being in some remote location represented by the medium (Hirose, Yokoyama, & Sato, 1993)—and social presence— the sensation of being with and interacting with someone in another place (Zhao, 2003). Applications of VR for telepresence can enable the exploration of dangerous (e.g., nuclear power plants) or hard-to-reach environments (e.g., remote medical diagnosis) from a distant location. Such applications of VR can even be accompanied by distant robotic effectors that respond to the physical movements of a user immersed in the VR environment. The result is a feeling of “being there” in the remote location.

There are many levels of immersion and presence that can be achieved through different combinations of input and display devices. These levels can be thought of as points along a “virtuality continuum” whose end points are the real world and completely virtual environments (Milgram & Kishino, 1994). Points in between correspond to what has come to be known as mixed reality. Mixed reality consists of augmented reality—the blending of mostly real and some virtual content—and augmented virtuality—mostly virtual, with some real content (games created for Sony’s EyeToy, which puts the player into a game on-screen, represent an example of augmented virtuality). Mixed reality is discussed in the section Future Directions.

The degree of presence experienced in a virtual environment tends to correspond to the degree of immersion. Technically speaking, modern PC- and console-based video games constitute a major portion of commercial VR applications. Although peripheral input and output devices are becoming popular to make these games more immersive (discussed below), these “desktop display” applications don’t quite live up to the archetypal sense of the term virtual reality.

The most compelling VR environments are implementations that literally envelop the user in a virtual world, surrounding the user with stereoscopic visual imagery and sound, tracking body motion, and responding to behavior in the environment. The user experiences the sensation of having entered a computer-generated landscape that surrounds him or her in all directions. Parts of the environment will allow for naturalistic, body-based interaction—the user can walk through the environment while the scene changes in realistic ways, can hear different spatialized sounds while visiting different locales, and can reach out to touch and manipulate various features.

Strangely, as the realism of virtual humans approaches that of actual humans, observers tend to experience an uncomfortable sensation much like that created by observing a human corpse. Even in the most (currently) hyperrealistic virtual humans, the perceptual system’s extreme competence in detecting tiny aberrations from normal behavior and appearance leads to this unsettling effect. This sensation, referred to as “The Uncanny Valley,” was first described by Mori (1970). Paradoxically, people seem to prefer watching and interacting with less realistic human characters.

Virtual Reality Technology

Virtual reality applications can range from simple desktop applications, wherein the virtual environment is presented in a window on the desktop, to immersive motion platform systems that provide users with a complete virtual experience (Burdea & Coiffet, 2003). VR systems can accordingly be divided into nonimmersive and immersive approaches.

Nonimmersive Virtual Reality

The most basic nonimmersive configuration is a virtual world in a window on a computer screen. This modality is, indeed, the basis for many computer games and online environments such as Second Life. These systems require little more than conventional computing capabilities, though high-performance graphics cards are a common component. Manipulation of the environment and navigation within the virtual space can be done using keyboard commands, mouse input, or gaming controllers. Keyboard-based input systems are necessarily limited by the small number of keys that have clear meanings in a VR environment (such as the arrow keys). Keyboards with extensive relabeling are available for gaming applications, though these tend to be very game specific. Mouse input provides a simple yet rich interface. The mouse can be used to pitch and roll the view, navigate using aircraft-like control for fly-throughs, or select specific objects or locations in the space by clicking on them. Some means, such as combinations of mouse button presses or keyboard selections, must be provided to switch among these usage modes. Joysticks provide a more natural interface to some virtual environments in that aircraft-like navigation maps more easily onto an aircraft-like control. Modern gaming controllers often have multiple joysticks and additional buttons with clear navigational and selection meanings. Systems such as VR Commander allow users to communicate with a virtual environment using voice commands, though this is accomplished primarily via simple mappings to keyboard or mouse events.
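
As a rough illustration of these input mappings, the Python sketch below shows one way mouse and keyboard events might drive camera control and mode switching in a hypothetical desktop viewer; the event format, the Camera fields, and the mode names are assumptions made for illustration.

    # Toy mapping of desktop input events to camera control, with a simple
    # mode switch. The event dictionaries and Camera fields are illustrative.

    import math
    from dataclasses import dataclass

    @dataclass
    class Camera:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0
        yaw: float = 0.0    # radians
        pitch: float = 0.0  # radians

    MODES = ("orbit", "fly", "select")

    def handle_event(event, camera, mode, speed=0.05):
        """Update the camera (or the mode) from one input event; return the mode."""
        if event["type"] == "key" and event["key"] == "tab":
            # A keyboard key cycles through the interaction modes.
            mode = MODES[(MODES.index(mode) + 1) % len(MODES)]
        elif event["type"] == "mouse_move" and mode == "orbit":
            # Mouse motion pitches and yaws the view.
            camera.yaw += event["dx"] * 0.005
            camera.pitch += event["dy"] * 0.005
            camera.pitch = max(-math.pi / 2, min(math.pi / 2, camera.pitch))
        elif event["type"] == "key" and mode == "fly":
            # Arrow keys translate along the current heading, aircraft-style.
            if event["key"] == "up":
                camera.x += speed * math.cos(camera.yaw)
                camera.z += speed * math.sin(camera.yaw)
            elif event["key"] == "down":
                camera.x -= speed * math.cos(camera.yaw)
                camera.z -= speed * math.sin(camera.yaw)
        return mode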

In some cases, systems are augmented with sensors that allow direct 3D manipulation of the world. A motion tracker is a device that is able to continuously ascertain the position and/or orientation of a physical location in space, usually through the attachment of a physical sensor at that location (Foxlin, 2002). Motion trackers for desktop applications use handheld or attached devices such as styli that allow the user to move a corresponding graphical model. Styli are held like pens and allow the user to specify points or perform operations such as spray painting or drawing in the virtual world. Tracking systems can also be attached to the hands, allowing the user to place virtual hands in the environment, either disembodied or as part of an avatar (a virtual representation of the user). Hand instrumentation ranges from systems that track the movement of the hands, to simple gloves that recognize finger touches (such as the PinchGlove), to instrumented gloves that capture the joint angles of the fingers. Motion trackers considerably increase the realism and degree of immersion of even desktop virtual environments because the motions of the controllers are directly analogous to motion in the environment. Tracking systems able to capture orientation allow users to manipulate orientation directly, with the object or camera in the environment matching the orientation of the sensor. Systems capable of tracking position allow virtual objects to be moved as well.
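
A minimal Python sketch of this kind of direct manipulation follows, assuming the tracker reports a position and a unit quaternion orientation and that scene nodes store a 4 x 4 transform; both representations are assumptions made for illustration.

    # Sketch: copy a 6-DOF tracker sample (position + quaternion orientation)
    # onto a virtual hand so its pose mirrors the physical sensor.

    import numpy as np

    def quat_to_matrix(q):
        """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def update_virtual_hand(hand_node, tracker_sample, world_offset=(0.0, 0.0, 0.0)):
        """Apply the sensor pose to the virtual hand, with a calibration offset."""
        position = np.asarray(tracker_sample["position"]) + np.asarray(world_offset)
        rotation = quat_to_matrix(tracker_sample["orientation"])
        transform = np.eye(4)
        transform[:3, :3] = rotation
        transform[:3, 3] = position
        hand_node["transform"] = transform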

The addition of 3D displays increases the impact of desktop applications. 3D is commonly accomplished by presenting two images in rapid succession on the monitor, one image for each eye, allowing for the perception of depth due to binocular cues. These displays are often referred to as stereoscopic displays. Any desktop monitor with sufficient frame rate capabilities can provide 3D in this form. Shutter glasses open and close before each eye in an alternating pattern so that each only sees the image appropriate for that eye. Persistence of vision prevents the perception of flicker, provided the frame rate is sufficiently fast, usually 120 frames per second. Since shutter glasses are rather large and cumbersome, multiple approaches exist that avoid their use. Displays such as the Planar StereoMirror and the iZ3D monitor present images to each eye with different light polarizations. All that is needed to view the display are simple polarized glasses as are commonly used for 3D motion pictures. Some displays, such as Philips WOWvx, dispense with the glasses altogether by placing miniature optics in front of the pixels that direct the image to only the appropriate eye. These monitors only work when the head is at a specific location (the sweet spot).

3D displays can be implemented in a tracked or untracked mode. Untracked displays assume a fixed viewer position, which is reasonable for many desktop applications; the graphics system renders a separate image for each eye from its assumed location. An increased perception of reality is accomplished by tracking the head so that the viewpoint can vary. This allows the user to move, increasing depth perception beyond pure stereo vision through the addition of motion parallax.
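
The underlying geometry can be sketched simply: each frame, the two eye positions are offset from the tracked head position along the head’s right-direction vector by half the interpupillary distance, so both binocular disparity and motion parallax follow the user’s movements. The vector conventions and the default IPD below are illustrative assumptions.

    # Sketch: derive left- and right-eye viewpoints from a tracked head
    # position so that stereo disparity and motion parallax are both present.

    import numpy as np

    def eye_positions(head_pos, head_right_dir, ipd_m=0.064):
        """Return (left_eye, right_eye) world positions for rendering.

        head_pos:       tracked head position (3-vector, meters)
        head_right_dir: vector pointing toward the user's right
        ipd_m:          interpupillary distance in meters (about 64 mm on average)
        """
        head_pos = np.asarray(head_pos, dtype=float)
        right = np.asarray(head_right_dir, dtype=float)
        right = right / np.linalg.norm(right)
        half = 0.5 * ipd_m
        return head_pos - half * right, head_pos + half * right

Each frame the scene is rendered once from each returned position; as the tracked head moves, the rendered viewpoints move with it.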

Immersive Virtual Reality

Immersive virtual reality systems seek to place the user in a virtual environment. The most compelling VR systems are immersive, so termed because they immerse the senses of the user in computer-controlled stimuli. These systems exist over a wide range of modalities. The most basic immersive system is a rear-projection screen with a stereo-capable display. The user has a motion-tracking system attached to the head so that the system knows the user's location and can render the environment as seen from that location. These systems are often augmented with tracking devices that can capture hand and body motion, allowing a more immersive experience (Foxlin, 2002).

Projection Display Systems

A single projection screen places a user next to a virtual window into a synthetic world. Additional screens can be placed around the user, forming a CAVE (Cave Automatic Virtual Environment). A common CAVE configuration is five screens: four that surround the user and an additional screen above. Some systems augment this configuration with a sixth screen on the floor. CAVEs can provide a 360-degree immersive VR environment (Brennesholtz & Stupp, 2008).

Head-Mounted Displays

Head-mounted displays (HMDs) provide images directly to the eye and also provide a 360-degree immersive experience. A typical HMD consists of two small liquid crystal displays and optics that present an independent image to each eye. When used in conjunction with motion tracking of the head, an HMD can continuously present images to each eye that are rendered from the viewpoint of the eye in the virtual world. Turning the head pans and tilts the virtual camera so that the image appears stable to the user.
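
Assuming the head tracker supplies the head pose as a rotation matrix and a position, the per-eye camera transform is essentially the inverse of that pose, which is what keeps the rendered world stationary as the head turns. A minimal sketch of that step, with matrix conventions chosen for illustration:

    # Sketch: build a view matrix (world -> head/camera coordinates) as the
    # inverse of the tracked head pose.

    import numpy as np

    def view_matrix_from_head_pose(head_rotation_3x3, head_position):
        """Return a 4x4 view matrix from a tracked head rotation and position."""
        R = np.asarray(head_rotation_3x3, dtype=float)
        p = np.asarray(head_position, dtype=float)
        view = np.eye(4)
        view[:3, :3] = R.T          # inverse rotation (R is orthonormal)
        view[:3, 3] = -R.T @ p      # inverse translation
        return view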

Sensors

A key element of many VR systems is the ability to track motion in space. Motion-tracking systems instrument the user’s head, hands, other body parts, and any other objects that need to have analogs in the virtual world. Motion tracking makes it possible for a virtual camera to follow the motion of a physical person. Motion capture systems exist based on inertial sensors, magnetic fields, ultrasonic sound waves, infrared markers, and software-based tracking of camera images. The Xsens MTi uses inertial sensing in combination with a simple electronic compass to determine orientation. Orientation-only sensors are appropriate mainly for HMDs. The Polhemus Patriot uses a transmitter that generates three alternating magnetic fields. Sensors determine the location and orientation of a tracked element by analyzing the reception magnitudes in three directional coils. Magnetic sensors allow reliable and fast tracking of the position and orientation of a sensor over a range of about 1 meter but are sensitive to interference from other magnetic fields. The Intersense IS-900 uses a large number of ultrasonic transmitters in combination with inertial tracking to determine the position and orientation of each receiver in a space that can be extended to several dozen meters square. The combination of ultrasonics, which provide high locational accuracy but at a slow rate, with inertial tracking, which is fast but subject to drift, is a sensor fusion approach that leverages the benefits of each system. The Intersense IS-1200 uses small cameras and ceiling-mounted fiducial images. A fiducial image, sometimes referred to as a marker, is a printed image with characteristics that make it easy for a computer-vision system to recognize and track. Since a tracking space need only be equipped with pieces of paper, this approach can track over a large distance and complex geometry, though it is subject to occlusion and limited orientations. The Moven inertial motion capture suit can capture continuous poses for the entire body using inertial sensors.
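
The fusion idea can be pictured with a deliberately simplified Python sketch: a fast but drifting inertial estimate is nudged toward occasional slower, absolute fixes. Commercial trackers use far more sophisticated filtering (e.g., Kalman filters); the class below is only a conceptual stand-in with illustrative parameter values.

    # Conceptual stand-in for hybrid tracking: integrate inertial velocity
    # every frame, then correct toward an absolute (e.g., ultrasonic) fix
    # whenever one arrives.

    class FusedPositionTracker:
        def __init__(self, initial_pos=(0.0, 0.0, 0.0), correction_gain=0.05):
            self.pos = list(initial_pos)
            self.gain = correction_gain   # how strongly each fix pulls the estimate

        def update(self, velocity, dt, absolute_fix=None):
            # Fast path: dead-reckon from inertially derived velocity (drifts).
            self.pos = [p + v * dt for p, v in zip(self.pos, velocity)]
            # Slow path: when an absolute fix arrives, pull the estimate toward it.
            if absolute_fix is not None:
                self.pos = [(1 - self.gain) * p + self.gain * a
                            for p, a in zip(self.pos, absolute_fix)]
            return tuple(self.pos)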

Capture of hand gestures allows the development of high-performance user interfaces. The FakeSpace PinchGlove captures combinations of finger presses, allowing simple grasp indications. More highly instrumented gloves, such as the Immersion CyberGlove II, capture all joint angles of the hand.

Haptics

Haptic devices apply forces, commonly referred to as haptic feedback, to the user, increasing the sense of immersion and extending the virtual experience to include the sense of touch (Bicchi, 2008). The SensAble Phantom family of devices features a stylus that can be manipulated by the user. The system tracks the stylus and uses servo motors to resist user forces or move the stylus directly. The amount of force exerted on the stylus by the servo motors can be precisely controlled. The net effect is the simulation of moving the stylus against resistive surfaces, the feel of moving over textured surfaces, and the sense that the stylus is subject to manipulation by elements in the virtual environment. A common application is virtual surgery, where the stylus becomes a virtual scalpel, with the haptic feedback simulating the resistance of flesh or bone as the surgeon conducts a virtual procedure.
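
A common textbook way to simulate such resistive surfaces is a penalty (virtual spring) force proportional to how far the stylus tip has penetrated the surface. The sketch below shows that basic computation with an illustrative stiffness value; it is not the programming interface of any particular device.

    # Sketch of a penalty force for stylus haptics: when the tip penetrates a
    # virtual surface, command a restoring force proportional to the depth.

    import numpy as np

    def contact_force(tip_pos, surface_point, surface_normal, stiffness=500.0):
        """Return the force vector (N) to command, or zeros if not in contact.

        stiffness is a virtual spring constant in N/m; a soft-tissue surface
        would use a much lower value than a bone-like surface.
        """
        n = np.asarray(surface_normal, dtype=float)
        n = n / np.linalg.norm(n)
        # Penetration depth of the tip below the surface, measured along the normal.
        depth = np.dot(np.asarray(surface_point, dtype=float)
                       - np.asarray(tip_pos, dtype=float), n)
        if depth <= 0.0:
            return np.zeros(3)            # tip is outside the surface
        return stiffness * depth * n      # push the stylus back out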

Software

A variety of software solutions exist for the development of virtual worlds (Burdea & Coiffet, 2003). Many video cards have the ability to enforce stereo on existing programs, allowing games to be played in immersive environments with no modification, though the degree of immersion is limited to a single viewpoint and is most suitable for projection screen systems. Dassault Systems Virtools allows the creation of virtual worlds and 3D games with a rapid prototyping methodology and supports HMDs and projection screens. The WorldViz Vizard VR Toolkit supports the development of a wide range of VR components including avatars.

Applications of Virtual Reality

In the 1990s, there was a great deal of public excitement surrounding the coming of age of VR and all the ways it purportedly would revolutionize entertainment, education, engineering, and other activities. Today, however, a popular view is that VR has failed to live up to (often overstated) visions (Rheingold, 1991; Sherman & Judkins, 1993) and that the excitement has faded as dreams of hyperrealistic personal VR experiences have failed to come to pass. This view, though, may merely reflect a change in popular perception. The fact is that VR has steadily made inroads into an enormous number of application areas, albeit perhaps more quietly and gradually than suspected by early prognosticators (Burdea & Coiffet, 2003; Giannachi, 2004; Gutierrez, Vexo, & Thalmann, 2008; Harders, 2008; Riva, 2004; Schmorrow, Cohn, & Nicholson, 2008; Wiederhold & Wiederhold, 2005). A large number of companies now specialize in equipment used in VR systems, and many have complete turnkey systems available. Furthermore, the industry may soon face another explosion in visibility and excitement as new variants (e.g., mixed reality) become more mainstream due to the ubiquity of portable devices containing many of the essential capabilities for VR (displays, camera and tracking systems, computer processing power).

Alongside evolving technology, VR applications have blossomed in a wide range of areas. VR has proven useful for gaining basic scientific knowledge, for medical diagnosis and treatment, for commerce and entertainment (especially in the realm of desktop VR), in training, and in cultural heritage. For illustrative purposes, we present here a brief sampling of VR applications.

Training

Virtual environments are often ideal to meet training needs (Schmorrow et al., 2008). They provide standardized interactive experiences that are cost-effective as they are potentially reusable by a wide audience. They are safe learning experiences (i.e., mistakes only lead to virtual consequences, not costly or dangerous outcomes in the real world that make on-the-job training hazardous). They are compelling (users often report higher levels of engagement completing a virtual task relative to more traditional methods such as listening to a lecture or reading a book). It is well-known that high levels of motivation and engagement lead to improved learning outcomes. Training can be done using either fully immersive or desktop virtual environments.

Communication Skills

Researchers at the University of Florida used a CAVE environment to create a virtual “standard patient” for imparting communication skills to medical students (Lok, 2006). Named DIANA (DIgital ANimated Avatar), the virtual patient engages the trainee in the doctor/patient interview process, intimating the existence of various symptoms. A system such as this could potentially replace the current training regime, which requires live actors to portray patients, is costly, and lacks uniformity for trainees.

Researchers at Case Western Reserve University have created a training simulator to enhance communication skills in dental students (Case Western Reserve University, 2008). This desktop VR application makes use of the massively multiplayer online world Second Life. Like DIANA, the focus of this training simulation is to foster improved doctor/patient communication. Students get much needed practice in collecting patient history information, informing patients about treatment options, and describing dental techniques.

Medicine

One of the most widely explored applications of VR technology is in the realm of medicine (Harders, 2008). VR allows researchers to see patient behaviors and body structures in new ways and enables new and effective therapeutic approaches.

Rehabilitation

Many medical researchers have explored the use of VR in rehabilitating stroke victims. At the University of Haifa, researchers have found a way to assess different patterns of stroke-induced brain damage (University of Haifa, 2008). Patients’ hand motions are recorded as they respond to virtual flying objects (tennis balls). The researchers’ computer models use this motion data to diagnose patients with a high degree of accuracy (approximately equivalent to that of human physicians). The value of these modeling techniques lies in their capacity to illuminate the probable outcomes of various new treatment alternatives. The ultimate hope is that these models will allow diagnosis and rehabilitation decisions that outperform any human doctor.

In a similar application, researchers at Rutgers University in New Jersey (Boian et al., 2002) used a desktop VR system equipped with data gloves for stroke rehabilitation. The patient exercises his or her affected hand and arm by manipulating an on-screen hand to interact with a virtual butterfly, play a virtual piano, and perform other tasks. At least in part due to the increased engagement that this task creates for the participant, the system leads to marked improvements. In addition, the data gloves enable a valuable recording of day-to-day progress.

Another intriguing medical application has been to give virtual limbs to amputees. One of the outcomes of losing a limb is the phenomenon known as phantom limb pain: patients may feel the sensation that their missing limbs still exist. This sensation is so real that patients may feel itching, soreness, and even cramps in the nonexistent appendage. Research has demonstrated that an ability to see and control a virtual limb (controlled by the remaining contralateral one) enables patients to visualize and mentally exercise the missing limb, to rub out soreness and cramps, and generally to alleviate short-term phantom limb pain. Researchers in the United Kingdom have demonstrated a VR system for aiding patients with this visualization process (Murray, Pettifer, Caillette, Patchick, & Howard, 2006).

Finally, an active area of VR-based neurorehabilitation research is in the postural stability community, which involves developing instrumentation that allows clinicians to decouple the contributions of various sensory cues such as vision and touch (Jeka, Schöner, Dijkstra, Ribeiro, & Lackner, 1997), diagnose disorders in stroke patients and the elderly (Keshner & Kenyon, 2004), and potentially understand how to improve balance using therapeutic means. In this area of research, visual stimuli such as a moving starfield projected onto a CAVE environment are coupled to a mechanical stimulus presented via a touchplate. Models of multisensory integration predict that both visual stimuli moving across the screen at a certain rate and touch stimulation of varying frequencies delivered at the fingertip will affect stability while standing upright (Jeka et al., 1997). Research such as this holds the promise of applying VR to improve the balance of individuals diminished by age or stroke. Furthermore, the coupling of visual or auditory simulations and mechanical stimuli may ultimately be applied to the design of more sophisticated wearable and prosthetic interfaces.

E-Commerce

In recent years, Web-based VR technology has been used to enhance the online shopping experience. The increased interactivity allows shoppers to explore details of a given product, and this has been shown to increase product knowledge, lead to more positive attitudes about the product, and increase the chance of purchase (Li, Daugherty, & Biocca, 2002).

Specialized VR applications such as Aarkid and TurnTool allow viewers to inspect 3D models of a product closely in order to learn more about its features. One limitation to using these techniques has been the inability to allow the user to interact with a product beyond merely inspecting it in 3D. Recent attempts have sought to create modeling environments that can implement Web-based product demonstrations that allow the user to experience, to some degree, interacting with a product.

One interesting recent finding is that virtual online product examination leads to increased memory for product features. Researchers at the University of Washington examined the outcome of interacting with a virtual product (a camera) and found that although participants remembered more product features than those viewing only static pictures (they also spent more time examining the VR product), they also had a tendency to falsely remember features that the camera didn't actually possess (Schlosser, 2006).

Basic Science

VR technology affords possibilities for basic scientific understanding that are often difficult or impossible to achieve using other methods. VR offers greater realism and complexity than is commonly found in controlled laboratory experiments. At the same time, VR implementations often afford more control and greater replicability than do standard field experiments.

Spatial Navigation

Researchers at Johns Hopkins University studying visual perception and spatial navigation have benefited from the ability to create lifelike navigation spaces. The researchers used a virtual forest to explore the degree to which navigation performance is influenced by peripheral vision. For some participants, performance deteriorated when their field of view was narrowed. Nevertheless, others still managed to navigate the forest well. These latter participants were those who relied more on, and had better memory for, virtual landmarks that could be used to guide their navigation (Fortenbaugh, Hicks, Hao, & Turano, 2006).

Similarly, researchers at the University of Pennsylvania used immersive VR to study how viewers perceive the rate of change of approaching objects (Schrater, Knill, & Simoncelli, 2001). Their research disentangled the relative contributions of optic flow (spatial change information) and changes in size. The study found that size changes alone can be an effective indicator for perceiving the object’s approach.

3D Visualizations

One application of VR-based molecular modeling is protein and biomolecular simulation. Simulating the energetics and physics of processes such as protein folding in 3D can help scientists and students alike better understand the structure and function of biomolecules. Molecular simulations are also amenable to the manipulation of both time and space. For example, proteins exist at the spatial scale of angstroms to nanometers (Hunter & Borg, 2003) but can be scaled up to centimeters using virtual simulations. Likewise, protein folding simulated at its natural timescale of nanoseconds is informative but does not capture slower dynamic processes such as photosynthesis or DNA repair. Simulations that operate at these longer timescales have been produced and can elucidate a number of important biological processes (Humphrey, Dalke, & Schulten, 1996).
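
A toy Python sketch of this rescaling, with arbitrary illustrative scale factors: atomic coordinates given in angstroms are enlarged for display, and simulation frames are subsampled so a trajectory plays back over a human-friendly duration.

    # Toy rescaling of a molecular trajectory for VR display. The display
    # scale and playback parameters are arbitrary illustrative choices.

    ANGSTROM_TO_DISPLAY_CM = 1.0   # draw 1 angstrom as 1 cm of virtual space

    def rescale_frame(coords_angstrom):
        """Scale one frame of atomic (x, y, z) coordinates for display."""
        return [(x * ANGSTROM_TO_DISPLAY_CM,
                 y * ANGSTROM_TO_DISPLAY_CM,
                 z * ANGSTROM_TO_DISPLAY_CM) for x, y, z in coords_angstrom]

    def playback_frames(trajectory, playback_seconds, fps=30):
        """Subsample frames so the whole trajectory fits the playback time."""
        wanted = max(1, int(playback_seconds * fps))
        step = max(1, len(trajectory) // wanted)
        return [rescale_frame(frame) for frame in trajectory[::step]]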

Consumer Gaming

Consumer gaming has adopted VR technology in a major way. Estimates put profits in the video game industry on par with the film industry. The sophisticated computer graphics used in today’s PC and console games are the most widespread commercial adaptation of basic VR to date. As computer-generated graphics have advanced in fidelity, devices for sensory immersion have improved as well. Wide-screen, high-definition displays are becoming commonplace, and surround-sound audio systems implement highly realistic 3D spatialized sound. The realism generated by these devices can greatly enhance the sensation of presence while playing.

Devices that were once relegated to the VR lab have also begun to creep into the mainstream. Inexpensive stereo shutter glasses can turn a PC-based game into a lifelike 3D stereoscopic experience. Also available are inexpensive HMDs, such as eMagin’s 3DVisor, that serve the same purpose. The success of Nintendo’s Wii system underscores the possibilities of, and enthusiasm for, tracking devices that translate a player’s physical activity (beyond finger movements on a joystick or keyboard) into actions in the virtual world. Enabling senior citizens or other nontraditional game players to participate by swinging a virtual baseball bat or rolling a virtual bowling ball promises dramatic new growth in VR gaming applications. Other companies have taken up this mantle by focusing on peripheral devices that increase immersion using haptic feedback (e.g., Novint’s Falcon).

In addition to the home-gaming market, many sophisticated VR installations have been developed for entertainment purposes. Once the primary domain of video games, arcades have survived the home-gaming explosion by devising ever more immersive player experiences. Input devices have moved from joysticks and buttons to replicas of actual environmental objects, such as motorcycles and personal watercraft, that tilt and lurch in concert with the game action. Arcade game designers have gone so far as to build realistic fire hoses for playing virtual firefighter and even to replicate the interior of a tractor-trailer for simulated truck driving. An entire room can even be devoted to highly compelling, projection-based golf course or hunting simulations.

With the expansion of the gaming market and culture over the past several decades, virtual games have increasingly been used not only for entertainment but also for serious purposes such as learning, training, exercise, and therapy. So-called serious games have drawn much attention for their potential to radically change the tradition of brick-and-mortar classroom teaching to “digital game-based learning” (Prensky, 2005). Virtual games for educational purposes have been reported to provide educational effects such as increased motivation, memory retention, and engagement. One of the principal adopters of video games for learning is the U.S. military. VR games have proven useful for teaching the proper use of military equipment, establishing tactical advantages, and collaborating with others in team-based missions. Examples include battle simulations such as the Navy’s Fleet Commander and the Marines’ Close Combat: Marines and VBS1. Flight simulator games have been used for years to shorten training time and reduce pilot risks. Likewise, simulation-based virtual 3D navigation games such as Falcon 4.0, Fighter Ace, and AirForce Delta Storm have been developed to provide virtual navigation experiences to game users.

Virtual Reality in Popular Culture

The fascination with VR’s potential over the years is reflected in its portrayal by the popular media. Scores of books, movies, and TV series have explored the implications of making VR illusions part of our everyday lives. Likely because immersive VR technologies have been slow to become a part of the mainstream entertainment landscape, public excitement about the technology seems to have waned to some degree in recent years. Nevertheless, VR remains a staple of science fiction writers today.

Probably the most prominent and influential vision of what VR could ultimately become is the Holodeck, depicted in the television series Star Trek: The Next Generation. Inside the Holodeck, participants are totally immersed (although not apparently outfitted with any gear) and feel a complete sense of presence in the virtual world: Although they know that they are in a facsimile of reality, nothing in the perceptual experience belies this fact. The Holodeck’s perfect realism has made it a touchstone in discussions about the current and future status of VR in academic and popular discourse (Murray, 1998). The writers of another Star Trek spinoff, Star Trek: Voyager, included not a Holodeck but a virtual physician that gave all appearances of being real. Embedded into the real world, this holographic character interacted with others with no help from any technological gear borne by the humans. Like the Holodeck, this vision of a virtual human embedded into the natural world portrays a fanciful boundary condition for augmented reality.

Although in each of these examples the authors envision highly realistic, compelling, and ostensibly useful VRs, a common thread in most science fiction portrayals is that VR serves merely as a plot device to explore ethical and existential conundrums. For example, in the Matrix movie series, humans have been co-opted by a race of machines that keep their bodies in suspended animation to harvest their electrical energy. To keep humans content and submissive, the machines must supply them with a virtual existence so real that they cannot tell it apart from reality. Another movie depiction, 1999's eXistenZ, portrays a future use of VR in which the boundary between the real and the virtual dissolves, creating a nightmarish existence where one can never be sure what is genuine. The television series Virtuality envisions a world where VR is required to exercise the imaginations of crew members enduring a years-long voyage through space. As in much science fiction, the perils of technology are explored and emphasized. In this case, the virtual world frequented by the crew is eventually invaded by a malicious virtual being that terrorizes users. An earlier fictional account, VR Troopers, was a live-action television series in which VR was an alternate dimension that could be crossed into and in which lived evil beings bent on the destruction of “true” reality. Although each of these examples envisions highly evolved and potentially beneficial versions of VR, writers routinely employ visions of misuse and malevolence.

Future Directions

Augmented Reality

Although VR continues to make inroads into a wide variety of fields, it is often characterized by large, expensive, stationary test beds or training facilities. While the adoption of these kinds of environments is still experiencing healthy growth, the major near-term growth appears to lie in applications of augmented reality (AR) (Bimber & Raskar, 2005; Haller, Billinghurst, & Thomas, 2007). Rather than providing a user with a perception of an entirely virtual world, AR technologies change the perception of the real world. A variety of AR technologies exist, including HMDs that overlay computer graphics onto the visual field and display systems that present modified camera views.

The primary benefit of AR is mobility. In traditional VR worlds, all features are computer generated—even the user has a virtual body (e.g., disembodied virtual hands controlled by user movements enable one to touch and grasp virtual objects). AR capitalizes on the user’s real environment. The computer embellishes the user’s field of view by adding computer-generated information only when and where needed. For example, virtual objects or characters can be embedded into a real room alongside the user. Imagine seeing virtual signage on the side of a building that provides information about what can be found inside (e.g., addresses, business names, directory information, or anything else you might find today on a company Web site).

The necessary ingredients for AR are largely the same as for traditional VR: a display system and input sensors, with the ability to track spatial location with some precision. Today’s cell phones and other mobile handheld devices have increasingly capable displays and camera systems, and location monitoring is possible using camera-based fiducial tracking, ever more precise GPS tracking, or other currently available means. Although these capabilities are still somewhat crude for widespread deployment of AR, they are rapidly growing in precision and power, and nearly all of the required components are already ubiquitous in society.
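
As a simplified illustration of GPS-based augmentation, the Python sketch below decides whether a point of interest should be labeled on screen by comparing its compass bearing with the device’s heading. The field-of-view threshold and the overall approach are illustrative assumptions, not a description of any shipping system.

    # Sketch: decide whether a GPS point of interest lies within the camera's
    # horizontal field of view and should therefore be drawn as a label.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing from point 1 to point 2, in degrees from north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360.0

    def label_visible(device_lat, device_lon, device_heading_deg,
                      poi_lat, poi_lon, fov_deg=60.0):
        """True if the point of interest falls within the camera's field of view."""
        b = bearing_deg(device_lat, device_lon, poi_lat, poi_lon)
        # Smallest signed angle between the camera heading and the POI bearing.
        offset = (b - device_heading_deg + 180.0) % 360.0 - 180.0
        return abs(offset) <= fov_deg / 2.0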

Technological Advances

Another driving factor for near-term adoption of VR and AR is the miniaturization of the needed components. HMDs have gone from cumbersome, garish devices as large as (or larger than) a toaster to small, monocular displays about the size of a cell phone; some, such as the Microvision Mobile Device Eyewear, resemble conventional sunglasses. The goal of HMD designers is ultimately to make devices so compact that they can be worn anywhere by anyone without attracting undue attention (much like the Bluetooth headsets common today). Devices this small are thought to be “socially acceptable,” meaning that they could be worn without arousing fear or suspicion from bystanders. Current HMDs are small enough to be portable and can be adopted for use by first responders (e.g., paramedics, the police) and military personnel. Acceptance on a grand scale will likely require smaller systems that blend more naturally with civilian attire. An alternative to HMDs is the use of a cell phone, PDA, or tablet PC to implement “magic-window” AR displays that allow viewers to see their augmented environment simply by holding the screen up in front of their face.

In addition to miniaturization and increased mobility, many other sources of expense in creating VR are becoming more affordable and easier to use. Projectors capable of stereoscopic display now sell for a few thousand dollars, in contrast to earlier models costing tens of thousands. The software for creating virtual environments has also improved dramatically, due largely to the explosion of the consumer video-game market. Several companies now even offer turnkey VR software solutions.

Increased Sensory Immersion

Another factor likely to drive a new interest in VR is the recent success of peripheral devices—most notably Nintendo’s Wii controller—in tracking body movements and increasing sensory immersion. For decades, game companies tried to increase the player’s immersion by selling peripheral input devices such as steering wheels or gun controllers; such devices generally sold poorly. Sony’s EyeToy, a webcam that inserts the player’s image into the game and responds to user movements (in two dimensions), was, however, a surprising success. The release of Nintendo’s Wii system, which uses infrared sensors and accelerometers to pinpoint the player’s hand position and orientation in 3D space, suggests a sea change in the marketability of these devices. The striking success of a consumer product (i.e., the Wii) that makes use of tracking technology developed decades ago in VR labs bodes well for the wider adoption of these techniques in coming years. Recent consumer products for haptic stimulation, such as Novint’s Falcon, have come on the market and make extremely compelling tactile stimulation possible.
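
One ingredient of such motion sensing can be sketched simply: when a controller is held roughly still, its accelerometer reading is dominated by gravity, so pitch and roll can be estimated from that vector. The axis conventions below are assumptions made for illustration.

    # Sketch: estimate controller tilt (pitch and roll) from a single
    # accelerometer sample, valid only when the device is not accelerating.

    import math

    def tilt_from_accelerometer(ax, ay, az):
        """Return (pitch_deg, roll_deg) from one accelerometer sample in g units.

        Assumes x points right, y points forward, and z points up when the
        controller is held level.
        """
        pitch = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
        roll = math.degrees(math.atan2(-ax, az))
        return pitch, roll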

Unconventional Input Devices

Perhaps the ultimate input device will respond not to overt user behaviors at all but rather to physiological states produced by the body and sensed with electrophysiological and other recording devices. Numerous researchers are working on so-called brain-computer interfaces that enable continuous updating of information in the virtual world based on thought alone (using electroencephalogram data), changes in heart rate (using electrocardiogram data), skin conductance (using electrodermal response data), and other indicators (e.g., cerebral blood flow measured with functional near-infrared imaging).
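
As a deliberately simplified sketch of this idea, the Python class below maps a smoothed heart-rate stream onto a normalized control value that a virtual environment could use, for example to adjust pacing or difficulty. The class name, parameters, and thresholds are illustrative assumptions rather than features of any real brain-computer interface toolkit.

    # Sketch: turn a stream of physiological samples (here, heart rate) into a
    # smoothed 0..1 control value for a virtual environment.

    from collections import deque

    class PhysiologicalMapper:
        def __init__(self, window=30, rest_bpm=60.0, max_bpm=120.0):
            self.samples = deque(maxlen=window)   # recent heart-rate samples
            self.rest_bpm = rest_bpm
            self.max_bpm = max_bpm

        def update(self, heart_rate_bpm):
            """Add one sample and return a clamped 0..1 control value."""
            self.samples.append(heart_rate_bpm)
            smoothed = sum(self.samples) / len(self.samples)
            span = self.max_bpm - self.rest_bpm
            value = (smoothed - self.rest_bpm) / span
            return min(1.0, max(0.0, value))      # clamp to the valid range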

Brain-computer interfaces hold great potential for restoring capabilities to users without use of their limbs. Monitoring these data is also of great interest to those interested in augmented cognition, which is a branch of human-computer interaction research concerned with improving learning and performance in computer-based tasks by making use of physiological states such as fear, boredom, or lack of attentional capacity. As with the other technologies described above, recent efforts have been made to commercialize headsets capable of recording brain and muscle activity for use in brain-computer interfaces. As with the video-game boom, renewed interest and vigorous research activity are outcomes likely to follow from widespread commercial adoption of these exciting new VR technologies.

Conclusion

VR combines the input and output from a variety of sensors and displays to create the psychological experience of actually being in some other, virtual place. Ongoing research and development have resulted in myriad new devices for monitoring body-based behaviors and for presenting increasingly realistic information to our visual, auditory, and other sensory systems. The growing availability of VR technology has led to its adoption in a wide range of application areas. VR enables enhanced perspectives on scientific questions, safe and engaging learning environments, new avenues for medical diagnosis and rehabilitation, and, of course, deeply involving entertainment experiences. With the recent widespread adoption of portable consumer products such as camera phones, GPS receivers, and the like, VR is set to break free from its stationary roots and begin populating our natural environment with valuable location-based information, perhaps bringing with it a new wave of exploration and excitement.

Bibliography:

  1. Barfield, W., & Weghorst, S. (1993). The sense of presence within virtual environments: A conceptual framework. Human Computer Interaction, 2, 699–704.
  2. Bicchi, A. (2008). The sense of touch and its rendering: Progress in haptics research. Berlin, Germany: Springer.
  3. Bimber, O., & Raskar, R. (2005). Spatial augmented reality: Merging real and virtual worlds. Wellesley, MA: A. K. Peters.
  4. Biocca, F. (1996, April 8–13). Can the engineering of presence tell us something about consciousness? Paper presented at the Second International Conference on “Towards a Scientific Basis of Consciousness,” Tucson, AZ.
  5. Biocca, F. (1997). The cyborg’s dilemma: Progressive embodiment in virtual environments. Journal of Computer-Mediated Communication, 3(2), 12–26.
  6. Boian, R., Sharma, A., Han, C., Merians, A., Burdea, G., Adamovich, S., et al. (2002, January 23–26). Virtual reality-based post-stroke hand rehabilitation. Paper presented at the Proceedings of Medicine Meets Virtual Reality, Newport Beach, CA.
  7. Brennesholtz, M. S., & Stupp, E. H. (2008). Projection displays. Hoboken, NJ: Wiley.
  8. Burdea, G., & Coiffet, P. (2003). Virtual reality technology. Hoboken, NJ: Wiley.
  9. Case Western Reserve University. (2008, July 24). Virtual toothache helps student dentists. Available at: https://www.sciencedaily.com/releases/2008/07/080721114604.htm
  10. Fortenbaugh, F., Hicks, J., Hao, L., & Turano, K. (2006). High-speed navigators: Using more than what meets the eye. Journal of Vision, 6, 565–579.
  11. Foxlin, E. (2002). Motion tracking technologies and requirements. In K. M. Stanney (Ed.), Handbook of virtual environments: Design, implementation, and applications (pp. 163–209). Mahwah, NJ: Lawrence Erlbaum.
  12. Giannachi, G. (2004). Virtual theatres: An introduction. London: Routledge.
  13. Gibson, J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.
  14. Gibson, J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
  15. Gutierrez, M., Vexo, F., & Thalmann, D. (2008). Stepping into virtual reality. Santa Clara, CA: Springer-Verlag TELOS.
  16. Haller, M., Billinghurst, M., & Thomas, B. H. (2007). Emerging technologies of augmented reality: Interfaces and design. Hershey, PA: Idea Group.
  17. Harders, M. (2008). Surgical scene generation for virtual reality based training in medicine. London: Springer.
  18. Hawkins, D. (1995). Virtual reality and passive simulators: The future of fun. In F. Biocca & M. R. Levy (Eds.), Communication in the age of virtual reality (pp. 159–189). Hillsdale, NJ: Lawrence Erlbaum.
  19. Hirose, M., Yokoyama, K., & Sato, S. (1993, September 18–22). Transmission of realistic sensation: Development of a virtual dome. Paper presented at the IEEE Virtual Reality Annual International Symposium, Seattle, WA.
  20. Humphrey, W., Dalke, A., & Schulten, K. (1996). Visual molecular dynamics. Journal of Molecular Graphics, 14, 33–38.
  21. Hunter, P., & Borg, T. (2003). Integration from proteins to organs: The Physiome Project. Nature Reviews: Molecular Cell Biology, 4, 237–243.
  22. Jeka, J., Schöner, G., Dijkstra, T., Ribeiro, P., & Lackner, J. (1997). Coupling of fingertip somatosensory information to head and body sway. Experimental Brain Research, 113, 475–483.
  23. Keshner, E., & Kenyon, R. (2004). Using immersive technology for postural research and rehabilitation. Assistive Technology, 16, 54–62.
  24. Li, H., Daugherty, T., & Biocca, F. (2002). Impact of 3-D advertising on product knowledge, brand attitude, and purchase intention: Mediating role of presence. Journal of Advertising, 31(3), 43–57.
  25. Lok, B. (2006). Teaching communication skills with virtual humans. IEEE Computer Graphics and Applications, 26(3), 10–13.
  26. Lombard, M., & Ditton, T. (1997). At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2). Available at: https://academic.oup.com/jcmc/article/3/2/JCMC321/4080403
  27. Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, E77-D, 1321–1329.
  28. Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35.
  29. Murray, C., Pettifer, S., Caillette, F., Patchick, E., & Howard, T. (2006). Immersive virtual reality as a rehabilitative technology for phantom limb experience. CyberPsychology & Behavior, 9, 167–170.
  30. Murray, J. H. (1998). Hamlet on the holodeck: The future of narrative in cyberspace. Cambridge: MIT Press.
  31. Prensky, M. (2005). Computer games and learning: Digital game-based learning. In J. Raessens & J. Goldstein (Eds.), Handbook of computer game studies (pp. 97–122). Cambridge: MIT Press.
  32. Rheingold, H. (1991). Virtual reality. New York: Summit Books.
  33. Riva, G. (2004). Cybertherapy: Internet and virtual reality as assessment and rehabilitation tools for clinical psychology and neuroscience. Amsterdam, The Netherlands: IOS Press.
  34. Schlosser, A. (2006). Learning through virtual product experience: The role of imagery on true and false memories. Journal of Consumer Research, 33, 377–383.
  35. Schmorrow, D., Cohn, J., & Nicholson, D. (2008). The PSI handbook of virtual environments for training and education: Developments for the military and beyond. Westport, CT: Praeger Security International.
  36. Schrater, P., Knill, D., & Simoncelli, E. (2001). Perceived visual expansion without optic flow. Nature, 410, 816–819.
  37. Sherman, B., & Judkins, P. (1993). Glimpses of heaven, visions of hell: Virtual reality and its implications. London: Hodder & Stoughton.
  38. Steuer, J. (1995). Defining virtual reality: Dimensions determining telepresence. In F. L. Biocca (Ed.), Communication in the age of virtual reality. Hillsdale, NJ: Lawrence Erlbaum.
  39. University of Haifa. (2008, March 11). Virtual reality and computer technology improve stroke rehabilitation. Available at: https://www.sciencedaily.com/releases/2008/03/080310110859.htm
  40. Wiederhold, B. K., & Wiederhold, M. D. (2005). Virtual reality therapy for anxiety disorders: Advances in evaluation and treatment. Washington, DC: American Psychological Association.
  41. Zhao, S. (2003). Toward a taxonomy of copresence. Presence: Teleoperators and Virtual Environments, 12, 445–455.