New Media for Learning Research Paper

The Points of Viewing (POV) theory is the foundation upon which this research paper is based. In the POV theory viewers and readers actively layer their viewpoints and interpretations to create emergent patterns and themes (Goldman-Segall, 1996b, 1998b). The purpose of understanding this theory is to enable learners, educators, and designers to broaden their scope and to enable them to learn from one another. The POV theory has been used for more than a decade in ethnographic studies to interpret video research data, but here we apply this theory to interpret and make meaning of a variety of theories of learning and technology, expecting that readers will reinterpret and resituate the theoretical positions in new configurations as they read the text.

We explore how leading theorists have understood learning and teaching in relation to the use of computers, the Internet, and new media technologies. Our goal is to envision the directions in which the field is going and, simultaneously, to tease out some of the sticky webs that have confused decision makers and academics in their search for a singular best practice. The underlying theme running through this research paper is that many routes combining a vast array of perspectives are needed to shape an educationally sound approach to learning and teaching with new media technology. We call this new approach to design and application perspectivity technologies.

Contexts and Intellectual History

The legacy of the Enlightenment magnified the age-old debate between empiricism and idealism. In the early twentieth century the debate shifted: Science could be used not only to observe the external world with microscopes and telescopes but also to change, condition, and control behavior. Russian physiologist Ivan Pavlov, most renowned for his experiments with dogs, called his theory conditioning. Dogs “learned” to salivate to the sound of a bell that had previously accompanied their eating, even when they received no food. Pavlov’s theory of conditioning played a central role in inspiring John B. Watson, who is often cited as the founder of behaviorist psychology. As early as 1913, Watson, while continuing to work with animals, also applied Pavlov’s theories to children, believing that people act according to the stimulation of their nervous system and can be conditioned to learn just as easily as dogs can. A turbulent personal turn of events—leading to his dismissal from Johns Hopkins University—extended Watson’s behaviorist approach into the domain of marketing. He landed a job as vice president of J. Walter Thompson, one of the largest U.S. advertising companies, and helped change the course of advertising forever. As media, education, and business enter a convergent course in the twenty-first century and new tools for learning are being designed, behaviorist theories are still a strong, silent partner in the new knowledge economy.

The most noted behaviorist in the educational domain, Burrhus Frederic (B. F.) Skinner, contributed the idea of operant conditioning—how positive and negative reinforcement (reward and punishment) can be used as stimuli to shape how humans respond. With this variation, the theory of behavior modification was born. All human actions are seen to be shaped (caused) by the stimulus of the external world on the body. In short, there is no mind creating reality, merely a hardwired system that responds to what it experiences from external sources. Infamous for designing the glass Air Crib, in which his daughter spent time living while being observed, measured, and “taught” how to behave, Skinner not only practiced what he preached but also led the way for even more elaborate experiments to show how educators could shape, reinforce, and manipulate humans through repeated drills.

With the advent of the computer and man-machine studies in the postwar period, intrepid behavioral scientists designed and used drill-and-practice methods to improve memorization tasks (e.g., Suppes, 1966). They turned to an examination of the role and efficacy of computers and technology in education, a subject understood in a behaviorist research agenda that valued measurable results and formal experimental methods, as Koschmann (1996, pp. 5–6) noted in his erudite critique of the period. Accordingly, a large amount of learning research in the 1960s, 1970s, and 1980s asked how the computer (an external stimulus) affects (modifies) the individual (a hardwired learning system). Research questions focused on how the process of learning could be improved by using the computer, applied as enhancement or supplement to an otherwise unchanged learning environment.

The approach one takes to using technologies in the learning setting is surely rooted in one’s concept of the mind. The mind as a site of research (and not just idealization or speculation) has its modern roots in the work of Jean Piaget (b. 1896), a natural scientist trained in zoology but most renowned for his work as a developmental psychologist and epistemologist. After becoming disillusioned with standardized testing methodology at the Sorbonne in France, Piaget returned to Geneva in 1921 to dedicate the rest of his academic life to studying the child’s conception of time (Piaget, 1969), space (Piaget & Inhelder, 1956), number (Piaget, 1952), and the world (Piaget, 1930). Although the idea that children could do things at one age that they could not do at another was not new, Piaget was able to lay out a blueprint for children’s conceptual development at different stages of their lives. For example, the classic problem of conservation eludes the young child: To the young child, a tall glass contains more water than a short one, even when the child has poured the same water from one glass into the other. Until Piaget, no one had conducted a body of experiments asking children to think about these phenomena and then mapped into categories the diverse views that children use to solve problems. By closely observing, recording his observations, and applying these to an emerging developmental theory of mind, Piaget and his team of researchers in Geneva developed the famous hierarchy of thinking stages: sensorimotor, preoperational, concrete, and formal. Piaget did not confine all thinking to these four rigid categories but rather used them as a way to deepen discussion on how children learn.

What is fundamentally different in Piaget’s conception of mind is that unlike the behaviorist view that the external world affects the individual—a unidirectional approach with no input from the individual—the process of constructivist learning occurs in the mind of the child encountering, exploring, and theorizing about the world as the child encounters the world while moving through preset stages of life. The child’s mind assimilates new events into existing cognitive structures, and the cognitive structures accommodate the new event, changing the existing structures in a continually interactive process. Schemata are formed as the child assimilates new events and moves from a state of disequilibrium to equilibrium, a state only to be put back into disequilibrium every time the child meets new experiences that cannot fit the existing schema. In this way, as Beers (2001) suggests, assimilation and accommodation become part of a dialectical interaction.

We propose that learners, their tools and creations, and the technology-rich learning habitat are continually affecting and influencing each other, adding diverse points of viewing to the topic under investigation. This wider range of viewpoints sets the stage for a third state called acculturation— the acceptance of diverse points of viewing—that occurs simultaneously with both the assimilation and accommodation processes. Learning becomes an evolving social event in which ideas are diffused among the elements within a culture, as Kroeber argued in 1948 (p. 25), and also are changed by the participation of the elements.

Piaget believed that learning is a spontaneous, individual, cognitive process, distinct from the sort of socialized and nonspontaneous instruction one might find in formal education, and that these two are in a somewhat antagonistic relationship. Critiquing Piaget’s constructivism, the Soviet psychologist L. S. Vygotsky (1962) wrote,

We believe that the two processes—the development of spontaneous and of nonspontaneous concepts—are related and constantly influence each other. They are parts of a single process: the development of concept formation, which is affected by varying external and internal conditions but is essentially a unitary process, not a conflict of antagonistic, mutually exclusive forms of mentation. (p. 85)

Vygotsky heralded a departure from individual mind to social mind, and under his influence educational theorizing moved away from its individual-focused origins and toward more socially or culturally situated perspectives. The paradigmatic approaches of key theorists in learning technology reflect this change as contributions from anthropology and social psychology gained momentum throughout the social sciences. The works of Vygotsky and the Soviet cultural-historical school (notably A. R. Luria and A. N. Leontiev), when translated into English, began to have a major influence, especially through the interpretations and stewardship of educational psychologists such as Jerome Bruner, Michael Cole, and Sylvia Scribner (Bruner, 1990; Cole & Engeström, 1993; Cole & Wertsch, 1996; Scribner & Cole, 1981). Vygotsky focused on the role of social context and mediating tools (language, writing, and culture) in the development of the individual and argued that one cannot study the mind of a child without examining the “social milieu, both institutional and interpersonal” in which she finds herself (Katz & Lesgold, 1993, p. 295). Vygotsky’s influence, along with that of pragmatist philosopher John Dewey (1916/1961), opened up the study of technology in learning beyond individual cognition. The ground in the last decade of the twentieth century thus became fertile for growing a range of new media and computational environments for learning, teaching, and research based on a socially mediated conceptualization of how people learn. But the path to social constructionism at the end of the twentieth century first took a circuitous route through what was known as computer-aided instruction (CAI).

Instructional Technology: Beginnings of Computer-Aided Instruction

An examination of the theoretical roots of computers in education exposes its behaviorist beginnings: The computer could reinforce activities that would bring about more efficient learning. For some, this meant “cheaper,” for others, “faster,” and for yet others, it meant without needing a teacher (see Bromley, 1998, for a discussion). The oldest such tradition of computing in education is CAI. This approach dates back to the early 1960s, notably in two research projects: at Stanford under Patrick Suppes (1966), and the Programmed Logic for Automated Teaching Operations (PLATO) project at the University of Illinois at Urbana-Champaign (UIUC) under Donald Bitzer and Dan Alpert (Alpert & Bitzer, 1970). Both projects utilized the then-new time-sharing computer systems to create learning opportunities for individual students. The potential existed for a time-sharing system to serve hundreds or even thousands of students simultaneously, and this economy of scale was one of the main drivers of early CAI research. A learner could sit at a terminal and engage in a textual dialogue with the computer system: question and answer. As such, CAI can be situated mostly within the behavioral paradigm (Koschmann, 1996, p. 6), although its research is also informed by cognitive science.

The Stanford CAI project explored elementary school mathematics and science education, and the researchers worked with local schools to produce a remarkable quantity of research data (Suppes, Jerman, & Brian, 1968; Suppes & Morningstar, 1972). Suppes began with tutorial instruction as the key model and saw that the computer could provide individualized tutoring on a far greater scale than was economically possible before. Suppes envisioned computer tutoring on three levels, the simplest of which is drill-and-practice work, in which the computer administers a question and answer session with the student, judging responses correct or incorrect and keeping track of data from the sessions. The second level was a more direct instructional approach: The computer would give information to the student and then quiz the student on the information, possibly allowing for different constructions or expressions of the same information. In this sense, the computer acts much like a textbook. The third level involved more sophisticated dialogic systems in which a more traditional tutor-tutee relationship could be emulated (Suppes, 1966). Clearly, the simple drill-and-practice model is the easiest to implement, and as such the bulk of the early Stanford research uses this model, especially in the context of elementary school arithmetic (Suppes et al., 1968).
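The simplest of these levels, the drill-and-practice loop, can be sketched in a few lines: pose an item, judge the response correct or incorrect, and log session data for later analysis. This is a minimal illustration of the model as described above, assuming a hypothetical `DrillSession` class of our own devising, not the Stanford system's actual design:

```python
import random

class DrillSession:
    """A toy drill-and-practice session for single-digit addition."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.log = []  # one (question, given_answer, correct?) tuple per item

    def next_item(self):
        # Pose a new addition item as a pair of operands.
        return (self.rng.randint(1, 9), self.rng.randint(1, 9))

    def judge(self, item, answer):
        # Judge the learner's response and record it in the session log.
        a, b = item
        correct = (answer == a + b)
        self.log.append((f"{a} + {b}", answer, correct))
        return correct

    def score(self):
        # Summarize performance: (items correct, items attempted).
        right = sum(1 for _, _, ok in self.log if ok)
        return right, len(self.log)
```

A session object like this could drive a terminal question-and-answer dialogue, with `score()` and `log` supplying the kind of per-student performance data whose collection Suppes highlighted.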

The research results from the Stanford experiments are hardly surprising: Students improve over time and with practice. For the time (the 1960s), however, to be able to automate the process was a significant achievement. More interesting from our perspective are the reflections that Suppes (1966) offered regarding the design of the human-computer interface: How and when should feedback be given? How can the system be tailored to different cognitive styles? What is the best way to leverage the unprecedented amount of quantitative data the system collects about each student’s performance and progress? These questions still form the cornerstone of much educational technology research.

The PLATO project at UIUC had a somewhat different focus (Alpert & Bitzer, 1970). Over several incarnations of the PLATO system through the 1960s, Bitzer, Alpert, and their team worked at the problems of integrating CAI into university teaching on a large scale, as indeed it began to be from the late 1960s. The task of taking what was then enormously expensive equipment and systems and making them economically viable in order to have individualized tutoring for students drove the development of the systems and led PLATO to a very long career in CAI—in fact, the direct descendants of the original PLATO system are still being used and developed. The PLATO project introduced some of the first instances of computer-based manipulables, student-to-student conferencing, and computer-based distance education (Woolley, 1994).

From these beginnings CAI and the models it provides for educational technology are now the oldest tradition in educational computing. Although only partly integrated in the school system, CAI is widely used in corporate training environments and in remedial programs and has had something of a resurgence with the advent of the World Wide Web as online training has become popular. It is worth noting that Computer Curriculum Corporation, the company that Suppes started with Richard Atkinson at Stanford in 1967, and NovaNet, a PLATO descendant spun off from UIUC in 1993, were both recently acquired by Pearson Education, the world’s largest educational publisher (Pearson Education, 2000).

Cognitive Science and Research on Artificial Intelligence

In order to situate the historical development of learning technology, it is also important to appreciate the impact of what Howard Gardner (1985) refers to as the “cognitive revolution” on both education and technology. For our purposes, the contribution of cognitive science is twofold. First, the advent of the digital computer in the 1940s led quickly to research on artificial intelligence (AI). By the 1950s AI was already a substantial research program at universities such as Harvard, MIT, and Stanford. And although AI research has not yet produced an artificial mind, and we believe it is not likely to do so, the legacy of AI research has had an enormous influence on our present-day computing paradigms, from information management to feedback and control systems, and from personal computing to the notion of programming languages. All derive in large part from a full half-century of research in AI.

Second, cognitive science—specifically the contributions of Piagetian developmental psychology and AI research— gave the world the first practical models of mind, thinking, and learning. Prior to the cognitive revolution, our understanding of the mind was oriented either psychoanalytically and philosophically out of the Western traditions of metaphysics and epistemology or empirically via behaviorism. In the latter case, cognition was regarded as a black box between stimulus and response. Because no empirical study of the contents of this box was thought possible, speculation as to what went on inside was both discouraged and ignored.

Cognitive science, especially by way of AI research, opened the box. For the first time researchers could work from a model of mind and mental processes. In 1957 AI pioneer Herbert Simon went so far as to predict that AI would soon provide the substantive model for psychological theory, in the same way that Newton’s calculus had once done for physics (Turkle, 1984, p. 244). Despite the subsequent humbling of AI’s early enthusiasm, the effect that this thinking has had on research in psychology and education and even the popular imagination (consider the commonplace notion of one’s short-term memory) is vast.

The most significant thread of early AI research was Allen Newell and Herbert Simon’s information-processing model at Carnegie-Mellon University. This research sought to develop a generalized problem-solving mechanism, based on the idea that problems in the world could be represented as internal states in a machine and operated on algorithmically. Newell and Simon saw the mind as a “physical symbol system” or “information processing system” (Simon, 1969/1981, p. 27) and believed that such a system is the “necessary and sufficient means” for intelligence (p. 28). One of the venerable traditions of this model is the chess-playing computer, long bandied as exemplary of intelligence. Ironically, world chess champion Garry Kasparov’s historic defeat by IBM’s supercomputer Deep Blue in 1997 had far less rhetorical punch than did AI critic (and chess novice) Hubert Dreyfus’s defeat in 1965, but the legacy of the information-processing approach cannot be overestimated.

Yet it would be unfair to equate all of classical AI research with Newell and Simon’s approach. Research programs at Stanford and MIT, though perhaps lower profile, made substantial contributions to the field. Two threads in particular are worthy of comment here. One was the development of expert systems concerned with the problem of knowledge representation—for example, Edward Feigenbaum’s DENDRAL, which contained large amounts of domain-specific information in chemistry. Another was Terry Winograd’s 1970 program SHRDLU, which first tackled the issue of indexicality and reference in an artificial microworld (Gardner, 1985). As Gardner (1985) pointed out, these developments demonstrated that Newell and Simon’s generalized problem-solving approach would give way to more situated, domain-specific approaches.

At MIT in the 1980s, Marvin Minsky’s (1986) work led to a theory of the society of mind—that rather than intelligence being constituted in a straightforward representational and algorithmic way, intelligence is seen as the emergent property of a complex of subsystems working independently. The notion of emergent AI, more recently explored through massively parallel computers, has with the availability of greater computing power in the 1980s and 1990s become the mainstream of AI research (Turkle, 1995, pp. 126–127).

Interestingly, Gardner (1985) pointed out that the majority of computing—and therefore AI—research has been located within the paradigm defined by Charles Babbage, Lady Ada Lovelace, and George Boole in the nineteenth century. Babbage and Lovelace are commonly credited with the basic idea of the programmable computer; Lady Ada Lovelace’s famous quote neatly sums it up: “The analytical engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform” (quoted in Turing, 1950). George Boole’s contribution was the notion that a system of binary states (0 and 1) could suffice for the representation and transformation of logical propositions. But computing research began to find and transcend the limits of this approach. The rise of emergent AI was characterized as “waking up from the Boolean dream” (Douglas Hofstadter, quoted in Turkle, 1995, p. 135). In this model intelligence is seen as a property emergent from, or at least observable in, systems of sufficient complexity. Intelligence is thus not defined by programmed rules, but by adaptive behavior within an environment.

From Internal Representation to Situated Action

The idea of taking contextual factors seriously became important outside of pure AI research as well. A notable example was the reception given to Joseph Weizenbaum’s famous program, Eliza. When it first appeared in 1966, Eliza was not intended as serious AI; it was an experiment in creating a simple conversational interface to the computer— outputting canned statements in response to certain “trigger” phrases inputted by a user. But Eliza, with her reflective responses sounding a bit like a Rogerian psychologist, became something of a celebrity—much to Weizenbaum’s horror (Turkle, 1995, p. 105). The popular press and even some psychiatrists took Eliza quite seriously. Weizenbaum argued against Eliza’s use as a psychiatric tool and against mixing up human beings and computers in general, but Eliza’s fame has endured. The interface and relationship that Eliza demonstrates has proved significant in and of itself, regardless of what computational sophistication may or may not lie behind it.
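The mechanism behind Eliza's celebrity was remarkably simple: scan the user's input for trigger phrases and emit a canned, reflective response. The sketch below illustrates that pattern-matching idea in Python; the rules are illustrative stand-ins of our own, not Weizenbaum's original script:

```python
import re

# Each rule pairs a trigger pattern with a canned reflective response.
# A captured group, if any, is echoed back into the reply.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\b(?:mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching canned response, or a default prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            groups = match.groups()
            return template.format(*groups) if groups else template
    return DEFAULT
```

Typing "I am unhappy" yields "Why do you say you are unhappy?"—a reply that sounds attentive while understanding nothing, which is precisely the gap between appearance and computation that so alarmed Weizenbaum.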

Another contextualist effort took place at Xerox’s Palo Alto Research Center (PARC) in the 1970s, where a team led by Alan Kay developed the foundation for the personal computing paradigm that we know today. Kay’s team is most famous for developing the mouse-and-windows interface— which Brenda Laurel (Laurel & Mountford, 1990) later called the direct manipulation interface. However, at a more fundamental level, the Xerox PARC researchers defined a model of computing that branched away from a formalist, rules-driven approach and moved toward a notion of the computer as curriculum: an environment for designing, creating, and using digital tools. This approach came partly from explicitly thinking of children as the designers of computing technology. Kay (1996) wrote,

We were thinking about learning as being one of the main effects we wanted to have happen. Early on, this led to a 90-degree rotation of the purpose of the user interface from “access to functionality” to “environment in which users learn by doing.” This new stance could now respond to the echoes of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children’s curriculum to a “curriculum of user interface.” (p. 552)

In the mid-1980s Terry Winograd and Fernando Flores’s Understanding Computers and Cognition: A New Foundation for Design (1986) heralded a new direction in AI and intelligent systems design. Instead of a rationalist, computational model of mind, Winograd and Flores described the emergence of a decentered and situated approach. The book drew on the phenomenological thinking of Martin Heidegger, the biology of perception work of Humberto Maturana and Francisco Varela, and the speech-act theory of John Austin and John Searle to call for a situated model of mind in the world, capable of (or dependent on) commitment and intentionality in real relationships. Winograd and Flores’s work raised significant questions about the assumptions of a functionalist, representational model of cognition, arguing that such a view is based on highly questionable assumptions about the nature of human thought and action.

In short, the question of how these AI and cognitive science developments have affected the role of technology in the educational arena can be summed up in the ongoing debate between instructionist tutoring systems and constructivist toolkits. Whereas the earliest applications of AI to instructional systems attempted to operate by creating a model of knowledge or a problem domain and then managing a student’s progress in terms of deviation from that model (Suppes, 1966; Wenger, 1987), later and arguably more sophisticated construction systems looked more like toolkits for exploring and reflecting on one’s thinking in a particular realm (Brown & Burton, 1978; Papert, 1980).

The Role of Technology in Learning

When theorizing about the role of technology in learning, the tendency is often to use an instrumentalist and instructionist approach—the computer, for example, is a useful tool for gathering or presenting information (which is often and incorrectly equated with knowledge). Even within the constructionist paradigm, the social dimension of the learning experience is often forgotten, and the focus falls only on the individual child. And even when we remember the Vygotskian zone of proximal development (ZPD) with its emphasis on the socially mediated context of learning, we tend to overlook the differences that individuals themselves have in their learning styles when they approach the learning experience. And even when we consider group and individual differences, we fail to recognize that individuals try out many styles depending on the knowledge domain being studied and the context within which they are participating. Most important, even when the idea that individuals have diverse points of viewing the world is acknowledged, technologists and new media designers often do little to construct learning environments that truly encourage social construction and knowledge creation.

Designing and building tools as perspectivity technologies, we argue, enables learners to participate as members of communities experiencing and creating new worlds from the points of viewing of their diverse personal identities while contributing to the public good of the digital commons. Perspectivity technologies are technologies that enable learners—like stars in a constellation—to be connected to each other and to change their positions and viewpoints yet stay linked within the larger and movable construct of the total configuration of many constellations, galaxies, and universes. It is within the elastic tension among all the players in the community—the learner, the teacher, the content, the artifacts created, and most important, the context of the forces within which they communicate—that new knowledge in, around, and about the world is created.

This next section is organized less chronologically and more functionally and examines technologies from a variety of perspectives: as information sources, curricular areas, communications media, tools, environments, partners, scaffolds, and perspectivity toolkits. In the latter, we return to the importance of using the Points of Viewing theory as a framework for designing new media technological devices.

Technology as Information Source

When we investigate how meaning is made, we can no longer assume that actual social meanings, materially made, consist only in the verbal-semantic and linguistic contextualizations (paradigmatic, syntagmatic, intertextual) by which we have previously defined them. We must now consider that meaning-in-use organizes, orients, and presents, directly or implicitly, through the resources of multiple semiotic systems. (Lemke, 1998)

Access to information has been the dominant mythology of computers in education for many educators. Not taking the time to consider how new media texts bring with them new ways of understanding them, educators and educational technologists have often tried to add computers to learning as one would add salt to a meal. The idea of technology as information source has captured the imagination of school administrators, teachers, and parents hoping that problems of education could be solved by providing each student with access to the most current knowledge (Graves, 1999). In fact, legislators and policy makers trying to bridge the digital divide see an Internet-connected computer on every desktop as a burning issue in education, ranking closely behind public versus charter schools, class size, and teacher expertise as hot-button topics.

Although a growing number of postmodern theorists and semioticians see computers and new media technologies as texts to deconstruct (Landow, 1992; Lemke, 1998), it is more common to see computers viewed as textbooks. Despite Lemke’s reminder that these new media texts require translation and not only digestion, the computer is commonly seen as merely a more efficient method of providing instruction and training, with information equated with knowledge. Learners working with courseware are presented with information and then tested or questioned on it, much as they would using traditional textbooks. The computer can automatically mark student responses to questions and govern whether the student moves on to the next section, freeing the teacher from this task—an economic advantage noted by many educational technology thinkers. In the late 1980s multimedia—audio, graphics, and video—dominated the educational landscape. Curriculum and learning resources, first distributed as textbook and accompanying floppy disks, began to be distributed on videodisc or CD-ROM, media formats able to handle large amounts of multiple media information. In the best cases, multimedia resources employed hypertext or hypermedia (Landow, 1992; Swan, 1994) navigation schemes, encouraging nonlinear traversal of content. Hypermedia, as such, represented a significant break with traditional, linear instructional design models, encouraging users to explore resources by following links between discrete chunks of information rather than simply following a programmed course. One of the best early exemplars was Apple Computer’s Visual Almanac: An Interactive Multimedia Kit (Apple Multimedia Lab, 1989), which enabled students to explore rich multimedia vignettes about interesting natural phenomena as well as events from history and the arts.

More recently, the rise of the Internet and the World Wide Web has stimulated the production of computer-based curriculum resources once again. As a sort of universal multimedia platform, the Web’s ability to reach a huge audience very inexpensively has led to its widespread adoption in schools, training centers, corporations, and, significantly, the home. More than packaged curriculum, however, the use of the Internet and the World Wide Web as an open-ended research tool has had an enormous impact on classrooms. Because the software for browsing the web is free (or nearly free) and the technology and skills required to use it are so widespread, the costs of using the Web as a research tool are largely limited to the costs of hardware and connectivity. This makes it an obvious choice for teachers and administrators often unsure of how best to allocate technology funds. The popular reputation of the Web as a universal library or as access to the world’s information (much more so than its reputation as a den of pornographers and pedophiles) has led to a mythology of children reaching beyond the classroom walls to tap directly into rich information sources, communicate with scientists and experts, and expand their horizons to a global view. Of course, such discourse needs to be examined in the light of day: The Web is a source of bad information as well as good, and we must also remember that downloading is not equivalent to learning. Roger Schank observed,

Access to the Web is often cited as being very important to education, for example, but is it? The problem in the schools is not that the libraries are insufficient. The Web is, at its best, an improvement on information access. It provides a better library for kids, but the library wasn’t what was broken. (Schank, 2000)

In a similar vein, correspondence schools—both university-based and private businesses dating back to the nineteenth century—are mirrored in today’s crop of online distance learning providers (Noble, 1999). In the classic distance education model, a student enrolls, receives curriculum materials in the mail, works through the material, submits assignments to an instructor or tutor by mail, and—it is hoped—completes everything successfully and receives accreditation. Adding computers and networks to this model changes very little, except for lowering the costs of delivery and management substantially (consider the cost savings of replacing human tutors and markers with an AI system).

If this economic reality has given correspondence schools a boost, it has also, significantly, made it almost imperative that traditional education providers such as schools, colleges, and universities offer some amount of distance access. Despite this groundswell, however, the basic pedagogical questions about distance education remain: To what extent do learners in isolation actually learn? Or is distance education better considered a business model for selling accreditation (Noble, 1999)? The introduction of electronic communication and conferencing systems into distance education environments has no doubt improved students’ experiences (Hiltz, 1994), and this development has certainly been widespread, but the economic and educational challenges driving distance education still make it an ambivalent choice for both students and educators concerned with the learning process.

Technology as Curriculum Area

Driven by economic urgency—a chronic labor shortage in IT professions (Meares & Sargent, 1999), the extensive impact of computers and networks in the workplace, and the promise of commercial success in the new economy—learning about computers is a curriculum area in itself, and it has a major impact on how computers and technology are viewed in educational settings.

The field of technology studies, as a curriculum area, has existed in high schools since the 1970s. But it is interesting to note how much variation there is in the curriculum, across grade levels, from region to region, and from school to school—perhaps increasingly so as years go by. Apart from the U.S. College Board’s Advanced Placement (AP) Computer Science Curriculum, which is very narrowly focused on professional computer programming, what one school or teacher implements as the “computer science” or “information technology” curriculum is highly varied, and probably very dependent on individual teachers’ notions and attitudes toward what is important. The range includes straightforward computer programming (as in the AP curriculum), multimedia production (Roschelle, Kaput, Stroup, & Kahn, 1998), technology management (Wolfson & Willinsky, 1998), exploratory learning (Harel & Papert, 1991), textbook learning about bits and bytes, and so on. Standards are hard to come by, of course, because the field is so varied and changing.

The most straightforward conclusion that one may draw from looking at our economy, workplace, and prospects for the future is that computer-based technologies are increasingly part of how we work. It follows that simply knowing how to use computers is a requirement for many jobs or careers. This basic idea drives the job skills approach to computers in education. In this model computer hardware and software, particularly office productivity and data processing software, are the cornerstone of technology curriculum because skill with these applications is what employers are looking for. One can find this model at work in most high schools, and it is dominant in retraining and economic development programs. And while its simple logic is easy to grasp, this model is a reminder that simple ideas can be limiting. MIT professor Seymour Papert (1992), invoking curriculum theorist Paulo Freire, wrote,

If “computer skill” is interpreted in the narrow sense of technical knowledge about computers, there is nothing the children can learn now that is worth banking. By the time they grow up, the computer skills required in the workplace will have evolved into something fundamentally different. But what makes the argument truly ridiculous is that the very idea of banking computer knowledge for use one day in the workplace undermines the only really important “computer skill”: the skill and habit of using the computer in doing whatever one is doing. (p. 51)

Papert’s critique of computer skills leads to a discussion of computer literacy, a term almost as old as computers themselves, and one that is notoriously elusive. Critic Douglas Noble (1985, p. 64) noted that no one is sure what exactly computer literacy is, but everyone seems to agree that it is good for us. Early attempts to define it come from such influential figures as J. C. R. Licklider, one of the founders of what is now the Internet, whose notion of computer literacy drew much on Dewey’s ideas about a democratic populace of informed citizens.

As computers became more widespread in the 1980s and 1990s, popular notions of computer literacy grew up around people struggling to understand the role of these new technologies in their lives. The inevitable reduction of computer literacy to a laundry list of knowledge and skills (compare with E. D. Hirsch’s controversial 1987 book Cultural Literacy) prompted Papert to respond with appeals to the richness of what literacy means:

When we say “X is a very literate person,” we do not mean that X is highly skilled at deciphering phonics. At the least, we imply that X knows literature, but beyond this we mean that X has certain ways of understanding the world that derive from an acquaintance with literary culture. In the same way, the term computer literacy should refer to the kinds of knowing that derive from computer culture. (Papert, 1992, p. 52)

Papert’s description broadens what computer literacy might include, but it still leaves the question open. Various contributions to the notion of literacy remain rooted in the particular perspectives of their contributors. Alan Kay (1996) wrote of an “authoring literacy.” Journalist Paul Gilster (2000) talked about “digital literacy.” Most recently, Andrea diSessa (2000), creator of the Boxer computer program, has written extensively on “computational literacy,” a notion that he hopes will rise above the banality of earlier conceptions: “Clearly, by computational literacy I do not mean a casual familiarity with a machine that computes. In retrospect, I find it remarkable that society has allowed such a shameful debasing of the term literacy in its conventional use in connection with computers” (p. 5).

The difficulty of coming to terms with computer or digital literacy in any straightforward way has led Mary Bryson to identify the “miracle worker” discourse that results, in which experts are called on to step in to a situation and implement the wonders that technology promises:

[W]e hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become “comfortable” using it. . . . [W]e have a master code capable of utilizing in one platform what for the entire history of our species thus far has been irreducibly different kinds of things. . . . [E]very conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity? (deCastell, Bryson, & Jenson, 2000)

However difficult to define, some sense of literacy is going to be an inescapable part of thinking about digital technology and learning. If we move beyond a simple instrumental view of the computer and what it can do, and take seriously how it changes the ways in which we relate to our world, then the issue of how we relate to such technologies, in the complex sense of a literacy, will remain crucial.

Technology as Communications Media

The notion of computer as communications medium (or media) began to take hold as early as the 1970s, a time when computing technology gradually became associated with telecommunications. The beginnings of this research are often traced to the work of Douglas Engelbart at the Stanford Research Institute (now SRI International) in the 1960s (Bootstrap Institute, 1994). Engelbart’s work centered around the oNLine System (NLS), a combination of hardware and software that facilitated the first networked collaborative computing, setting the stage for workgroup computing, document management systems, electronic mail, and the field of computer-supported collaborative work (CSCW). The first computer conference management information system, EMISARI, was created by Murray Turoff while working in the U.S. Office of Emergency Preparedness in the late 1960s and was used for monitoring disruptions and managing crises. Working with Starr Roxanne Hiltz, Turoff continued developing networked, collaborative computing at the New Jersey Institute of Technology (NJIT) in the 1970s. Hiltz and Turoff (1978/1993) founded the field of computer-mediated communication (CMC) with their landmark book, The Network Nation. The book describes a new world of computer conferencing and communications and is to this day impressive in its insightfulness. Hiltz and Turoff’s work inspired a generation of CMC researchers, notably including technology theorist Andrew Feenberg (1989) at San Diego State University and Virtual-U founder Linda Harasim (1990, 1993) at Simon Fraser University.

Although Hiltz and Turoff’s Network Nation is concerned mostly with business communications and management science, it explores teaching and learning with network technologies as well, applying their insights to practical problems of teaching and learning online:

In general, the more the course is oriented to teaching basic skills (such as deriving mathematical proofs), the more the lecture is needed in some form as an efficient means of delivering illustrations of skills. However, the more the course involves pragmatics, such as interpretations of case studies, the more valuable is the CMC mode of delivery. (Hiltz & Turoff, 1978/1993, p. 471)

Later, Hiltz wrote extensively about CMC and education. Her 1994 book, The Virtual Classroom, elaborates a methodology for conducting education in computer-mediated environments and emphasizes the importance of assignments using group collaboration to improve motivation. Hiltz hoped that students would share their assignments with the community rather than simply mail them to the instructor. Hiltz was surely on the mark in the early 1990s as researchers around the world began to realize the promise of “anyplace, anytime” learning (Harasim, 1993) and to study the dynamics of teachers and learners in online, asynchronous conferencing systems.

Parallel to the early development of CMC, research in CAI began to take seriously the possibilities of connecting students over networks. As mentioned earlier, the PLATO system at UIUC was probably the first large-scale distributed CAI system. PLATO was a large time-sharing system, designed (and indeed economically required) to support thousands of users connecting from networked terminals. In the 1970s PLATO began to offer peer-to-peer conferencing features, making it one of the first online educational communities (Woolley, 1994).

Distance education researchers were interested in CMC as an adjunct to or replacement for more traditional modes of communication, such as audio teleconferencing and the postal service. The British Open University was an early test-bed of online conferencing. Researchers such as A. W. Bates (1988) and Alexander Romiszowski and Johan de Haas (1989) were looking into the opportunities presented by computer conferencing and the challenges of conducting groups in these text-only environments. More recently, Bates has written extensively about the management and planning of technology-based distance education, drawing on two decades of experience building “open learning” systems in the United Kingdom and Canada (Bates, 1995). In a 1996 article, Timothy Koschmann suggested that the major educational technology paradigm of the late 1990s would be computer-supported collaborative learning (CSCL), a close relative of the emerging field of CSCW. Educational technology, Koschmann pointed out, was now concerned with collaborative activities, largely using networks and computer conferencing facilities. Whether CSCL constitutes a paradigm shift is a question we will leave unanswered, but Koschmann’s identification of the trend is well noted. Two of the most oft-cited research projects of the 1990s fall into this category. The work of Margaret Riel, James Levin, and colleagues on teleprenticeship (Levin, Riel, Miyake, & Cohen, 1987) and on learning circles (Riel, 1993, 1996) connected many students at great distances—classroom to classroom as much as student to student—in large-scale collaborative learning.

In the early 1990s students, teachers, and researchers around the world engaged in networked collaborative projects. At the Institute for the Learning Sciences (ILS) at Northwestern University, the Collaborative Visualization (Co-Vis) project involved groups of young people in different schools conducting experiments and gathering scientific data on weather patterns (Edelson, Pea, & Gomez, 1996). At the Multimedia Ethnographic Research Lab (MERLin) at the University of British Columbia, young people, teachers, and researchers conducted ethnographic investigations on a complex environmental crisis at Clayoquot Sound on the west coast of Vancouver Island (Goldman-Segall, 1994), with the aim of communicating with other young people in diverse locations. The Global Forest project resulted in a CD-ROM database of video and was designed to link to the World Wide Web to allow participants from around the world to share diverse points of viewing and interpretation of the video data.

At Boston’s TERC research center, large-scale collaborative projects were designed in conjunction with the National Geographic Kids Network (Feldman, Konold, & Coulter, 2000; Tinker, 1996). The TERC project was concerned with network science, and as with Riel’s learning circles, multiple classrooms collaborated, in this case gathering environmental science data and sharing in its analysis:

For example, in the NG Kids Network Acid Rain unit, students collect data about acid rain in their own communities, submit these data to the central database, and retrieve the full set of data collected by hundreds of schools. When examined by students, the full set of data may reveal patterns of acidity in rainfall that no individual class is able to discover by itself based on its own data. Over time, the grid of student measurements would have the potential to be much more finely grained than anything available to scientists, and this would become a potential resource for scientists to use. (Feldman et al., 2000, p. 7)
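The pooled-database logic described above can be sketched in a few lines of Python. The class names, longitudes, and pH readings here are invented for illustration; they are not the project’s actual data, only a toy stand-in for the kind of aggregation the NG Kids Network performed.

```python
# Hypothetical rainfall-pH readings submitted by three classes (made-up data),
# keyed by school longitude -- a toy version of the shared central database.
local_data = {
    "Class A (west)":    {"longitude": -122.0, "readings": [5.6, 5.5, 5.7]},
    "Class B (central)": {"longitude": -95.0, "readings": [5.0, 5.1, 4.9]},
    "Class C (east)":    {"longitude": -75.0, "readings": [4.3, 4.4, 4.2]},
}

# Each class submits its local readings; the central database pools them all.
pooled = [(rec["longitude"], ph)
          for rec in local_data.values()
          for ph in rec["readings"]]

# Only the pooled set reveals the west-to-east acidity gradient: mean pH
# falls (acidity rises) as longitude increases.
mean_ph = {name: sum(rec["readings"]) / len(rec["readings"])
           for name, rec in local_data.items()}
print(sorted(mean_ph.items(), key=lambda kv: local_data[kv[0]]["longitude"]))
```

Each class alone sees only three nearly identical readings; the gradient is visible only in the pooled grid, which is precisely the pedagogical point of network science.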

But in the early 1990s, despite all that was written about the great emerging advances in telecommunications technology, no one could have predicted the sheer cultural impact that the Internet would have. It is difficult to imagine, from the standpoint of the early twenty-first century, any educational technology project that does not in some way involve the Internet. The result is that all educational computing is in some way a communications system, involving distributed systems, peer-to-peer communication, telementoring, or some similar construct—quite as Hiltz and Turoff predicted. What is still to be realized is how to design perspectivity technologies that enable, encourage, and expand users’ POVs to create more democratic, interactive, convivial, and contextual communication.

One of the most interesting developments in CMC since the advent of the Internet is immersive virtual reality environments—particularly multiuser dungeons (MUDs) and MOOs—within which learners can meet, interact, and collaboratively work on research or constructed artifacts (Bruckman, 1998; Dede, 1994; Haynes & Holmevik, 1998). Virtual environments, along with the popular but less interesting chat systems on the Internet, add synchronous communications to the asynchronous modes so extensively researched and written about since Hiltz and Turoff’s early work. One could position these immersive, virtual environments as perspectivity technologies as they create spaces for participants to create and share their worlds.

The Internet has clearly opened up enormous possibilities for shared learning. The emergence of broad standards for Internet software has lent a stability and relative simplicity to learning software. Moreover, the current widespread availability and use of Internet technologies could be said to mark the end of CMC as a research field unto itself, as it practically merges CMC with all manner of other conceptualizations of new media technological devices: CAI, intelligent tutoring systems, simulations, robotics, smart boards, wireless communications, wearable technologies, pervasive technologies, and even smart appliances.

Technology as Thinking Tool

David Jonassen (1996) is perhaps best known in the educational technology domain as the educator connected with bringing to prominence the idea of computer as mindtool. Breaking rank with his previous instructionist approach detailing what he termed frames for instruction (Duffy & Jonassen, 1992), Jonassen’s later work reflects the inspiration of leading constructionist thinkers such as Papert. In a classic quotation on the use of the computer as a tool from the landmark book, Mindstorms: Children, Computers, and Powerful Ideas, Papert (1980) stated, “For me, the phrase ‘computer as pencil’ evokes the kind of uses I imagine children of the future making of computers. Pencils are used for scribbling as well as writing, doodling as well as drawing, for illicit notes as well as for official assignments” (p. 210).

Although it is easy to think of the computer as a simple tool—a technological device that we use to accomplish a certain task as we use a pen, abacus, canvas, ledger book, file cabinet, and so on—a tool can be much more than just a better pencil. It can be a vehicle for interacting with our intelligence—a thinking tool and a creative tool. For example, a popular notion is that learning mathematics facilitates abstract and analytic thinking. This does not mean that mathematics can be equated with abstract thinking. The computer as a tool enables learners of mathematics to play with the elements that create the structures of the discipline. To employ Papert’s (1980) example, children using the Logo programming language explore mathematics and geometry by manipulating a virtual turtle on the screen to act out movements that form geometric entities. Children programming in Logo think differently about their thinking and become epistemologists. As Papert would say, Logo is not just a better pencil for doing mathematics but a tool for thinking more deeply about mathematics, by creating procedures and programs, structures within structures, constructed, deconstructed, and reconstructed into larger wholes. At the MIT Media Lab in the 1970s and 1980s, Papert and his research team led a groundbreaking series of research projects that brought computing technology to schoolchildren using Logo. In Mindstorms, Papert explained that Logo puts children in charge of creating computational objects—originally, by programming a mechanical turtle (a 1.5-ft round object that could be programmed to move on the floor and could draw a line on paper as it moved around), and then later a virtual turtle that moved on the computer screen. Papert, a protégé of Jean Piaget, was concerned with the difficult transition from concrete to formal thinking. Papert (1980) saw the computer as the tool that could make the abstract concrete:

Stated most simply, my conjecture is that the computer can concretize (and personalize) the formal. Seen in this light, it is not just another powerful educational tool. It is unique in providing us with the means for addressing what Piaget and many others see as the obstacle which is overcome in the passage from child to adult thinking. (p. 21)
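The turtle geometry that Papert describes can be sketched concretely. Python’s standard turtle module is itself a descendant of Logo, but to keep the example testable without graphics, the sketch below uses a minimal hypothetical Turtle class of our own (not Papert’s original Logo) that tracks only position and heading. It acts out the classic first Logo program, REPEAT 4 [FORWARD 100 RIGHT 90], and illustrates the “total turtle trip” insight: tracing any closed figure turns the turtle through a full 360 degrees and returns it home.

```python
import math

class Turtle:
    """A minimal Logo-style turtle tracking position and heading (no graphics)."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 = east, counterclockwise positive

    def forward(self, distance):
        # Move in the direction of the current heading.
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)

    def right(self, angle):
        # Turn clockwise by the given angle.
        self.heading = (self.heading - angle) % 360

# The classic first Logo program: REPEAT 4 [FORWARD 100 RIGHT 90]
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)

# Four 90-degree turns total 360 degrees: the turtle is back where it started.
print(round(t.x, 6), round(t.y, 6), t.heading)
```

The child debugging such a program is reasoning about angles and distances with her own body as reference, which is what Papert means by making the formal concrete.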

Beyond Piaget’s notion of constructivism, the theory of constructionism focused its lens less on the stages of thought production and more on the artifacts that learners build as creative expressions of their understanding. Papert (1991) understood the computer as not merely a tool (in the sense of a hammer) but as an object-to-think-with that facilitates novel ways of thinking:

Constructionism—the N word as opposed to the V word—shares constructivism’s connotation of learning as building knowledge structures irrespective of the circumstances of the learning. It then adds the idea that this happens especially felicitously in a context where the learner is consciously engaged in constructing a public entity, whether it’s a sand castle on the beach or a theory of the universe. (p. 1)

By the late 1980s, and continuing up to today, the research conducted by Papert’s Learning and Epistemology Research Group at MIT had become one of the most influential forces in learning technology. A large-scale intensive research project called Project Headlight was conducted at the Hennigan School in Boston and studied all manner of phenomena around the experience of schoolchildren and Logo-equipped computers. A snapshot of this research is found in the edited volume titled Constructionism (Harel & Papert, 1991), which covers the perspectives of 16 researchers.

Goldman-Segall and Aaron Falbel explored Ivan Illich’s (1973) theory of conviviality—a theory that, in its simplest form, recommends that tools be simple to use, accessible to all, and beneficial for humankind—in relation to new technologies in learning. Goldman-Segall (2000) conducted a 3-year video ethnography of children’s thinking styles at Project Headlight and created a computer-based video analysis tool called Learning Constellations to analyze her video cases. Falbel worked with children to create animation from original drawings and to think of themselves as convivial learners. In Judy Sachter’s (1990) research, children explored their understanding of three-dimensional rotation and computer graphics, leading the way for comprehending how children understand gaming. At the same time, Mitchell Resnick, Steve Ocko, and Fred Martin designed smart LEGO bricks controlled by Logo. These LEGO objects could be programmed to move according to Logo commands (Martin & Resnick, 1993; Resnick & Ocko, 1991). Nira Granott asked adult learners to deconstruct how and why these robotic LEGO creatures moved in the way they did. Her goal was to understand the construction of internal cognitive structures that allow an interactive relationship between creator and user (Granott, 1991). Granott’s theory of how diverse individuals understand the complex movements of LEGO/Logo creatures was later woven into a new fabric that Resnick—working with many turtles on a screen—called distributed constructionism (Resnick, 1991, 1994). Uri Wilensky, with Resnick, deepened the theoretical framework around the behavior of complex systems (Resnick & Wilensky, 1998). To model, describe, and predict emergent phenomena in complex systems, Resnick designed LEGO/Logo and Wilensky and Resnick designed StarLogo. Wilensky more recently designed NetLogo.
Wilensky (2000, 2001), a mathematician concerned with probability, is often cited for asking young people a simple question: How do geese fly in formation? The answers that young people give reveal how interesting yet difficult emergent phenomena are to describe.
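The kind of emergence behind the geese question can be sketched with a toy agent-based model in the StarLogo spirit. The sketch below is our own illustration, not Wilensky’s StarLogo or NetLogo code: fifty “turtles” each start with a random heading and, at every tick, turn partway toward the flock’s average heading, a purely local alignment rule with no leader anywhere in the program.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Fifty agents, each knowing only its own heading in degrees.
headings = [random.uniform(0, 360) for _ in range(50)]

def spread(hs):
    """Crude measure of disorder: the range of headings in the flock."""
    return max(hs) - min(hs)

initial = spread(headings)

# At each tick every agent turns 20% of the way toward the mean heading.
for _ in range(100):
    mean = sum(headings) / len(headings)
    headings = [h + 0.2 * (mean - h) for h in headings]

final = spread(headings)
print(f"spread before: {initial:.1f} degrees, after: {final:.8f} degrees")
```

No agent is told to form up, yet the flock converges on a shared heading: order emerges from local rules, which is precisely what makes the geese question so productive and so hard for students to answer. (The linear averaging ignores the 0/360 wraparound, a simplification acceptable in a toy model.)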

Given Papert’s background as a mathematician, mathematics was an important frame for much of the research conducted in Project Headlight. Idit Harel introduced Allan Collins’s theory of apprenticeship learning into the intellectual climate, involving elementary students becoming software designers. Harel worked with groups of children creating games in Logo for other children to use in learning about fractions. This idea that children could be designers of their learning environments was developed further by Yasmin Kafai, who introduced computer design as an environment to understand how girls and boys think when playing and designing games—a topic of great interest to video game designers (Kafai, 1993, 1996). Kafai has spent more than a decade creating a range of video game environments for girls and boys to design environments for learning. In short, Kafai connected the world of playing and designing to the life of the classroom in a number of studies in the 1990s.

Continuing to expand Papert’s legacy with a new generation of graduate students, Kafai at UCLA, Resnick at the MIT Media Lab, Goldman-Segall at the MERLin Lab at the University of British Columbia, Granott at the University of Texas at Dallas, and Wilensky at the Institute for the Learning Sciences at Northwestern continue to explore the notion of the computer as a thinking tool from the constructionist perspective. Over the last decade the focus on understanding the individual mind of a child has shifted to understanding how groups of people collaborate to make sense of the world and participate as actors in shared constructions. Constructionism, in its more social, distributed, and complex versions, is now being reinterpreted through a more situated and ecological point of view.

Technology as Environment

The line between technology as tool and technology as environment is thus a thin one and in fact becomes even more permeable when one considers tools and artifacts as part of a cultural ecology (Cole, 1996; Vygotsky, 1978). As Alan Kay (1996) noted, “Tools provide a path, a context, and almost an excuse for developing enlightenment. But no tool ever contained it, or can provide it. Cesare Pavese observed: to know the world, we must make it [italics added]” (p. 547).

Historically, constructivist learning theories were rooted in the epistemologies of social constructivist philosopher Dewey, social psychologist Vygotsky, and developmental and cognitive psychologist Bruner. Knowledge of the world is seen to be constructed through experience; the role of education is to guide the learner through experiences that provide opportunities to construct knowledge about the world. In Piaget’s version, this process is structured by the sequence of developmental stages. In Vygotsky’s cultural-historical version, the process is mediated by the tools and contexts of the child’s sociocultural environment. As a result of the influence of Vygotsky’s work in the 1980s and 1990s across North America, researchers in a variety of institutions began to view the computer and new media technologies as environments, drawing on the notion that learning happens best for children when they are engaged in creating personally meaningful digital media artifacts and sharing them publicly. The MIT Media Lab’s Learning and Epistemology Group under the direction of Papert, the Center for Children and Technology under Jan Hawkins and Margaret Honey, Vanderbilt’s Cognition and Technology Group under the leadership of John Bransford and Susan Goldman, TERC and the Concord Consortium in Boston under Bob Tinker, Marcia Linn at Berkeley, Georgia Tech under Janet Kolodner, the Multimedia Ethnographic Research Lab (MERLin) under Goldman-Segall, and SRI under Roy Pea are just a few of the exemplary research settings involved in the exploration of learning and teaching using technologies as learning environments during the 1990s. Several of these communities (SRI, Berkeley, Vanderbilt, and the Concord Consortium) formed an association called CILT, the Center for Innovation in Learning and Teaching, which became a hub for researchers from many institutions.

The range of theoretical perspectives employed in conducting research about learning environments in these various research centers has been as diverse as might be expected. Most of these centers have asked what constitutes good research in educational technology and designed research methods that best address the issues under investigation. At the University of Wisconsin–Madison, Richard Lehrer and Leona Schauble (2001) have asked what constitutes real data in the classroom. As Mary Bryson from the University of British Columbia and Suzanne de Castell from Simon Fraser University have reminded us for over a decade now, studying technology-based classrooms is at best a complex narrative told by both students and researchers (Bryson & de Castell, 1998).

One might ask what constitutes scientific investigation of the learning environment and for whom. Sharon Derry, another learning scientist from the University of Wisconsin–Madison who previously assessed knowledge building in computer-rich learning environments with colleague Suzanne Lajoie (Lajoie & Derry, 1993) using quantitative measures, has begun to investigate the role of rich video cases in online learning communities with colleagues Constance Steinkuehler, Cindy Hmelo-Silver, and Matt DelMarcelle (Steinkuehler, Derry, Hmelo-Silver, & DelMarcelle, in press). Derry established the Secondary Teacher Education Project (STEP) as an online preservice teacher education learning environment. In collaboration with Goldman-Segall at the New Jersey Institute of Technology’s emerging eARTh Lab, Derry is currently exploring how to integrate elements of Goldman-Segall’s conceptual framework for conducting digital video ethnographic methods and her software ORION for digital video analysis (shown later in Figure 16.5), as well as tools designed at the University of Wisconsin for teacher analysis of video cases.

These qualitative research tools and methods, with their emphasis on case studies and in-depth analyses, best describe the conclusions of a study that is constructionist by design. In short, they are methods and tools to study the technology learning environment and to enter into the fabric of the environment as part of the learning experience. Employing perspectivity technologies and using a theoretical framework that encourages collaborative theory building are basic foundations of rich learning environments. When individuals and groups create digital media artifacts for learning or conducting research on learning, the artifacts inhabit the learning environment, creating an ecology that we share with one another and with our media constructions. Perspectivity technologies become expressive tools that allow learners to manipulate objects-to-think-with as subjects-to-think-with. Technology is thus not just an instrument we use within an environment, but is part of the environment itself.

Technology as Partner

Somewhere amid conceiving of computing technology as artificial mind and conceiving of it as communications medium is the notion of computer as partner. This somewhat more romanticized version of “technology as tool” puts more emphasis on the communicative and interactive aspects of computing. A computer is more than a tool like the pencil that one writes with because, in some sense, it writes back. And although this idea has surely existed since early AI and intelligent tutoring systems (ITS) research, it was not until an important article in the early 1990s (Salomon, Perkins, & Globerson, 1991) that the idea of computers as partners in cognition was truly elaborated.

As early as the 1970s, Gavriel Salomon (1979) had been exploring the use of media (television in particular) and its effect on childhood cognition. Well-versed in Marshall McLuhan’s adage, “The medium is the message,” Salomon built a bridge between those who propose an instrumentalist view of media (media effects theory) and those who understand media to be a cultural artifact in and of itself. Along these lines, in 1991 Salomon et al. drew a very important distinction: “effects with technology obtained during partnership with it, and effects of it in terms of the transferable cognitive residue that this partnership leaves behind in the form of better mastery of skills and strategies” (p. 2).

Their article came at a time when the effects of computers on learners were being roundly criticized (Pea & Kurland, 1987), and it helped break new ground toward a more distributed view of knowledge and learning (Brown, Collins, & Duguid, 1996; Pea, 1993). To conceive of the computer as a partner in cognition—or learning, or work—is to admit it into the cultural milieu, to foreground the idea that the machine in some way has agency or at least influence in our thinking.

If we ascribe agency to the machine, we are going some way toward anthropomorphizing it, a topic Sherry Turkle has written about extensively (Turkle, 1984, 1995). Goldman-Segall wrote of her partnership with digital research tools as “a partnership of intimacy and immediacy” (Goldman-Segall, 1998b, p. 33). MIT interface theorist Andrew Lippman defined interactivity as mutual activity and interruptibility (Brand, 1987), and Alluquere Rosanne Stone went further, referring to the partnership with machines as a prosthetic device for constructing desire (Stone, 1995). Computers are, as Alan Kay envisioned in the early 1970s, personal machines.

The notion of computers as cognitive partners is further exemplified in research conducted by anthropologist Lucy Suchman at Xerox PARC. Suchman’s (1987) Plans and Situated Actions: The Problem of Human-Machine Communication explored the difference between rational, purposive plans and circumstantial, negotiated, situated actions. Rather than actions being imperfect copies of rational plans, Suchman showed how plans are idealized representations of real-world actions. With this in mind, Suchman argued that rather than working toward more and more elaborate computational models of purposive action, researchers should give priority to the contextual situatedness of practice: “A basic research goal for studies of situated action, therefore, is to explicate the relationship between structures of action and the resources and constraints afforded by physical and social circumstances” (p. 179).

Suchman’s colleagues at Xerox PARC in the 1980s designed tools as structures within working contexts; innovative technologies such as collaborative design boards, real-time virtual meeting spaces, and video conferencing between coworkers were a few of the environments at Xerox PARC where people could scaffold their existing practices.

Technology as Scaffold

The computer as scaffold is yet another alternative to tool, environment, or partner. This version makes reference to Vygotsky’s construct of the ZPD, defined as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers” (Vygotsky, 1978, p. 86). The scaffold metaphor originally referred to the role of the teacher, embodying the characteristics of providing support, providing a supportive tool, extending the learner’s range, allowing the learner to accomplish tasks not otherwise possible, and being selectively usable (Greenfield, 1984, p. 118).

Vygotsky’s construct has been picked up by designers of educational software, in particular the Computer Supported Intentional Learning Environment (CSILE) project at the Ontario Institute for Studies in Education (OISE). At OISE, Marlene Scardamalia and Carl Bereiter (1991) worked toward developing a collaborative knowledge-building environment and asked how learners (children) could be given relatively more control over the ZPD by directing the kinds of questions that drive educational inquiry. The CSILE environment provided a scaffolded conferencing and note-taking environment in which learners themselves could be in charge of the questioning and inquiry of collaborative work—something more traditionally controlled by the teacher—in a way that kept the endeavor from degenerating into chaos.

Another example of technological scaffolding comes from George Landow’s research into using hypertext and hypermedia—nonlinear, reader-driven text and media, as mentioned earlier—in the study of English literature (Landow & Delany, 1993). In Landow’s research, a student could gain more information about some aspect of Shakespeare, for example, by following any number of links presented in an electronic document. A major component of Landow’s work was his belief in providing students with the context of the subject matter. The technological scaffolding provides a way of managing that context—so that it is not so large, complicated, or daunting that it prevents learners from exploring, but is flexible and inviting enough to encourage exploration beyond the original text. The question facing future researchers of these nonlinear and alternately structured technologies may be this: Can the computer environment create a place in which the context or the culture is felt, understood, and can be communicated to others? More controversially, perhaps, can these technologies be designed and guided by the learners themselves without losing the richness that direct engagement with experts and teachers can offer them?
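
The link structure Landow exploited can be pictured as a simple directed graph of documents. The sketch below is an illustrative model only, not Landow’s actual system; the document names and link targets are invented for the example.

```python
# A minimal model of a hypertext document set: each node holds text
# and a list of named links, which readers may traverse in any order.
documents = {
    "hamlet": {
        "text": "To be, or not to be...",
        "links": ["elizabethan-theatre", "revenge-tragedy"],
    },
    "elizabethan-theatre": {
        "text": "Playhouses such as the Globe...",
        "links": ["hamlet"],
    },
    "revenge-tragedy": {
        "text": "A genre Hamlet both follows and subverts...",
        "links": ["hamlet"],
    },
}

def reachable(start):
    """All documents a reader can reach from `start` by following links."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(documents[node]["links"])
    return seen
```

The scaffolding question then becomes one of curating this graph: which links to offer so that context is rich enough to invite exploration but bounded enough not to overwhelm.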

Technology as Perspectivity Toolkit

The Perspectivity Toolkit model we are introducing in this research paper (a derivative of the Points of Viewing theory) proposes that the next step in understanding new media technologies for learning is to define them as lenses to explore both self and world by layering viewpoints and looking for underlying patterns that lead to agreement, disagreement, and understanding. Perspectivity technologies provide a platform for sharing (not always shared) values and for building (not only participating in) cultures or communities of practice. Because we live in a complex global society, this new model is critical if we are to communicate with each other. Illich (1972) called this form of communication conviviality, and Geertz (1973) called it commensurability. Goldman-Segall (1995) referred to the use of new media, especially digital video technologies, to layer views and perspectives into new theories as configurational validity—a form of thick communication.

One can trace the first glimmer of perspectivity technologies to Xerox PARC in the 1970s. There, Alan Kay was inventing what we now recognize as the personal computer—a small, customizable device with substantial computing power, mass storage, and the ability to handle multiple media formats. Though pedestrian today, Kay’s advances were revolutionary at the time. Kay’s vision of small, self-contained personal computers was without precedent, as was his vision of how they would be used: as personalized media construction toolkits that would usher in a new kind of literacy. With this literacy would start the discourse between technology as scientific tool and technology as personal expression: “The particular aim of [Xerox’s Learning Research Group] was to find the equivalent of writing—that is, learning and thinking by doing in a medium—our new ‘pocket universe’” (Kay, 1996, p. 552).

At Bank Street College in the 1980s, a video and videodisc project called The Voyage of the Mimi immersed learners in scientific exploration of whales and Mayan cultures. Learners identified strongly with the student characters in the video stories. Similarly, the Cognition and Technology Group at Vanderbilt (CTGV) was working on video-based units in an attempt to involve students in scientific inquiry (Martin, 1987). The Adventures of Jasper Woodbury is a series of videodisc-based adventures that provide students with engaging content and contexts for solving mysteries and mathematical problems (http://peabody.vanderbilt.edu/ctrs/ltc/Research/jasper.html). While both of these environments were outstanding exemplars of students using various media forms to get to know the people and the culture within the story structures, the lasting contribution is not only one of enhanced mathematical or social studies understanding, but also a connection to people who are engaged in real-life inquiry and in expanding their perspectives in the process.

With an AI orientation, computer scientist, inventor, and educator Elliot Soloway at the University of Michigan built tools to enable learners to create personal hypermedia documents, reminiscent of Kay’s personalized media construction toolkits. In his more current work with Joe Krajcik, Phyllis Blumenfeld, and Ron Marx, Soloway participated with communities of students and teachers as they explored project-based science through the design of sophisticated technologies developed for distributed knowledge construction (Soloway, Krajcik, Blumenfeld, & Marx, 1996). Similarly, at Berkeley, Marcia Linn analyzed the cognition of students who wrote programs in the computer language LISP, and Andrea diSessa worked with students who were learning physics using his program called Boxer. For diSessa, physics deals with

a rather large number of fragments rather than one or even any small number of integrated structures one might call “theories.” Many of these fragments can be understood as simple abstractions from common experiences that are taken as relatively primitive in the sense that they generally need no explanation; they simply happen. (diSessa, 1988, p. 52)

Andrea diSessa’s theory of physics resonates strongly with the notion of bricolage, a term first used by the French structural anthropologist Claude Lévi-Strauss (1968) to describe the work of the bricoleur, a person who builds from available pieces without a specific plan at the outset of the project. Lévi-Strauss was often used as a point of departure for cognitive scientists interested in the analysis of fragments rather than in building broad generalizations from top-down rationalist structures. By the 1990s French social theory had indeed infiltrated the cognitive paradigm, legitimizing cultural analysis.

With the notion of bricolage in mind, one might ask whether these technology researchers were aware that they had designed perspectivity platforms for interactions between individuals and communities. Perhaps not, yet we propose that these environments should be reviewed through the perspectivity lens to understand how learners come to build consensual theories around complex human-technology interactions. Goldman-Segall’s digital ethnographies of children’s thinking (1990, 1991, 1998b) are exemplars in perspectivity theory. She established unique partnerships among viewer, author, and media texts—a set of partnerships that revolves around, and is revolved around, the constant recognition of cultural connections as core factors in using new-media technologies. Goldman-Segall explored the tenuous, slippery, and often permeable relations between creator, user, and media artifact through an online environment for video analysis. A video chunk, for example, became the representation of a moment in the making of cultures. This video chunk became both cultural object and personal subject, something to turn around and reshape. And just as we, as users and creators (readers and writers) of these artifacts, change them through our manipulation, so they change us and our cultural possibilities. Two examples of Goldman-Segall’s video case studies and interactive software that illustrate the implementation of perspectivity technologies for culture making and collaborative interpretation can be found on the Web at http://www.pointsofviewing.com.

Another good example of perspectivity technology is described in the doctoral work of Maggie Beers who, working with Goldman-Segall in the MERLin Research Lab, explored how preservice teachers learning modern languages build and critique digital artifacts connecting self and other (Beers, 2001; Beers & Goldman-Segall, 2001). Beers showed how groups of preservice teachers create video artifacts as representations of their various cultures in order to share and understand each other’s perspectives as an integral part of learning a foreign language. The self becomes a strong reference point for understanding others while engaged in many contexts with media tools and artifacts.

Another exemplary application of perspectivity theory is demonstrated by Gerry Stahl, who has been working on the idea of perspective and technology at the University of Colorado for several years. Stahl’s WebGuide forms the technical foundation for an investigation of the role of artifacts in collaborative knowledge building for deepening perspective. Drawing on Vygotsky’s theories of cultural mediation, Stahl’s work develops models of collaborative knowledge building and the role of shared cultural artifacts—and particularly digital media artifacts—in that process (Stahl, 1999).

In sum, perspectivity technologies enhance, motivate, and provide new opportunities for learning, teaching, and research because they address how the personal point of view connects with evolving discourse communities. Perspectivity thinking tools enable knowledge-based cultures to grow, creating both real and virtual communities within the learning environment to share information, to alter the self-other relationship, and to open the door to a deeper, richer partnership with our technologies and one another. Just as a language changes as speakers alter the original form, so will the nature of discourse communities change as cultures spread and variations are constructed.

Exemplary Learning Systems

The following is a collage of technological systems designed to aid, enhance, or inspire learning.

Logo

Logo (see Figure 16.1), one of the oldest and most influential educational technology endeavors, dates back to 1967. Logo is a dialect of the AI research language LISP and was developed by Wally Feurzeig’s team at Bolt, Beranek, and Newman (BBN), working with Papert. Papert’s work made computer programming accessible to children, not through dumbing down computer science, but by carefully managing the relationship between abstract and concrete. Logo gave children the means to concretize mathematics and geometry via the computer, which made them explorers in the field of math. As mentioned earlier, Papert believed that because the best way to learn French is not to go to French class, but to France, the best way to learn mathematics would be in some sort of “Mathland” (Papert, 1980, p. 6). Logo provided a microworld operating in terms of mathematical and geometric ideas. By experimenting with controlling a programmable turtle, children had direct, concrete experience of how mathematical and geometric constructs work. Through reflection on their experiments, they would then come to more formalized understandings of these constructs. Papert saw children as epistemologists thinking about their thinking about mathematics by living in and creating computer cultures.
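
The turtle’s geometry can be conveyed even without graphics. The sketch below is a minimal Python stand-in for Logo’s turtle (not Logo itself); it runs the classic square program, REPEAT 4 [FORWARD 50 RIGHT 90], and shows the turtle arriving back where it started after turning through a full 360 degrees.

```python
import math

# A minimal text-based turtle: position plus heading, moved by the
# same FORWARD/RIGHT commands a child would type in Logo.
class Turtle:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # heading in degrees

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def right(self, angle):
        # Logo's RIGHT turns clockwise, i.e., decreases the heading.
        self.heading = (self.heading - angle) % 360

# The classic square: REPEAT 4 [FORWARD 50 RIGHT 90]
t = Turtle()
for _ in range(4):
    t.forward(50)
    t.right(90)
# Having turned 360 degrees in all, the turtle is back at the origin,
# facing its original heading -- geometry experienced as a walked path.
```

Reflecting on why the turtle comes home after any sequence of turns summing to 360 degrees is exactly the kind of concrete-to-formal move Papert had in mind.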

With the growing availability of personal computers in the late 1970s and 1980s, the Logo turtle was moved onscreen. The notion of the turtle in its abstract world was called a microworld, a notion that has been the lasting legacy of the Logo research (Papert, 1980). The Logo movement was very popular in schools in the 1980s, and many versions of the language were developed for different computer systems. Some implementations of Logo departed from Papert’s geometry microworlds and were designed to address other goals, such as the teaching of computer programming (Harvey, 1997). Some implementations of Logo are freely distributed on the Internet; see http://www.cs.berkeley.edu/~bh/logo.html. The Logo Foundation, at http://el.www.media.mit.edu/groups/logo-foundation/, has continued to expand the culture of Logo over the years.

Squeak

Squeak (see Figure 16.2) is the direct descendant of Alan Kay’s Dynabook research at Xerox PARC in the 1970s. It is a multimedia personal computing environment based on Smalltalk, the object-oriented programming language that formed the basis of Kay’s investigations into personal computing (Kay, 1996). Squeak is notable in that it is freely distributed on the Internet, runs on almost every conceivable computing platform, and is entirely decomposable: Although one can create new media tools and presentations as with other environments, one can also tinker with the underlying operation of the system—how windows appear or how networking protocols are implemented. A small but enthusiastic user community supports and extends the Squeak environment, creating such tools as web browsers, music synthesizers, three-dimensional graphic toolkits, and so on—entirely within Squeak. See https://squeak.org/.

Boxer

Boxer (see Figure 16.3) is a computational medium—a combination of a programming language, a microworld environment, and a set of libraries and tools for building environments to explore problem solving with computers. Developed by Andrea diSessa, Boxer blends the Logo work of Seymour Papert (1980) and the mutable medium notion of Alan Kay (1996) in a flexible computing toolkit. diSessa’s work has been ongoing since the 1980s, when he conceived of an environment to extend the Logo research into a more robust and flexible environment in which to explore physics concepts (diSessa, 2000). Boxer is freely distributed on the Internet.

HyperCard

In 1987 Apple Computer was exploring multimedia as a fundamental rationale for people wanting Macintosh computers. However, as there was very little multimedia software available in the late 1980s, Apple decided to bundle a multimedia-authoring toolkit with every Macintosh computer. This toolkit was HyperCard, and it proved to be enormously popular with a wide variety of users, especially in schools. HyperCard emulates a sort of magical stack of index cards, and its multimedia documents were thus called stacks. An author could add text, images, audio, and even video components to cards and then use a simple and elegant scripting language to tie these cards together or perform certain behaviors. Two broad categories of use emerged in HyperCard: The first was collecting and enjoying predesigned stacks; the second was authoring one’s own. In the online bulletin board systems of the early 1990s, HyperCard authors exchanged great volumes of “stackware.” Educators were some of the most enthusiastic users, either creating content for students (a stellar example of this is Apple’s Visual Almanac, which married videodisc-based content with a HyperCard control interface) or encouraging students to create their own. Others used HyperCard to create scaffolds and tools for learners to use in their own media construction. A good snapshot of this HyperCard authoring culture is described in Ambron and Hooper’s (1990) Learning with Interactive Multimedia. Unfortunately, HyperCard development at Apple languished in the mid-1990s, and the World Wide Web eclipsed this elegant, powerful software. A HyperCard derivative called HyperStudio is still popular in schools but lacks the widespread popularity outside of schools that the original enjoyed.
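
The card-and-script model is easy to sketch. The Python below is a rough illustration of the stack idea, not HyperTalk itself; the card names, text, and button labels are invented for the example.

```python
# A rough model of a HyperCard stack: an ordered list of cards, each
# with a text field and button handlers that can navigate the stack,
# much as HyperTalk's "on mouseUp ... end mouseUp" scripts do.
class Stack:
    def __init__(self, cards):
        self.cards = cards      # each card: {"name", "text", "buttons"}
        self.current = 0        # index of the card being shown

    def go(self, name):
        """Jump to the card with the given name."""
        self.current = next(i for i, c in enumerate(self.cards)
                            if c["name"] == name)

    def click(self, button):
        """Dispatch a button press to that button's script."""
        self.cards[self.current]["buttons"][button](self)

stack = Stack([
    {"name": "title", "text": "Whales of the Pacific",
     "buttons": {"Start": lambda s: s.go("humpback")}},
    {"name": "humpback", "text": "Humpbacks sing long, patterned songs.",
     "buttons": {"Back": lambda s: s.go("title")}},
])
stack.click("Start")   # the reader presses Start and lands on "humpback"
```

What made HyperCard powerful pedagogically was that this authoring layer, linking cards and attaching small scripts, was open to students, not just developers.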

Constellations/ORION

Constellations (see Figure 16.4) is a collaborative video annotation tool that works with the metaphor of stars and constellations. An individual data chunk (e.g., a video clip) is a star. Stars can be combined to make constellations, but different users may place the same star in different contexts, depending on how they interpret the data from their various perspectives. Constellations is thus a data-sharing system, promoting Goldman-Segall’s notion of configurational validity by allowing different users to compare and exchange views on how they contextualize the same information differently in order to reach valid conclusions about the data. It also features collaborative ranking and annotation of data nodes. Although other video analysis tools have been developed and continue to be developed (Harrison & Baecker, 1992; Kennedy, 1989; Mackay, 1989; Roschelle, Pea, & Trigg, 1990), Constellations (also called Learning Constellations) was the first video data-analysis tool to analyze a robust set of video ethnographic data (Goldman-Segall, 1989, 1990). Constellations was originally developed as a stand-alone application using the HyperCard platform with a significance measure to layer descriptions and attributes (Goldman-Segall, 1993). However, in 1998 the tool went online as a Web-based collaborative video analysis tool called WebConstellations and focused more on data management and integration (Goldman-Segall, 1999; Goldman-Segall & Rao, 1998). The most recent version, ORION, provides more functionality for the administrator to designate access to users (see Figure 16.5). Unlike WebConstellations, ORION has returned to its original functionality of being a tool for video chunking, sorting, analysis, ethnographic theory building, and story making.
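
The underlying data model, one star placed in many constellations, each placement carrying its own annotation, can be sketched directly. This is an illustrative reconstruction only; the analyst names, clip IDs, and notes below are invented, not taken from the actual Constellations software.

```python
# Sketch of the star/constellation idea: the same data chunk (a star)
# appears in different constellations built by different analysts,
# each placement carrying that analyst's own annotation.
stars = {"clip-07": "A child explains a gear-ratio theory on video"}

constellations = {
    ("analyst-a", "theories of motion"): [
        {"star": "clip-07", "note": "evidence of mechanistic reasoning"},
    ],
    ("analyst-b", "classroom talk"): [
        {"star": "clip-07", "note": "peer explanation, not teacher-led"},
    ],
}

def contexts_for(star_id):
    """Every (analyst, constellation, note) in which a star appears."""
    return [(who, title, entry["note"])
            for (who, title), entries in constellations.items()
            for entry in entries if entry["star"] == star_id]
```

Comparing the contexts a single star accumulates across analysts is, in miniature, the layering of viewpoints that configurational validity names.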

Adventures of Jasper Woodbury

Jasper Woodbury is the name of a character in a series of adventure stories that CTGV uses as the basis for anchored instruction. The stories, presented on videodisc or CD-ROM, are carefully crafted mysteries that present problems to be solved by groups of learners. Since the video can be randomly accessed, learners are encouraged to re-explore parts of the story in order to gather clues and develop theories about the problem to be solved. The Jasper series first appeared in the 1980s, and there are now 12 stories (CTGV, 1997).

KidPix

KidPix was the first kid-friendly, generic graphics studio program. It includes a wealth of design tools and features that make it easy and fun to create images, and it has been widely adopted in schools. KidPix was originally developed by Craig Hickman in the late 1980s for his own son and was subsequently marketed by Broderbund software (now owned by The Learning Company).

CSILE

Marlene Scardamalia and Carl Bereiter at OISE developed CSILE, a collaborative, problem-based, knowledge-building environment. Learners can collaborate on data collection, analysis of findings, and the construction and presentation of conclusions by exchanging structured notes and attaching further questions, contributions, and so on to preexisting notes. CSILE was originally conceived to provide a dynamic scaffold for knowledge construction—one that would let the learners themselves direct the inquiry process (Scardamalia & Bereiter, 1991). CSILE is now commercially developed and licensed as Knowledge Forum.

StarLogo

StarLogo (see Figure 16.6) is a parallel-computing version of Logo. By manipulating many (even thousands of) distributed turtles, learners can work with interactive models of complex interactions, population dynamics, and other decentralized systems. Developed by Resnick, Wilensky, and a team of researchers at MIT, StarLogo was conceived as a tool to move learners’ thinking beyond a centralized mindset and to study how people make sense of complex systems (Resnick, 1991; Resnick & Wilensky, 1993; Wilensky & Resnick, 1999). StarLogo is available for free on the Internet, as is NetLogo—a next-generation multiagent environment developed by Wilensky at the Center for Connected Learning and Computer-Based Modeling at Northwestern University.
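
The decentralized idea behind StarLogo can be shown in a few lines. The sketch below is plain Python, not StarLogo, and the rule and parameters are invented for illustration: a thousand turtles each follow the same simple local rule, and a global pattern, outward diffusion from a single point, emerges without any central controller.

```python
import random

# 1,000 turtles all start at the origin; none knows about the others.
random.seed(0)  # fixed seed so the run is reproducible
turtles = [{"x": 0, "y": 0} for _ in range(1000)]

def step(turtle):
    # Local rule: take one unit step in a random compass direction.
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    turtle["x"] += dx
    turtle["y"] += dy

for _ in range(100):            # run the world for 100 ticks
    for t in turtles:
        step(t)

# An aggregate measure no single turtle "knows": the population's
# mean distance from the origin, which grows as the cloud diffuses.
mean_dist = sum(abs(t["x"]) + abs(t["y"]) for t in turtles) / len(turtles)
```

The point for learners is that `mean_dist` and the spreading cloud it summarizes are properties of the population, not of any turtle's program, which is exactly the move beyond a centralized mindset.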

MOOSE Crossing

Georgia Tech researcher Amy Bruckman created MOOSE Crossing (see Figure 16.7) as part of her doctoral work while at the MIT Media Lab. MOOSE Crossing can be characterized as something of a combination of the Logo/microworlds work of Papert (1980), the mutable media notions of Kay (1996), and a MOO (Haynes & Holmevik, 1998)—a real-time, collaborative, immersive, virtual environment. MOOSE Crossing is thus a microworld that learners can themselves enter, designing and programming the virtual environment from within. It becomes a sort of lived-in text that one shares with other readers, writers, and designers. Bruckman (1998) calls MOOSE Crossing “community support for constructionist learning”:

Calling a software system a place gives users a radically different set of expectations. People are familiar with a wide variety of types of places, and have a sense of what to do there. . . . Instead of asking What do I do with this software?, people ask themselves, What do I do in this place? The second question has a very different set of answers than the first. (p. 49)

Bruckman’s (1998) thesis is that community and constructionist learning go hand in hand. Her ethnographic accounts of learners inside the environment reveal very close, very personal bonds emerging between children in the process of designing and building their worlds in MOOSE Crossing. “The emotional support,” she writes, “is inseparable from the technical support. Receiving help from someone you would tell your secret nickname to is clearly very different from receiving help from a computer program or a schoolteacher” (p. 128). The MacMOOSE and WinMOOSE software is available for free on the Internet.

SimCalc

SimCalc’s tag line is “Democratizing Access to the Mathematics of Change,” and the goal is to make the understanding of change accessible to more learners than the small minority who take calculus classes (see Figure 16.8). SimCalc, a project at the University of Massachusetts under James Kaput working with Jeremy Roschelle and Ricardo Nemirovsky, is a simulation and visualization system for learners to explore calculus concepts in a problem-based model, one that avoids traditional problems with mathematical representation (Kaput, Roschelle, & Stroup, 1998). The core software, called MathWorlds (echoing Papert’s Mathland idea), allows learners to manipulate variables and see results via real-time visualizations with both animated characters and more traditional graphs.
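
The mathematical move at the heart of this approach can be illustrated in a few lines. The velocity values below are invented for the example (they are not from MathWorlds): given a piecewise-constant velocity graph, accumulating it step by step recovers the position graph, which is integration met as a concrete activity before any formal calculus.

```python
# A character's velocity, one reading per second (metres/second).
velocity = [2, 2, 2, 0, 0, -1, -1]

# Accumulate velocity to recover position: each second, the position
# advances by that second's velocity (position += velocity * 1 s).
positions = [0]
for v in velocity:
    positions.append(positions[-1] + v)

print(positions)  # [0, 2, 4, 6, 6, 6, 5, 4]
```

Learners who adjust a velocity segment and watch the position graph bend accordingly are manipulating exactly this relationship, with the animated character making the abstraction walkable.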

Participatory Simulations

Participatory Simulations, a project overseen by Uri Wilensky and Walter Stroup at Northwestern University, is a distributed computing environment built on the foundations of Logo and NetLogo that encourages learners to explore complex simulations collaboratively (Wilensky & Stroup, 1999). This project centers on HubNet, a classroom-based network of handheld devices that enables learners to participate in and collaboratively control simulations of dynamic systems. The emergent behavior of the system becomes the object of collective discussion and collaborative analysis.

CoVis

CoVis (Collaborative Visualization), a project developed at Northwestern University in the 1990s, focuses on science learning through projects using a telecommunications infrastructure, scientific visualization tools, and software to support collaboration between diverse schools in distributed locations (Edelson et al., 1996). Much of learners’ investigation centered on atmospheric and environmental studies, allowing wide-scale data sharing across the United States. Learners could then use sophisticated data analysis tools to visualize and draw conclusions. CoVis made use of a variety of networked software: collaborative “notebooks,” distributed databases, and system visualization tools, as well as the Web and e-mail. The goal in the CoVis project was for young people to study topics in much the same way as professional scientists do.

Network Science

In the late 1980s and 1990s a number of large-scale research projects explored the possibilities of connecting multiple classrooms across the United States for data sharing and collaborative inquiry (Feldman et al., 2000). Programs like National Geographic Kids Network (NGKNet), a National Science Foundation–funded collaboration between the National Geographic Society and TERC, reached thousands of classrooms and tens of thousands of students. TERC’s NGKNet provided curriculum plans and resources around issues such as acid rain and tools that facilitated large-scale data collection, sharing, and analysis of results. Other projects, such as Classroom BirdWatch and EnergyNet, focused on issues with comparable global significance and local implications, turning large numbers of learners into a community of practice doing distributed scientific investigation. Feldman, Konold, and Coulter noted that these large-scale projects question the notion of the individual child as scientist, pointing instead toward interesting models of collaborative engagement in science, technology, and society issues (pp. 142–143).

Virtual-U

Developed by Linda Harasim and Tom Calvert at Simon Fraser University and the Canadian Telelearning National Centres of Excellence, Virtual-U is a Web-based course-delivery platform (Harasim, Calvert, & Groeneboer, 1996). Virtual-U aims to provide a rich, full-featured campus environment for learners, featuring a cafe and library as well as course materials and course-management functionality.

Tapped In

Tapped In (see Figure 16.9) is a multiuser online educational workspace for teachers and education professionals. The Tapped In project, led by Mark Schlager at SRI International, began in the late 1990s as a MOO (textual virtual reality) environment for synchronous collaboration and has since grown into a sophisticated (Web plus MOO) multimedia environment for both synchronous and asynchronous work, with a large and very active user population (Schlager & Schank, 1997). Tapped In uses a technological infrastructure similar to that of MOOSE Crossing but has a different kind of community of practice at work within it; Tapped In functions more like an ongoing teaching conference, with many weekly or monthly events, workshops, and happenings. Tapped In is an exemplary model of a multimode collaborative environment.

CoWeb

At Georgia Tech, Mark Guzdial and colleagues at the Collaborative Software Laboratory (CSL) have created a variety of software environments building on the original educational computing vision of Alan Kay in the 1970s (Kay, 1996): the computer can be a tool for composing and experiencing dynamic media. Growing from Guzdial’s (1997) previous work on the CaMILE project, a Web-based anchored collaboration environment, CSL’s CoWeb project explores possibilities in designing and using collaborative media tools online (Guzdial, 1999). CoWeb and other CSL work are largely built in the Squeak environment, itself a direct descendant of Kay’s research at Xerox PARC.

MaMaMedia

The rationale of MaMaMedia, a company founded by MIT Media Lab graduate Idit Harel, is to enable young learners and their parents to participate in Web experiences that are safe, constructionist by nature, and educational. MaMaMedia maintains a filtered collection of dynamic Web sites aimed at challenging young children to explore, express, and exchange (Harel’s three Xs) ideas. Harel’s (1991) book Children Designers lays the foundation for MaMaMedia and for research in understanding how children in rich online environments construct and design representations of their thinking. In Harel’s doctoral work, one young girl named Debbie was part of the experimental group at the Hennigan School, working with fractions in Logo. After several months of working on her project, she looked around the room and said, “Fractions are everywhere.” MaMaMedia enables thousands of girls and boys to be online playing games, learning how to think like Debbie, and participating in the vast MaMaMedia community.

WebGuide

WebGuide, a Web-based, collaborative knowledge-construction tool, was created by Gerry Stahl and colleagues at the University of Colorado (Stahl, 1999). WebGuide is designed to facilitate personal and collaborative understanding by mediating perspectivity via cultural artifacts, acting as a scaffold for group understanding. It is a structured conferencing system supporting rich interlinking, information reuse and recontextualization, and multiple views on the structure of the information set. Learners contribute information from individual perspectives, but this information can later be negotiated and recollected in multiple contexts.

Affective Computing and Wearables

A series of research projects under Rosalind Picard at the MIT Media Lab are aimed at investigating affective computing (Picard, 1997)—the emotional and environmental aspects of digital technologies. Research areas include computer recognition of human affect, computer synthesis of affect, wearable computers, and affective interaction with computers. Jocelyn Schreirer conducted several experiments with advisors Picard, Turkle, and Goldman-Segall to explore how affective wearable technologies become expressive devices for augmenting communication. This relatively new area of research will undoubtedly prove significant for education as well as other applications because the affective component of computing has been overlooked until recently.

WebCT

Originally developed in the late 1990s by Murray Goldberg at the University of British Columbia, WebCT has grown to be an enormously popular example of a course management system. What began as an easy-to-use Web-based courseware environment is now in use by more than 1,500 institutions. Indeed, it is so widespread among postsecondary institutions that WebCT, now a company, is almost a de facto standard for online course delivery.

Challenging Paradigms and Learning Theories

Cognition: Models of Mind or Creating Culture?

This section discusses two competing cognitive paradigms. The overriding question is whether cognition is best understood as a model of the mind or as a creation of culture.

From the Cognitive Revolution to Cultural Psychology

From the vantage point of the mid-1990s, Jerome Bruner looked back on the cognitive revolution of the late 1950s, which he helped to shape, and reflected on a lost opportunity. Bruner had imagined that the new cognitive paradigm would bring the search for meaning to the fore, distinguishing it from the behaviorism that preceded it (Bruner, 1990, p. 2). Yet as Bruner wrote, the revolution went awry—not because it failed, but because it succeeded:

Very early on, for example, emphasis began shifting from “meaning” to “information,” from the construction of meaning to the processing of information. These are profoundly different matters. The key factor in the shift was the introduction of computation as the ruling metaphor and computability as a necessary criterion of a good theoretical model. (p. 4)

The information-processing model of cognition became so dominant, Bruner argued, that the roles of meaning and meaning making ended up as much in disfavor as they had been under behaviorism. “In place of stimuli and responses, there was input and output,” and hard empiricism ruled again, with a new vocabulary but with the same disdain for mentalism (Bruner, 1990, p. 7).

Bruner’s career as a theorist is itself instructive. Heralded by Gardner and others as one of the leading lights of 1950s cognitivism, Bruner has been one of a small but vocal group calling for a return to the role of culture in understanding the mind. This movement has been tangled up closely with the evolution of educational technology over the same period, perhaps best illuminated by a pair of titles that serve as bookends for one researcher’s decade-long trajectory: Etienne Wenger’s (1987) Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge and his (1998) Communities of Practice: Learning, Meaning, and Identity.

Cognitive Effects, Transfer, and the Culture of Technology: A Brief Narrative

In his 1996 article, “Paradigm Shifts and Instructional Technology: An Introduction,” Timothy Koschmann began by identifying four defining paradigms of technology in education. In roughly chronological (but certainly overlapping) order, these are CAI, characterized by drill-and-practice and programmed instruction systems; ITS, which drew on AI research to create automated systems that could evaluate a learner’s progress and tailor instruction accordingly; the Logo-as-Latin paradigm, led by Papert’s microworld and children-as-programmers efforts; and CSCL, a socially oriented, constructivist approach that focuses on learners in practice in groups. Koschmann invoked Thomas Kuhn’s (1996) controversial notion of the incommensurability of competing paradigms:

Kuhn held that the effect of a paradigm shift is to produce a divided community of researchers no longer able to debate their respective positions, owing to fundamental differences in terminology, conceptual frameworks, and views on what constitutes the legitimate questions of science. (Koschmann, 1996, p. 2)

Koschmann’s analysis may well be accurate. The literature surrounding the effects that learning technology produces certainly displays examples of this incommensurability, even within the writings of individual theorists.

As mentioned earlier, Papert’s work with teaching children to program in Logo was originally concerned with bridging the gap between Piaget’s concrete and formal thinking stages, particularly with respect to mathematics and geometry. Over time, however, Papert’s work with children and Logo began to be talked about as “computer cultures” (Papert, 1980, pp. 22–23): Logo gave its practitioners a vocabulary, a framework, and a set of tools for a particular kind of learning through exploration. Papert envisaged a computer culture in which children could express themselves as epistemologists, challenging the nature of established knowledge. But although Papert’s ideas and the practice of Logo learning in classrooms contributed significantly to the esprit du temps of the 1980s, it was difficult for many mainstream educational researchers and practitioners to join the mindset that he believed would revolutionize learning.

A large-scale research project to evaluate the claims of Logo in classrooms was undertaken by Bank Street College in the mid-1980s. The Bank Street studies came to some critical conclusions about the work that Papert and his colleagues were doing (Pea & Kurland, 1987; Pea, Kurland, & Hawkins, 1987). Basically, the Bank Street studies concluded on a cautious note—that no significant effects on cognitive development could be confirmed—and called for much more extensive and rigorous research amid the excitement and hype. The wider effect of the Bank Street publications fed into something of a popular backlash against Logo in the schools. A 1984 article in the magazine Psychology Today summarized the Bank Street studies and suggested bluntly that Logo had not delivered on Papert’s promises.

Papert responded to this critique (Papert, 1985) by arguing that the framing of research questions was overly simplistic. Papert chided his critics for looking for cognitive effects by isolating variables as if classrooms were treatment studies. Rather than asking “technocentric” questions such as “What is THE effect of THE computer?” (p. 23), Papert called for an examination of the culture-building implications of Logo practice, and for something he called “computer criticism,” which he proposed as akin to literary criticism.

Pea (1987) responded, claiming that Papert had unfairly characterized the Bank Street research (Papert had responded only to the Psychology Today article, not to the original literature) and arguing that as researchers they had a responsibility to adhere to accepted scientific methods for evaluating the claims of new technology. The effect of this exchange was to illuminate the vastly different perspectives of these researchers. Where Papert was talking about the open-ended promise of computer cultures, Pea and his colleagues, developmental psychologists, were evaluating the work from the standpoint of demonstrable changes in cognition (Pea & Kurland, 1987). Whereas Papert accused his critics of reductionism, Davy (1985) likened Papert to the proverbial man who looks for his keys under the streetlight because the light is better there.

Gavriel Salomon and Howard Gardner responded to this debate with an article that searched for middle ground (Salomon & Gardner, 1986): An analogy, they pointed out, could be drawn from research into television and mass media, a much older pursuit than educational computing, and one in which Salomon was an acclaimed scholar. Salomon and Gardner argued that one could not search for independent variables in such a complex area; instead, they called for a more holistic, exploratory research program, and one that took more than the overt effects of the technology into account.

Indeed, in 1991 Salomon and colleagues David Perkins and Tamar Globerson published a groundbreaking article that shed more light on the issue (Salomon et al., 1991). To consider the effects of a technology, one had to consider what changed in the learner after having used the technology—but now working in its absence. The questions that arise are whether there is any cognitive residue from the prior experience and whether there is transfer between tasks. This is a different set of questions from those that arise in investigating the effects with technology, which demand a more decentered, system-wide approach, looking at the learner in partnership with technology.

Although it contributed important new constructs and vocabulary to the issue, the Salomon et al. (1991) article is still deeply rooted in a traditional cognitive science perspective, like much of Pea’s research, taking first and foremost the individual mind as the site of cognition. Salomon, Perkins, and Globerson, all trained in cognitive psychology, warned against taking the “effects with” approach too far, noting that computers in education were still far from ubiquitous and that the search for the “effects of” was still key.

In a 1993 article Pea responded to Salomon et al. (1991) from yet a different angle. Pea, then at Northwestern and working closely with his Learning Sciences colleagues, wrote on “distributed intelligence” and argued against taking the individual mind as the locus of cognition, criticizing Salomon and colleagues’ individualist notions of cognitive residue: “The language used by Salomon et al. (1991) to characterize the concepts involved in how they think about distributed intelligence is, by contrast, entity-oriented—a language of containers holding things” (Pea, 1993, p. 79).

Pea, reviewing recent literature on situated learning and distributed cognition (Brown et al., 1996; Lave, 1988; Winograd & Flores, 1986), had exchanged his individualist cognitive science framework for a more “situative perspective” (Greeno, 1997, p. 6), while Salomon (1993) argued that cognition still must reside in the individual mind. It is interesting to note that neither Salomon nor Pea in this exchange seemed completely comfortable at this point with the notion of culture making beyond its influence as a contributing factor to mind, artifacts, and such empirically identifiable constructs.

Bricolage and Meaning Making at MIT

Scholarship at MIT’s Media Lab was also changing in the early 1990s. The shift played out amid discussions of bricolage, computer cultures, relational approaches, the construction and sharing of public artifacts, and so on (Papert, 1980, 1991; Turkle, 1984, 1995), as well as amid the centered, developmental cognitive science perspective from which their work historically derives. Theorizing on epistemological pluralism, Turkle and Papert (1991) clearly revealed the tension between the cognitivist and situative perspectives: Papert and Turkle desired to understand the mind and simultaneously to reconcile how knowledge and meaning are constituted in community, culture, and technology. The cognitivist stance might well have been limiting for constructionist theory in the 1980s. Pea (1993) offered a critique of Papert’s constructionism from the standpoint of distributed intelligence:

Papert described what marvelous machines the students had built, with very little interference from teachers. On the surface, the argument was persuasive, and the children were discovering important things on their own. But on reflection, I felt this argument missed the key point about the invisible human intervention in this example—what the designers of LEGO and Logo crafted in creating just the interlockable component parts of LEGO machines or just the Logo primitive commands for controlling these machines. (p. 65)

Pea’s critique draws attention to the fact that what is going on in the Logo project exists partly in the minds of the children, and partly in the Logo system itself—that they are inseparable. Pea’s later work pointed to distributed cognition, whereas the Media Lab’s legacy—even in the distributed constructionism of Mitchel Resnick and Uri Wilensky and in the social constructionism of Goldman-Segall—is deeply rooted in unraveling the mystery of the mind and its ability to understand complexity and complex systems. For example, whereas Resnick’s work explores ecologies of Logo turtles, it does not so much address ecologies of learners. Not until the late 1990s did the research at the Media Lab move toward distributed environments and the cultures and practices within them (Bruckman, 1998; Picard, 1997).

Learning, Thinking Attitudes, and Distributed Cognition

Understanding the nature of technology-based learning systems greatly depends on one’s conceptualization of how learning occurs. Is learning linear and developmental, or a more fluid, flexible (Spiro, Feltovich, Jacobson, & Coulson, 1991) and even random “system” of making meaning of experience? Proponents of stage theory have tried to show how maturation takes place in logical causal sequences or stages according to observable stages in growth patterns—the final stage being the highest and most coveted. Developmental theories, such as Freud’s oral, anal, and genital stages (Freud, 1952), Erikson’s eight stages of psychological growth from basic trust to generativity (Erikson, 1950), or Piaget’s stages from sensorimotor to formal operational thinking (see Gruber & Vonèche, 1977), are based on the belief that the human organism must pass through these stages at critical periods in its development in order to reach full, healthy, integrated maturation, be it psychological, physical, spiritual, or intellectual.

Strict adherence to developmentalism, and particularly to its unidirectional conception, has been significantly challenged by Gilligan (1982), Gardner (1985), Fox Keller (1983), Papert (1991), and Illich and Sanders (1989)—not to mention a wave of postmodern theorists—who have proposed theories addressing the fundamental issues underlying how we come to terms with understanding our thinking. One such challenge, raised by Illich and Sanders (1989), reflects on the prehistorical significance of the narrative voice. Thinking about thinking as essentially evolving stages of development requires the kind of calibration possible only in a world of static rules and universal truths. Illich and Sanders pointed out that narrative thinking is rather a weaving of different layers or versions of stories that are never fixed in time or place. Before the written word and

[p]rior to history . . . there is a narrative that unfolds, not in accordance with the rules of art and knowledge, but out of divine enthusiasm and deep emotion. Corresponding to this prior time is a different truth—namely, myth. In this truly oral culture, before phonetic writing, there can be no words and therefore no text, no original, to which tradition can refer, no subject matter that can be passed on. A new rendering is never just a new version, but always a new song. (Illich & Sanders, 1984, p. 4)

Illich and Sanders (1984) contended that the prehistoric mode of thinking was a relativistic experience—that what was expressed at any given moment in time changed from the previous time it was expressed. Thus there could be neither fixed recall nor truth as we define it today. This concept of knowledge as a continually changing truth, dependent on both communal interpretation and storytellers’ innovation, dramatically changed with the introduction of writing. The moment a story could be written down, it could be referred to. Memory changed from being an image of a former indivisible time to being a method of retrieving a fixed, repeatable piece or section of an experience.

A parallel notion emerges in Carol Gilligan’s (1982) research on gender and moral development. Gilligan made the case that in the “different voice” of women lies an ethic of care, a tie between relationship and responsibility, and the origins of aggression in the failure of connection (p. 173). Gilligan set the stage for a new mode of research that includes intimacy and relation rather than the separation and objectivity of traditional developmental theory.

Evelyn Fox Keller, a leading critic of the masculinization of science, heralded the relational model as a legitimate alternative for doing science. She pointed out (1983) that science is a deeply personal as well as a social activity, one that has historically been preferential to a male, objectivist manner of thinking.

Combining Thomas Kuhn’s ideas about the nature of scientific thinking with Freud’s analysis of the different relationships that young boys and young girls have with their mothers, Fox Keller analyzed underlying reasons for scientific objectivism. She claimed that boys are encouraged to separate from their mothers and girls to maintain attachments, influencing the manner in which the two genders relate to physical objects. The young boy, in competition with his father for his mother’s attentions, learns to compete in order to succeed. Girls, not having to separate from their mothers, find that becoming personally involved—getting a feeling for the organism, as Barbara McClintock (Fox Keller, 1983) would say—is a preferred mode of making sense of their relationship with the physical world. As a result, girls may do science in a more connected style, seeking relationships with, rather than dissecting, what they investigate. Girls seek to understand meaning through these personal attachments: “Just as science is not the purely cognitive endeavor we once thought it, neither is it as impersonal as we thought: science is a deeply personal as well as a social activity” (Fox Keller, 1983, p. 7).

Obviously, we will never know if a scientific discipline would really be different if it had been driven by more relational or narrative influences. Yet we may want to ask how people with a tendency toward relational or narrative thinking can be both invited into the study of the sciences and be encouraged to contribute to its theoretical foundations. In addition, we may want to ask how new media and technologies expand how we study what we study, thereby inviting a range of epistemologically diverse thinkers into the mainstream of intellectual pursuits.

Epistemological Pluralism

The emphasis on pluralism in constructionist practice was also a major theme to emerge from the MIT Media Lab in the 1980s. In Sherry Turkle’s (1984) book The Second Self: Computers and the Human Spirit, she explored the different styles of mastery that she observed in boys and girls in Logo classrooms. In a 1991 article with Papert, “Epistemological Pluralism and the Revaluation of the Concrete,” Turkle outlined two poles of technological mastery: hard and soft. Hard mastery, identified with top-down, rationalist thinking, was observed in a majority of boys. Soft mastery, identified with relational thinking and Claude Lévi-Strauss’s notion of bricolage, was observed in a majority of girls (Turkle & Papert, 1991, pp. 167–168).

The identification of soft mastery and bricolage in programming was very important for Papert and Turkle for a number of reasons. This relational, negotiated approach to understanding systems has much in common with Piaget’s constructivist theory and is also very much in line with how Papert saw children tinkering while programming in Logo, exploring the features of a microworld, and in doing so building an intimate connection with their own thinking. Papert and Turkle led MIT’s Media Lab Epistemology and Learning Group to a revaluation of the concrete, which they saw as woefully undervalued in contemporary life, and especially in math and science education.

Although Turkle and Papert used the terms hard and soft to explain different approaches to computation, their contribution reaches out to broader domains. They cited feminism, ethnography of science, and computation (Turkle & Papert, 1991, p. 372) as three of several movements that promote concrete thinking to an object of science in its own right. They proposed accepting diverse styles of creating knowledge and understanding systems as equally significant to the world of thought, such that the personal, relational perspective that Papert identifies with concrete thinking will gain respectability in the scientific community:

The development of a new computer culture would require more than technological progress and more than environments where there is permission to work with highly personal approaches. It would require a new and softer construction of the technological, with a new set of intellectual and emotional values more like those we apply to harpsichords than hammers. (Turkle & Papert, 1991, p. 184)

Multiple Perspectives and Thinking Attitudes

Goldman-Segall proposed a more dynamic conceptualization using the terms frames and attitudes. Her framing is rooted in several diverse but interwoven contexts: Marvin Minsky’s (1986) artificial intelligence, Howard Gardner’s (1983) theory of multiple intelligences, Erving Goffman’s (1986) everyday sociology, and Trinh T. Minh-ha’s (1992) cinematography. The important thing about frames—in contrast to the more essentialist notion of styles—is that they implicate both the framer and that which is left out of the frame:

I have become less comfortable with the notion of styles . . . The kinds of frames I now choose open the possibility for both those who are being portrayed and those who view them to become partners in framing [their stories]. (Goldman-Segall, 1998b, pp. 244–245)

Goldman-Segall’s notion of thinking attitudes (instead of thinking or learning styles) implies positionality and orientation; attitudes are situated in time and place:

I define attitudes, not as psychologists have used the word in any number of studies that start with the phrase, “children’s attitudes toward . . . ,” but as an indicator of a fluid state of mind. Attitude is a ballet pose in which the dancer, standing on one leg, places the other behind it, resting on the calf. Attitude, as a pose, leads to the next movement. (Goldman-Segall, 1998b, p. 245)

The idea that dynamic epistemological attitudes may run at odds with the gender breakdown of hard and soft mastery led Goldman-Segall (1996a, 1998a, 1998b) to suggest that genderflexing may occur: Boys may take on attitudes that are traditionally associated with those of girls, and vice versa. The underlying theme here is the primacy of situated points of viewing, rather than essential qualities. She sees learners as ethnographers, observing and engaging with the cultural environments in which they participate. Cognitive attitudes, being dynamic, are transitional personae, taken on to make a moment of transition from one conceptual framing to the next as learners layer their points of viewing. Video excerpts are available on the Web at http://www.pointsofviewing.com.

The focus had clearly changed from understanding the mind of a child to understanding the situated minds of collaborative teams. Simultaneously, learning moved from learning modules, to open-ended constructionism, to problem-based learning (PBL) environments and rich-media cases of teaching practices.

Distributed Cognition and Situated Learning

Our memories are in families and libraries as well as inside our skins; our perceptions are extended and fragmented by technologies of every sort. (Brown et al., 1996, p. 19)

The 1989 article by John Seely Brown, Allan Collins, and Paul Duguid (1996) titled “Situated Cognition and the Culture of Learning” is generally credited with introducing the concepts and vocabulary of situated cognition to the educational community. This influential article, drawing on research at Xerox PARC and at the Institute for Research on Learning (IRL), expressed the authors’ concern with the limits to which conceptual knowledge can be abstracted from the situations in which it is learned and used (p. 19), as is common practice in classrooms. Building on the experiential emphasis of pragmatist thinkers like Dewey and on the social contexts of learning of Russian activity theorists like Vygotsky and Leontiev, Brown et al. proposed the notion of cognitive apprenticeship. In a cognitive apprenticeship model, knowledge and learning are seen as situated in practice: “Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated” (p. 20). This idea is carried forward to an examination of tools and the way in which they are learned and used:

Learning how to use a tool involves far more than can be accounted for in any set of explicit rules. The occasions and conditions for use arise directly out of the context of activities of each community that uses the tool, framed by the way members of each community see the world. The community and its viewpoint, quite as much as the tool itself, determine how a tool is used. (p. 23)

The work that brings the situated perspective firmly home to the learning environment is Jean Lave and Etienne Wenger’s (1991) Situated Learning: Legitimate Peripheral Participation, which goes significantly beyond Brown’s cognitive apprenticeship model. Core to Lave and Wenger’s work is the idea of knowledge as distributed or stretched across a community of practice—what Salomon later called the radical situated perspective (Salomon, 1993):

In our view, learning is not merely situated in practice—as if it were some independently reifiable process that just happened to be located somewhere; learning is an integral part of generative social practice in the lived-in world. . . . Legitimate peripheral participation is proposed as a descriptor of engagement in social practice that entails learning as an integral constituent. (Lave & Wenger, 1991, p. 35)

This perspective flips the argument over: It is not that learning happens best when it is situated (as if there were learning settings that are not situated); rather, learning is an integral part of all situated practice. So, instead of asking—as Bransford and colleagues at Vanderbilt had—“How can we create authentic learning situations?” they ask “What is the nature of communities of practice?” and “How do newcomers and old-timers relate and interact within communities of practice?” Lave and Wenger answer these questions through elaborating the nature of communities of practice in what they term legitimate peripheral participation:

By this we mean to draw attention to the point that learners inevitably participate in communities of practitioners and that mastery of knowledge and skill requires newcomers to move toward full participation in the sociocultural practices of a community. (p. 29)

Lave and Wenger (1991) also elaborated on the involvement of cultural artifacts and technologies within communities of practice. As knowledge is stretched over a community of practice, it is also embodied in the material culture of that community, both in the mechanisms of practice and in the shared history of the community:

Participation involving technology is especially significant because the artifacts used within a cultural practice carry a substantial portion of that practice’s heritage. . . . Thus, understanding the technology of practice is more than learning to use tools; it is a way to connect with the history of the practice and to participate more directly in cultural life. (p. 101)

Artifacts and technology are not just instrumental in embodying practice; they also help constitute the structure of the community. As Goldman-Segall (1998b) reminded us, “They are not just tools used by our culture; they are tools used for making culture. They are partners that have their own contribution to make with regard to how we build a cultural understanding of the world around us” (pp. 268–269). Situated cognition, then, becomes perspectival knowledge, and the tools and artifacts we create become perspectivity technologies: viewpoints, frames, lenses, and filters—reflections of selves with others.

Conclusion

In this research paper the Points of Viewing theory was applied to an already rich understanding of the use of computers, the Internet, and new media technologies. We have called this new approach to designing learning technology environments for engaging in perspectival knowledge construction perspectivity technologies. We provided an in-depth analysis of the historical and epistemological development of computer technologies for learning over the past century. Yet we realized that the range of possible contributors was so broad that we had to focus only on those theories and tools directly connected with the notion of perspectival knowledge construction and perspectivity technologies. We regret that we did not find the opportunity to include the work of all researchers in this field.

Perspectivity technologies represent the next phase of thinking with our technology partners. Not only will we build them, shape them, and use them; they will also affect, influence, and shape us. They will become, if some researchers have their way, part of our bodies, not only augmenting our relationships but also becoming members in their own right. As robotic objects become robotic subjects, we will have to consider how Steven Spielberg’s robot boy in the movie A.I. felt when interacting with humans—and we hope that we will be kinder to ourselves and to our robots.

A perspectivity technology is not only a technology that enables us to see each other’s viewpoints better and make decisions based on multiple points of viewing. It is also concerned with the creation and design of technologies that add perspectives. Technologies have built-in filters. Recording an event with pen and paper, an audiotape recorder, and a camcorder each provides a different perspective on the same event. The technology provides an important filter or lens—a viewpoint, one could say. And although that viewpoint is deeply influenced by who the filmmaker or reporter is, the technology also contributes a new perspective. A camera tells a story different from that of the audio or text tool. Designing perspectivity technologies for learning will enable multiple filters to be applied, easily understood, and felt. Learners will be able to observe the many layers that create the curricular story. Moreover, they will be able to use new media as communication devices. They will have the capability to shape the story being told. Beyond the “medium is the message” theme of theorists Marshall McLuhan and Harold Innis, we are now deeply entrenched in a participatory relationship with content knowledge because technologies have become part of our perspective, our consciousness, and our way of life. The level of interaction with our virtual creatures (technologies) transforms our relationships. We are never completely alone. We are connected through media devices even if we cannot see them. They see us. For better and for worse.

Yet what has changed in learning? It seems that we have moved a long way from believing that learning is putting certain curriculum inside students’ heads and then testing them for how well they have learned that material. Yet instructionism is still alive and well. From kindergarten to higher education, students are still being trained to pass tests that will provide them with entrance into higher education. In spite of learning theories moving from behaviorism to cognitivism to distributed and situated cognition, educators are caught in the quagmire of preparing students for their future education instead of trying to make the present educational, engaging, challenging, and fun. Teachers are caught in a web of uncertainty as they scramble to learn the new tools of the trade (the Internet, distance learning environments, etc.), to learn the content that they must teach, and then to organize the learning into modules that will fit into the next set of learning modules.

The irony is that when we think of who our best teachers were, they invariably were those who were able to elicit something within us and help us connect our lives to the lives of others—the lives of poets, mathematicians, physicists, and the fisher down at the docks. These teachers created a sense of community in the classroom. We became part of a discovery process that had no end. It was not knowledge that was already known that we craved. It was putting ideas together that had not yet been put together—at least in our own minds. We felt we invented something new. And, indeed, we and others within these learning environments did invent new ideas. Yet people say that this cannot happen for most students in most classes and that the best we can do is to teach the curriculum, provide a safe learning environment, and test people for what we wanted them to learn. This is not good enough. And if students do not become partners in their learning now, technologies will create islands of despair as more and more students stop learning how to be creative citizens interested in each other, in differences, and in understanding complexity.

Technologies have become many things for many people. But technologies that are designed for the creative sharing of perspectives and viewpoints will lead to building better communities of practice in our schools and in our societies. Since the tragedy of September 11, 2001, we have come to realize that the world is not what we thought it was. We know so little about each other. We know so little about the world. Our educational lenses have focused too long on curricular goals that were blinders to what was happening around us. We thought we did not need multiple perspectives—that one view of knowledge was enough. Yet what we know and what we make is always a reflection of our beliefs and assumptions about the world. And we need to build new bridges now.

Perspectival knowledge—the ability to stand up and view unknown territory—enables students, educators, and the public at large to take a second and third look at the many lenses that make up the human experience. The purpose is not always to like what we see, but to learn how to put different worldviews into a new configuration and uncover paths that we might not yet see. We might, if we are brave enough, respect students not for what has been taught to them after they have taken prescribed courses and completed assignments, but respect them as they walk through the door—or through the online portal as they enter the learning habitat—on the first day of class.

Bibliography:

  1. Alpert, D., & Bitzer, D. L. (1970). Advances in computer-based education. Science, 167, 1582–1590.
  2. Ambron, S., & Hooper, K. (1990). Learning with interactive multimedia. Redmond, WA: Microsoft Press.
  3. Apple Multimedia Lab. (1989). The visual almanac: An interactive multimedia kit.
  4. Bates, A. W. (1988). Technology for distance education: A 10-year prospective. Open Learning, 3(3), 3–12.
  5. Bates, A. W. (1995). Technology: Open learning and distance education. London: Routledge.
  6. Beers, M. (2001). Subjects-in-interaction version 3.0: An intellectual system for modern language student teachers to appropriate multiliteracies as designers and interpreters of digital media texts. Unpublished doctoral dissertation, University of British Columbia, Vancouver, British Columbia, Canada.
  7. Beers, M., & Goldman-Segall, R. (2001). New roles for student teachers becoming experts: Creating, viewing, and critiquing digital video texts. Paper presented at the American Educational Research Association Annual Meeting, Seattle, WA.
  8. Bootstrap Institute. (1994). Biographical sketch: Douglas C. Engelbart. Retrieved from https://web.stanford.edu/dept/SUL/library/extra4/sloan/mousesite/dce-bio.htm
  9. Brand, S. (1987). The Media Lab: Inventing the future at MIT. New York: Viking.
  10. Bromley, H. (1998). Introduction: Data-driven democracy? Social assessment of educational computing. In H. Bromley & M. W. Apple (Eds.), Education, technology, power: Educational computing as a social practice (pp. 1–27). Albany: State University of New York.
  11. Brown, J. S., & Burton, R. R. (1978). A paradigmatic example of an artificially intelligent instructional system. International Journal of Man-Machine Studies, 10(3), 323–339.
  12. Brown, J. S., Collins, A., & Duguid, P. (1996). Situated cognition and the culture of learning. In H. McLellan (Ed.), Situated learning perspectives (pp. 32–42). Englewood Cliffs, NJ: Educational Technology. (Original work published 1989)
  13. Bruckman, A. S. (1997). Moose Crossing: Construction, community, and learning in a networked virtual world for kids. Unpublished doctoral dissertation, MIT, Cambridge, MA. Retrieved from https://dspace.mit.edu/handle/1721.1/33821
  14. Bruckman, A. S. (1998). Community support for constructionist learning. CSCW, 7, 47–86. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.1605&rep=rep1&type=pdf
  15. Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
  16. Bryson, M., & deCastell, S. (1998). Telling tales out of school: Modernist, critical, and “true stories” about educational computing. In H. Bromley & M. W. Apple (Eds.), Education, technology, power: Educational computing as a social practice (pp. 65–84). Albany: State University of New York.
  17. Cognition and Technology Group at Vanderbilt. (1997). The Jasper project: Lessons in curriculum, instruction, assessment and professional development. Mahwah, NJ: Erlbaum.
  18. Cole, M. (1996). Cultural psychology: A once and future discipline. Cambridge, MA: Harvard University Press.
  19. Cole, M., & Engeström, Y. (1993). A cultural-historical approach to distributed cognition. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations. Cambridge, UK: Cambridge University Press.
  20. Cole, M., & Wertsch, J. V. (1996). Beyond the individual-social antinomy in discussions of Piaget and Vygotsky. Human Development, 39(5), 250–256.
  21. Davy, J. (1985). Mindstorms in the lamplight. In D. Sloan (Ed.), The computer in education: A critical perspective (pp. 11–20). New York: Teachers College Press.
  22. deCastell, S., Bryson, M., & Jenson, J. (2000). Object lessons: Critical visions of educational technology. Paper presented at American Educational Research Association Annual Meeting, New Orleans, LA.
  23. Dede, C. (1994). The evolution of constructivist learning environments: Immersion in distributed, virtual worlds. Educational Technology, 35(5), 46–52.
  24. Dewey, J. (1961). Democracy and education: An introduction to the philosophy of education. New York: Macmillan. (Original work published 1916)
  25. diSessa, A. A. (1988). Knowledge in pieces. In G. Forman & P. B. Pufall (Eds.), Constructivism in the computer age (pp. 49–70). Hillsdale, NJ: Erlbaum.
  26. diSessa, A. A. (2000). Changing minds: Computers, learning, and literacy. Cambridge, MA: MIT Press.
  27. Duffy, T. M., & Jonassen, D. (1992). Constructivism and the technology of instruction: A conversation. Hillsdale, NJ: LEA.
  28. Edelson, D., Pea, R., & Gomez, L. (1996). Constructivism in the collaboratory. In B. G. Wilson (Ed.), Constructivist learning environments: Case studies in instructional design (pp. 151–164). Englewood Cliffs, NJ: Educational Technology.
  29. Erikson, E. (1950). Childhood and society. New York: Norton.
  30. Feenberg, A. (1989). The written world: On the theory and practice of computer conferencing. In R. Mason & A. Kaye (Eds.), Mindweave: Communication, computers and distance education (pp. 22–39). Oxford: Pergamon Press.
  31. Feldman, A., Konold, C., & Coulter, B. (2000). Network science, a decade later: The Internet and classroom learning. Mahwah, NJ: Erlbaum.
  32. Fox Keller, E. (1983). A feeling for the organism: The life and work of Barbara McClintock. San Francisco: W. H. Freeman.
  33. Freud, S. (1975). The standard edition of the complete psychological works of Sigmund Freud. New York: W. W. Norton.
  34. Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
  35. Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution. New York: Basic Books.
  36. Geertz, C. (1973). The interpretation of cultures. New York: Basic Books.
  37. Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge, MA: Harvard University Press.
  38. Gilster, P. (2000). Digital literacy. In The Jossey-Bass reader on technology and learning. San Francisco: Jossey-Bass.
  39. Goffman, E. (1986). Frame analysis: An essay on the organization of experience. Boston: Northeastern University Press.
  40. Goldman-Segall, R. (1989). Thick description: A tool for designing ethnographic interactive videodisks. SIGCHI Bulletin, 21(2), 118–122.
  41. Goldman-Segall, R. (1990). Learning Constellations: A multimedia ethnographic research environment using video technology to explore children’s thinking. Unpublished doctoral dissertation, MIT, Cambridge, MA.
  42. Goldman-Segall, R. (1991). Three children, three styles: A call for opening the curriculum. In I. Harel & S. Papert (Eds.), Constructionism (pp. 235–268). Cambridge, MA: MIT Press.
  43. Goldman-Segall, R. (1993). Interpreting video data: Introducing a “significance measure” to layer descriptions. Journal for Educational Multimedia and Hypermedia, 2(3), 261–282.
  44. Goldman-Segall, R. (1994). Virtual Clayoquot: The Bayside Middle School implements a multimedia study of a Canadian rain forest. Proceedings of Ed-Media ’94, World Conference on Educational Multimedia and Hypermedia, AACE, 603–609.
  45. Goldman-Segall, R. (1995). Configurational validity: A proposal for analyzing ethnographic multimedia narratives. Journal of Educational Multimedia and Hypermedia, 4(2/3), 163–182.
  46. Goldman-Segall, R. (1996a). Genderflexing: A theory of gender and socio-scientific thinking. Proceedings of the International Conference on the Learning Sciences, Chicago, IL.
  47. Goldman-Segall, R. (1996b). Looking through layers: Reflecting upon digital ethnography. JCT: An Interdisciplinary Journal for Curriculum Studies, 13(1), 23–29.
  48. Goldman-Segall, R. (1998a). Gender and digital media in the context of a middle school science project. MERIDIAN, A Middle School Gender and Technology Electronic Journal 1(1).
  49. Goldman-Segall, R. (1998b). Points of viewing children’s thinking: A digital ethnographer’s journey. Mahwah, NJ: Erlbaum. Accompanying video cases retrieved from http://www.pointsofviewing.com
  50. Goldman-Segall, R. (1999). Using video to support professional development and improve practice. White Paper presented to the Board on International Comparative Studies in Education (BICSE) Invitational Consortium on Uses of Video in International Studies, National Academy of Education, Washington, DC.
  51. Goldman-Segall, R. (2000). Video cases: Designing Constellations, a perspectivity digital video data analysis tool. Paper presented at CILT 2000.
  52. Goldman-Segall, R., & Rao, C. (1998). WebConstellations: A collaborative online digital data tool for creating living narratives in organizational knowledge systems. Proceedings for the 31st Hawaii International Conference for Systems Sciences, IEEE, 194–200.
  53. Granott, N. (1991). Puzzled minds and weird creatures: The spontaneous process of knowledge construction. In I. Harel & S. Papert (Eds.), Constructionism (pp. 295–310). Cambridge, MA: MIT Press.
  54. Graves, W. H. (1999). The instructional management systems cooperative: Converting random acts of progress into global progress. Educom Review, 34(6).
  55. Greenfield, P. M. (1984). A theory of the teacher in the learning activities of everyday life. In B. Rogoff & J. Lave (Eds.), Everyday cognition: Its development in social context (pp. 117–138). Cambridge, MA: Harvard University Press.
  56. Greeno, J. G. (1997). On claims that answer the wrong questions. Educational Researcher, 26(1), 5–17.
  57. Gruber, H. E., & Voneche, J. J. (Eds.). (1977). The essential Piaget. New York: Basic Books.
  58. Guzdial, M. (1997). Information ecology of collaborations in educational settings: Influence of tool. Paper presented at the Computer-Supported Collaborative Learning conference. Retrieved from https://www.researchgate.net/publication/221034134_Information_ecology_of_collaborations_in_educational_settings_influence_of_tool
  59. Guzdial, M. (1999). Teacher and student authoring on the web for shifting agency. Paper presented at the American Educational Research Association Annual Meeting, Montreal, Canada.
  60. Harasim, L. M. (1990). Online education: Perspectives on a new environment. New York: Praeger.
  61. Harasim, L. M. (1993). Networlds: Networks as social space. In L. M. Harasim (Ed.), Global networks: Computers and international communication (pp. 15–36). Cambridge, MA: MIT Press.
  62. Harasim, L. M., Calvert, T., & Groeneboer, C. (1996). Virtual-U: A web-based environment customized to support collaborative learning and knowledge building in postsecondary courses. Paper presented at the International Conference of the Learning Sciences, Northwestern University, Evanston, IL.
  63. Harel, I. (1991). Children designers: Interdisciplinary constructions for learning and knowing mathematics in a computer-rich school. Westport, CT: Ablex.
  64. Harel, I., & Papert, S. (Eds.). (1991). Constructionism. Norwood, NJ: Ablex.
  65. Harrison, B., & Baecker, R. (1992). Designing video annotation and analysis systems. Paper presented at the Proceedings of Computer Human Interface (CHI) 1992, Monterey, CA.
  66. Harvey, B. (1997). Computer science Logo style (2nd ed.). Cambridge, MA: MIT Press.
  67. Haynes, C., & Holmevik, J. R. (Eds.). (1998). High-wired: On the design, use, and theory of educational MOOs. Ann Arbor: University of Michigan Press.
  68. Hirsch, E. D., Jr. (1987). Cultural literacy. Boston: Houghton Mifflin.
  69. Hiltz, S. R. (1994). The virtual classroom: Learning without limits via computer networks. Norwood, NJ: Ablex.
  70. Hiltz, S. R., & Turoff, M. (1993). The network nation: Human communication via computer (Rev. ed.). Cambridge, MA: MIT Press. (Original work published 1978)
  71. Illich, I. (1972). Deschooling society. New York: Harrow Books.
  72. Illich, I. (1973). Tools for conviviality. New York: Marion Boyars.
  73. Illich, I., & Sanders, B. (1989). ABC: Alphabetization of the popular mind. Vintage Books.
  74. Jonassen, D. (1996). Computers in the classroom: Mindtools for critical thinking. Englewood Cliffs, NJ: Merrill.
  75. Kafai, Y. (1993). Minds in play: Computer game design as a context for children’s learning. Unpublished doctoral dissertation, Harvard Graduate School of Education, Cambridge, MA.
  76. Kafai, Y. (1996). Software by kids for kids. Communications of the ACM, 39(4), 38–39.
  77. Kaput, J., Roschelle, J., & Stroup, W. (1998). SimCalc: Accelerating students’ engagement with the mathematics of change. In M. Jacobson & R. Kozma (Eds.), Educational technology and mathematics and science for the 21st century (pp. 47–75). Hillsdale, NJ: Erlbaum.
  78. Katz, S., & Lesgold, A. (1993). The role of the tutor in computer-based collaborative learning situations. In S. P. Lajoie & S. J. Derry (Eds.), Computers as cognitive tools (pp. 289–317). Hillsdale, NJ: Erlbaum.
  79. Kay, A. C. (1996). The early history of SmallTalk. In J. Thomas, J. Bergin, J. Richard, & G. Gibson (Eds.), History of programming languages—II (pp. 511–578). New York: ACM Press.
  80. Kennedy, S. (1989). Using video in the BNR utility lab. SIGCHI Bulletin, 21(2), 92–95.
  81. Koschmann, T. (1996). Paradigm shifts and instructional technology: An introduction. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 1–23). Mahwah, NJ: Erlbaum.
  82. Kroeber, A. L. (1948). Anthropology: Race, language, culture, psychology, prehistory. New York: Harcourt, Brace & World.
  83. Kuhn, T. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University of Chicago Press.
  84. Lajoie, S. P., & Derry, S. J. (1993). Computers as cognitive tools. Hillsdale, NJ: Erlbaum.
  85. Landow, G. P. (1992). Hypertext: The convergence of contemporary critical theory and technology. Baltimore: Johns Hopkins University Press.
  86. Landow, G. P., & Delany, P. (1993). The digital word: Text-based computing in the humanities. Cambridge, MA: MIT Press.
  87. Laurel, B., & Mountford, S. J. (Eds.). (1990). The art of human-computer interface design. Reading, MA: Addison-Wesley.
  88. Lave, J. (1988). Cognition in practice: Mind, mathematics, and culture in everyday life. Cambridge, UK: Cambridge University Press.
  89. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.
  90. Lehrer, R., & Schauble, L. (Eds.). (2001). Real data in the classroom: Expanding children’s understanding of mathematics and science. New York: Teachers College Press.
  91. Lemke, J. (1998). Multiplying meaning: Visual and verbal semiotics in scientific text. In J. R. Martin & R. Veel (Eds.), Reading science (pp. 87–113). London: Routledge.
  92. Levin, J., Riel, M., Miyake, N., & Cohen, E. (1987). Education on the electronic frontier. Contemporary Educational Psychology, 12, 254–260.
  93. Lévi-Strauss, C. (1968). The savage mind. Chicago: University of Chicago Press.
  94. Lifter, M., & Adams, M. (1999). Multimedia projects for Kid Pix. Bloomington, IL: FTC.
  95. Mackay, W. (1989). Eva: An experimental video annotator for symbolic analysis of video data. SIGCHI Bulletin, 21(2), 68–71.
  96. Martin, F., & Resnick, M. (1993). Lego/Logo and electronic bricks: Creating a scienceland for children. In D. L. Ferguson (Ed.), Advanced educational technologies for mathematics and science. Berlin: Springer-Verlag.
  97. Martin, L. M. W. (1987). Teachers’ adoption of multimedia technologies for science and mathematics instruction. In R. D. Pea & K. Sheingold (Eds.), Mirrors of minds: Patterns of experience in educational computing (pp. 35–56). Norwood, NJ: Ablex.
  98. Meares, C. A., & Sargent, J. F., Jr. (1999). The digital work force: Building infotech skills at the speed of innovation. Retrieved from https://www.webharvest.gov/peth04/20041015104825/http://www.technology.gov//reports/itsw/Digital.pdf
  99. Minsky, M. (1986). The society of mind. New York: Simon and Schuster.
  100. Noble, D. (1985). Computer literacy and ideology. In D. Sloan (Ed.), The computer in education: A critical perspective (pp. 64–76). New York: Teachers College Press.
  101. Noble, D. (1999). Digital diploma mills part IV: Rehearsal for the revolution. Retrieved from http://www.learning-org.com/99.11/0327.html
  102. Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.
  103. Papert, S. (1985). Information technology and education: Computer criticism vs technocentric thinking. Educational Researcher, 16(1), 22–30.
  104. Papert, S. (1991). Situating constructionism. In I. Harel & S. Papert (Eds.), Constructionism (pp. 1–12). Norwood, NJ: Ablex.
  105. Papert, S. (1992). The children’s machine. New York: Basic Books.
  106. Pea, R. D. (1987). The aims of software criticism: Reply to Professor Papert. Educational Researcher, 20(3), 4–8.
  107. Pea, R. D. (1993). Practices of distributed intelligence and designs for education. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 47–87). Cambridge, UK: Cambridge University Press.
  108. Pea, R. D., & Kurland, D. M. (1987). On the cognitive effects of learning computer programming. In R. Pea & K. Sheingold (Eds.), Mirrors of minds (pp. 147–177). Norwood, NJ: Ablex. (Original work published 1984)
  109. Pea, R. D., Kurland, D. M., & Hawkins, J. (1987). Logo and the development of thinking skills. In R. Pea & K. Sheingold (Eds.), Mirrors of minds (pp. 178–197). Norwood, NJ: Ablex.
  110. Pearson Education. (2000). Pearson Education History. Retrieved from https://www.pearson.com/us/higher-education/humanities—social-sciences/history/history.html
  111. Piaget, J. (1930). The child’s conception of the world. London: Harcourt, Brace, and World.
  112. Piaget, J. (1952). The child’s conception of number. London: Routledge & Kegan Paul.
  113. Piaget, J. (1969). The child’s conception of time. London: Routledge & Kegan Paul.
  114. Piaget, J., & Inhelder, B. (1956). The child’s conception of space. London: Routledge & Kegan Paul.
  115. Picard, R. (1997). Affective computing. Cambridge, MA: MIT Press.
  116. Resnick, M. (1991). Overcoming the centralized mindset: Towards an understanding of emergent phenomena. In I. Harel & S. Papert (Eds.), Constructionism (pp. 205–214). Norwood, NJ: Ablex.
  117. Resnick, M. (1994). Turtles, termites, traffic jams: Explorations in massively parallel microworlds. Cambridge, MA: MIT Press.
  118. Resnick, M., & Ocko, S. (1991). Lego/Logo: Learning through and about design. In I. Harel & S. Papert (Eds.), Constructionism (pp. 141–150). Norwood, NJ: Ablex.
  119. Resnick, M., & Wilensky, U. (1993). Beyond the deterministic, centralized mindsets: New thinking for new sciences. Paper presented at the American Educational Research Association Annual Meeting, Atlanta, GA.
  120. Resnick, M., & Wilensky, U. (1998). Diving into complexity: Developing probabilistic decentralized thinking through role-playing activities. Journal of Learning Sciences, 7(2).
  121. Riel, M. (1993). Global education through learning circles. In L. M. Harasim (Ed.), Global networks: Computers and international communication (pp. 221–236). Cambridge, MA: MIT Press.
  122. Riel, M. (1996). Cross-classroom collaboration: Communication and education. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 187–207). Mahwah, NJ: Erlbaum.
  123. Romiszowski, A. J., & de Haas, J. A. (1989). Computer mediated communication for instruction: Using e-mail as a seminar. Educational Technology, 29(10), 7–14.
  124. Roschelle, J., Kaput, J., Stroup, W., & Kahn, T. M. (1998). Scaleable integration of educational software: Exploring the promise of component architectures. Journal of Interactive Media in Education, 98(6). Retrieved from https://www.researchgate.net/publication/2573497_Scaleable_Integration_of_Educational_Software_Exploring_The_Promise_of_Component_Architectures
  125. Roschelle, J., Pea, R., & Trigg, R. (1990). Video Noter: A tool for exploratory video analysis. IRL Technical Report No. IRL 90-002, Menlo Park, CA.
  126. Sachter, J. E. (1990). Kids in space: Exploration into spatial cognition of children’s learning 3-D computer graphics. Unpublished doctoral dissertation, MIT, Cambridge, MA.
  127. Salomon, G. (1979). Interaction of media, cognition, and learning. San Francisco: Jossey-Bass.
  128. Salomon, G. (1993). No distribution without individuals’ cognition: A dynamic interactional view. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations. Cambridge, UK: Cambridge University Press.
  129. Salomon, G., & Gardner, H. (1986). The computer as educator: Lessons from television research. Educational Researcher, 15(1), 13–19.
  130. Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2–9.
  131. Scardamalia, M., & Bereiter, C. (1991). Higher levels of agency for children in knowledge building: A challenge for the design of new knowledge media. Journal of the Learning Sciences, 1(1), 37–68.
  132. Schank, R. C. (2000). Educational outrage: Are computers the bad guys in education? Retrieved from https://www.engines4ed.org/Education-Outrage-Archive-11.cfm
  133. Schlager, M., & Schank, P. (1997). Tapped In: A new on-line teacher community concept for the next generation of Internet technology. Paper presented at Computer-Supported Collaborative Learning 1997. Retrieved July 27, 2000, from http://www.tappedin.sri.com/info/papers/cscl97
  134. Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA: Harvard University Press.
  135. Simon, H. A. (1981). The sciences of the artificial. Cambridge, MA: MIT Press. (Original work published 1969)
  136. Soloway, E., Krajcik, J. S., Blumenfeld, P., & Marx, R. (1996). Technological support for teachers transitioning to project-based science projects. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 269–305). Mahwah, NJ: Erlbaum.
  137. Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. Educational Technology, 31(5), 24–33.
  138. Stahl, G. (1999). WebGuide: Guiding collaborative learning on the web with perspectives. Paper presented at the American Educational Research Association Annual Meeting, Montreal, Quebec, Canada. Retrieved from https://www.researchgate.net/publication/228983341_WebGuide_Guiding_Collaborative_Learning_on_the_Web_with_Perspectives
  139. Steinkuehler, C. A., Derry, S. J., Hmelo-Silver, C. E., & DelMarcelle, M. (in press). Cracking the resource nut with distributed problem-based learning in secondary teacher education. Journal of Distance Education. Retrieved from https://www.researchgate.net/publication/228697869_Cracking_the_Resource_Nut_With_Distributed_Problem-Based_Learning_in_Secondary_Teacher_Education
  140. Stone, A. R. (1995). The war between desire and technology at the end of the mechanical age. Cambridge, MA: MIT Press.
  141. Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge, UK: Cambridge University Press.
  142. Suppes, P. (1966). The uses of computers in education. Scientific American, 215(3), 206–220.
  143. Suppes, P., Jerman, M., & Brian, D. (1968). Computer-assisted instruction: Stanford’s 1965–66 arithmetic program. New York: Academic Press.
  144. Suppes, P., & Morningstar, M. (1972). Computer-assisted instruction at Stanford, 1966–68: Data, models, and evaluation of the arithmetic programs. New York: Academic Press.
  145. Swan, K. (1994). History, hypermedia, and criss-crossed conceptual landscapes. Journal of Educational Multimedia and Hypermedia, 3(2), 120–139.
  146. Tinker, R. F. (1996). Telecomputing as a progressive force in education. TERC Technical Report, TERC Publications, Cambridge, MA.
  147. Trinh, M. H. (1992). Framer-framed. New York: Routledge.
  148. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Retrieved June 19, 2002, from http://www.loebner.net/Prizef/TuringArticle.html
  149. Turkle, S. (1984). The second self: Computers and the human spirit. New York: Simon and Schuster.
  150. Turkle, S. (1991). Romantic reactions: Paradoxical responses to the computer presence. In J. J. Sheehan & M. Sosna (Eds.), The boundaries of humanity: Humans, machines, animals (pp. 224–252). Berkeley: University of California Press.
  151. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon and Schuster.
  152. Turkle, S., & Papert, S. (1991). Epistemological pluralism: Styles and voices within the computer culture. In I. Harel & S. Papert (Eds.), Constructionism (pp. 161–192). Cambridge, MA: MIT Press.
  153. Vygotsky, L. S. (1962). Thought and language (E. Hanfmann & G. Vakar, Trans.). Cambridge, MA: MIT Press.
  154. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
  155. Wenger, E. (1987). Artificial intelligence and tutoring systems: Computational and cognitive approaches to the communication of knowledge. Los Altos, CA: Morgan Kaufmann.
  156. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.
  157. Wilensky, U. (2000). Modeling emergent phenomena with StarLogo. Retrieved from https://ccl.northwestern.edu/papers/modelemerg/ModelingEmergentData.html
  158. Wilensky, U. (2001). Emergent entities and emergent processes: Constructing emergence through multiagent programming. Paper presented at the American Educational Research Association Annual Meeting, Seattle, WA.
  159. Wilensky, U., & Resnick, M. (1999). Thinking in levels: A dynamic systems perspective to making sense of the world. Journal of Science Education and Technology, 8(1), 3–18.
  160. Wilensky, U., & Stroup, W. (1999). Learning through participatory simulations: Network-based design for systems learning in classrooms. Proceedings of the Computer-Supported Collaborative Learning Conference, Stanford, CA. Retrieved from https://people.cs.vt.edu/~kafura/CS6604/Papers/Participatory-Simulations-Network-Based-Design.pdf
  161. Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Norwood, NJ: Ablex.
  162. Wolfson, L., & Willinsky, J. (1998). Situated learning of information technology management. Journal of Research on Computing in Education, 31(1), 96–110.
  163. Woolley, D. R. (1994). PLATO: The emergence of online community. Computer-Mediated Communication Magazine, 1(3). Retrieved from https://www.thinkofit.com/plato/dwplato.htm