Social Influence and Group Dynamics Research Paper


The belief that we are the masters of our own destiny surely ranks among the most fundamental of human conceits. This overarching self-perception is viewed by many scholars as a prerequisite to personal adjustment, enabling us to face uncertainty with conviction and challenges with perseverance (cf. Alloy & Abramson, 1979; Deci & Ryan, 1985; Kofta, Weary, & Sedek, 1998; Seligman, 1975; Taylor & Brown, 1988), and as equally central to the maintenance of social order because of its direct link to the attribution of personal responsibility (cf. Baumeister, Stillwell, & Heatherton, 1994; Shaver, 1985). Its adaptive significance notwithstanding, the sense that one’s actions are autonomous, self-generated, and largely impervious to external forces is routinely exaggerated in daily life (e.g., Langer, 1978; Taylor & Brown, 1988), and ultimately can be dismissed as philosophically untenable to the extent that it reflects naive assumptions about personal freedom (cf. Bargh & Chartrand, 1999; Skinner, 1971). Social psychologists know better, and in their pursuit of the true causal underpinnings of behavior, they have routinely placed the individual at the intersection of various and sundry social forces. In this view, people represent interdependent elements that together comprise larger social entities, be they familial, romantic, or societal in nature. Against this backdrop, people continually influence and in turn are influenced by one another in myriad ways. Social influence is the currency of human interaction, and although its operation may be subtle and sometimes transparent to the individuals involved, its effects are pervasive.

In recognition of the primacy of influence in the social landscape, G. W. Allport (1968) defined the field of social psychology as “an attempt to understand . . . how the thought, feeling, and behavior of the individual are influenced by the actual, imagined, or implied presence of others.” No other topic in social psychology can lay claim to such centrality. After all, no one has defined social psychology as the study of impression formation or self-concept, nor have researchers investigating such topics done so without assigning a prominent role to social influence processes. The belief in self-determination may well be important for personal and societal function, but the reality of social influence is equally significant—and for many of the same reasons. Our aim in this research paper is to outline the fundamental features of social influence and to illustrate the manifestations of influence in different contexts. In so doing, we emphasize the various functions served by social influence, both for the individual and for society.

Introduction

Because social influence is deeply embedded in every aspect of interpersonal functioning, any attempt to discuss it apart from all the topics and research traditions defining social psychology is necessarily incomplete and potentially misleading. How can one divorce a depiction of basic influence processes from such phenomena as attitude change, self-concept malleability, or the development of close relationships? As it happens, of course, any field of scientific inquiry is differentiated into relatively self-contained regions, and social psychology is no exception. Although it can be argued that one person’s practical differentiation is another person’s unnecessary fragmentation (see, e.g., Gergen, 1985; Vallacher & Nowak, 1994), it is nonetheless the case that distinct theoretical and research traditions have emerged over the years to create a workable taxonomy of social psychological phenomena. Despite the pervasive nature of social influence, then, it is commonly treated as a separate topic in textbooks and secondary source summaries of relevant theory and research. To an extent, our treatment of social influence works within the accepted boundary conditions. Thus, we discuss such agreed-upon subtopics as compliance, conformity, and obedience to authority. At the same time, however, we attempt to impose a semblance of theoretical order on the broad assortment of relevant processes. So although each manifestation of influence—whether in advertising, the military, or intimate relationships—taps correspondingly distinct psychological mechanisms, there are certain invariant features that transcend the surface structure of social influence phenomena.

We begin by discussing the exercise of external control to influence people’s thoughts and behaviors. Rewards and punishments have self-evident efficacy in controlling behavior across the animal kingdom, so their incorporation into influence techniques in human affairs is hardly surprising. We then turn our attention to less blatant strategies of influence that typically fare better in inducing sustained changes in people’s thought and behavior. It is noteworthy in this regard that the lion’s share of the literature subsumed under the social influence label emphasizes subtle manipulation rather than direct attempts at control. We provide an overview of the principal manipulation techniques and abstract from them common features that are responsible for their relative success. This theme provides the foundation for an even less blatant approach to influence, one centering on the coordination of people’s internal states and overt behaviors. People have a natural tendency to bring their beliefs, preferences, and actions in line with those of the people around them, and this tendency becomes manifest in the absence of overt or subtle manipulation strategies. This penchant for interpersonal synchronization is what enables a mere collection of individuals to become a functional unit defining a higher level of social reality.

We then turn our attention to the manifestation of social influence at the level of society. A central theme here is that the emergence and maintenance of macrolevel properties in a social system can be understood in terms of the microlevel influence processes described in the preceding sections. We describe the results of computer simulations demonstrating this linkage between different levels of social reality. In a concluding section, we abstract what appear to be the common features of influence across different topics and relate them to fundamental psychological processes, chief among them the coordination of individual elements to create a coherent higher-order unit. Our suggestions in this regard are as much heuristic as integrative, and we offer suggestions for future lines of theoretical work to forward this agenda.

External Control

The most elemental way to influence someone’s behavior is to make rewards and punishments contingent on the enactment of the behavior. For the better part of the twentieth century, experimental psychology was essentially defined in terms of this perspective, and during this era a wide variety of reinforcement principles were generated and validated. Attempts to extend these principles to social psychology were always complicated by the undeniable cognitive capacities of humans and the role of such capacities in regulating behavior (cf. Bandura, 1986; Zajonc, 1980). Nonetheless, several lines of research based on behaviorist assumptions are represented in social psychology (e.g., Byrne, 1971; Staats, 1975). With respect to social influence, this perspective suggests simply that people are motivated to do things that are associated with the attainment of pleasant consequences or the avoidance of unpleasant consequences. Thus, people adopt new attitudes, develop preferences for one another, change the frequency of certain behaviors, or take on new activities because they in effect have been trained to do so. It is fair to say that this perspective never achieved mainstream status in social psychology, but one might think that social influence would be an exception. Reinforcement, after all, is defined in terms of the control of behavior, and to the extent that a self-interest premise underlies virtually all social psychological theories (cf. Miller, 1999), it is hard to imagine how the promise of reward or threat of punishment could fail to influence people’s thoughts, feelings, and actions.

Bases of Social Power

The ability to control someone’s behavior, whether by carrot or stick, is synonymous with having power over that person. Presumably, then, successful influence agents are those who are seen—by the target at least—as possessing social power. In contemporary society, power reflects more than physical strength, immense wealth, or the capacity and readiness to harm others—although having such attributes certainly wouldn’t hurt under some circumstances. Social power instead derives from a variety of different sources, each providing a correspondingly distinct form of behavior control. The work of French and Raven (1959; Raven, 1992, 1993) is commonly considered the definitive statement on the various bases of social power and their respective manifestations in everyday life. They identify six such bases: reward, coercion, expertise, information, referent power, and legitimate authority.

Reward power derives, as the term implies, from the ability to provide desired outcomes to someone. The rewards may be tangible and material (e.g., money, a nice gift), but often they are more subtle and nonmaterial in nature (e.g., approval, affection). The compliance-for-reward exchange may be direct and explicit, of course, as when a parent offers an economic incentive to a child for doing his or her homework. But the transaction is often tacit or implicit in the relationship rather than directly stated. The salesperson who pushes used cars with special zeal, for example, may do so because he or she knows the company gives raises to those who meet a certain sales quota. Coercive power derives from the ability to provide aversive or otherwise undesired outcomes to someone. As with rewards, coercion can revolve around tangible and concrete outcomes, such as the use or threat of physical force, or instead involve outcomes that are nonmaterial and acquire their valence by virtue of less tangible features. The parent concerned with a child’s study habits might express disapproval for the child’s shortcomings in this regard, for example, and the salesperson might redouble his or her efforts at moving stock for fear of losing his or her job.

Expert power is accorded those who are perceived to have superior knowledge or skills relevant to the target’s goals. Deference to such individuals is common when the target lacks direct personal knowledge regarding a topic or course of action. In the physician-patient relationship, for example, the patient typically complies with the physician’s instructions to take a certain medicine, even when the patient has no idea how the purported remedy will cure him or her. Knowledge, in other words, is power. Information power is related to expert power, except that it relates to the specific information conveyed by the source, not to the source’s expertise per se. A person could stumble on a piece of useful gossip, for example, and despite his or her general ignorance in virtually every aspect of his or her life, this person might wield considerable power for a time over those who would benefit from this information. Knowledge is power, it seems, even in the hands of someone who doesn’t know what he or she is talking about.

Referent power derives from people’s tendency to identify with someone they respect or otherwise admire. “Be like Mike” and “I am Tiger Woods,” for example, are successful advertising slogans that play on consumers’ desire to be similar to a cultural icon. The hoped-for similarity in such cases, of course, is stunningly superficial—all the overpriced shoes in the world won’t enable a teenager to defy gravity while putting a basketball through a net or drive a small white ball 300 yards to the green in one stroke. Referent power is rarely asserted in the form of a direct request, operating instead through the pull of a desirable person, and can be manifest without the physical presence or surveillance of the influence agent. A young boy might shadow his older brother’s every move, for example, even if the brother hardly notices, and an aspiring writer might emulate Hemingway’s sparse writing style even though it is fair to say this earnest adulation is totally lost on Hemingway.

Legitimate power derives from societal norms that accord behavior control to individuals occupying certain roles. The flight attendant who instructs 300 passengers to put their tables in an upright position does not have a great deal of reward or coercive power, nor is he or she seen as necessarily possessing deep expertise pertaining to the request, and it is even more unlikely that he or she is the subject of identification fantasies for most of the passengers. Yet this person wields enormous influence over the passengers because of the legitimate authority he or she is accorded during the flight. Legitimate power is often quite limited in scope. A professor, for example, has the legitimate authority to schedule exams but not to tell students how to conduct their personal lives—unless, of course, he or she also has referent power for them. Legitimate power is clearly essential to societal coordination—imagine how traffic at a four-way intersection would fare if the signal lights failed and the police on the scene had to rely on gifts or their personal charisma to gain the cooperation of each driver. But blind obedience to those in positions of legitimate authority also has enormous potential for unleashing the worst in people, sometimes to the detriment of themselves or others. In recognition of this potential, social psychologists have devoted considerable attention to the nature of legitimate power, with special emphasis on obedience to authority. Not wanting to question this scholarly norm, we highlight this topic in the following section.

Obedience to Authority

Guards herding millions of innocent people into gas chambers, soldiers mowing down dozens of farmers and villagers with machine guns, and hundreds of cult members waiting in line for lethal Kool-Aid that will kill both them and their children: These images may be unthinkable, but they are part of the legacy of the twentieth century. Nestled in the security of our homes, we are nonetheless affected by such undeniable examples of mass abdications of personal responsibility and decision making; they can keep us up nights, not to mention undermine our sense of control. Although recent times have no monopoly on genocide, the abominations of World War II intensified the drive to plumb the depths of social influence, especially influence over the many by the few in the name of legitimate authority.

The best-known and most provocative line of research on this topic is that of Stanley Milgram (1965, 1974), who conducted a set of controversial laboratory experiments in the early 1960s. Milgram wanted to document the extent to which ordinary people will take orders from a legitimate authority figure when compliance with the orders entails another person’s suffering. The idea was to replicate in a relatively benign setting the dynamics at work during wartime, when soldiers are given orders to kill enemy soldiers and citizens. In his experimental situation, ostensibly concerned with the psychology of learning, participant “teachers” were asked to deliver electric shocks to “learners” (who were actually accomplices of Milgram) if the learners produced an incorrect response to an item on a simple learning task. In the initial study, Milgram (1965) found that 65% of the subjects cast in the teacher role obeyed the experimenter’s demand to proceed, ultimately administering 450 volts of electricity to a learner (a mild-mannered, middle-aged man with a self-described heart condition) in an adjoining room, despite hearing the learner’s protests, screams, and pleas to stop emanating from the other room. Milgram subsequently performed several variations on this procedure, each designed to identify the factors responsible for the striking level of obedience initially observed. In one of the most intriguing variations, subjects were cast in the learner role as well as the teacher role, and the experimenter eventually told the teacher to cease administering shocks. Remarkably, some learners in this situation insisted that the teacher continue “teaching” them for the good of the experiment. Because the learner did not have the same degree of legitimacy as the experimenter did, however, none of the teachers acceded to the learner’s demand to continue shocking them.

Milgram’s findings proved unsettling to scholars and laypeople alike. With the horrors of World War II still fairly fresh in people’s memories, Milgram’s research suggested that Hitler’s final solution was not only fathomable, but perhaps also likely to occur again under the right circumstances. After all, these findings were produced by people from a nation of self-professed mavericks whose ancestors had risen up against the motherland’s authority less than two centuries earlier. Subsequent research employing Milgram’s basic paradigm has demonstrated comparable levels of obedience in many other countries, including Australia, Germany, Spain, and Jordan (Kilham & Mann, 1974; Meeus & Raaijmakers, 1986). The tendency to defer to legitimate authority, even when the demands of authority run counter to one’s personal beliefs and inhibitions, appears to be robust, representing perhaps an integral part of human nature.

The power of authority can derive from purely symbolic manifestations, such as titles or clothing, even when the ostensible authority has no credible claim to his or her role as a legitimate authority figure. A man wearing a security guard’s uniform, for example, can secure compliance with a request to pick up litter, even when the requests are made in a context outside the guard’s purview (Bickman, 1974). Even fictional symbols of authority can produce compliance. Television advertising trades on this tendency with astonishing commercial success. For example, the actor Robert Young, who played the part of Dr. Marcus Welby in a popular TV doctor series in the 1960s, wore a white lab coat in a commercial for Sanka (a brand of decaffeinated coffee). He was not an expert on coffee and certainly not a real doctor, yet the symbols of his authority (the white lab coat, the association with Dr. Welby) were sufficient to dramatically increase the sales of Sanka. Even when an actor states at the outset of a commercial pitch that “I am not a doctor, but I play one on TV,” his recommendations regarding cold remedies are followed by a significant portion of the viewing audience. This deference to titles and uniforms can have devastating effects. A study performed in a medical context, for example, found that 95% of nurses who received a phone call from a “doctor” agreed to administer a dangerous level of a drug to a patient (Hofling, Brotzman, Dalrymple, Graves, & Pierce, 1966).

Although pressures to obey authority are compelling, obedience is not inevitable. Research has shown, for example, that obedience to authority is tempered when the victim’s suffering is highly salient and when the authority figure is made to feel personally responsible for his or her actions (Taylor, Peplau, & Sears, 1997). Resistance to authority is enhanced, moreover, when the resister receives social support and in situations in which he or she is encouraged to question the motives, expertise, or judgments of the authority figure (Taylor et al., 1997). It should be reiterated, however, that legitimate authority serves important social functions and should not be viewed with a jaundiced eye only as a necessary evil in the human condition. Police officers, judges, elected representatives, and school crossing guards could not perform their duties if their power were not based on an aura of legitimacy. And as much as teachers like to be liked and to be seen as experts, their power over students in the classroom hinges to a large extent on students’ perceiving them as legitimate authority figures. Even parents, who wield virtually every other kind of power (reward, coercion, expertise, information) over their children, must occasionally remind their offspring who is ultimately in charge in order to exact compliance from them. Obedience to authority, in sum, is pervasive in informal and formal social relations, and is neither intrinsically good nor intrinsically bad. Like many features of the human condition, its potential for good or evil is dependent on the restraint and judgment of those who exercise it.

Limitations of External Control

If the exercise of power always had its intended effect, both scholarly and lay interest in social influence would be minimal. Why bother obsessing over something as obvious as the tendency of people to defer to people in a position to offer rewards or threaten punishment? Is detailed experimentation really necessary to figure out why we listen to experts or model the behavior and attitudes of people we admire? And what could be more obvious than the observation that we typically comply with the demands and requests of those who are perceived as entitled to influence us in this way? Fortunately for social psychologists—and perhaps for intellectually curious laypeople as well—the story of social influence does not end with such self-evident conclusions, but rather unfolds with a far more interesting plotline. There is reason to think, in fact, that the general approach to influence outlined previously is among the least effective ways of implementing true change in people’s thoughts and feelings relevant to the behavior in question. Indeed, a fair portion of theoretical and research attention over the last 40 years has focused on the tendency for heavy-handed efforts at influence to boomerang, promoting effects opposite to those intended. This is especially the case for attempted influence that trades on reward and coercive power, although the assumptions underlying this line of theory and research would seem to hold true for legitimate power as well.

Psychological Reactance

To a certain extent, the failure of power-based approaches to induce change in people’s action preferences can be traced to the fundamental human conceit noted at the outset. People want to feel like they are the directors of their own fate (cf. Deci & Ryan, 1985), and accordingly are sensitive to attempts by others to diminish this self-perceived role. No one really likes to be told what to do, and influence attempts that are seen in this light run the risk of producing resistance rather than compliance. Reactance theory (J. W. Brehm, 1966; S. S. Brehm & Brehm, 1981) trades on the assumption that people like to feel free, specifying how people react when this feeling is undermined. The basic idea is that when personal freedoms are threatened, people act to reassert their autonomy and control. Commanding a child not to do something runs the risk of eliciting an “I won’t!” rebuttal, for example, or reluctant compliance that disappears as soon as the surveillance is lifted (e.g., Aronson & Carlsmith, 1963). In effect, all the bases of power at the parent’s disposal—reward, coercion, referent, expert, legitimate—pale in comparison to the child’s distaste for having his or her tacit agreement removed from the parent-child exchange.

Considerable evidence has been accumulated over the years in support of the basic tenets of reactance theory (cf. Burger, 1992). Research by Burger and Cooper (1979), for example, found that even something as basic and spontaneous as humor appreciation is subject to reactance effects. Male and female college students were asked to rate ten cartoons in terms of funniness. Some participants rated the cartoons when alone, but others provided the ratings after receiving instructions from confederates to give the cartoons high ratings. Results revealed that pressure by the confederates tended to backfire, producing funniness ratings lower than those produced by participants not subject to the pressure. This effect was pronounced among individuals who had scored high on a preexperimental personality assessment of need for personal control.

Some studies have produced rather counterintuitive findings that call into question the basis for certain public policy initiatives. In a study investigating attempts to reduce alcohol consumption, for example, participants who received a strongly worded antidrinking message subsequently drank more than did those who received a moderately worded message (Bensley & Wu, 1991). The strongly worded message presumably was perceived by participants as a threat to their personal freedom, to which they reacted by drinking more rather than less in an effort to assert their sense of control. Findings such as these cast into doubt the wisdom of the “Just say no” mantra of many contemporary drug education programs aimed at young people. The slogan itself may promote the very behavior it is intended to discourage, because it represents a rather direct short-circuiting of targets’ personal decision-making machinery. There is evidence, in fact, that the “Just say no” approach has backfired in some instances, producing increased rather than decreased consumption of illegal substances—although it is not entirely clear that this effect is due primarily to reactance (Donaldson, Graham, Piccinin, & Hansen, 1995).

The experience of psychological reactance is not limited to influence techniques that trade on power per se. Indeed, the concern with protecting one’s self-perceived freedom can curtail the effectiveness of any influence attempt that is seen as such. The use of flattery to seduce a target into a new course of action, for example, can backfire if the target is aware—or simply suspicious—that the flattery is being strategically employed for manipulative purposes (e.g., Jones & Wortman, 1973). Indeed, any attempt to gain influence over another person by becoming attractive to him or her runs a serious risk of failure if the attempted ingratiation is transparent to the person. Jones (1964) has referred to this stumbling block to interpersonal influence as the “ingratiator’s dilemma.” Normally, we like to hear compliments, to have others agree with our opinions, and to interact with people who are desirable by some criterion. As intrinsically rewarding as these experiences are, they also make us correspondingly vulnerable to requests and other forms of influence from the people in question. When their compliments become obsequious or if their desirability is buttressed by a little too much name-dropping, we become suspicious that they are playing on this vulnerability with a particular agenda in mind. The result is resistance rather than assent to their subsequent requests, even requests that might otherwise seem quite reasonable.

Reactance, in short, is a pervasive human tendency that sets clear limits on the effectiveness of all manner of social influence. Power-based forms of influence are particularly vulnerable to reactance effects, not only because they are linked to a restriction of freedom for targets, but also because they tend to be explicit and thus transparent to targets. Letting someone know that you are trying to influence him or her is a decidedly poor strategy—unless, of course, your real goal is to get him or her to do the opposite.

Reverse Incentive Effects

Twentieth-century social psychology is a story of two seemingly incompatible perspectives on human nature. For the first half of the century, social psychology accepted as received wisdom the notion that the behavior of organisms, humans included, is ultimately under the control of external reinforcement. The mindless S-R models invoked by radical behaviorists may not have been most theorists’ cup of tea, but no one seriously challenged the assumption that contingencies of positive and negative reinforcement play a pervasive role in shaping people’s psychological development as well as their specific behavior in different contexts. People’s concern over personal freedom was certainly recognized by social psychologists, but more often than not this penchant was considered an independent force that competed with reinforcement for the hearts and minds of people in their daily lives. Thus, people struggled to control their impulses, resist temptation, delay gratification, and maintain their dignity in the face of incentives to do otherwise.

After mid-century, something akin to a phase transition began to take place in social psychology. Fueled in large part by an emerging emphasis on the importance of cognitive mediation, theory and research began to question the imperial role of rewards and punishments in shaping personal and interpersonal behavior. People’s latent preoccupation with self-determination, for example, came to be seen not simply as a force that competed with reinforcement, but rather as a concern that was activated by explicit reinforcement contingencies (cf. de Charms, 1968; Deci & Ryan, 1985). Thus, the awareness of a contingency was said to sensitize people to the potential loss of self-determination if they were to adjust their behavior in accordance with the contingency. In effect, awareness of a contingent relation between behavior and reward weakened the power of the contingency, leaving the desire for self-determination the dominant causal force. This reasoning, of course, is consistent with the assumptions of reactance theory, described above. The dethroning of reinforcement theory, however, went far beyond a recognition of people’s need for autonomy, freedom, and the like. Two major perspectives in particular captured the academic spotlight for extended periods of time, and today they still stand as basic insights into human motivation—including motivation relevant to social influence.

The first of these, cognitive dissonance theory (Festinger, 1957), sparked psychologists’ imagination in large part because of its seemingly counterintuitive take on the role of rewards in shaping thought and behavior. The essence of the theory is a purported drive for consistency in people’s thoughts and feelings regarding a course of action. When inconsistency arises, it is experienced as aversive arousal, which motivates efforts to eliminate or at least reduce the inconsistency so as to reestablish affective equilibrium. This sounds straightforward enough, but under the right conditions a concern for restoring consistency can produce what can be described as reverse incentive effects (cf. Aronson, 1992; Wicklund & Brehm, 1976). In a prototypical experimental arrangement, subjects are induced to perform an action that they are unlikely to enjoy (e.g., a repetitive or boring task) or one that conflicts with an attitude they are likely to hold (e.g., writing an essay in support of raising tuition at their university). At this point, varying amounts of monetary incentive are offered for the action’s performance; some subjects are offered a quite reasonable sum (e.g., $20), others are offered a mere pittance (e.g., $1). Virtually all subjects agree to participate regardless of the incentive value, so technically they all perform a counterattitudinal task (i.e., a task that conflicts with their attitude concerning the task).

According to Festinger, the dissonance experienced as a result of such counterattitudinal behavior can be reduced by changing one of the cognitive elements to make it consistent with the other element. In this situation, the relevant cognitive elements for subjects presumably are their feelings about the action and their awareness they have performed the action. Because the latter thought cannot be changed (i.e., the damage is done), the only cognitive element open to revision is their attitude toward the action (which conveniently had not been assessed yet). So, the theory holds, subjects faced with this cognitive dilemma will adjust their attitude toward the action to make it consistent with the fact that they have engaged in the action. Subjects who performed a boring task now consider it interesting or important. Subjects who wrote an essay espousing an unpopular position now indicate they hold that position themselves. In effect, subjects rationalize their behavior by indicating that it really reflected their true feelings all along.

At this point, one might assume that all subjects would follow this scenario. But revising one’s attitude is not the only potential means of reducing the dissonance brought on by counterattitudinal behavior. Festinger suggested that a person can maintain his or her original attitude if he or she can justify the counterattitudinal behavior with other salient and reasonable cognitive elements. This is where the large versus small reward manipulation enters the picture. A subject offered a large incentive (e.g., $20) for performing the act can use that fact to justify what he or she has done. Who wouldn’t do something boring or even write an essay one doesn’t believe if the price were right? The reward, in other words, obviates the psychological need to change one’s feelings about what one has done. A subject offered a token incentive (e.g., $1), on the other hand, cannot plausibly argue that the reward justified engaging in the boring activity or writing the disingenuous essay. The only recourse in this situation is to revise one’s own attitude and indicate liking for the activity or belief in the essay’s position.

Note the upshot here: The smaller the contingent reward, the more positive one’s resultant attitude toward the behavior; or conversely, the larger the contingent reward, the more negative one’s attitude toward the rewarded behavior. This represents a rather stunning reversal of the conventional wisdom regarding the use of rewards to influence people’s behavior. To be sure, large rewards are useful—often necessary—to get a person to perform an otherwise undesirable activity or to express an unpopular attitude. But the effect is likely to be transitory, lasting only as long as the reward contingency is in place. To influence the person’s underlying thoughts and feelings regarding the action, and thereby bring about a lasting change in his or her behavioral orientation, it is best to employ the minimal amount of reward. In effect, lasting social influence requires reconstruction within the person rather than inducements from the outside.

Mental processes are notoriously hard to pin down objectively, of course, and this fact of experimental psychology has always been a problem for dissonance theory. Festinger and his colleagues did not attempt to measure what they assumed to be the salient cognitions at work in the reward paradigm, nor have subsequent researchers fared much better in providing definitive evidence regarding the stream of thought presumably underlying the experience and reduction of psychological tension. With this gaping empirical hole in the center of the theory, it is not surprising that other theorists soon rushed in to fill the gap with their own inferences about the true mental processes at work. In effect, the results observed in cognitive dissonance research served as something of a Rorschach for subsequent theorists, each of whom saw the same picture but imparted somewhat idiosyncratic interpretations of its meaning. Not all interpretations have fared well, however, and among those that have, there is sufficient common ground to characterize (in general terms at least) a viable alternative to the dissonance formulation.

Central to the alternative depiction of reverse incentive effects is the assumption that people’s minds are first and foremost interpretive devices, designed to impose coherence on the sometimes diverse and often ambiguous elements of personal experience. In analogy to Gestalt principles of perception, cognitive processes “go beyond the information given” (Bruner & Tagiuri, 1954) to impart higher-order meaning that links the information in a stable and viable structure. With respect to the dissonance paradigm, subjects’ cognitive playing field is presumably populated with an abundance of salient or otherwise relevant information. These cognitive elements include the nature of the task (the activity or essay) and the money received, of course, but they no doubt encompass an assortment of other thoughts and feelings as well. Thus, subjects may be sensitized to their sense of personal freedom and control in that context, for example, or perhaps to their sense of personal competence in performing the task. For that matter, subjects might also be considering their feelings about the experimenter, pondering the value of the experiment, or rethinking the value of psychological research in general. In view of the plethora of likely cognitive elements and the potential for these elements to come in and out of focus in the stream of thought, the achievement of coherence is anything but a trivial task. What processes are at work to impart coherence to this complex and dynamic array of information? And what psychological dimensions capture the resultant coherence?

There is hardly a shortage of relevant theories. Several early models, for example, emphasized processes of causal attribution (cf. Bem, 1972; Jones & Davis, 1965; Kelley, 1967) that were said to promote personal interpretations favoring either internal causation (e.g., personal beliefs and desires) or external causation (most notably, the monetary incentive). In this view, a large incentive provides a reasonable and sufficient cause for engaging in the activity, short-circuiting the need to make inferences about the causal role of one’s beliefs or desires. A small incentive, on the other hand, is not perceived as a credible cause for taking the time and expending the effort to engage in the activity, so one instead invokes relevant beliefs and desires as causal forces for the behavior. In effect, the counterintuitive influence of rewards is a testament to their perceived efficacy in causing people to do things they might not otherwise do. Causal attribution, of course, is not the only plausible endpoint of coherence concerns. Other well-documented dimensions relevant to higher-order integrative understanding include evaluative consistency (cf. Abelson et al., 1968), explanatory coherence (cf. Thagard & Kunda, 1998), narrative structure (cf. Hastie, Penrod, & Pennington, 1983), and level of action identification (cf. Vallacher & Wegner, 1987). It is hardly surprising, then, that a number of other models have been fashioned and tested in an attempt to explain why rewards sometimes fail to influence people’s beliefs and desires in the intended direction (e.g., Csikszentmihalyi, 1990; Deci & Ryan, 1985; Kruglanski, 1975; Harackiewicz, Abrahams, & Wageman, 1987; Trope, 1986; Vallacher, 1993).

Taken together, the various models emphasizing inference and interpretation have a noteworthy advantage over the standard dissonance reduction model in that they predict reverse incentive effects for any action, not just those that are likely to be viewed in a context-free manner as aversive by some criterion (e.g., repetitive, boring, pointless, time-consuming, etc.). Indeed, some of the most interesting research has established conditions under which otherwise enjoyable or interesting activities can seemingly lose their intrinsic interest by virtue of their association with material rewards (cf. Lepper & Greene, 1978). Rewards do not always have this effect, however, a point that has been incorporated with varying degrees of success into many of these models. Still, the theoretical preoccupation with the effects of rewards has generated an unequivocal lesson: The success or failure of attempted influence depends on how the attempt engages the mental machinery of the target. Rewards can be perceived as bribery and aversive consequences can mobilize resistance, for example, and both can activate concerns about one’s freedom of action and self-determination. Social influence does not operate on blank minds, but rather encounters an active set of interpretative processes that operate according to their own dynamics to make sense of incoming information (Vallacher, Nowak, Markus, & Strauss, 1998).

Manipulation

Change in people’s behavior can be imposed from the outside by the exercise of power, but this approach to influence may prove effective only as long as the relevant contingencies (reward, punishment, expertise, information) are in place. To influence people in a more fundamental sense, it is necessary to include them as accomplices in the process. A self-sustaining change in behavior requires a resetting of the person’s internal state—his or her beliefs, preferences, goals, and so on—in a way that preserves the person’s sense of freedom and control. Assuming the influence agent has an agenda that does not coincide with the target’s initial preferences and concerns, the agent may then find it necessary to employ subtle strategies designed to manipulate the relevant internal states of the target. Couched in these terms, social influence boils down to various means by which an agent can obtain voluntary compliance from targets in response to his or her requests, offers, or other forms of overture. Research has identified several compliance-inducing strategies, some of which rely on basic interpersonal dynamics, others of which reflect the operation of basic social norms. We discuss specific manifestations of these general approaches in the following sections.

Manipulation Through Affinity

Could you pass the broccoli? Will you marry me? Whether the agenda at issue is mundane or life-altering, requests provide the primary medium by which people seek compliance from one another. Requests are a fairly routine feature of everyday social interaction and have been examined for their effectiveness under experimental arrangements designed to identify basic principles. However, requests are also central to businesses, charitable organizations, political parties, and other societal entities that depend on contributions of money, effort, or time from the citizenry. Accordingly, much of the knowledge concerning compliance has been gleaned from observation—sometimes participant observation—of professional influence agents operating in charitable, commercial, or political contexts (cf. Cialdini, 2001). Experimentation and real-world observation provide cross-validation for one another, and together have generated a useful taxonomy of effective strategies for obtaining compliance. Many of these strategies are based on what can be called the affinity principle—the tendency to be more compliant in the hands of an influence agent we like as opposed to dislike.

The Affinity Principle

Whoever suggested caution in the face of friends bearing gifts may not have been advocating cynicism, but rather self-preservation. Extensive research supports the commonsense notion that personal affinity motivates compliance. From sales professionals, the consummate chameleons of the commercial world, to con artists preying on the elderly and college students calling home for cash, several effective influence strategies rest on the influence agent’s being liked, known by, or similar to the target. When such affinity exists between agent and target, ruse is not necessarily a prerequisite for compliance. Quite the opposite, in fact, can be true.

Consider, for example, the Tupperware Corporation, which has exploited the power of friendship in an unprecedented fashion. It has been reported that a Tupperware party occurs somewhere every 2.7 seconds (Cialdini, 1995)— although they typically last much longer than that, which suggests the sobering possibility that there is never a moment without one. The format is as follows: A host invites friends and relatives over to his or her home to participate in a gathering at which Tupperware products are demonstrated by a company representative. Armed with the knowledge that their friend and host will receive a percentage of sales, the attendees tend to buy willingly, because they are purchasing from someone they know and like rather than from a stranger. As confirmation for the pivotal role of “liking” in this context, Frenzen and Davis (1990) found that 67% of the variance in purchase likelihood was accounted for by social ties between the hostess and the guest and only 33% by product preference.

Personal affinity has been shown to be a potent compliance inducer even in the absence of the liked individual. Anecdotal evidence of this phenomenon abounds in our daily lives. It is the rare parent who has not sent his or her child around to friends and neighbors to collect for a school walkathon or raffle. The child, hardly the embodiment of a “compliance professional” (Cialdini, 2001), represents the parent who is (one would hope) liked by the target. In the same vein, Cialdini (1993) discovered that door-to-door salespersons commonly ask customers for names of friends upon whom they might call. Although we may wonder what kind of friends a person might surrender in this way, rejecting the salesperson under these circumstances apparently is seen as a rejection of the referring friend—the person for whom affinity is felt. The potency of the affinity principle per se may be diminished by the physical absence of the liked person, but the allusion appears nonetheless to render the target more susceptible to other compliance tactics.

The affinity principle is not limited to influence seekers and their surrogates, but applies as well to those who are known or at least recognized by the target. During elections, for example, voters have been shown to cast their ballots for candidates with familiar-sounding names (Grush, 1980; Grush, McKeough, & Ahlering, 1978). In similar fashion, survey response rates sometimes double if the sender’s name is phonetically similar to the recipient’s (Garner, 1999). Physical attractiveness represents another extension of the affinity principle. A total stranger blessed with good looks has a distinct advantage over his or her less attractive counterparts in securing behavioral compliance (Benson, Karabenick, & Lerner, 1976) and attitude change (Chaiken, 1979). Good grooming, for example, accounts for greater variance in hiring decisions than does the applicant’s job qualifications, although interviewers deny the impact of attractiveness (Mack & Rainey, 1990). In political campaigns, meanwhile, there is evidence that a candidate’s attractiveness can substantially influence voters’ perceptions of him or her and affect their voting behavior as well (Budesheim & DePaola, 1994; Efran & Patterson, 1976). Even criminal justice is not immune to the power of physical attractiveness. Better-looking defendants generally receive more favorable treatment in the criminal justice system (Castellow, Wuensch, & Moore, 1990) and often receive lighter sentences when found guilty (Stewart, 1980).

Similarity and Affinity

Similarity between influence agent and target represents a special case of the affinity principle. It is rarely a coincidence when a car salesperson claims to hail from a customer’s home state or when an apparel salesperson claims to have purchased the very same outfit the vacillating customer is sporting. People like those who are similar to them (cf. Byrne, 1971; Byrne, Clore, & Smeaton, 1980; Newcomb, 1961), and in accordance with the affinity principle, they are inclined to respond affirmatively to requests from similar others as well. The similarity effect encompasses a wide range of dimensions, including opinions, background, lifestyle, and personality traits (Cialdini & Trost, 1998). Even similarity in nonverbal cues, such as posture, mood, and verbal style, has been observed to increase compliance (LaFrance, 1985; Locke & Horowitz, 1990; Woodside & Davenport, 1974). The effect of similarity is quite pervasive, having been demonstrated across a wide range of variation in age, cultural background, socioeconomic status, opinion topics, and relationship types (cf. Baron & Byrne, 1994).

The power of similarity to elicit compliance has been observed even when the dimension of similarity is decidedly superficial in nature. Sometimes outward manifestations of similarity such as clothing are all that are required. Emswiller, Deaux, and Willits (1971), for example, arranged for confederates to dress as either “straight” or “hippie” and had them ask fellow college students for a dime to make a phone call. When the confederate and target subject were similar in their respective attire, compliance was observed over two thirds of the time. When the confederate-target pair differed in clothing type, however, less than half of the students volunteered the dime. In a related vein, Suedfeld, Bochner, and Matas (1971) observed that if antiwar protestors were asked by a similarly dressed confederate to sign a petition, they tended to do so without even reading the petition. Automatic compliance to the requests of others perceived to be similar has a decidedly nonthinking quality to it. The very automaticity of the similarity principle, however, may have important adaptive significance. By using this heuristic to make quick decisions regarding compliance requests, people can allocate their valuable but limited mental resources to other types of judgment and decision-making situations defined in terms of ambiguous, conflicting, or complex information.

Esteem and Affinity

Perhaps even more basic than our propensity to do things for those we like is our need to be liked by those we know (cf. G. W. Allport, 1939; Baumeister, 1982; Tesser, 1988). To be sure, for some people the desire to be liked can be overridden by other motives, such as the need for acceptance (Rudich & Vallacher, 1999) or desires to be seen accurately (Trope, 1986) or in accordance with one’s personal self-view (Swann, 1990). For most people most of the time, however, it is hard to resist the allure of flattery. Receiving positive feedback from someone is highly rewarding and tends to promote a reciprocal exchange with the source. In other words, we like others who seem to like us. When activated in this way, the affinity principle makes the recipient of flattery a potential target for influence by the flatterer.

Flattery has a long history as an effective compliance technique, both inside and outside the laboratory (cf. Carnegie, 1936/1981; Cialdini, 2001). Drachman, DeCarufel, and Insko (1978), for example, arranged for men to receive positive or negative comments from a person in need of a favor. The person offering praise alone was liked most, even if the targets knew that the flatterer stood to gain from their liking them. Moreover, inaccurate compliments were just as effective as accurate compliments in promoting the target’s affinity for the flatterer. So influence agents need not bother gathering facts to support their complimentary onslaught; simply expressing positive comments may be sufficient to woo the target and thereby gain his or her compliance. At the same time, however, the ingratiator’s dilemma (Jones, 1964) discussed earlier sets limits on the effectiveness of the esteem principle. In particular, praise and other forms of ingratiation (e.g., opinion conformity with the target) can backfire if the ingratiator’s ulterior motives are readily transparent and the praise is seen as solely manipulative. And, of course, the influence agent can simply overdo the flattery and come across as disingenuous and obsequious.

Manipulation Through Scarcity

From childhood on, we want what we lack—be it toys, money, fancy cars, or greener grass. The cachet of the unattainable, for example, is a sure bet to spark competition and fuel sales in commercial settings. Cries of today and today only and in limited quantities have been known to drive shoppers like lemmings toward the blue-light special, and convenient Christmastime shortages of Tickle Me Elmos or Furbies stoke the fires of demand for such toys. We may see ourselves as impervious to such base tactics, but the power of the human tendency to view scarcity as an indicator of worth or desirability is undeniable, well-documented— and routinely exploited as a method of securing compliance (cf. Cialdini, 2001).

It’s interesting in this regard to consider the tendency for efforts at censorship to backfire, creating a stronger demand than ever for the forbidden fruit. The prohibition of alcohol in the 1920s, for example, only whetted people’s appetite for liquor and spawned the rise of secret establishments (the speakeasy) that provided access to the scarce commodity. Antipornography crusades typically have the same effect, increasing interest in the banned books and magazines, even among people who might not otherwise consider this particular genre. Telling people they cannot read or see something can increase—or even create—a desire to take a proverbial peek at the hard-to-find commodity. By the same token, after the censorship or prohibition is lifted, interest in the object in question tends to wane.

Surprisingly, there is a paucity of research on the psychology of scarcity. The enhanced desirability of scarce items may reflect a perceived loss of freedom to attain the items, in line with reactance theory. The censorship example certainly suggests that people value an object in proportion to the injunction against having it. People don’t like having their freedom threatened, and making an item difficult to obtain or forbidding an activity clearly restricts people’s options with respect to the item and the activity. Reactance is a reasonable model, but one can envision other theoretical contenders. Simple supply-and-demand economics, for example, has a direct connection to the scarcity phenomenon. The lower the supply-demand ratio with respect to almost any item, the more those who control the resource can jack up the price and still count on willing customers. Perhaps there are viable evolutionary reasons for the heightened interest in scarce resources. The conditions under which we evolved were harsh and uncertain, after all, and there may have been selection pressures favoring our hominid ancestors who were successful at securing and hoarding valuable but limited food supplies and other resources.

Yet another possibility centers on people’s simultaneous desires to belong and to individuate themselves from the groups to which they belong (e.g., Brewer, 1991). Scarcity has a way of focusing collective attention on a particular object, and there may be a sense of social connectedness in sharing the fascination with others. Waiting in line with throngs of shoppers hoping to secure one of the limited copies of the latest Harry Potter volume, for example, is arguably an annoying and irrational experience, but it does make the person feel as though he or she is on the same wavelength as people who would otherwise be considered total strangers. At the same time, if the person is one of the lucky few who manages to secure a copy before the shelves are cleared, he or she has effectively individuated him- or herself from the masses. In essence, influence appeals based on scarcity may be effective because they provide a way for people to belong to and yet stand out from the crowd in a world where they may routinely feel both alienated and homogenized.

Manipulation Through Norms

Human behavior, compliance included, is driven to a large extent by social norms—context-dependent standards of behavior that exert psychological pressure toward conformity. At the group level, norms provide continuity, stability, and coordination of behavior among individuals. At the individual level, norms provide a moral compass for deciding how to behave in situations that might offer a number of action alternatives. The norm of social responsibility (e.g., Berkowitz & Daniels, 1964), for example, compels us to help those less fortunate than ourselves, and the norm of equity prevents us from claiming excessive compensation for minimal contribution to a group task (cf. Berkowitz & Walster, 1976). Norms pervade social life, and thus provide raw material for social influence agents. By tapping into agreed-upon and internalized rules for behavior, those who are so inclined can extract costly commitments to behavior from prospective targets without having to flatter them.

The Norm of Reciprocity

The obligation to repay what others provide us appears to be a universal and defining feature of social life. All human societies subscribe to the norm of reciprocity (Gouldner, 1960), which is understandable in light of the norm’s adaptive value (Axelrod, 1984). The sense of future obligation engendered by this norm promotes and maintains both personal and formal relationships. And when widely embraced by people as a shared standard, the reciprocity norm lends predictability, interpersonal trust, and stability to the larger social system. Transactions involving tangible assets are only a subset of the social interactions regulated by reciprocity. Favors and invitations are returned, Christmas cards are sent to those who send them, and compliments are rarely accepted without finding something nice to say in return (Cialdini, 2001).

The social obligation that there be a give for every take is well-documented (DePaulo, Brittingham, & Kaiser, 1983; Eisenberger, Cotterell, & Marvel, 1987; Regan, 1971). Even when gifts and favors are unsolicited (or unwanted), the recipient feels compelled to provide something in return. The ability of uninvited gifts to produce feelings of obligation in the recipient is successfully exploited by many organizations, both charitable and commercial. People may not need personalized address labels, key rings, or hackneyed Christmas cards, but after they have been received, it is difficult not to respond to the organization’s request for a “modest contribution” (e.g., Berry & Kanouse, 1987; Smolowe, 1990). A particularly vivid example of this tendency is provided by the Hare Krishna Society (Cialdini, 2001). The members of this religious sect found that they could dramatically increase the success of their solicitations in airports simply by giving travelers a free flower before asking for donations. People find it hard to turn down a request for money after receiving an unsolicited gift, even something as irrelevant to one’s current needs as a flower. That receiving a flower is not exactly the high point of the recipients’ day is confirmed by Cialdini’s observation that the flower more often than not winds up in a nearby waste container shortly after the flower-for-money transaction has been completed.

Reciprocity can have the subsidiary effect of increasing the recipient’s liking for the gift- or favor-giver, but the norm can be exploited successfully without implicit application of the affinity principle (e.g., Regan, 1971). Affect does enter the picture, however, when people fail to uphold the norm. Nonreciprocation runs the risk of damaging an exchange relationship (Cotterell, Eisenberger, & Speicher, 1992; Meleshko & Alden, 1993) and may promote reputational damage for the offender (e.g., moocher, ingrate) that can haunt him or her in future transactions. Somewhat more surprising is evidence that negative feelings can be engendered when the reciprocity norm is violated in the reverse direction. One might think that someone who provides a gift but does not allow the recipient to repay would be viewed as generous, unselfish, or altruistic (although perhaps somewhat misguided or naive). But under some circumstances, such a person is disliked for his or her violation of exchange etiquette (Gergen, Ellsworth, Maslach, & Seipel, 1975). This tendency appears to be universal, having been demonstrated in U.S., Swedish, and Japanese samples.

Cooperation is an interesting manifestation of the reciprocity norm. Just as the act of providing a gift or a favor prompts repayment, cooperative behavior tends to elicit cooperation in return (Braver, 1975; Cialdini, Green, & Rusch, 1992; Rosenbaum, 1980) and can promote compliance with subsequent requests as well (Bettencourt, Brewer, Croak, & Miller, 1992). This notion is not lost on the car salesperson who declares that he or she and the customer are on “the same side” during price negotiations, and then appears to take up the customer’s fight against their common enemy, the sales manager. Even if this newly formed alliance comes up short and the demonized sales manager purportedly holds fast on the car’s price, the customer may feel sufficiently obligated to repay the salesperson’s cooperative overture with a purchase.

A related form of reciprocity is the tactical use of concessions to extract compliance from those who might otherwise be resistant to influence. The strategy is to make a request that is certain to meet with a resounding no, if not a rhetorical are you kidding? The request might call for a large investment of time and energy, or perhaps for a substantial amount of money. After this request is turned down, the influence agent follows up with a more reasonable request. In effect, the influence agent is making a concession and, in line with the reciprocity norm, the target now feels obligated to make a concession of his or her own. A study by Cialdini et al. (1975) illustrates the effectiveness of what has come to be known as the door-in-the-face technique. Posing as representatives of a youth counseling program, Cialdini et al. approached college students to see if they would agree to chaperon a group of juvenile delinquents for several hours at the local zoo. Not surprisingly, most of them (83%) refused. The results were quite different, though, if Cialdini et al. had first asked the students to do something even more unreasonable—spending 2 hours per week as counselors to juvenile delinquents for a minimum of 2 years. After students refused this request—all of them did—the smaller zoo-trip request was agreed to by 50% of the students, a tripling of the compliance rate. The empirical evidence for the door-in-the-face technique is impressive (cf. Cialdini & Trost, 1998) and largely supports the reciprocity of concessions interpretation.

The power of reciprocal concessions is also apparent in the that’s not all technique, which is a familiar trick of the trade among salespeople (Cialdini, 2001). The tactic involves making an offer or providing a come-on to a customer, then following up with an even better offer before the target has had time to respond to the initial offer. This technique is used fairly routinely to push big-ticket commercial items. A salesperson, for example, quotes a price for a large-screen TV, and while the interested but skeptical couple is thinking it over, he or she adds, “but that’s not all—if you buy today, I’m authorized to throw in a free VCR.” Research confirms that the effectiveness of the that’s not all technique is indeed attributable in part to the creation of a felt need in the target to reciprocate the agent’s apparent concession (e.g., Burger, 1986), although the contrast between the initial and follow-up concession plays a role as well. In the real world, the knowledge that people tend to reciprocate concessions provides a cornerstone of negotiation and dispute resolution. The bargaining necessary to reach a compromise solution in such instances invariably hinges on one party’s making a concession with the assumption that the other party will follow suit with a concession of his or her own. This phenomenon can be seen at work in a wide variety of contexts, including business, politics, international diplomacy, and marriage.

Reciprocity in Personal Relationships

The norm of reciprocity is not limited to transactions between people who otherwise would have little to do with one another (e.g., salespeople and consumers), but rather provides a foundation for virtually every kind of social relationship. The reciprocity norm even plays a role in personal relationships, serving to calibrate the fairness in people’s ongoing interactions with friends and lovers. The trust and warmth necessary to maintain a personal relationship would be impossible to maintain if either partner felt that his or her overtures of affection, self-disclosures, offers of assistance, and birthday gifts went unreciprocated (cf. Lerner & Mikula, 1994). There are two complications here, however. First, the partners to a relationship are not always equally invested in or dependent on the relationship (e.g., Rusbult & Martz, 1995). In terms of social exchange theory (Kelley & Thibaut, 1978; Thibaut & Kelley, 1959), the comparison level for alternatives (CLalt) for each partner may be substantially different, and this differential dependency can promote exploitative behavior by the less dependent person. In effect, the person who feels more confident that he or she could establish desirable alternative relationships (i.e., the person with the higher CLalt) can set the terms of exchange in the relationship. This power asymmetry need not be discussed explicitly in order for it to promote inequality in overt expressions of affection, the allocation of duties and responsibilities, and decision making.

The second complication arises in relationships that achieve a certain threshold of closeness. Intimate partners are somewhat loath to think about their union in economic, tit-for-tat terms, preferring instead to emphasize the communal aspect of their relationship (cf. M. S. Clark & Mills, 1979). They feel they operate on the basis of need rather than equity or reciprocity, and this perspective enables them to make sacrifices for one another without expecting compensation or repayment. The suspension of reciprocity may be more apparent than real, however. The issue is not reciprocity per se, but rather the time scale on which reciprocity and other exchange metrics are calculated. What looks like selfless and unrequited sacrifice by one person in the short run can be viewed as inputs that are eventually compensated by the other person in one form or another (cf. Foa & Foa, 1974). Depending on the sacrifice (e.g., fixing dinner vs. taking on a second job), the time scale for repayment can vary considerably (e.g., hours or days vs. weeks or even years), but at some point the scales need to be balanced. The sense that one has been treated unfairly or exploited—or simply that one’s assistance and affection have not been duly reciprocated—can ultimately spoil a relationship and bring about its dissolution.

Commitment

Although it is not usually listed as a social norm, commitment can influence behavior as much as do reciprocity, equity, responsibility, and other basic social rules and expectations (Kiesler, 1971). After people have committed themselves to an opinion or course of action, it is difficult for them to change their minds, recant, or otherwise fail to stay the course. Commitment does not derive its power solely from the anger and disappointment that breaking a commitment would engender in others—although this certainly counts for something—but also from a basic desire to act consistently with one’s point of view. A commitment that is expressed publicly, whether in front of a crowd or to a single individual, is especially effective in locking in a person’s opinion or promise, making it resistant to change despite the availability of good reasons for reconsideration (cf. Deutsch & Gerard, 1955; Schlenker, 1980).

Agents of influence play on this seemingly noble tendency, often for decidedly ignoble purposes of their own. Several specific techniques have been observed in real-world settings and confirmed in research (Cialdini & Trost, 1998). Perhaps the best-known tactic is referred to as the foot-in-the-door, which is essentially the mirror image of the door-in-the-face tactic. Rather than starting out with a large request and then appearing to make a concession by making a smaller request, the foot-in-the-door specialist begins with a minor request that is unlikely to meet with resistance. After securing compliance with this request, the influence agent ups the ante by making a far more costly request that is consistent with the initial request. Because of commitment concerns, it can be very difficult at this point for the target to refuse compliance. A series of clever field experiments (Freedman & Fraser, 1966) provides compelling evidence for the effectiveness of this tactic. In one study, suburban housewives were contacted and asked to do something that most of them (78%) refused to do: allow a team of six men from a consumer group to come into their respective homes for 2 hours to “enumerate and classify all the household products you have.” Another group of housewives was contacted and presented with a far less burdensome request—simply answering a few questions about their household soaps (e.g., “What brand of soap do you use in your kitchen sink?”). Nearly everyone complied with this minor request. These women were contacted again three days later, but this time with the larger home-visit request. In this case, over half the women (52%) complied with the request and allowed the men to rummage through their closets and cupboards for 2 hours.

The commitment process underlying this tactic goes beyond the target’s concern with maintaining consistency with the action per se. It also engages the target’s self-concept with respect to the values made salient by the action. Thus, the women who complied with the initial request in the Freedman and Fraser (1966) studies were presumably sensitized to their self-image as helpful, public-spirited individuals. To maintain consistency with this suddenly salient (and perhaps newly enhanced) self-image, they felt compelled to comply with the later, more invasive request. Assuming this to be the case, the foot-in-the-door tactic holds potential for influencing people’s thought and behavior long after the tactic has run its course. Freedman and Fraser (1966) themselves noted a parallel between their approach and the approach employed by the Chinese military on U.S. prisoners of war captured during the Korean War in the early 1950s. A prisoner, approached individually, might be asked to indicate his agreement with mild statements like “The United States is not perfect.” After the prisoner agreed with such minor anti-American statements, he might be asked by the interrogator to elaborate a little on why the United States is not perfect. This, in turn, might be followed by a request to make a list of the “problems with America” he had identified, which he was expected to sign. The Chinese might then incorporate the prisoner’s statement in an anti-American broadcast. As a consequence of this ratcheting up of an initially mild anti-American statement, a number of prisoners came to label themselves as collaborators and to act in ways that were consistent with this self-image (cf. Schein, 1956).

Commitment underlies a related tactic known as throwing a lowball, which is routinely employed by salespeople to gain the upper hand over customers in price negotiations (Cialdini, 2001). Automobile salespeople, for example, will seduce customers into deciding on a particular car by offering it at a very attractive price. To enhance the customer’s commitment to the car, the salesperson might allow the customer to arrange for bank financing or even take the car home overnight. But just before the final papers are signed, something happens that requires changing the price or other terms of the deal. Perhaps the finance department has caught a calculation error or the sales manager has disallowed the deal because the company would lose money at that price. At this point, one might think that the customer would back out of the deal—after all, he or she has made a commitment to a particular exchange, not simply to a car. Many customers do not back out, however, but rather accept the new terms and proceed with the purchase. Apparently, in making the initial commitment, the customer takes mental possession of the object and is reluctant to let it go (Burger & Petty, 1981; Cioffi & Garner, 1996).

Changing the terms of the deal without undermining the target’s commitment is not limited to shady business practices. Indeed, lowball tactics underlie transactions having nothing to do with economics, and can be used to gain people’s cooperation to do things that center on prosocial concerns rather than personal self-interest (e.g., Pallak, Cook, & Sullivan, 1980). In an interesting application of the lowball approach, Cialdini, Cacioppo, Bassett, and Miller (1978) played on college students’ potential commitment to psychological research. Students in Introductory Psychology were contacted to see if they would agree to participate in a study on “thinking processes” that began at 7:00 a.m. Because this would entail waking up before the crack of dawn, few students (24%) expressed willingness to participate in the study. For another group of students, however, the investigators threw a lowball by not mentioning the 7:00 a.m. element until after the students had indicated their willingness to take part in the study. A majority of the students (56%) did in fact agree to participate, and none of them backed out of this commitment when informed of the starting time. After an individual has committed to a course of action, new details associated with the action—even aversive details that entail unanticipated sacrifice—can be added without undermining the psychological foundations of the commitment.

Like the lowball tactic, the bait-and-switch tactic works by first seducing people with an attractive offer. But whereas the lowball approach changes the rules by which the exchange can be completed, the bait-and-switch tactic nixes the exchange altogether, with the expectation that the target will accept an alternative that is more advantageous to the influence agent. Car salespeople once again unwittingly have furthered the cause of psychological science by their shrewd application of this technique (Cialdini, 2001). They get the customer to the showroom by advertising a car at a special low price. Taking the time to visit the showroom constitutes a tentative commitment to purchase a car. Upon arrival, the customer learns that the advertised special is sold, or that because of its low price, the car doesn’t come with all the features the customer wants. Because of his or her commitment to purchase a car, however, the customer typically expresses willingness to examine and purchase a more expensive model—even though he or she wouldn’t have made the trip to look at these models in the first place.

Social Coordination

To this point, social influence has been described as if it were a one-way street. One person (the influence agent) has an agenda that he or she wishes to impose upon another person (the influence target). Although influence strategies certainly are employed for purposes of control and manipulation, social influence broadly defined serves far loftier functions in everyday life. Indeed, as noted at the outset, it is hard to discuss any aspect of social relations without acceding a prominent role to influence processes. Social influence is what enables individuals to coordinate their opinions, moods, evaluations, and behaviors at all levels of social reality, from dyads to social groups to societies. The process of social coordination is thus a two-way street, with all parties to the exchange influencing and receiving influence from one another. The ways and means of coordination are discussed in this section, as are the functions—both adaptive and maladaptive—of this fundamental human tendency.

Conformity

People go to a lot of trouble to influence one another. Yet for all the effort expended in service of manipulation, sometimes all it takes to influence a person is to convey one’s own attitude or action preference. People take solace from the expressions of like-minded people and develop new ways of interpreting reality from those with different perspectives. In both cases, simply expressing an opinion—no tricks, strategies, or power plays—may be sufficient to bring someone into line with one’s point of view. This form of influence captures the essence of conformity, a phenomenon that is commonly counted as evidence for people’s herdlike mentality. There is a nonreflective quality to many instances of conformity, but this property enables people to coordinate their thoughts in an efficient manner and attain the social consensus necessary to engage in collective action. We consider first what constitutes conformity, and then we develop both the positive and negative consequences of this manifestation of social influence.

Group Pressure and Conformity

Conformity represents a “change in behavior or belief toward a group as a result of real or imagined group pressure” (Kiesler & Kiesler, 1976). Defined in this way, conformity would seem to be a defining feature of group dynamics. Festinger (1950), for example, suggested that pressures toward uniformity invariably exist in groups and are brought to bear on the individual so that over time, he or she will tend to conform to the opinions and behavior patterns of the other group members. If one of two diners at a table for two says that he or she finds the food distasteful and the other person expresses a more favorable opinion, the first person is unlikely to change his or her views to match those of his or her companion. However, the addition of several more dinner companions, each holding the contrary position, may well cause the person to rethink his or her position and establish common ground with the others. If he or she has yet to express an opinion, the likelihood of conforming to the others’ opinions is all the greater. To investigate the variables at work in this sort of context—group size, unanimity of group opinion, and the timing of the person’s expressed judgment—Solomon Asch (1951, 1956) performed a series of experiments that came to be viewed by social psychologists as classics.

Asch’s original intention actually was to demonstrate that people do not conform slavishly and uncritically in a group setting (Levine, 1996). Asch put his hope for humanity to a test in a simple and elegant way. Participants thought they were participating in a study on perception. They sat facing a pair of white cardboards on which vertical lines were drawn. One card had a single line, which provided the standard for subjects’ perceptual judgments. The second card had three lines of varying length, one of which was clearly the same length as the standard. Participants were simply asked to indicate which of the three lines matched the standard. The correct answer was always obvious, and in fact when participants were tested individually, they rarely made a mistake. To give conformity a chance, Asch (1951) placed a naive participant in a group setting with six other people, who were actually experimental accomplices pretending to be naive participants. By arrangement, the participant always made his judgment after hearing the bogus participants make their judgments. For the first two trials, the accomplices (and, of course, the participant) gave the obviously correct answers. After creating this group consensus, the accomplices gave a unanimous but incorrect answer on the third trial—and again on trials 4, 6, 7, 9, 10, and 12. To Asch’s surprise, the typical participant conformed to the incorrect group response one third of the time. Over 80% of the participants conformed to the incorrect majority on at least one trial, and 7% conformed on all seven of the critical trials. Although it was not his intent, Asch had demonstrated that even when there is a clear reality, people are still inclined to go along with the crowd.

Informational and Normative Influence

Presumably, Asch’s participants conformed because they wanted the other group members to like them or because they were fearful of ridicule if they failed to go along. During postexperimental interviews, participants typically mentioned these concerns as their motivation for concurring with obviously inaccurate judgments. And when Asch allowed participants to make their responses privately in writing as opposed to publicly by voice, the extent to which participants conformed showed a marked decrease. Because people are obviously less concerned about the approval of others when the others cannot monitor their behavior, these findings suggest that participants’ conformity did in fact reflect a desire to win approval or avoid disapproval.

Social approval does not exhaust the possible motives for conformity, however. Indeed, several years prior to Asch’s research, Muzafer Sherif (1936) had concocted an equally compelling experimental situation relevant to conformity, but one that played on the often ambiguous nature of physical reality rather than concerns with acceptance, rejection, and the like. Sherif felt that groups provide important information for individuals—and more important, interpretative frameworks for making coherent judgments about information. People have a need for cognitive clarity (Schachter, 1959), but sometimes they lack an objective yardstick for determining the true nature of their experiences. In such instances, people turn to others, not to gain approval but rather to obtain social clues to reality. People are highly prone to rumors, for example, even from unreliable sources, when they hear about goings-on for which no official explanation has been provided. A sudden noise or a hard-to-read message can similarly make people prone to the assessments of others in an attempt to clarify what has happened.

To test this motivation for conformity, Sherif (1936) needed a situation in which the physical environment lacked ready-made yardsticks for understanding, so that the operation of social standards could be observed. His solution was to take advantage of the autokinetic effect—the apparent motion of a stationary spot of light in a dark room. The idea was to place a group of participants in this type of situation and ask them to make estimates of the light’s movement. Participants, of course, were not informed that the light’s movement was illusory. When tested individually, participants varied considerably in their estimates, from virtually no movement to more than 10 inches. He then brought together three participants who had previously made estimates in private, and asked them to announce their individual judgments aloud and in succession. Despite their initial differences, participants converged fairly quickly (often within three trials) on a single estimate that functioned as a group standard for the light’s movement. Sherif went on to show that after a group defined reality for participants, they continued to adhere to the group judgment even after they left the group (see also Alexander, Zucker, & Brody, 1970).

Deutsch and Gerard (1955) recognized that people can conform for different reasons and formally distinguished between normative influence, which captures the essence of the Asch situation, and informational influence, which reflects participants’ motivation in the Sherif situation. Normative influence refers to conformity in an attempt to gain approval, whereas informational influence refers to conformity in an attempt to gain clear knowledge about reality. Sometimes it is difficult to determine which basis of conformity is operative in a given situation. Imagine, for example, that you observe someone following the lead of others at a classical music concert. When they sit, he or she sits. When they give a standing ovation, the person follows suit. The group influence in this case could be normative, informational, or perhaps both, depending on the person’s primary source of uncertainty. If the person is unsure of his or her standing among the fellow concert-goers, the person’s conformity could be driven by desires for approval or fears of ridicule. If the person is unfamiliar with classical music, however, the behavior of others might provide all-important clues about the quality of the performance.

Normative influence is especially salient when the group controls material or psychological rewards important to the person (e.g., Crutchfield, 1955), when the behavior is public rather than private (e.g., Insko, Drenan, Solomon, Smith, & Wade, 1983), or when the person is especially eager for approval (Crowne & Marlowe, 1964). Someone attending the concert with prospective colleagues, for instance, may be especially inclined to match their behavior, particularly if he or she is uncertain about their interest in his or her job candidacy and the concert hall has good lighting. The salience of informational influence in turn depends on the person’s confidence in his or her own judgment, and on the person’s judgment of how well-informed the group is. Thus, a classical music neophyte who sees tuxedo-clad audience members leap to their feet upon completion of the Rach 3 (Rachmaninoff’s third piano concerto) is more likely to follow suit than if he or she instead sees the same behavior by school children. A graduate of Juilliard, meanwhile, is unlikely to mimic such behavior in either case. Informational influence tends also to take precedence, not surprisingly, when the judgment task is particularly difficult or ambiguous (e.g., Coleman, Blake, & Mouton, 1958). Even in the Asch situation, conformity is increased when the lines are closer in length and thus harder to judge (Asch, 1952), and when judgments are made from memory rather than from direct perception of the lines (Deutsch & Gerard, 1955), presumably because our memories are considered more fallible than are our immediate perceptions.

Groupthink

Conformity clearly serves important functions, but like every other adaptation, there are downsides as well. A particularly troublesome aspect of conformity is groupthink (Janis, 1982). Janis borrowed this term from George Orwell’s 1984 to refer to a mode of thinking dominated by a concern for reaching and maintaining consensus, as opposed to making the best decision under the circumstances. Groupthink essentially entails “a deterioration of mental efficiency, reality testing, and moral judgment that results from group pressure” (Janis, 1982, p. 9). Rather than examining all possible courses of action, people in the grips of groupthink expend their mental energy on achieving and maintaining group solidarity and opinion unanimity.

The potential for groupthink exists in any group context, informal as well as formal, but the most intriguing examples concern decisions with far-reaching consequences by people normally considered the best and the brightest. Janis (1982) analyzed several such situations, including the Bay of Pigs invasion during the Kennedy administration, the bombing of Pearl Harbor, and the Vietnam War. Janis identified several common factors in these instances. In each case, crucial decisions were made in small groups whose members had considerable respect and liking for one another. Positive regard is certainly preferable to disinterest or disrespect, of course, but it can also serve to inhibit criticism and close examination of one another’s suggestions. The group members also tended to exhibit collective rationalization, systematically discrediting or ignoring all information contrary to the prevailing group sentiment. They also tended to develop strong feelings that their mission (e.g., invading Cuba, implementing a massive troop build-up in South Vietnam) was moral and that the opposite side was not only immoral but also stupid. To further cocoon the group, self-appointed “mind-guards” precluded members from accessing information that was inconsistent with the party line. The upshot is something akin to tunnel vision, in which a single perspective is seen as the only viable perspective—not because of a rational assessment of the facts but because of the group’s irrational esprit de corps.

Group Polarization

The groupthink phenomenon has rather straightforward implications for another phenomenon—group polarization—that was nonetheless considered surprising when first noted by researchers (e.g., Stoner, 1961; Wallach, Kogan, & Bem, 1962). The conventional wisdom was that individuals in groups avoid going out on the proverbial limb, and thus tend to produce more common or popular opinions and recommendations (cf. F. H. Allport, 1924). It followed from this that a group decision is usually more conservative than the average of the decisions generated by group members individually. This assumption regarding group decision making is reflected in critics’ laments about the bland and often timid recommendations generated by committees in bureaucratic environments. When faced with making a decision, groups were assumed to inhibit boldness, subjugating the creative mind to the lowest common denominator of the group. What the research began to reveal, however, was quite the opposite tendency—greater endorsement of risky decisions as a result of group discussion.

This so-called risky shift is not surprising in light of theory and research on groupthink. If anything, the sense of superiority and certainty fostered by an emphasis on cohesiveness as opposed to rationality would seem to be a breeding ground for bold decisions that go beyond what an individual alone would contemplate. The shift toward risky decisions, however, was observed in contexts that didn’t involve the intellectual and emotional incest displayed by highly cohesive groups of self-important people. Even groups of strangers brought together for a one-shot encounter in a laboratory setting were found to advocate courses of action with less guarantee of success than the recommendations volunteered by the group members prior to their discussion. Because this observation flew in the face of conventional wisdom, it cried out for both replication and explanation. During the 1960s, neither proved to be in short supply. This burgeoning literature demonstrated greater risk-taking with respect to a wide variety of domains, including bargaining and negotiations (Lamm & Sauer, 1974), gambling behavior (Blascovich, Ginsberg, & Howe, 1975; Lamm & Ochssmann, 1972), and jury decisions (Myers, 1982). The risky shift was observed, moreover, when the consequences of a group’s decision involved real as well as hypothetical consequences (Wallach et al., 1962). The research also demonstrated that the risky shift was not limited to recommendations regarding possible courses of action. Indeed, group discussion—again, even among strangers—seemed to intensify all sorts of attitudes, beliefs, values, judgments, and perceptions (Myers, 1982). Such shifts were observed for both sexes, in different populations and cultures (e.g., United States, Canada, England, France, Germany, New Zealand), and with many kinds of group participants (Pruitt, 1971).

Several explanations for the risky shift achieved currency (Forsyth, 1990). The diffusion of responsibility perspective suggested that people are less averse to risk in groups because they feel less responsibility for—and hence less anxious about—the potential negative outcomes of risky decisions. The leadership account held that risk takers tend to emerge as leaders because of their greater confidence, assertiveness, and involvement in the task, and that their leadership status makes them more influential in group discussions. Familiarization theory maintained that group discussion increases members’ familiarity with the issue, which reduces their uncertainty and increases their willingness to advocate more risky alternatives. The value perspective proposed that taking risks is positively valued (in our culture, at least) and that group members like to be perceived as willing to take a chance; when group members discover that others in the group favor riskier alternatives, they change their original position to agree with the riskiest member.

During this same period, however, some research hinted at the opposite effect of group discussion—a cautious shift. To complicate matters even further, research began to find evidence of movement in both directions after a group discussion (Doise, 1969; Moscovici & Zavalloni, 1969), suggesting that both risky and cautious shifts were different manifestations of a more basic phenomenon. Based on a review of this research, Myers and Lamm (1976) identified what they felt was the underlying process. According to their group-polarization hypothesis, the “average postgroup response will tend to be more extreme in the same direction as the average of the pregroup responses” (p. 603). Imagine two groups, each consisting of four individuals whose opinions vary in their respective preferences for risk. The average choice of members is closer to the risky end of the caution-risk dimension in one group, but closer to the cautious end of this dimension in the other group. The group-polarization effect predicts that the first group should become riskier as a result of group discussion (i.e., a risky shift), but that the second group should become more cautious during its deliberations (i.e., a cautious shift). The evidence cited by Myers and Lamm (1976) is consistent with this prediction and is widely accepted today as a valid empirical generalization regarding group dynamics.

This straightforward generalization proved to be resistant to a simple theoretical account. Most theorists eventually endorsed the value account (e.g., Myers & Lamm, 1976; Pruitt, 1971; Vinokur, 1971), although it didn’t take long for different variations on this general theme to emerge. Of these, two have stood the test of time (thus far). Social comparison theory holds that people attempt to accomplish two goals during group discussion: evaluating the accuracy of their position by comparing it with the positions of other group members, and creating a favorable impression of themselves within the group. The confluence of these two motives results in a tendency to describe one’s own position in somewhat more extreme terms (e.g., Goethals & Zanna, 1979; Myers & Lamm, 1976). Persuasive-arguments theory, meanwhile, stresses the importance of the information obtained during group discussion. Whether there is a shift toward risk or toward caution depends on the relative persuasiveness of the arguments favoring each position (e.g., Burnstein & Vinokur, 1977; Vinokur & Burnstein, 1974). The distinction between these two accounts corresponds to the distinction introduced earlier between normative and informational influence. Social comparison theory, with its emphasis on self-presentational attempts to match the perceived group norm, can be understood in terms of normative influence. The persuasive-arguments perspective, meanwhile, is practically synonymous with the rationale of informational influence. As noted in our earlier discussion, these two forms of influence often co-occur, so it should come as no surprise that social comparison and persuasive arguments often work together to promote polarization in groups (cf. Forsyth, 1990).

Minority Influence

In the film Twelve Angry Men, the character played by Henry Fonda turned his one-man minority into a unanimous majority during jury deliberations so that an innocent man could go free. In the face of virulent opposition, Galileo struggled for acceptance of his proof of Copernican theory that the planets revolve around the sun. This acceptance did not come during his lifetime, but his influence lived on and eventually turned the intellectual tide for subsequent generations. Martin Luther King Jr. and Mahatma Gandhi both defied the prevailing norms of their respective cultures and brought about significant social and political change. And in everyday life, people with opinions or lifestyles out of step with those of the majority often manage to preserve their personal perspective, sometimes even overcoming the majority’s disapproval and winning acceptance. If conformity were the only dynamic at work in social groups, these examples could be dismissed as aberrations with no implications for our understanding of social influence processes. One can envision groupthink and group polarization carried to the extreme, with the complete suppression of minority opinion and a resultant interpersonal homogeneity.

Far from representing aberrations, these examples suggest that there is more to social life than accommodation by the minority to majority influence. Even in small social groups, it is possible for a lone dissenter to be heard and to convert others to his or her point of view. At a societal level, minority interests and opinions manage to survive in the face of majority disapproval and hostility, and can sometimes manage to become dominant forces in the culture. In recognition of these facts of social life, minority influence has emerged as an important topic in social psychology (cf. Moscovici, 1976). Much of this research attempts to identify factors that enable minority opinions to persist in groups. Experiments in the Asch tradition, for example, have found that both group size and unanimity of the majority have important effects on conformity. The relation between group size and conformity appears to be logarithmic, such that conformity increases with increasing group size up to a point, after which the addition of more group members has diminishing impact (Latané, 1981). Asch’s own research showed that conformity is reduced if the group opposing the subject is not unanimous. Even one dissenter among the confederates emboldens the naive subject to resist group pressure and express his or her own judgment. This is true even if the dissenting confederate disagrees with the subject as well as the rest of the group (Allen & Levine, 1971). The key factor is not agreement with the subject, but rather the recognition that nonconformity is possible and acceptable.

Other lines of research have explored the conditions under which minority opinions not only survive, but also become influential to varying degrees in the group. A primary conclusion is that minority members must marshal high-quality arguments and come across as credible. In other words, minorities must rely on informational influence to counter the normative influence associated with the majority position. Against this backdrop, research has revealed a variety of more specific factors that foster minority influence. Thus, minorities are persuasive when they hold steadily to their views (Maass & Clark, 1984; Moscovici, Lage, & Naffrechoux, 1969), originally held the majority opinion (e.g., R. D. Clark, 1990; Levine & Ranelli, 1978), are willing to compromise a bit (Mugny, 1982), have at least some support from others (e.g., Asch, 1955; Tanford & Penrod, 1984; Wolf & Latané, 1985), appear to have little personal stake in the issue (Maass, Clark, & Haberkorn, 1982), and present their views as compatible with the majority but just a bit ahead of the curve, so to speak (e.g., Kiesler & Pallak, 1975; Maass et al., 1982; Volpato, Maass, Mucchi-Faina, & Vitti, 1990). Minority influence also has a better chance if the majority wants to make an accurate decision, because this situation gives the advantage to informational over normative influence (Laughlin & Ellis, 1986). The conditions associated with effective minority influence enable groups (and societies) to embrace new ideas, fashions, and action preferences.

Accountability

The notion of conformity conveys an image of nameless automatons who surrender their personal identity to the group. Ironically, however, the coordination function served by mutual influence in a group setting requires rather than negates a sense of personal identity and responsibility among group members. To achieve social coordination, people must feel that they are part of a larger social entity, of course, but they also must feel that this part is uniquely their own. Two research traditions are relevant to the role of accountability in achieving social coordination. The first concerns the conditions under which people abrogate personal responsibility for doing their part to achieve a common goal or for taking the initiative in a group setting in which their involvement would be helpful. The second concerns the conditions under which people in a sense become overly sensitized to the group goal to the point that they lose sight of their personal identity and unique role in the group.

Social Loafing

Sometimes the whole is less than the sum of its parts. This feature of group dynamics was first observed in an experimental setting by Max Ringelmann in the 1880s. Using a gauge to measure effort exerted by tug-of-war participants, Ringelmann found that the collective effort was always greater than that of any single participant, but less than the sum of all participants (Kravitz & Martin, 1986). If two people working alone could each pull 100 units, for example, their combined output was only 186—not the 200 one would expect if each pulled as hard as he or she could. Similarly, a three-person group did not produce 300 units, but only 255, and an eight-person group managed only 392 units—less than half the 800 possible.

Ringelmann suggested that two mechanisms were responsible for this phenomenon. The first, coordination loss, reflects difficulties individuals have in combining their efforts in a maximally effective fashion. On a rope-pulling task, for example, people may not synchronize their respective pulls and pauses, and this can prevent each person from reaching his or her full potential. The second mechanism, commonly referred to today as social loafing (Latané, 1981), refers to diminished effort by group members. People may simply not work as hard when they feel other people can pick up the load. Latané, Williams, and Harkins (1979) attempted to replicate the Ringelmann effect and to determine which of his proposed mechanisms accounted for it. Participants in one study, for example, were simply asked to shout or clap as loud or as hard as they could, while wearing blindfolds and headsets that played a stream of loud noise. When tested alone, participants averaged a rousing 9.22 dynes/cm²—about as loud as a pneumatic drill or a teenager’s stereo system. But in dyads, subjects performed at only 66% capacity, and in six-person groups, their performance dropped to 36% capacity. The results, in other words, revealed an inverse relationship between the number of coperformers and the output each one generated.

To separate the relative impact of coordination loss and social loafing, Latané et al. (1979) tested noise production in pseudogroups. Participants thought that either one other participant or five other participants were cheering with them, although they were actually cheering alone (the blindfolds and headsets came in handy here). Because there were not any other group members, any drop in individual production could not be due to coordination loss, but instead would reflect social loafing. Results revealed that social loafing was the operative mechanism. If participants thought they were cheering with one other person, they shouted at 82% of their individual capacity. Their productivity dropped to 74% if they thought five others were working with them.

Social loafing is not limited to group tasks involving shouting, or even to tasks involving physical effort of some kind. The decrement in personal contribution with increasing group size has been documented in groups working on a variety of tasks, including maze performance, typing, swimming, vigilance exercises, creativity problems, job-selection decisions, and even brainstorming (e.g., Weldon & Mustari, 1988; cf. Forsyth, 1990). Social loafing applies equally well to men and women, to people of all ages, and to groups in many different cultures (e.g., Brickner, Harkins, & Ostrom, 1986; Harkins & Petty, 1982). There may be polarization of attitudes and other mental states in social groups, but this intensification effect apparently does not apply to group members’ efforts in accomplishing a group task.

Social loafing varies in accordance with a set of specific factors. Group members loaf less when they are working on interesting or challenging tasks (e.g., Brickner et al., 1986). Loafing is also minimized when each member’s contribution to a group project can be clearly identified, presumably because identification creates the potential for evaluation by other group members (e.g., Harkins & Jackson, 1985; Jackson & Latané, 1981; Williams, Harkins, & Latané, 1981). Social loafing is also partly attributable to the diffusion of responsibility that takes place in groups and crowds (cf. Latané & Darley, 1970). Bystanders to emergency situations feel less compelled to intervene if there are other potential helpers (Darley & Latané, 1968), for example, and restaurant patrons leave pitiful tips when there are many people in the dinner party (Latané & Darley, 1970). Diminished personal responsibility reflects members’ feeling that someone else will make up the difference, and also reflects their assessment that they can get away with not helping because the blame is shared by everyone in the group.

The research on social loafing has focused primarily on additive group tasks in which each member’s performance is redundant with that of every other member. This hardly exhausts the possible relationships among group members. In situations emphasizing individual rather than group performance, for example, there is a tendency for individual energy expenditure and effort to increase rather than decrease when others are physically present (cf. Triplett, 1898; Zajonc, 1965). Whether this social facilitation effect (cf. Cottrell, 1972) translates into better performance, however, depends on features of the task and the contingencies surrounding its occurrence. The presence of others typically enhances performance on overlearned tasks, for example, but tends to hinder performance on novel or difficult tasks (Zajonc, 1965). There is some controversy regarding the social influence processes at work in such contexts, although there is a fair degree of consensus that the presence of others increases a performer’s physiological arousal, which in turn activates his or her dominant responses on the task. This is consistent with the empirical generalization noted by Zajonc (1965), because correct responses are dominant for well-learned tasks and incorrect responses are dominant for unfamiliar tasks.

Even in groups mandating cooperation among group members, the nature of the task may entail forms of coordination that go beyond the simple additive criterion employed in social loafing research (cf. Steiner, 1972). Neither simultaneous shouting nor tug-of-war, after all, captures the essence of groups that build machines or solve human relations problems. Many group goals are defined in terms of distinct subacts that must be accomplished by different group members. For such activities, the quality of the group’s performance depends on how well members’ respective contributions are synchronized in time. Assembling a car on a production line requires such role differentiation, as does maintaining a household, moving heavy pieces of furniture, or implementing plans to manually recount votes in a close election. Coordination is every bit as critical as individual effort per se in such instances, and a particular blend of normative and informational influence may be necessary for the action to unfold smoothly and effectively. Identifying these blends of influence is an agenda for future research.

Deindividuation

Festinger, Pepitone, and Newcomb (1952) coined the term deindividuation to describe a mental state defined by total submergence in a group. A deindividuated person feels he or she does not stand out as a unique individual, and this feeling leads to a reduction of inner restraints that can result in impulsive acts or other behaviors that might otherwise be inhibited. Although these behaviors may be benign or even desirable (e.g., spontaneous expression of feelings, laughing and dancing at a boisterous party), researchers have typically focused on the potential for antisocial and aggressive actions under conditions that promote deindividuation (cf. Diener, 1980; Zimbardo, 1970). Soccer hooligans committing random acts of violence, mobs rioting and looting stores, and gangs terrorizing their enemies are disturbing manifestations of this potential.

Several preconditions for deindividuation have been identified (Zimbardo, 1970). Being part of a large, unstructured group, for example, increases one’s anonymity and thus can reduce feelings of personal responsibility for one’s actions. The same can be said for clothing that conceals one’s identity, the cover of darkness, sensory overload, the use of drugs or alcohol, and collective action of a simple, repetitive (or rhythmic) nature (e.g., marching, clapping, dancing). Diener (1980) suggested that the anonymity associated with deindividuating conditions is tantamount to a loss of self-awareness (Duval & Wicklund, 1972) and hence to diminished salience of personal standards for acceptable conduct (e.g., Carver & Scheier, 1999; Higgins, 1987; Vallacher & Solodky, 1979). Lacking the usual self-regulatory mechanisms for enacting and inhibiting behavior, the deindividuated person becomes highly susceptible to influence from the group and the context in which the group is acting. The nature of this influence, however, does not map onto either normative or informational influence in a straightforward manner. Thus, the person is not consciously modifying his or her behavior to court approval from others, nor is he or she gaining a great deal of insight into physical reality from fellow group members.

One likely dynamic at work is akin to what Le Bon (1895/1960) referred to as behavioral contagion, the rapid spread of behavior in a group context. Contagion occurs through simple imitation of others’ behavior or through the adoption of others’ emotional state, and thus is not particularly taxing on people’s mental processes. A related possibility follows from emergent norm theory (Turner & Killian, 1972), which holds that people in unstructured group settings without clear a priori group goals are highly susceptible to cues to higher-order meaning and guides to action that develop in the situation. Consider, for example, the experience of walking down New Orleans’ Bourbon Street at 2 a.m. during Mardi Gras. This situation is ripe for deindividuation—maybe even prototypical. You are part of a large, unstructured group consisting of unfamiliar people, it’s dark and no one is paying attention to you anyway, music is coming from all angles to overwhelm your powers of sensory integration, and there may have been a couple of hurricane specials consumed by this time. But despite the complex array of sights and sounds, there is no plan dictating your movements and shifts in attention. At this point, if others in the throng spontaneously broke into a rhythmic chant or began throwing plastic beads at a passing float, you might be tempted to follow suit. The collective action you observe provides temporary integration for the ensemble of your specific experiences and thus functions as an emergent norm. The norm doesn’t imply acceptance or rejection by others—you could keep on walking and no one would care—but it does provide a guide that allows you to engage in concerted action rather than mere movement (cf. Goldman, 1970; Vallacher & Wegner, 1985).

Viewed in this way, it is easy to appreciate how a state of deindividuation can promote widely divergent action trajectories—moral versus immoral, prosocial versus antisocial, effusive versus sullen, and so on. In effect, the deindividuated person is behaving in accordance with rudimentary moment-to-moment action guides that are devoid of higher-level meaning. This mental state is a precondition for emergent understanding (Vallacher & Wegner, 1987), making the person highly susceptible to whatever goals and plans are rendered salient as the situation evolves. Should the situation resolve itself as an occasion for social camaraderie, the person might be inclined to laugh and dance with everyone he or she encounters. But should the opportunity for personal gain at the expense of others suddenly arise, the same person could just as easily behave in a decidedly unfriendly, even aggressive manner toward those who provide the opportunity. Social influence in this context provides personal (if somewhat transient) coherence and direction for individuals’ otherwise disassembled and unregulated actions.

The Individual and Society

One of the most challenging problems in social psychology centers on the relation between micro- and macrolevels of description. Social psychological theories are typically couched in terms of a single level of description, with little explicit coordination with theories defined at different levels. Thus, the processes at the level of the individual tend to be independent of group-level processes. Yet it is unreasonable to expect any level of structure and function to operate in isolation. An individual’s behavior is influenced by the social context in which he or she functions, and each individual in turn creates the social context for other individuals through his or her interactions with them. The nature of this mutual dependency is difficult to capture, but recent advances in the study of complex systems (cf. Schuster, 1984) are proving useful in linking different levels of social reality (e.g., Nowak & Vallacher, 1998a, 1998b; Nowak, Vallacher, & Burnstein, 1998; Nowak, Vallacher, & Zochowski, 2002). In this section, we describe one relevant approach—cellular automata—that has established a track record in this regard in recent years. Other approaches (neural networks, coupled dynamical systems) are showing promise as well, and the reader is referred to the sources cited above for a description of them.

The Cellular Automata Approach

Cellular automata models (Gutowitz, 1991; Ulam, 1952; von Neumann, 1966; Wolfram, 1986) capture important features of complex systems and are widely used in physics and various domains of biology, including neuroscience (Amit, 1989) and population dynamics (May, 1981). A set of elements is specified to represent the basic units (e.g., neurons, people) in the process under consideration. Each element can adopt a finite number of discrete states (e.g., activated vs. inhibited, pro- vs. antiabortion). The elements are arranged in a spatial configuration, the most common of which is a two-dimensional grid. The state of an element at t + 1 depends on the states of the neighboring elements at time t. The exact form of this dependence is specified by so-called updating rules. The dynamics of cellular automata depend on the nature of the updating rule and on the format of the grid dictating the neighborhood structure.
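The mechanism can be made concrete with a minimal sketch in Python. The majority updating rule used here is an illustrative choice of ours, not one of the models discussed below: each element holds state +1 or −1 and, at each synchronous step, adopts the majority state among its four nearest neighbors at time t.

```python
import numpy as np

def update(grid):
    """One synchronous step: each element adopts the majority state
    (+1 or -1) among its four nearest neighbors at time t; ties keep
    the current state. The grid wraps around at the edges (a torus)."""
    neighbor_sum = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                    np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
    return np.where(neighbor_sum > 0, 1,
                    np.where(neighbor_sum < 0, -1, grid))

rng = np.random.default_rng(0)
grid = rng.choice([-1, 1], size=(20, 20))  # random initial configuration
for t in range(10):                        # apply the updating rule repeatedly
    grid = update(grid)
```

Even this toy rule illustrates the general point of the section: global spatial structure (patches of like-state elements) emerges from purely local dependence of each element on its neighbors.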

Two classes of cellular automata models are used to characterize social processes. In both, elements represent individuals in a social system. In one, personal characteristics change as a result of updating rules. This approach explores changes in attitudes and opinions that occur as a result of social interaction. In the other class, individuals maintain stable characteristics but may change their physical location. This approach has revealed the emergence of spatial patterns on the basis of stable values and preferences. Schelling (1969, 1971), for instance, developed an updating rule specifying that an individual who has more dissimilar than similar neighbors will move to a different random location. Simulations based on this simple rule demonstrated the emergence of spatial patterns corresponding to social segregation. Both classes of models reveal the emergence of regularities and patterns on a global level that were not directly programmed into the individual elements. These regularities and patterns typically take the form of spatial configurations, such as coherent minority opinion clusters that emerge from an initial random distribution of opinions. Regularities may also appear as temporal patterns, including such basic trajectories as the development of a stable equilibrium (fixed-point attractor), alternation between different states (periodic attractor), and apparent randomness (deterministic chaos).
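The second class of model can be sketched in a few lines. The following toy version of Schelling's rule (grid size, vacancy rate, neighborhood, and random seed are arbitrary choices of ours) relocates one dissatisfied agent at a time to a random empty site:

```python
import random

random.seed(1)
N = 10
# Two groups (0 and 1); None marks an empty site (about 20% of cells).
grid = [[random.choice([0, 1]) if random.random() < 0.8 else None
         for _ in range(N)] for _ in range(N)]
n_agents = sum(cell is not None for row in grid for cell in row)

def dissatisfied(i, j):
    """Schelling's updating rule: an agent with more dissimilar than
    similar neighbors (8-cell neighborhood, wrapping edges) will move."""
    same = diff = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            v = grid[(i + di) % N][(j + dj) % N]
            if v is not None:
                same += v == grid[i][j]
                diff += v != grid[i][j]
    return diff > same

for _ in range(2000):  # relocate one dissatisfied agent per iteration
    movers = [(i, j) for i in range(N) for j in range(N)
              if grid[i][j] is not None and dissatisfied(i, j)]
    if not movers:     # everyone satisfied: segregated clusters remain
        break
    i, j = random.choice(movers)
    empties = [(a, b) for a in range(N) for b in range(N) if grid[a][b] is None]
    a, b = random.choice(empties)
    grid[a][b], grid[i][j] = grid[i][j], None
```

Although no agent demands segregation outright (each tolerates up to half of its neighbors being dissimilar), the runs typically end with spatially segregated clusters, illustrating how a global pattern arises that was not programmed into any individual element.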

Cellular Automata and Social Processes

Cellular automata models are useful for exploring different social interaction rules and the generation of societal level phenomena as a result of such rules (cf. Hegselmann, 1998; Messick & Liebrand, 1995; Nowak, Szamrej, & Latané, 1990). In these applications, the neighborhood structure is intended to capture the structure of interdependence among individuals (Thibaut & Kelley, 1959). Indirect interdependence exists when an individual’s actions have consequences, intended or unintended, for other people. This form of interdependence is often examined in the context of social dilemmas, in which an action intended to maximize personal gain has negative consequences for others (cf. Schulz, Alberts, & Mueller, 1994). In the tragedy of the commons (Hardin, 1968), for instance, a farmer is motivated to overgraze an area of land shared with other farmers. In the short run, the farmer gains advantage over his neighbors, but in the long run, everyone—the farmer included—suffers. Direct interdependence reflects what we normally think of as social influence: One person directly influences the state or behavior of another person. Power, manipulation, and coordination thus represent direct interdependence. Both indirect and direct forms of interdependence have been examined in cellular automata models.

Interdependence and Social Dilemmas

How can altruistic behavior emerge against the backdrop of self-interest? Insight into this puzzle derives from cellular automata models that simulate the short- and long-term effects of behavior in the Prisoner’s Dilemma Game (PDG). In pioneering this approach, Axelrod (1984) demonstrated that cooperation often emerges among individuals trying to maximize their respective self-interest. Essentially, Axelrod found that cooperators survived by forming clusters with one another, so that they could engage in mutual help without risking exploitation.

In an extension of this approach, Messick and Liebrand (1995) modeled the consequences of different strategies in the PDG. Each interactant occupied a fixed position in a two-dimensional lattice and played a PDG with one of his or her nearest neighbors. On each trial, the interactant chose whether to cooperate or defect according to one of several updating rules, each reflecting a specific social strategy. In a given simulation, everyone used the same strategy. In the tit-for-tat strategy, individuals imitated the choice made on the preceding trial by their neighbor. In the win-cooperate–lose-defect strategy, the interactant with the greater outcome cooperated, whereas the interactant with the smaller outcome defected. In the win-stay–lose-shift strategy, meanwhile, interactants who perceived themselves to be winning behaved in the same fashion on the next trial, whereas interactants who perceived themselves as losing changed their behavior on the next trial. The results of simulations employing these updating rules reveal different effects depending on the size of the group. In relatively small groups, an equilibrium tends to be reached fairly quickly, with all interactants converging on a particular choice. In larger groups, however, each strategy leads to continuous dynamics characterized by the coexistence of different behavioral choices. Eventually, however, each strategy leads to specific proportions of cooperating individuals. These proportions tend to be maintained at the group level, with the interactants themselves continuing to change their choices throughout the simulation.
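The flavor of these lattice simulations can be conveyed with a toy rendering of the tit-for-tat rule alone (the payoff matrix and the other two strategies are omitted, and the lattice size and seed are arbitrary choices of ours): every interactant simultaneously imitates the previous-trial choice of a randomly selected nearest neighbor, and the group-level proportion of cooperators is tracked across rounds.

```python
import random

random.seed(0)
N = 20
# 1 = cooperate, 0 = defect; start from a random mixture of choices.
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def tit_for_tat_round(grid):
    """Each interactant imitates the previous-trial choice of a randomly
    chosen nearest neighbor (wrapping edges); updating is synchronous."""
    new = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            di, dj = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            new[i][j] = grid[(i + di) % N][(j + dj) % N]
    return new

cooperation = []          # group-level proportion of cooperators per round
for _ in range(30):
    grid = tit_for_tat_round(grid)
    cooperation.append(sum(map(sum, grid)) / N ** 2)
```

Tracking `cooperation` over rounds shows the pattern described above for large groups: individual choices keep changing, while the group-level proportion of cooperators fluctuates around a roughly stable value.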

In a different approach, Hegselmann (1998) explored the emergence of social support networks in a society. Individuals lived on a two-dimensional grid containing some unoccupied sites and played a two-person “support game” with all of their immediate neighbors. Each individual was characterized by some probability of needing help. A needy individual clearly benefited, of course, if he or she received help from a neighbor, but providing help to a neighbor was clearly costly. With this trade-off in mind, each individual’s preferred neighborhood was one in which he or she could obtain the degree of help needed while minimizing the help he or she provided. Individuals were sometimes provided a migration option that enabled them to move to a more desirable location within a certain radius. The results reveal how support networks can evolve in a world of rational egoists who are differentially needy, but similarly motivated to choose partners in an opportunistic manner. Although social support inevitably develops, the social networks that emerge tend to be highly segregated. Individuals with a moderate probability of becoming needy tend to form relationships with one another, and also with individuals from somewhat higher and lower risk classes. Interestingly, individuals at the extremes of neediness—those with very high or very low probabilities of needing help—tend to have the most difficulty in establishing support relations. If they do manage to form such relationships, their partners tend to be from the same risk class.

Social Influence and the Emergence of Social Structure

The cellular automata model of social process that has been analyzed most thoroughly concerns social influence (e.g., Lewenstein, Nowak, & Latané, 1993; Nowak, Lewenstein, & Frejlak, 1996). The initial formulation of this model (Nowak et al., 1990) focused on the emergence of public opinion in a society characterized by a diversity of attitudes. The model assumes that in the course of social interaction, individuals are motivated to sample the degree of social support for their position on a given topic. The model also assumes, in line with social impact theory (Latané, 1981), that each individual gives the greatest weight to the opinions of others who are spatially closest to him or her and who have the greatest strength (e.g., who are most influential or persuasive). An individual’s own opinion is also taken into consideration and is weighted most heavily by virtue of spatial immediacy (i.e., distance is 0). After each round of interaction, the individual compares the degree of support for each attitude position and adopts the one with the strongest support in preparation for the next round of interaction.

In the simulations, one individual is chosen (usually at random), and influence is computed for each opinion in the group. The strength of influence of each opinion is expressed by the following formula:

I_i = Σ_j (s_j / d_ij²)

where I_i denotes total influence, s_j corresponds to the strength of each individual, and d_ij corresponds to the distance between individuals i and j. If the resultant strength for an opinion position is greater than the strength of the individual’s current position, his or her opinion changes to match the prevailing position. This process is performed for each individual. This procedure is repeated until there are no further changes, which typically requires several rounds of simulation, because a person who had previously changed his or her position to match that of his or her neighbors may revert to the original position if the neighbors change their opinions. Figures 16.1 and 16.2 present representative results of the computer simulations. Each box corresponds to an individual. The color of the box (light vs. dark gray) denotes the individual’s position, and the height of the box corresponds to the individual’s strength. In Figure 16.1, there is a majority of 60% (light gray) and a minority of 40% (dark gray). The majority and minority members are randomly distributed, and each group has the same relative proportions of strong and weak members (high vs. low boxes). Figure 16.2 shows the equilibrium reached after six rounds of simulated discussion. Now the majority is 90% and the minority is 10%. Note that the minority opinion has survived by forming clusters of like-minded people and that these clusters are largely formed around strong individuals.
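This updating scheme can be sketched in Python (our own rendering, not the authors' code). Each individual sums strength divided by squared distance over the holders of each opinion; the individual's own distance is set to 1 rather than 0 here, an assumption of this sketch that keeps the self term finite while still weighting it heavily. The better-supported position is adopted, and rounds repeat until no one changes.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 15
opinions = (rng.random((N, N)) < 0.6).astype(int)  # 1 = 60% majority, 0 = minority
strengths = rng.random((N, N)) + 0.5               # individual strength s_j
xs, ys = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

def support(i, j, opinion):
    """Total influence of `opinion` on individual (i, j): the sum of
    s_j / d_ij**2 over all holders of that opinion. The self-distance
    is set to 1 (not 0) to avoid dividing by zero -- an assumption."""
    d2 = (xs - i) ** 2 + (ys - j) ** 2
    d2[i, j] = 1
    holders = opinions == opinion
    return float(np.sum(strengths[holders] / d2[holders]))

changed, rounds = True, 0
while changed and rounds < 20:      # repeat until no further changes
    changed, rounds = False, rounds + 1
    for i in range(N):
        for j in range(N):          # adopt the better-supported position
            best = int(support(i, j, 1) > support(i, j, 0))
            if best != opinions[i, j]:
                opinions[i, j] = best
                changed = True
```

Inspecting the final `opinions` array in runs of this kind shows the two outcomes discussed next: the majority grows (polarization), while the surviving minority sits in spatial clusters anchored by high-strength individuals.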


These two group-level outcomes—polarization and clustering—are commonly observed in computer simulations (cf. Nowak et al., 1996; Latané, Nowak, & Liu, 1994) and are reminiscent of well-documented social processes. As noted earlier in this research paper, the average attitude in a group becomes polarized in the direction of the prevailing attitude as a result of group discussion (e.g., Moscovici & Zavalloni, 1969; Myers & Lamm, 1976). In the simulations, polarization reflects the greater influence of the majority opinion. In the initial random configuration (Figure 16.1), the average proportion of neighbors holding a given opinion corresponds to the proportion of this opinion in the total group. The average group member, then, is surrounded by more majority than minority members, a difference that results in more minority members’ being converted to the majority position than vice versa. Some majority members are converted to the minority position, however, because they happen to be located close to an especially influential minority member, or because by pure accident, more minority members happen to be at this location.


Clustering is also pervasive in social life. Attitudes, for example, have been shown to cluster in residential neighborhoods (Festinger, Schachter, & Back, 1950). Pronounced clustering also characterizes political beliefs, religions, clothing fashions, and farming techniques. Clustering reflects the relatively strong influence exerted by an individual’s neighbors. When opinions are distributed randomly, the sampling of opinions through social interaction provides a reasonably accurate portrait of the distribution of opinions in the larger society. When opinions are clustered, however, the same sampling process will yield a highly biased result. Because the opinions of those in the nearby vicinity are weighted the most heavily, the prevalence of one’s own opinion is likely to be overestimated. Hence, opinions that are in the minority in global terms can form a local majority. Individuals who hold a minority opinion are therefore likely to maintain this opinion in the belief that it represents a majority position.

Control Factors for Social Influence

The results concerning polarization and clustering have been confirmed analytically (Lewenstein et al., 1993) and have received empirical support as well (Latané, Liu, Nowak, Bonavento, & Zheng, 1995; Latané & Nowak, 1997). This research has also identified several control factors that are responsible for the emergence of these macroscopic properties (Latané & Nowak, 1997; Lewenstein et al., 1993; Nowak et al., 1996). Individual differences in strength, first of all, are indispensable to the survival of minority clusters. This conclusion is consistent with evidence demonstrating the importance of leaders for maintaining the viability of minority opinions. The literature on brainwashing, for example, documents that natural leaders were commonly removed from the group before attempts were made to brainwash prisoners of war (cf. Schein, 1956). By counteracting the sheer number of majority opinions, the strength of leaders stops minority clusters from decaying. It is worth noting that as a result of social influence, individual differences in strength tend to become correlated with opinions. This is because the weakest minority members are most likely to adopt the majority position, so that over time the average strength of the remaining minority members will grow at the expense of the majority. This scenario is consistent with the observation that individuals advocating minority positions are often more influential than those advocating majority positions.

A second critical control factor is nonlinearity in attitude change. Abelson (1979) demonstrated that when individuals move incrementally toward the opinions of interaction partners as a result of social influence, the invariable outcome of simulations is uniformity and the complete loss of minority clusters. In the model depicted here, however, attitudes are assumed to be categorical in nature (Latané & Nowak, 1994). This means that individuals hold a fixed position and actively resist influence attempts until a critical threshold of influence is reached, at which point they switch dramatically from one category to another rather than incrementally on a dimension of judgment. There is empirical evidence in support of the nonlinearity assumption for attitude topics that are personally important (cf. Latané & Nowak, 1994). Such attitudes display a bimodal distribution, with almost no individuals occupying the intermediate points on the attitude dimension. This suggests, incidentally, that one way to achieve consensus in a group is to decrease the subjective importance of the topic in question.
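The contrast between incremental and categorical change can be made concrete with a toy one-dimensional example (our own construction; the ring neighborhood, rate, and threshold are illustrative, not parameters from Latané & Nowak, 1994). Under incremental averaging, each opinion drifts toward its neighbors' mean and the ring homogenizes; under the categorical rule, an individual switches only when both neighbors oppose him or her, so clusters of two or more like-minded individuals resist influence.

```python
import random

random.seed(2)
n = 50
opinions = [random.choice([-1.0, 1.0]) for _ in range(n)]  # ring of n individuals

def incremental_step(ops, rate=0.3):
    """Abelson-style change: move each opinion part way toward the mean
    of its two ring neighbors; repeated application yields uniformity."""
    return [o + rate * ((ops[i - 1] + ops[(i + 1) % n]) / 2 - o)
            for i, o in enumerate(ops)]

def threshold_step(ops):
    """Categorical change: hold your position unless both neighbors
    oppose it, then switch outright (a hypothetical threshold rule)."""
    return [-o if (ops[i - 1] == -o and ops[(i + 1) % n] == -o) else o
            for i, o in enumerate(ops)]

smooth = list(opinions)
categorical = list(opinions)
for _ in range(100):
    smooth = incremental_step(smooth)
    categorical = threshold_step(categorical)
```

Comparing the two trajectories shows why the nonlinearity assumption matters: the averaged opinions converge toward a single intermediate value, whereas the categorical opinions remain at the two poles, with only isolated dissenters flipping.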

A third critical feature concerns the geometry of the social space (Nowak, Latané, & Lewenstein, 1994). People do not communicate equally with everyone in a group, nor are their interactions random. Specific communication patterns can be approximated with different geometries of social space. In most of the simulations, social space is portrayed as a two-dimensional matrix of n rows and n columns. This geometry reflects the assumption that interactions typically occur in two-dimensional spaces, such as neighborhoods, town squares, and rooms. One can envision other geometries, however, to capture different communication structures (Nowak et al., 1996). A one-dimensional geometry in which people interact mainly with neighbors to their left and right corresponds to a row of houses along a river or a village stretching along a road. In this case, strong clustering occurs because of well-pronounced local interactions between nearest neighbors. Polarization, however, is inhibited because members of the majority cannot encircle members of the minority and overwhelm them. Far more elaborate geometries of social space can also be envisioned. In the real world, many different geometries no doubt co-occur and thus determine the dynamics of social influence. The availability of telephones, e-mail, and common areas for shopping and recreation clearly add many dimensions to the effective geometry in which interactions occur. The combined effects of such geometries play a significant role in determining the form and outcome of social influence.

A fourth critical factor represents the weight an individual attaches to his or her own opinion as compared to the opinions of others. This variable, referred to as self-influence, corresponds to psychological states like self-confidence, strength of conviction, and belief certainty. An individual’s self-influence is correlated with his or her strength, although the absolute value of self-influence varies as a function of topic or social setting. When an issue is new or confusing, for example, self-influence is correspondingly lower, reflecting the fact that no strong opinion has formed and everyone is relatively open to external influence. When an issue is familiar and personally important, however, self-influence attains its maximum value for everyone, reflecting the greater importance of one’s own opinion compared to others’ opinions. Because issue familiarity is assumed to be the same for all individuals in a given simulation, variation in self-influence is a direct reflection of variation among individuals in their respective strength.

The dynamics of social influence are determined by the value of self-influence relative to the total influence of other individuals. When self-influence is low, individuals may switch their opinions several times during the course of simulations. This has the effect of destabilizing clusters. For topics that are unfamiliar, then, one observes heightened dynamics that promote unification based on the majority opinion. However, if self-influence is greater than the combined influence of others, dynamics tend to be dampened altogether, unless sources of noise (random external factors) are present. Because noise works jointly with social influence, noise-induced changes are typically in the same direction as majority influence. Introducing a random factor that by itself would not favor any position can thus neutralize the effect of self-influence and enhance the effect of majority opinion. Very high values of noise, however, can dilute the effects of social interaction as well, producing random changes in opinion.
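The interplay of self-influence, others' strength and distance, and noise described above can be condensed into a single update step. This is a minimal sketch under stated assumptions: the function name, the tuple layout, the inverse-square distance weighting, and the uniform noise term are all illustrative choices, not the published specification.

```python
import random

# One influence episode for a single individual: self-influence is
# weighed against the summed influence of others, and a random noise
# term (external factors) can tip a near-balanced comparison.

def social_impact(me, others, self_influence, noise=0.0):
    """Return the individual's opinion (+1 or -1) after one episode.

    others: list of (opinion, strength, distance) tuples.
    """
    support = self_influence  # weight attached to one's own opinion
    opposition = 0.0
    for opinion, strength, distance in others:
        impact = strength / distance ** 2  # influence decays with distance
        if opinion == me:
            support += impact
        else:
            opposition += impact
    # Noise is direction-neutral by itself, but because it works jointly
    # with social influence it tends to push changes the majority's way.
    if opposition + random.uniform(-noise, noise) > support:
        return -me
    return me
```

With low self-influence the comparison is easily won by others and opinions flip often, destabilizing clusters; with self-influence exceeding the combined influence of others, nothing changes unless noise perturbs the balance, exactly as the text describes.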

Social Change and Societal Transitions

This general approach to the modeling of social processes has proven useful in generating insight into the dynamics of social change, including major societal transformations (Nowak & Lewenstein, 1996; Nowak, Lewenstein, & Szamrej, 1993; Nowak & Vallacher, 2001). This approach successfully models social change when a source of bias is introduced that makes the minority opinion more attractive than the majority opinion. The results of simulations reveal that rapid social change occurs in a manner that is remarkably similar to phase transitions in physical phenomena. Expressed metaphorically, changes enter as bubbles of new within the sea of old, and social transitions occur as these bubbles expand and become connected. Thus, for example, a new political ideology or lifestyle fashion that resonates with existing values or interests is introduced into a social system and is immediately embraced by pockets of people in different areas. These pockets become increasingly connected over time, until at some point the new idea achieves widespread dominance over the old idea.

Computer simulations also indicate, however, that the bubbles of the old manage to stay entrenched in the sea of the new. The strongest and best-supported individuals holding the old position, moreover, are the most likely to survive pressures associated with the new position. This, in turn, means that the old position is likely to display a rebound effect when the bias toward the new position disappears or is somehow reversed. This scenario provides an explanation for the return of leftist governments in Eastern Europe after their overwhelming defeat in the elections in the late 1980s.

This model of societal transition stands in marked contrast to the conventional view of social change, which holds that individuals gradually switch from an old set of attitudes or preferences to a new set of ideas. From that perspective, new ideas spread more or less uniformly through a society at a constant and relatively slow rate. The simulation model allows for this mode of social change as long as the social system is near a relatively stable equilibrium and noise is not a significant factor in dictating the system’s dynamics (Nowak et al., 1993). The incremental scenario, in other words, may effectively characterize how change occurs in a stable society (e.g., a gradual shift from liberalism to conservatism or vice versa), but it does not capture the nature of change defining periods of rapid social transition.

Two sources of data provide empirical support for this perspective on social transition: the development of the private sector of the Polish economy and the emergence of voting preferences in the Polish parliamentary elections during the transition from socialism to private enterprise in the late 1980s and early 1990s (Nowak, Urbaniak, & Zienkowski, 1994). For a description of these data, as well as a comprehensive depiction of the cellular automata model and its implications for societal transition, the reader is referred to Nowak and Vallacher (2001).

Implications for Cultural Differences

The cellular automata model is useful in understanding and predicting differences among cultures in the dynamics of social influence and societal organization. A primary theme in cross-cultural comparisons centers on collectivism versus individualism (cf. Markus & Kitayama, 1991). In so-called collectivist cultures—China and Japan, for example—interdependence among individuals is stressed at the expense of personal independence, so that individuals are readily influenced by the beliefs, attitudes, and expectations of other people. In so-called individualistic cultures—the United States, for example—greater emphasis is placed on independence, with individuals maintaining a relatively strong degree of autonomy in their self-concept, attitudes, and lifestyle. This dimension of cultural variation maps directly onto the variable of self-influence in the cellular automata model. In a society that values independence in decision making and judgment, the magnitude of self-influence is correspondingly strong and operates at the expense of the opinions and expectations of others. Computer simulations have revealed that as self-influence increases in magnitude, the number of individuals changing their opinion on a given issue decreases, there is less polarization and clustering, and the average cluster is smaller in size (Latané & Nowak, 1997; Lewenstein et al., 1993).

Societies also differ in their relative stability. In less modernized societies, which are predominantly rural and agrarian rather than industrial in nature, the social context for individuals is relatively stable over time. In contrast, relatively modernized and industrial societies tend to be characterized by greater social mobility (e.g., travel, permanent relocation) and greater frequency of communication over large distances (by means of phone, e-mail, and fax). These features disturb the stability of social influence exerted by the social context on the individual. At different times, in other words, the individual is exposed to a broad range of opinions that go beyond those expressed in the immediate social context. This aspect of modernized society can be represented in the model as noise, which reflects the sum of influences (e.g., exposure to mass media, contact with people from other cultures) not accounted for by local influence. The greater the magnitude of noise in a society, the weaker the relative role played by the individual’s local context. The opinions of someone in a different part of the country, for example, may have a greater impact on an individual’s opinions than do the opinions of his or her immediate neighbors. This is clearly not the case in a stable society, in which everyone is exposed to the same local contacts throughout much of his or her life.

Computer simulations of the model have demonstrated a nonlinear relationship between noise and the distribution of opinions in a society (Latané & Nowak, 1997; Lewenstein et al., 1993). Small values of noise tend to destabilize weak clusters (e.g., Nowak, Vallacher, Tesser, & Borkowski, 2000). Because weak clusters tend also to be small, low-level noise has the effect of increasing the average size of clusters in the society, which is reflected in higher overall clustering and polarization. Higher values of noise, however, can destabilize all minority clusters and thus promote unification of opinions in the society. At very high levels of noise, however, individuals are likely to adopt opinions that are independent of their immediate social context. This not only disrupts clusters, but it also prevents unification of opinions in the society. In effect, everyone switches his or her opinions in a more or less random fashion.

In a stable society characterized by low levels of noise, then, a stable pattern of relatively small clusters is to be expected, whereas in a somewhat less stable society characterized by moderate levels of noise, larger clusters and greater opinion polarization are to be expected. With further increases in societal instability, one might expect a breakdown in minority opinion clusters and a tendency toward societal unification in opinion. Finally, in a highly modernized and unstable society, one would expect the pattern of opinions to be largely independent of the pattern of social ties (e.g., neighborhood influence), demonstrating instead the influence of other factors, such as selective exposure to the media and contact with other cultures.

Cultures also differ in their respective values and preferences regarding everything from clothing to religion. This feature is represented in the model as bias. If a new idea resonates well with a culture’s prevailing values and preferences, it will take somewhat less social influence for the idea to take hold in the society. But if the idea runs counter to cultural values, it is likely to be resisted even if it is supported by considerable influence. Communist ideology was never fully embraced in Poland, for example, despite the considerable influence exerted by the government, because communist values ran counter to strong Polish traditions of independence and Catholicism. As noted above, research exploring the social change implications of the model has verified that cultural bias is indeed a significant factor in determining the extent to which a new idea or ideology can take hold in a society (cf. Nowak & Vallacher, 2001).

It is interesting to consider cultural differences in terms of the specific combinations of self-influence, noise, and bias. Two industrialized societies may both have high levels of self-influence (i.e., an individualistic orientation), for example, but they may differ considerably in their respective levels of noise (e.g., selective exposure to mass media) or their bias toward various positions (e.g., religious beliefs). Because each of these variables plays a unique role in social influence, the interaction among them is likely to be decisive in shaping the predominant form of social influence characterizing a given society. Cultural variation in social influence processes, in other words, conceivably can be traced to the specific blend of variables in the cellular automata model. The investigation of this possibility provides an important agenda for future research concerning the relationship between micro- and macrolevels of social reality.

Toward Coherence in Social Influence

Social influence is clearly a big topic, a fact that reflects its centrality to the field of social psychology. The enormous range of ideas and principles associated with this topic, however, is a mixed blessing. On the one hand, the diversity of social influence phenomena and processes attests to the undeniable complexity of human social experience. But on the negative side of the ledger, this very diversity can prove vexing for those—laypeople and theorists alike—who seek integration and synthesis in their understanding. Several hundred studies and dozens of distinct mechanisms may well be necessary to capture the nuances of such a wide-ranging topic, but this state of affairs does little to inspire a feeling of coherent understanding. Like the field of social psychology as a whole (cf. Vallacher & Nowak, 1994), the subfield of social influence is highly fragmented, with poorly defined connections among the separate elements that define it.

Ironically, if there is a basis for theoretical coherence in social influence, it may reflect what psychologists have learned about the dynamics of coherence in recent years. Despite the enormous complexity of human minds and social groups—or perhaps because of such complexity— psychological systems at different levels of personal and social reality display self-organization and the emergence of higher-level properties. The mutual influences among the elements in each system promote such emergence, and the resultant properties in turn provide functional integration and coordination for the component elements. This reciprocal feedback between lower-level elements and higher-level properties may constitute an invariant principle common to all social psychological processes—or to all complex systems, for that matter (cf. Nowak & Vallacher, 1998a). Thus, the specific cognitive elements defining the stream of thought become self-organized with respect to higher-order judgments and values (Vallacher, Nowak, & Kaufman, 1994), specific movements and perceptions become coordinated to produce meaningful action (cf. Vallacher et al., 1998), individuals become integrated into higher-order functional units such as dyads and social groups (e.g., Nowak et al., 2002), and social groups become coordinated with respect to larger goals and values that define the social system in which they are embedded (cf. Nowak & Vallacher, 2001).

With this in mind, it is tempting to consider whether a press for higher-order coherence provides a common denominator for the otherwise dizzying array of specific social influence processes. Perhaps seemingly distinct means of influencing people prove effective or ineffective depending on how well each taps into established rules regarding coherence in thought and action. If so, many of the phenomena discussed in this research paper could be reframed so as to underscore their common features, and new predictions could be generated about the factors that determine whether a given influence strategy will prove successful in a particular context for a particular target. The central idea is that influence involves resynchronization of the elements in the target’s relevant cognitive structure. Achieving resynchronization is difficult, however, when the cognitive structure in question is well-integrated and stable. To promote a change in behavior in this case, it is necessary to disassemble or otherwise destabilize the associated cognitive structure. After the structure is destabilized, the person is primed for resynchronization in line with cues to higher-order meaning provided by the influence agent.

A basic strategy for resynchronizing people’s thoughts and desires follows from the emergence process of action identification theory (cf. Vallacher & Wegner, 1987; Vallacher et al., 1998). Research on this process has revealed that when people do not have an integrated representation of what they are doing, they become highly sensitive to coherent perspectives on their behavior provided by others. The extrapolation of this process to social influence is straightforward. In this scenario, the influence agent first induces the target to consider the relevant topic or action in concrete, low-level terms. Getting the target to engage in topic-relevant behavior has this effect, provided the behavior is sufficiently novel or complex that it requires attention to detail. Simply describing an action in terms of its details can also induce low-level identification, as can presenting the target with a surplus of concrete information regarding the attitude object. From this disassembled state, the target experiences a heightened press for integration. Left to his or her own devices, the target might emerge with a higher-level frame for the action or topic that reflects past positions or perhaps one that reflects a new integration altogether (Vallacher & Nowak, 1997; Vallacher et al., 1998). If, however, the influence agent offers a message that provides the missing integration before the target has demonstrated emergence on his or her own, the target is likely to embrace this message as an avenue of emergent understanding, even if it conflicts with his or her prior conception.

This general approach to influence is effective in changing people’s understanding of their own behavior, but with few exceptions (e.g., Davis & Knowles, 1999; Vallacher & Selz, 1991) this approach has not been extended to other domains of influence. Nonetheless, a wide variety of established influence strategies can be reframed as the disassembly of a coherent state into its lower-level elements, setting the stage for a reconfiguration of the elements in line with the influence agent’s agenda. Thus, any strategy that involves inducing the target to engage in acts that are at least somewhat novel or time-consuming can create the necessary precondition for guided emergence, as can providing the target with ambiguous or conflicting information that is open to different higher-level interpretations. Placing the target in a situation that lacks a priori structure and coherence can similarly make him or her vulnerable to emergent norms for how to act. Certain dimensions of individual difference are also associated with vulnerability to social influence, and these too can be considered in light of the emergence scenario. Self-uncertainty (e.g., Swann & Ely, 1984; Vallacher, 1980), low levels of personal agency (Vallacher & Wegner, 1989), field dependence (Witkin, Dyk, Faterson, Goodenough, & Karp, 1962), low cognitive differentiation (Bieri, Atkins, Briar, Leaman, Miller, & Tripodi, 1966), and external locus of control (Rotter, 1966) are clearly distinct constructs, but each can be seen as a manifestation of weak cognitive structure concerning a relevant domain of judgment and self-regulation (i.e., the self, action, other people, society). Lacking internal coherence, a person characterized in this fashion utilizes information provided by others as a frame around which he or she can achieve a sense of personal integration.

The failure of influence strategies, meanwhile, may reflect a corresponding failure to disrupt or otherwise disassemble the target’s prevailing understanding of the action or topic at issue. Thus, resistance to influence (e.g., psychological reactance) may be enhanced when the target’s prevailing perspective is not sufficiently deconstructed for him or her to embrace the influence agent’s alternative perspective. In essence, the emergence scenario suggests that all manner of influence, from compliance with requests to brainwashing, is built on a shared platform emphasizing people’s inherent press for coherent understanding.

We should note, however, that complete integration is rarely attained in complex systems. The cellular automata model of social influence, for example, commonly produces a highly clustered rather than unified social structure, even though the underlying dynamics are in service of self-organization and coherence (e.g., Nowak et al., 1990, 1998; Nowak & Vallacher, 1998b). Differentiation as opposed to unification is commonly observed as well in people’s self-structure (Nowak et al., 2000), despite a sustained press for integration in self-understanding. It is unreasonable, then, to expect the voluminous literature on social influence to admit to a single higher-order principle. Nor should we expect the field to reach a static equilibrium, with an immutable set of conclusions concerning the ways in which people influence one another. Complex systems are inherently dynamic, continually evolving and becoming reconfigured in response to new influences from the outside. Because interest in social influence shows no sign of letting up, we can expect this defining area of social psychology to display repeated episodes of disassembly and reconfiguration in the years to come.

Bibliography:

  1. Abelson, R. P. (1979). Social clusters and opinion clusters. In P. W. Holland & S. Leinhardt (Eds.), Perspectives in social network research (pp. 239–256). New York: Academic.
  2. Abelson, R. P., Aronson, E., McGuire, W. J., Newcomb, T. M., Rosenberg, M. J., & Tannenbaum, P. H. (Eds.). (1968). Theories of cognitive consistency: A sourcebook. Chicago: Rand McNally.
  3. Alexander, C. N., Zucker, L. G., & Brody, C. L. (1970). Experimental expectations and autokinetic experiences: Consistency theories and judgmental convergence. Sociometry, 33, 108–122.
  4. Allen, V. L., & Levine, J. M. (1971). Social support and conformity: The role of independent assessment of reality. Journal of Experimental Social Psychology, 4, 48–58.
  5. Alloy, L. B., & Abramson, L. Y. (1979). Judgment of contingency in depressed and non-depressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108, 441–485.
  6. Allport, F. H. (1924). Social psychology. Boston: Riverside Editions/Houghton Mifflin.
  7. Allport, G. W. (1939). Personality: A psychological interpretation. New York: Holt.
  8. Allport, G. W. (1968). The historical background of modern social psychology. In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology (Vol. 1, pp. 1–46). Reading, MA: Addison-Wesley.
  9. Amit, D. J. (1989). Modeling brain function: The world of attractor neural networks. Cambridge, UK: Cambridge University Press.
  10. Aronson, E. (1992). The return of the oppressed: Dissonance theory makes a comeback. Psychological Inquiry, 3, 303–311.
  11. Aronson, E., & Carlsmith, J. M. (1963). Effect of the severity of threat on the devaluation of forbidden behavior. Journal of Abnormal and Social Psychology, 66, 583–588.
  12. Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgment. In H. Guetzow (Ed.), Groups, leadership, and men (pp. 177–190). Pittsburgh, PA: Carnegie.
  13. Asch, S. E. (1952). Social psychology. Englewood Cliffs, NJ: Prentice-Hall.
  14. Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193, 31–35.
  15. Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70 (9, Whole No. 416).
  16. Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
  17. Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall.
  18. Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479.
  19. Baron, R. A., & Byrne, D. (1994). Social psychology: Understanding human interaction (7th ed.). Needham Heights, MA: Allyn & Bacon.
  20. Baumeister, R. F. (1982). A self-presentational view of social phenomena. Psychological Bulletin, 91, 3–26.
  21. Baumeister, R. F., Stillwell, A. M., & Heatherton, T. F. (1994). Guilt: An interpersonal approach. Psychological Bulletin, 115, 243–267.
  22. Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 6, pp. 1–62). New York: Academic.
  23. Bensley, L. S., & Wu, R. (1991). The role of psychological reactance in drinking following alcohol prevention messages. Journal of Applied Psychology, 21, 1111–1124.
  24. Benson, H., Karabenick, S. A., & Lerner, M. (1976). The effects of physical attractiveness, race, and sex on receiving help. Journal of Experimental Social Psychology, 12, 409–415.
  25. Berkowitz, L., & Daniels, L. R. (1964). Affecting the salience of the social responsibility norm: Effects of past help on the response to dependency relationships. Journal of Abnormal and Social Psychology, 68, 275–281.
  26. Berkowitz, L. (Series Ed.), & Walster, E. (Vol. Eds.). (1976). Advances in experimental social psychology: Vol. 9. Equity theory: Toward a general theory of social interaction. New York: Academic.
  27. Berry, S. H., & Kanouse, D. E. (1987). Physician response to a mailed survey: An experiment in timing of payment. Public Opinion Quarterly, 51, 102–114.
  28. Bettencourt, B. A., Brewer, M. B., Croak, M. R., & Miller, N. (1992). Cooperation and the reduction of intergroup bias. Journal of Experimental Social Psychology, 28, 301–319.
  29. Bickman, L. (1974). The power of a uniform. Journal of Applied Social Psychology, 4, 61–77.
  30. Bieri, J., Atkins, A. L., Briar, S., Leaman, R. L., Miller, H., & Tripodi, T. (1966). Clinical and social judgment. New York: Wiley.
  31. Blascovich, J., Ginsberg, G. P., & Howe, R. C. (1975). Blackjack and the risky shift: Pt. 2. Monetary stakes. Journal of Experimental Psychology, 11, 224–232.
  32. Braver, S. L. (1975). Reciprocity, cohesiveness, and cooperation in two-person games. Psychological Reports, 36, 371–378.
  33. Brehm, J. W. (1966). A theory of psychological reactance. New York: Academic.
  34. Brehm, S. S., & Brehm, J. W. (1981). Psychological reactance: A theory of freedom and control. New York: Academic.
  35. Brewer, M. B. (1991). The social self: On being the same and different at the same time. Personality and Social Psychology Bulletin, 17, 475–482.
  36. Brickner, M., Harkins, S., & Ostrom, T. (1986). Personal involvement: Thought-provoking implications for social loafing. Journal of Personality and Social Psychology, 51, 763–760.
  37. Bruner, J. S., & Tagiuri, R. (1954). The perception of people. In G. Lindzey (Ed.), Handbook of social psychology (Vol. 2, pp. 634–654). Reading, MA: Addison-Wesley.
  38. Budesheim, T. L., & DePaola, S. J. (1994). Beauty or the beast? The effects of appearance, personality, and issue formation on evaluations of political candidates. Personality and Social Psychology Bulletin, 20, 339–348.
  39. Burger, J. M. (1986). Increasing compliance by improving the deal: The that’s-not-all technique. Journal of Personality and Social Psychology, 51, 277–283.
  40. Burger, J. M. (1992). Desire for control: Personality, social, and clinical perspectives. New York: Plenum.
  41. Burger, J. M., & Cooper, H. N. (1979). The desirability of control. Motivation and Emotion, 3, 381–393.
  42. Burger, J. M., & Petty, R. E. (1981). The low-ball compliance technique: Task or person commitment? Journal of Personality and Social Psychology, 40, 492–500.
  43. Burnstein, E., & Vinokur, A. (1977). Persuasive argumentation and social comparison as determinants of attitude polarization. Journal of Experimental Social Psychology, 13, 315–332.
  44. Byrne, D. (1971). The attraction paradigm. New York: Academic.
  45. Byrne, D., Clore, G. L., & Smeaton, G. (1980). The attraction hypothesis: Do similar attitudes affect anything? Journal of Personality and Social Psychology, 51, 1167–1170.
  46. Carnegie, D. (1981). How to win friends and influence people. New York: Pocket Books. (Original work published 1936)
  47. Carver, C. S., & Scheier, M. F. (1999). Themes and issues in the self-regulation of behavior. In R. S. Wyer, Jr. (Ed.), Advances in social cognition (Vol. 12, pp. 1–105). Mahwah, NJ: Erlbaum.
  48. Castellow, W. A., Wuensch, K. L., & Moore, C. H. (1990). Effects of physical attractiveness of the plaintiff and defendant in sexual harassment judgments. Journal of Social Behavior and Personality, 5, 547–562.
  49. Chaiken, S. (1979). Communicator physical attractiveness and persuasion. Journal of Personality and Social Psychology, 37, 1387–1397.
  50. Cialdini, R. B. (1993). Influence: Science and practice (3rd ed.). New York: HarperCollins.
  51. Cialdini, R. B. (1995). Principles and techniques of social influence. In A. Tesser (Ed.), Advanced social psychology (pp. 257–281). New York: McGraw-Hill.
  52. Cialdini, R. B. (2001). Influence: Science and practice (4th ed.). Needham Heights, MA: Allyn & Bacon.
  53. Cialdini, R. B., Cacioppo, J. T., Bassett, R., & Miller, J. A. (1978). Low-ball procedure for producing compliance: Commitment then cost. Journal of Personality and Social Psychology, 36, 463–476.
  54. Cialdini, R. B., Green, B. L., & Rusch, A. J. (1992). When tactical pronouncements of change become real change: The case of reciprocal persuasion. Journal of Personality and Social Psychology, 63, 30–40.
  55. Cialdini, R. B., & Trost, M. R. (1998). Social influence: Social norms, conformity, and compliance. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (Vol. 2, pp. 151–192). New York: McGraw-Hill.
  56. Cialdini, R. B., Vincent, J. E., Lewis, S. K., Catalan, J., Wheeler, D., & Darby, B. L. (1975). Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique. Journal of Personality and Social Psychology, 31, 206–215.
  57. Cioffi, D., & Garner, R. (1996). On doing the decision: the effects of active versus passive choice on commitment and self-perception. Personality and Social Psychology Bulletin, 22, 133–147.
  58. Clark, M. S., & Mills, J. (1979). Interpersonal attraction in exchange and communal relationships. Journal of Personality and Social Psychology, 37, 12–24.
  59. Clark, R. D. (1990). Minority influence: The role of argument refutation of the minority position and social support for the minority position. European Journal of Social Psychology, 20, 489–497.
  60. Coleman, J. F., Blake, R. R., & Mouton, J. S. (1958). Task difficulty and conformity pressures. Journal of Abnormal and Social Psychology, 57, 120–122.
  61. Cotterell, N. B. (1972). Social facilitation. In C. G. McClintock (Ed.), Experimental social psychology (pp. 185–236). New York: Holt, Reinhart & Winston.
  62. Cotterell, N. B., Eisenberger, R., & Speicher, H. (1992). Inhibiting effects of reciprocation wariness on interpersonal relationships. Journal of Personality and Social Psychology, 62, 658–668.
  63. Crowne, D. P., & Marlowe, D. (1964). The approval motive: Studies in evaluative dependence. New York: Wiley.
  64. Crutchfield, R. (1955). Conforming and character. American Psychologist, 10, 191–198.
  65. Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row.
  66. Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377–383.
  67. Davis, B. P., & Knowles, E. S. (1999). A disrupt-then-reframe technique of social influence. Journal of Personality and Social Psychology, 76, 192–199.
  de Charms, R. (1968). Personal causation. New York: Academic.
  68. Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and selfdetermination in human behavior. New York: Plenum.
  69. DePaulo, B. M., Brittingham, G. L., & Kaiser, M. K. (1983). Receiving competence-relevant help. Journal of Personality and Social Psychology, 45, 1046–1060.
  70. Deutsch, M., & Gerard, H. G. (1955). A study of normative and informational social influence upon individual judgment. Journal of Abnormal and Social Psychology, 51, 629–636.
  71. Diener, E. (1980). Deindividuation: The absence of self-awareness and self-regulation in group members. In P. B. Paulus (Ed.), Psychology of group influence (pp. 209–242). Hillsdale, NJ: Erlbaum.
  72. Donaldson, S. I., Graham, J. W., Piccinin, A. M., & Hansen, W. B. (1995). Resistance-training skills and onset of alcohol use. Health Psychology, 14, 291–300.
  73. Doise, W. (1969). Intergroup relations and polarization of individual and collective judgments. Journal of Personality and Social Psychology, 12, 136–143.
  74. Drachman, D., De Carufel, A., & Insko, C. A. (1978). The extra credit effect in interpersonal attraction. Journal of Personality and Social Psychology, 14, 458–467.
  75. Duval, S., & Wicklund, R. A. (1972). A theory of objective self-awareness. New York: Academic.
  76. Efran, M. G., & Patterson, E. W. J. (1976). The politics of appearance. Unpublished manuscript, University of Toronto, Canada.
  77. Eisenberger, R., Cotterell, N., & Marvel, J. (1987). Reciprocation ideology. Journal of Personality and Social Psychology, 53, 743–750.
  78. Emswiller, T., Deaux, K., & Willits, J. E. (1971). Similarity, sex, and requests for small favors. Journal of Applied Social Psychology, 1, 284–291.
  79. Festinger, L. (1950). Informal social communication. Psychological Review, 57, 271–282.
  80. Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson.
  81. Festinger, L., Pepitone, A., & Newcomb, T. (1952). Some consequences of de-individuation in a group. Journal of Abnormal and Social Psychology, 47, 382–389.
  82. Festinger, L., Schachter, S., & Back, K. (1950). Social pressures in informal groups. Stanford, CA: Stanford University Press.
  83. Foa, E. B., & Foa, U. G. (1974). Societal structures of the mind. Springfield, IL: Thomas.
  84. Forsyth, D. R. (1990). Group dynamics (2nd ed.). Pacific Grove, CA: Brooks/Cole.
  85. Freedman, J. L., & Fraser, S. C. (1966). Compliance without pressure: The foot-in-the-door technique. Journal of Personality and Social Psychology, 4, 195–202.
  86. French, J., & Raven, B. (1959). The bases of social power. In D. Cartwright (Ed.), Studies in social power (pp. 150–167). Ann Arbor, MI: Institute for Social Research.
  87. Frenzen, J. R., & Davis, H. L. (1990). Purchasing behavior in embedded markets. Journal of Consumer Research, 17, 1–12.
  88. Garner, R. L. (1999). What’s in a name: Persuasion perhaps? Unpublished manuscript, Sam Houston State University, Huntsville, Texas.
  89. Gergen, K. J. (1985). The social constructionist movement in modern psychology. American Psychologist, 40, 266–275.
  90. Gergen, K. J., Ellsworth, P., Maslach, C., & Seipel, M. (1975). Obligation, donor resources, and reactions to aid in three cultures. Journal of Personality and Social Psychology, 31, 390–400.
  91. Goethals, G. R., & Zanna, M. P. (1979). The role of social comparison in choice shifts. Journal of Personality and Social Psychology, 37, 1469–1476.
  92. Goldman, A. I. (1970). A theory of human action. Princeton, NJ: Princeton University Press.
  93. Gouldner, A. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25, 161–178.
  94. Grush, J. E. (1980). Impact of candidate expenditures, regionality, and prior outcomes on the 1976 Democratic presidential primaries. Journal of Personality and Social Psychology, 38, 337–347.
  95. Grush, J. E., McKeough, K. L., & Ahlering, R. F. (1978). Extrapolating laboratory exposure experiments to actual political elections. Journal of Personality and Social Psychology, 36, 257–270.
  96. Gutowitz, H. (1991). Cellular automata: Theory and experiment. Cambridge, MA: MIT Press.
  97. Harackiewicz, J., Abrahams, S., & Wageman, R. (1987). Performance evaluation and intrinsic motivation: The effects of evaluative focus, rewards, and achievement motivation. Journal of Personality and Social Psychology, 53, 1015–1023.
  98. Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.
  99. Harkins, S. G., & Jackson, J. M. (1985). The role of evaluation in eliminating social loafing. Personality and Social Psychology Bulletin, 11, 457–465.
  100. Harkins, S. G., & Petty, R. (1982). Effects of task difficulty and task uniqueness on social loafing. Journal of Personality and Social Psychology, 42, 1214–1229.
  101. Hastie, R., Penrod, S. D., & Pennington, N. (1983). Inside the jury. Cambridge, MA: Harvard University Press.
  102. Hegselman, R. (1998). Modeling social dynamics by cellular automata. In W. B. G. Liebrand, A. Nowak, & R. Hegselman (Eds.), Computer modeling of social processes (pp. 37–64). London: Sage.
  103. Higgins, E. T. (1987). Self-discrepancy: A theory relating self and affect. Psychological Review, 94, 319–340.
  104. Hofling, C. K., Brotzman, E., Dalrymple, S., Graves, N., & Pierce, C. M. (1966). An experimental study of nurse-physician relationships. Journal of Nervous and Mental Disease, 143, 171–180.
  105. Insko, C. A., Drenan, S., Solomon, M. R., Smith, R., & Wade, T. J. (1983). Conformity as a function of the consistency of positive self-evaluation with being liked and being right. Journal of Experimental Social Psychology, 19, 341–358.
  106. Jackson, J. M., & Latané, B. (1981). All alone in front of all those people: Stage fright as a function of number and type of coperformers and audience. Journal of Personality and Social Psychology, 40, 73–85.
  107. Janis, I. L. (1982). Victims of groupthink (2nd ed.). Boston: Houghton Mifflin.
  108. Jones, E. E. (1964). Ingratiation: A social psychological analysis. New York: Appleton-Century.
  109. Jones, E. E., & Davis, K. E. (1965). From acts to dispositions: The attribution process in person perception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 2, pp. 220– 266). New York: Academic.
  110. Jones, E. E., & Wortman, C. (1973). Ingratiation: An attributional approach. Morristown, NJ: General Learning Corporation.
  111. Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (Ed.), Nebraska Symposium on Motivation (Vol. 15, pp. 192–238). Lincoln: University of Nebraska Press.
  112. Kelley, H. H., & Thibaut, J. W. (1978). Interpersonal relations: A theory of interdependence. New York: Wiley-Interscience.
  113. Kiesler, C. A. (1971). The psychology of commitment. New York: Academic.
  114. Kiesler, C. A., & Kiesler, S. B. (1976). Conformity (2nd ed.). Reading, MA: Addison-Wesley.
  115. Kiesler, C. A., & Pallak, M. S. (1975). Minority influence: The effect of majority reactionaries and defectors, and minority and majority compromisers, upon majority opinion and attraction. European Journal of Social Psychology, 5, 237–256.
  116. Kilham, W., & Mann, L. (1974). Level of destructive obedience as function of transmitter and executant roles in the Milgram obedience paradigm. Journal of Personality and Social Psychology, 29, 696–702.
  117. Kofta, M., Weary, G., & Sedek, G. (Eds.). (1998). Personal control in action. New York: Plenum.
  118. Kravitz, D. A., & Martin, B. (1986). Ringelmann rediscovered: The original article. Journal of Personality and Social Psychology, 50, 936–941.
  119. Kruglanski, A. W. (1975). The endogenous-exogenous partition in attribution theory. Psychological Review, 82, 387–406.
  120. LaFrance, M. (1985). Postural mirroring and intergroup relations. Personality and Social Psychology Bulletin, 11, 207–217.
  121. Lamm, H., & Ochssmann, R. (1972). Factors limiting the generality of the risky-shift phenomenon. European Journal of Social Psychology, 2, 99–102.
  122. Lamm, H., & Sauer, C. (1974). Discussion-induced shift towards higher demands in negotiation. European Journal of Social Psychology, 4, 85–88.
  123. Langer, E. J. (1978). Rethinking the role of thought in social interaction. In J. H. Harvey, W. Ickes, & R. F. Kidd (Eds.), New directions in attribution research (Vol. 2, pp. 35–58). Hillsdale, NJ: Erlbaum.
  124. Latané, B. (1981). The psychology of social impact. American Psychologist, 36, 343–356.
  125. Latané, B., & Darley, J. M. (1970). The unresponsive bystander: Why doesn’t he help? New York: Appleton-Century-Crofts.
  126. Latané, B., Liu, J., Nowak, A., Bonevento, M., & Zheng, L. (1995). Distance matters: Physical space and social influence. Personality and Social Psychology Bulletin, 21, 795–805.
  127. Latané, B., & Nowak, A. (1994). Attitudes as catastrophes: From dimensions to categories with increasing involvement. In R. R. Vallacher & A. Nowak (Eds.), Dynamical systems in social psychology (pp. 219–249). San Diego, CA: Academic.
  128. Latané, B., & Nowak, A. (1997). The causes of polarization and clustering in social groups. Progress in Communication Sciences, 13, 43–75.
  129. Latané, B., Nowak, A., & Liu, J. (1994). Measuring emergent social phenomena: Dynamism, polarization and clustering as order parameters of social systems. Behavioral Science, 39, 1–24.
  130. Latané, B., Williams, K., & Harkins, S. (1979). Many hands make light work: The causes and consequences of social loafing. Journal of Personality and Social Psychology, 37, 822–832.
  131. Laughlin, P. R., & Ellis, A. L. (1986). Demonstrability and social combination processes on mathematical intellective tasks. Journal of Experimental Social Psychology, 22, 177–189.
  132. Le Bon, G. (1960). The crowd. New York: Viking.
  133. Lepper, M. R., & Greene, D. (Eds.). (1978). The hidden costs of reward. Hillsdale, NJ: Erlbaum.
  134. Lerner, M. J., & Mikula, G. (Eds.). (1994). Entitlement and the affectional bond: Justice in close relationships. New York: Plenum.
  135. Levine, J. M. (1996, October). Solomon Asch’s legacy for group research. Paper presented in Plenary Session (S. Fiske, Chair) Honoring the Memory of Solomon Asch. Society for Experimental Social Psychology, Toronto, Canada.
  136. Levine, J. M., & Ranelli, C. J. (1978). Majority reaction to shifting and stable attitudinal deviates. European Journal of Social Psychology, 8, 55–70.
  137. Lewenstein, M., Nowak, A., & Latané, B. (1993). Statistical mechanics of social impact. Physical Review A, 45, 703–716.
  138. Locke, K. S., & Horowitz, L. M. (1990). Satisfaction in interpersonal interactions as a function of similarity level in dysphoria. Journal of Personality and Social Psychology, 58, 823–831.
  139. Maass, A., & Clark, R. D. (1984). Hidden impact of minorities: Fifteen years of minority influence research. Psychological Bulletin, 95, 428–450.
  140. Maass, A., Clark, R. K., & Haberkorn, G. (1982). The effects of differential ascribed category membership and norms on minority influence. European Journal of Social Psychology, 12, 89–104.
  141. Mack, D., & Rainey, D. (1990). Female applicants’ grooming and personnel selection. Journal of Social Behavior and Personality, 5, 399–407.
  142. Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review, 98, 224–253.
  143. May, R. M. (Ed.). (1981). Theoretical ecology: Principles and applications. Oxford, UK: Blackwell Scientific.
  144. Meeus, W. H. J., & Raaijmakers, Q. A. W. (1986). Administrative obedience: Carrying out orders to use psychological-administrative violence. European Journal of Social Psychology, 16, 311–324.
  145. Meleshko, K. G. A., & Alden, L. E. (1993). Anxiety and self-disclosure: Toward a motivational model. Journal of Personality and Social Psychology, 64, 1000–1009.
  146. Messick, D. M., & Liebrand, W. B. G. (1995). Individual heuristics and the dynamics of cooperation in large groups. Psychological Review, 102, 131–145.
  147. Milgram, S. (1965). Some conditions of obedience and disobedience to authority. Human Relations, 18, 57–75.
  148. Milgram, S. (1974). Obedience to authority: An experimental view. New York: Harper & Row.
  149. Miller, D. T. (1999). The norm of self-interest. American Psychologist, 54, 1053–1060.
  150. Moscovici, S. (1976). Social influence and social change. London: Academic.
  151. Moscovici, S., Lage, E., & Naffrechoux, M. (1969). Influence of a consistent minority on responses of a majority in a color perception task. Sociometry, 32, 365–379.
  152. Moscovici, S., & Zavalloni, M. (1969). The group as a polarizer of attitudes. Journal of Personality and Social Psychology, 12, 125–135.
  153. Mugny, G. (1982). The power of minorities. London: Academic.
  154. Myers, D. G. (1982). Polarizing effects of social interaction. In H. Brandstatter, J. H. Davis, & G. Stocker-Kreichgauer (Eds.), Group decision making (pp. 125–161). New York: Academic.
  155. Myers, D. G., & Lamm, H. (1976). The group polarization phenomenon. Psychological Bulletin, 83, 602–627.
  156. Newcomb, T. M. (1961). The acquaintance process. New York: Holt, Rinehart, & Winston.
  157. Nowak, A., Latané, B., & Lewenstein, M. (1994). Social dilemmas exist in space. In U. Schulz, W. Albers, & U. Mueller (Eds.), Social dilemmas and cooperation (pp. 114–131). Heidelberg, Germany: Springer-Verlag.
  158. Nowak, A., & Lewenstein, M. (1996). Modeling social change with cellular automata. In R. Hegselman, K. Troitzch, & U. Muller (Eds.), Modeling and simulation in the social sciences from the philosophy of science point of view (pp. 249–285). Dordrecht, The Netherlands: Kluwer Academic.
  159. Nowak, A., Lewenstein, M., & Frejlak, P. (1996). Dynamics of public opinion and social change. In R. Hegselman & H. O. Pietgen (Eds.), Modeling social dynamics: Order, chaos, and complexity (pp. 54–78). Vienna, Austria: Helbin.
  160. Nowak, A., Lewenstein, M., & Szamrej, J. (1993). Social transitions occur through bubbles. Scientific American (Polish version), 12, 16–25.
  161. Nowak, A., Szamrej, J., & Latané, B. (1990). From private attitude to public opinion: A dynamic theory of social impact. Psychological Review, 97, 362–376.
  162. Nowak, A., Urbaniak, J., & Zienkowski, L. (1994). Clustering processes in economic transition. RECESS Research Bulletin, 3, 43–61.
  163. Nowak, A., & Vallacher, R. R. (1998a). Dynamical social psychology. New York: Guilford.
  164. Nowak, A., & Vallacher, R. R. (1998b). Toward computational social psychology: Cellular automata and neural network models of interpersonal dynamics. In S. J. Read & L. C. Miller (Eds.), Connectionist models of social reasoning and social behavior (pp. 277–311). Mahwah, NJ: Erlbaum.
  165. Nowak, A., & Vallacher, R. R. (2001). Societal transition: Toward a dynamical model of social change. In W. Wosinska, R. B. Cialdini, D. W. Barrett, & J. Reykowski (Eds.), The practice of social influence in multiple cultures (pp. 151–171). Mahwah, NJ: Erlbaum.
  166. Nowak, A., Vallacher, R. R., & Burnstein, E. (1998). Computational social psychology: A neural network approach to interpersonal dynamics. In W. B. G. Liebrand, A. Nowak, & R. Hegselman (Eds.), Computer modeling of social processes (pp. 97–125). London: Sage.
  167. Nowak, A., Vallacher, R. R., Tesser, A., & Borkowski, W. (2000). Society of self: The emergence of collective properties in self-structure. Psychological Review, 107, 39–61.
  168. Nowak, A., Vallacher, R. R., & Zochowski, M. (2002). The emergence of personality: Personal stability through interpersonal synchronization. In D. Cervone & W. Mischel (Eds.), Advances in personality science (Vol. 1, pp. 292–331). New York: Guilford.
  169. Pallak, M. S., Cook, D. A., & Sullivan, J. J. (1980). Commitment and energy conservation. Applied Social Psychology Annual, 1, 235–253.
  170. Pruitt, D. G. (1971). Choice shifts in discussion: An introductory review. Journal of Personality and Social Psychology, 20, 339–360.
  171. Raven, B. H. (1992). A power/interaction model of interpersonal influence: French and Raven thirty years later. Journal of Social Behavior and Personality, 7, 217–244.
  172. Raven, B. H. (1993). The bases of power: Origins and recent developments. Journal of Social Issues, 49, 227–251.
  173. Regan, D. T. (1971). Effects of a favor and liking on compliance. Journal of Experimental Social Psychology, 7, 627–639.
  174. Rosenbaum, M. E. (1980). Cooperation and competition. In P. B. Paulus (Ed.), The psychology of group influence (pp. 23–41). Hillsdale, NJ: Erlbaum.
  175. Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80 (1, Whole No. 609).
  176. Rudich, E. A., & Vallacher, R. R. (1999). To belong or to self-enhance? Motivational bases for choosing interaction partners. Personality and Social Psychology Bulletin, 25, 1387–1404.
  177. Rusbult, C. E., & Martz, J. M. (1995). Remaining in an abusive relationship: An investment model analysis of nonvoluntary dependence. Personality and Social Psychology Bulletin, 21, 558–571.
  178. Schachter, S. (1959). The psychology of affiliation: Experimental studies of the sources of gregariousness. Stanford, CA: Stanford University Press.
  179. Schein, E. (1956). The Chinese indoctrination program for prisoners of war: A study of attempted “brainwashing.” Psychiatry, 19, 149–172.
  180. Schlenker, B. R. (1980). Impression management. Monterey, CA: Brooks/Cole.
  181. Schulz, U., Albers, W., & Mueller, U. (Eds.). (1994). Social dilemmas and cooperation. Heidelberg, Germany: Springer.
  182. Schuster, H. G. (1984). Deterministic chaos. Vienna, Austria: Physik Verlag.
  183. Seligman, M. E. P. (1975). On depression, development, and death. San Francisco: Freeman.
  184. Shaver, K. G. (1985). The attribution of blame. New York: Springer-Verlag.
  185. Schelling, T. (1969). Models of segregation. American Economic Review, 59, 488–493.
  186. Schelling, T. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1, 143–186.
  187. Sherif, M. (1936). The psychology of social norms. New York: Harper.
  188. Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.
  189. Smolowe, J. (1990, November 26). Contents require immediate attention. Time, 64.
  190. Staats, A. W. (1975). Social behaviorism. Homewood, IL: Dorsey.
  191. Steiner, I. D. (1972). Group process and productivity. New York: Academic.
  192. Stewart, J. E. (1980). Defendant’s attractiveness as a factor in the outcome of criminal trials: An observational study. Journal of Applied Social Psychology, 10, 348–361.
  193. Stoner, J. A. F. (1961). A comparison of individual and group decisions involving risk. Unpublished master’s thesis, Massachusetts Institute of Technology, Cambridge.
  194. Suedfeld, P., Bochner, S., & Matas, C. (1971). Petitioner’s attire and petition signing by peace demonstrators: A field experiment. Journal of Applied Social Psychology, 58, 171–181.
  195. Swann, W. B., Jr. (1990). To be adored or to be known? The interplay of self-enhancement and self-verification. In E. T. Higgins & R. M. Sorrentino (Eds.), Handbook of motivation and cognition: Foundations of social behavior (Vol. 2, pp. 408–448). New York: Guilford.
  196. Swann, W. B., Jr., & Ely, R. J. (1984). A battle of wills: Self-verification versus behavioral confirmation. Journal of Personality and Social Psychology, 46, 1287–1302.
  197. Tanford, S., & Penrod, S. (1984). Social influence model: A formal integration of research on majority and minority influence processes. Psychological Bulletin, 95, 189–225.
  198. Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.
  199. Taylor, S. E., Peplau, L. A., & Sears, D. O. (1997). Social psychology (9th ed.). Upper Saddle River, NJ: Prentice-Hall.
  200. Tesser, A. (1988). Toward a self-evaluation maintenance model of social behavior. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 21, pp. 181–227). New York: Academic.
  201. Thagard, P., & Kunda, Z. (1998). Making sense of people: Coherence mechanisms. In S. J. Read & L. C. Miller (Eds.), Connectionist models of social reasoning and social behavior (pp. 3–26). Mahwah, NJ: Erlbaum.
  202. Thibaut, J. W., & Kelley, H. H. (1959). The social psychology of groups. New York: Wiley.
  203. Triplett, N. (1898). The dynamogenic factors in pacemaking and competition. American Journal of Psychology, 9, 507–533.
  204. Trope, Y. (1986). Identification and inferential processes in dispositional attribution. Psychological Review, 93, 239–257.
  205. Turner, R. H., & Killian, L. M. (1972). Collective behavior (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
  206. Ulam, S. (1952). Random processes and transformations. Proceedings of International Congress of Mathematics, 2, 264–275.
  207. Vallacher, R. R. (1980). An introduction to self theory. In D. M. Wegner & R. R. Vallacher (Eds.), The self in social psychology (pp. 3–30). New York: Oxford University Press.
  208. Vallacher, R. R. (1993). Mental calibration: Forging a working relationship between mind and action. In D. M. Wegner & J. W. Pennebaker (Eds.), Handbook of mental control (pp. 443–472). Englewood Cliffs, NJ: Prentice-Hall.
  209. Vallacher, R. R., & Nowak, A. (1994). The chaos in social psychology. In R. R. Vallacher & A. Nowak (Eds.), Dynamical systems in social psychology (pp. 1–16). San Diego, CA: Academic.
  210. Vallacher, R. R., & Nowak, A. (1997). The emergence of dynamical social psychology. Psychological Inquiry, 8, 73–99.
  211. Vallacher, R. R., Nowak, A., & Kaufman, J. (1994). Intrinsic dynamics of social judgment. Journal of Personality and Social Psychology, 67, 20–34.
  212. Vallacher, R. R., Nowak, A., Markus, J., & Strauss, J. (1998). Dynamics in the coordination of mind and action. In M. Kofta, G. Weary, & G. Sedek (Eds.), Personal control in action (pp. 27– 59). New York: Plenum.
  213. Vallacher, R. R., & Selz, K. (1991). Who’s to blame? Action identification in allocating responsibility for alleged rape. Social Cognition, 9, 194–219.
  214. Vallacher, R. R., & Solodky, M. (1979). Objective self awareness, standards of evaluation, and moral behavior. Journal of Experimental Social Psychology, 15, 254–262.
  215. Vallacher, R. R., & Wegner, D. M. (1985). A theory of action identification. Hillsdale, NJ: Erlbaum.
  216. Vallacher, R. R., & Wegner, D. M. (1987). What do people think they’re doing? Action identification and human behavior. Psychological Review, 94, 3–15.
  217. Vallacher, R. R., & Wegner, D. M. (1989). Levels of personal agency: Individual variation in action identification. Journal of Personality and Social Psychology, 57, 660–671.
  218. Vinokur, A. (1971). A review and theoretical analysis of the effects of group processes upon individual and group decisions involving risk. Psychological Bulletin, 76, 231–250.
  219. Vinokur, A., & Burnstein, E. (1974). Effects of partially shared persuasive arguments on group-induced shifts: A group problemsolving approach. Journal of Personality and Social Psychology, 29, 305–315.
  220. Volpato, C., Maass, A., Mucchi-Faina, A., & Vitti, E. (1990). Minority influence and categorization. European Journal of Social Psychology, 20, 119–132.
  221. von Neumann, J. (1966). Theory of self-reproducing automata. Champaign: University of Illinois Press.
  222. Wallach, M. A., Kogan, N., & Bem, D. J. (1962). Group influence on individual risk taking. Journal of Abnormal and Social Psychology, 65, 75–86.
  223. Weldon, E., & Mustari, L. (1988). Felt dispensability in groups of coactors: The effects of shared responsibility and explicit anonymity on cognitive effort. Organizational Behavior and Human Decision Processes, 41, 330–351.
  224. Wicklund, R. A., & Brehm, J. W. (1976). Perspectives on cognitive dissonance. Hillsdale, NJ: Erlbaum.
  225. Williams, K., Harkins, S., & Latané, B. (1981). Identifiability as a deterrent to social loafing: Two cheering experiments. Journal of Personality and Social Psychology, 40, 303–311.
  226. Witkin, H. A., Dyk, R. B., Faterson, H. F., Goodenough, D. R., & Karp, S. A. (1962). Psychological differentiation. New York: Wiley.
  227. Wolf, S., & Latané, B. (1985). Conformity, innovation, and the psycho-social laws. In S. Moscovici, G. Mugny, & E. Van Avermaet (Eds.), Perspectives on minority influence (pp. 201– 215). Cambridge, UK: Cambridge University Press.
  228. Wolfram, S. (Ed.) (1986). Theory and applications of cellular automata. Singapore: World Scientific.
  229. Woodside, A. G., & Davenport, J. W. (1974). Effects of salesman similarity and expertise on consumer purchasing behavior. Journal of Marketing Research, 11, 198–202.
  230. Zajonc, R. B. (1965). Social facilitation. Science, 149, 269–274.
  231. Zajonc, R. B. (1980). Cognition and social cognition: A historical perspective. In L. Festinger (Ed.), Retrospections on social psychology (pp. 180–204). New York: Oxford University Press.
  232. Zimbardo, P. G. (1970). The human choice: Individuation, reason, and order versus deindividuation, impulse, and chaos. In W. J. Arnold & D. Levine (Eds.), Nebraska Symposium on Motivation, 1969 (pp. 237–307). Lincoln: University of Nebraska Press.