Social Simulation Research Paper

Computational social simulation involves the use of computer algorithms to model social processes. These algorithms require greater precision and logical rigor than natural language yet lack the generality of closed-form mathematical equations. The use of computer simulation by social scientists has increased in recent years, due to growing interest in modeling nonlinear dynamical processes using graphical interfaces that allow highly intuitive visual representations of the results. Social simulation has also undergone a qualitative change as the emphasis has shifted from prediction to exploration.

This research paper reviews three ‘waves’ of innovation: dynamical systems, microsimulation, and adaptive agent models (Gilbert and Troitzsch 1999). These three approaches can be summarized as follows:

(1) 1960s: Dynamical systems simulations were empirically grounded, inductive, holistic, and functionalist.

(2) 1970s: Microsimulation introduced the use of individuals as the units of analysis but retained the earlier emphasis on empirically based macro-level prediction.

(3) 1980s: Adaptive agent models revolutionized computational social science by simulating actors, not factors (system attributes). These ‘bottom-up’ models explored interactions among purposive decision-makers.

Although these three ‘waves’ overlap (e.g., economists continue to use holistic simulation of productive factors, microsimulation was invented in 1957, and nascent agent-based models appeared in the late 1960s), most of the recent excitement in computational modeling in the social and behavioral sciences has centered on agent-based simulation. So too has much of the controversy.

1. Dynamical Systems

The first wave of computer simulations in social science occurred in the 1960s (see, for example, Cyert and March 1963). These studies used computers to simulate dynamical systems, such as control and feedback processes in organizations, industries, cities, and even global populations. The models typically consist of sets of differential equations that describe changes in system attributes as a function of other systemic changes. Applications included the flow of raw materials in a factory, inventory control in a warehouse, urban traffic, military supply lines, demographic changes in a world system, and ecological limits to growth (Forrester 1971; Meadows et al. 1974).
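
The flavor of this approach can be conveyed with a small sketch (Python; purely illustrative and not Forrester's actual model): two aggregate stocks, population and resources, are coupled through a feedback loop and integrated forward with simple difference equations.

```python
# Minimal sketch of a holistic 'limits to growth' style model (illustrative only):
# population and resources are aggregate variables linked by feedback and
# updated with Euler steps of coupled difference equations.

def simulate(steps=200, dt=0.25,
             population=1.0, resources=100.0,
             birth_rate=0.08, death_rate=0.02, depletion_rate=0.05):
    history = []
    for _ in range(steps):
        # Births are damped as resources are exhausted (the feedback loop).
        resource_factor = resources / 100.0
        d_pop = (birth_rate * resource_factor - death_rate) * population
        d_res = -depletion_rate * population
        population = max(population + d_pop * dt, 0.0)
        resources = max(resources + d_res * dt, 0.0)
        history.append((population, resources))
    return history

if __name__ == "__main__":
    for t, (pop, res) in enumerate(simulate()):
        if t % 40 == 0:
            print(f"t={t:3d}  population={pop:6.2f}  resources={res:6.2f}")
```

Note that the model contains no individuals at all: the units of analysis are system attributes, and the interesting behavior comes from the feedback between them.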

Models of dynamical systems are functional and holistic (Gilbert and Troitzsch 1999). Functional means that theoretical interest centers on system equilibrium or balance among interdependent input–output units. Holistic means that the system is modeled as an irreducible and indivisible entity. Although these models allow nested organizational levels, at any given level, they assume that sets of related attributes (such as economic growth, suburban migration, and traffic congestion) are causally linked at the aggregate level.

2. Microsimulation

The second wave of computational modeling in the social sciences developed in the late 1970s. Known as microsimulation, it represents the first step in the progression from factors to actors. In striking contrast to the earlier holistic approach,

microsimulation is a ‘bottom-up’ strategy for modeling the interacting behavior of decision-makers (such as individuals, families, and firms) within a larger system. This modeling strategy utilizes data on representative samples of decision-makers, along with equations and algorithms representing behavioral processes, to simulate the evolution through time of each decision-maker, and hence of the entire population of decision-makers. (Caldwell 1997)

For example, Caldwell and Keister use CORSIM, a large-scale dynamic microsimulation model of the US population of individuals and families, to integrate individual and family-level wealth behavior with aggregate-level stratification outcomes. This microanalytic approach allows researchers to avoid a serious limitation of the earlier generation of simulations: the assumption of population homogeneity. For example, the relationship between wealth and age may not be uniform across ethnic or religious subcultures, geographic regions, or birth cohorts. Complex interactions across subpopulations are lost when the focus is on correlations among aggregate factors, which precludes predicting the effects of policy changes that affect only certain groups.

Microsimulation solves this problem by making individuals, not populations, the unit of analysis. The database structure consists of individual records for a representative sample of a population, with a set of attributes measured at multiple points in time. These individuals are then ‘aged’ by updating their attributes according to empirically derived state-transition probabilities (e.g., an individual’s move from work to retirement at age 65). The probabilities determine each individual’s new state, and these states are then aggregated to make population projections.
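
A minimal sketch of this ‘aging’ step might look like the following (Python; the states and transition probabilities are invented for illustration and are not CORSIM’s).

```python
import random

# Toy sketch of microsimulation 'aging' (not CORSIM): each individual record is
# updated according to state-transition probabilities; the states and
# probabilities below are hypothetical.

TRANSITION_PROB = {            # hypothetical annual retirement probabilities
    ('working', 64): 0.10,
    ('working', 65): 0.80,
}
DEFAULT_PROB = 0.02            # baseline chance of retiring at other ages

def age_population(records, years=10):
    for _ in range(years):
        for person in records:
            person['age'] += 1
            if person['state'] == 'working':
                p = TRANSITION_PROB.get(('working', person['age']), DEFAULT_PROB)
                if random.random() < p:
                    person['state'] = 'retired'
    return records

sample = [{'age': 55 + i, 'state': 'working'} for i in range(10)]
aged = age_population(sample)
share_retired = sum(p['state'] == 'retired' for p in aged) / len(aged)
print(f"Projected share retired after 10 years: {share_retired:.0%}")
```

The aggregation in the last two lines is where micro-level records are turned back into the population projections that policy analysts care about.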

3. Agent-Based Models And Complex Adaptive Systems

The third wave, agent-based models, attracted widespread interest beginning in the 1980s. These models extend the microanalytical approach by deriving global states from individual behaviors, not individual attributes. In microsimulation models, individuals are inert in two ways: (a) the state of an individual is a numerical function of other traits, and (b) individuals do not interact (Gilbert and Troitzsch 1999, p. 8). Agent-based models instead assume that an individual has intentions or goals and makes choices that affect other agents, whose choices in turn affect that individual. These models impose three key assumptions.

(a) Agents interact with little or no central authority or direction. Global patterns emerge from the bottom up, determined not by a centralized authority but by local interactions among autonomous decision-makers.

(b) Decision-makers are adaptive rather than optimizing, with decisions based on heuristics, not on calculations of the most efficient action (Holland 1995, p. 43). These heuristics include norms, habits, protocols, rituals, conventions, customs, and routines. They evolve at two levels, the individual and the population. Individual learning alters the probability distribution of rules competing for attention, through processes like reinforcement, Bayesian updating, or the backpropagation of error in artificial neural networks. Population learning alters the frequency distribution of rules competing for reproduction through processes of selection, imitation, and social influence (Latane 1996). Genetic algorithms (Holland 1995) are widely used to model adaptation at the population level.

(c) Decision-makers are strategically interdependent. Strategic interdependence means that the consequences of each agent’s decisions depend in part on the choices of others. When strategically interdependent agents are also adaptive, the focal agent’s decisions influence the behavior of other agents who in turn influence the focal agent, generating a complex adaptive system (Holland 1995, p. 10).

Thomas Schelling’s (1971) ‘neighborhood segregation’ model was one of the earliest agent-based social simulations using cellular automata, a method invented by Von Neumann and Ulam in the 1940s. Consider a residential area that is highly segregated, such that the number of neighbors with different cultural markers (such as ethnicity) is at a minimum. If the aggregate pattern were assumed to reflect the attitudes of the constituent individuals, one might conclude from the distribution that the population was highly parochial and intolerant of diversity. Yet Schelling’s simulation shows that this need not be the case. His ‘tipping’ model shows that highly segregated neighborhoods can form even in a population that prefers diversity. The aggregate pattern of segregation is an emergent property that does not reflect the underlying attitudes of the constituent individuals.
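
A bare-bones version of Schelling’s tipping model can be sketched as follows (Python; the grid size, tolerance threshold, and vacancy rate are illustrative choices, not Schelling’s original parameters). Even when agents are content with only about a third of like neighbors, repeated relocation tends to produce heavily segregated clusters.

```python
import random

# Illustrative Schelling-style tipping model: agents of two types move to a
# random empty cell whenever fewer than THRESHOLD of their occupied neighbors
# share their type. All parameter values are arbitrary choices for this sketch.

SIZE, THRESHOLD = 20, 0.34          # content with roughly one-third like neighbors

def neighbors(grid, x, y):
    cells = [grid.get(((x + dx) % SIZE, (y + dy) % SIZE))
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return [c for c in cells if c is not None]

def unhappy(grid, x, y):
    same = [c == grid[(x, y)] for c in neighbors(grid, x, y)]
    return bool(same) and sum(same) / len(same) < THRESHOLD

def step(grid, empty):
    movers = [pos for pos in list(grid) if unhappy(grid, *pos)]
    for pos in movers:
        dest = random.choice(empty)
        empty.remove(dest)
        empty.append(pos)
        grid[dest] = grid.pop(pos)

def mean_like_share(grid):
    shares = []
    for pos in grid:
        same = [c == grid[pos] for c in neighbors(grid, *pos)]
        if same:
            shares.append(sum(same) / len(same))
    return sum(shares) / len(shares)

grid, empty = {}, []
for x in range(SIZE):
    for y in range(SIZE):
        r = random.random()
        if r < 0.45:
            grid[(x, y)] = 'A'
        elif r < 0.90:
            grid[(x, y)] = 'B'
        else:
            empty.append((x, y))

for _ in range(50):
    step(grid, empty)

print(f"Mean share of like neighbors after 50 rounds: {mean_like_share(grid):.2f}")
```

The point of the exercise is precisely the gap between the mild individual preference written into the rule and the strong segregation that emerges in the aggregate.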

Another example of emergence is provided by Latane’s (1996) ‘social impact model.’ Like Schelling, Latane studies a cellular world populated by agents who live on a two-dimensional lattice. However, rather than moving, these agents adapt to those around them, based on a rule to mimic one’s neighbors. From a random start, a population of mimics might be expected to converge inexorably on a single profile, leading to the conclusion that cultural diversity is imposed by factors that counteract the effects of conformist tendencies. However, the surprising result was that ‘the system achieved stable diversity. The minority was able to survive, contrary to the belief that social influence inexorably leads to uniformity’ (Latane 1996, p. 294). Other researchers, including Axelrod (1997), Carley (1991), and Kitts et al. (1999), have used computational models that couple homophily (likes attract) and social influence. If local similarity facilitates interaction, which in turn facilitates imitation, it is indeed curious that the global outcome is diversity, not uniformity. However, this outcome depends on interactions that are locally transitive, as in a spatially distributed population.
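
The following sketch is a simplified stand-in for Latane’s model (Python; it reduces social impact to plain majority imitation among lattice neighbors, ignoring the strength and immediacy weights of the full theory). Run from a random start, the lattice typically freezes into clustered domains rather than converging to uniformity.

```python
import random

# Simplified stand-in for a social impact model: each agent adopts the majority
# opinion among its eight lattice neighbors (ties keep the current opinion).
# Despite relentless local conformity, clustered minorities usually survive.

SIZE = 30
grid = [[random.choice([0, 1]) for _ in range(SIZE)] for _ in range(SIZE)]

def majority_opinion(g, x, y):
    votes = [g[(x + dx) % SIZE][(y + dy) % SIZE]
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    ones = sum(votes)
    if ones > len(votes) / 2:
        return 1
    if ones < len(votes) / 2:
        return 0
    return g[x][y]                      # tie: keep the current opinion

for _ in range(100):                    # synchronous updating of the whole lattice
    grid = [[majority_opinion(grid, x, y) for y in range(SIZE)]
            for x in range(SIZE)]

ones = sum(sum(row) for row in grid)
share = ones / (SIZE * SIZE)
print(f"Surviving minority share after 100 rounds: {min(share, 1 - share):.2%}")
```

The local transitivity of the lattice is what protects minority clusters: an agent deep inside a cluster sees mostly like-minded neighbors, whatever the global distribution looks like.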

Schelling and Latane modeled agents with identical and fixed behavioral rules. Many agent-based models assume heterogeneous populations with the ability to learn new rules. The ‘genetic algorithm’ invented by John Holland is a simple and elegant way to model strategies that can progressively improve performance by building on partial solutions. Each strategy consists of a string of symbols that code behavioral instructions, analogous to a chromosome containing multiple genes. The string’s instructions affect the agent’s reproductive fitness and hence the probability that the strategy will propagate. Propagation occurs when two or more mated strategies recombine. If different rules are each effective, but in different ways, recombination allows them to create an entirely new strategy that may integrate the best abilities of each ‘parent’ and thus eventually displace the parent rules in the population of strategies.
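
A minimal genetic algorithm along these lines can be sketched as follows (Python; the bit-string representation, the toy fitness function, and all parameter values are illustrative choices, not Holland’s).

```python
import random

# Minimal genetic-algorithm sketch: strategies are bit strings, fitter strings
# are more likely to 'mate,' and single-point crossover recombines partial
# solutions. The fitness function (count of 1s) is purely illustrative.

LENGTH, POP, GENERATIONS = 20, 30, 50

def fitness(s):                      # toy objective: maximize the number of 1s
    return sum(s)

def crossover(a, b):                 # single-point recombination of two parents
    point = random.randint(1, LENGTH - 1)
    return a[:point] + b[point:]

def mutate(s, rate=0.01):            # occasional bit flips preserve variation
    return [bit ^ 1 if random.random() < rate else bit for bit in s]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    weights = [fitness(s) + 1 for s in population]       # +1 avoids zero weights
    parents = random.choices(population, weights=weights, k=2 * POP)
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP)]

print("Best fitness after evolution:", max(fitness(s) for s in population))
```

Selection proportional to fitness plays the role of differential reproduction, while crossover is what allows partial solutions discovered by different ‘parents’ to be combined.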

Axelrod (1997, pp. 14–29) used a genetic algorithm to study the evolution of cooperation in a ‘prisoner’s dilemma,’ a game in which choices that are individually rational aggregate into collective outcomes that everyone would prefer to avoid. The results showed that strategies based on reciprocity (such as ‘tit for tat’) tend to be very successful. The secret of their success is that they perform well against copies of themselves. Axelrod’s result has been challenged by Nowak and Sigmund (1993), by Binmore (1998), and by Macy (1995). Working independently, these researchers found that ‘tit for tat,’ a strategy that teaches its partner a lesson, can be supplanted by ‘Pavlov’ (also known as ‘win–stay, lose–shift’ or ‘tat for tit’), a strategy that learns (Binmore 1998). The ability to learn appears to be at least as important for emergent social order as the ability to teach.
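
The contrast between the two strategies can be illustrated with a small repeated-game sketch (Python; the payoffs follow the standard T=5, R=3, P=1, S=0 convention, and the noise level is an arbitrary choice rather than Axelrod’s or Nowak and Sigmund’s setup). In this sketch, a pair of Pavlov players recovers cooperation within a couple of rounds after an error, whereas a pair of tit-for-tat players can lock into echoing retaliation.

```python
import random

# Illustrative repeated prisoner's dilemma with occasional implementation errors.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_last, their_last, my_payoff):
    # Cooperate first, then copy the partner's previous move.
    return their_last or 'C'

def pavlov(my_last, their_last, my_payoff):
    # Win-stay, lose-shift: repeat the last move after R or T, switch after P or S.
    if my_last is None:
        return 'C'
    if my_payoff in (3, 5):
        return my_last
    return 'D' if my_last == 'C' else 'C'

def play(strategy_a, strategy_b, rounds=200, noise=0.05):
    last_a = last_b = pay_a = pay_b = None
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(last_a, last_b, pay_a)
        move_b = strategy_b(last_b, last_a, pay_b)
        if random.random() < noise:            # occasional error by each player
            move_a = 'D' if move_a == 'C' else 'C'
        if random.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        last_a, last_b = move_a, move_b
    return total_a, total_b

print("tit for tat vs. itself (with noise):", play(tit_for_tat, tit_for_tat))
print("Pavlov vs. itself (with noise):     ", play(pavlov, pavlov))
```
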

Learning involves adaptation at the level of the individual rather than the population. Artificial neural networks are simple self-programmable devices that model agents who learn through reinforcement. Like genetic algorithms, neural nets have a biological analog, in this case the nervous systems of living organisms. The device consists of a web of neuron-like units (or neurodes) that fire when triggered by impulses of sufficient strength and in turn stimulate other units when fired. The magnitude of an impulse depends on the strength of the connection (or synapse) between the two neurodes. The network learns by modifying these path coefficients in response to environmental feedback about its performance. Neural nets have been used to study the evolution of religion (Bainbridge 1995), kin altruism (Parisi et al. 1995), the emergence of status inequality (Vakas-Duong and Reilly 1995), group dynamics (Nowak and Vallacher 1997), and social deviance (Kitts et al. 1999).
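
A single neuron-like learner can be sketched as follows (Python; this is a minimal reinforcement-learning illustration with an invented toy environment, not a reimplementation of any of the models cited above). The weighted sum of inputs is squashed into a probability of acting, and connection weights are nudged toward choices that were rewarded.

```python
import math
import random

# Minimal sketch of a reinforcement-learning agent built from one neuron-like
# unit: connection weights are strengthened when a choice is rewarded and
# weakened when it is punished. The environment below is a toy assumption.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class NeuralAgent:
    def __init__(self, n_inputs, rate=0.1):
        self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.rate = rate

    def decide(self, inputs):
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return random.random() < sigmoid(activation)    # stochastic choice

    def learn(self, inputs, acted, reward):
        # Nudge the weights of active inputs toward rewarded choices.
        direction = 1.0 if acted else -1.0
        for i, x in enumerate(inputs):
            self.weights[i] += self.rate * reward * direction * x

agent = NeuralAgent(n_inputs=3)
for _ in range(1000):
    signal = [random.choice([0.0, 1.0]) for _ in range(3)]
    acted = agent.decide(signal)
    # Toy environment: acting pays off only when the first signal is on.
    reward = 1.0 if (acted and signal[0] == 1.0) else -0.2
    agent.learn(signal, acted, reward)

print("Learned weights:", [round(w, 2) for w in agent.weights])
```
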

4. Criticisms Of Simulation

Agent-based simulations have been criticized as unrealistic, tautological, and unreliable.

(a) Simulations, like mathematical models, tell us about a highly stylized and abstract world, not about the world in which we live. In particular, agent-based simulations rely on highly simplified models of rule-based human behavior that neglect the complexity of human cognition.

(b) Like mathematical models, simulations cannot tell us anything that we do not already know, given the assumptions built into the model and the inability of a computer to do anything other than execute the instructions given to it by the program.

(c) Unlike mathematical models, simulations are numerical and therefore cannot establish lawful regularities or generalizations.

The defense against these criticisms centers on the principles of complexity and emergence. How can simple agent-based models explain the rich complexity of social life? In complex systems, very simple rules of interaction can produce highly complex global patterns. ‘Human beings,’ Simon contends (1998, p. 53), ‘viewed as behaving systems, are quite simple.’ We follow rules, in the form of norms, conventions, protocols, moral and social habits, and heuristics. Although the rules may be quite simple, they can produce global patterns that may not be at all obvious and are very difficult to understand. Hence, ‘the apparent complexity of our behavior is largely a reflection of the complexity of the environment,’ including the complexity of interactions among strategically interdependent, adaptive agents. The simulation of artificial worlds allows us to explore the complexity of the social environment by removing the cognitive complexity (and idiosyncrasy) of constituent individuals.

4.1 The Exploration Of Artificial Worlds

If Simon, Axelrod, and the complexity theorists are right, then the artificiality of agent-based models is a virtue, not a vice. When simulation is used to make predictions or for training personnel (e.g., flight simulators), the assumptions need to be highly realistic, which usually means they will also be highly complicated (Axelrod 1997, p. 5). ‘But if the goal is to deepen our understanding of some fundamental process,’ Axelrod continues, ‘then simplicity of the assumptions is important and realistic representation of all the details of a particular setting is not.’ As such, the purpose of these models is to generate hypotheses, not to test them (Prietula et al. 1998, p. xv).

Holland (1995, p.146) offers a classic example of the problem of building theories that too closely resemble actuality: Aristotle’s mistaken conclusion that all bodies come to rest, based on observation of a world that happens to be infested with friction. Had these observations been made in a frictionless world, Aristotle would have come to the same conclusion reached by Newton and formulated as the principle that bodies in motion persist in that motion unless perturbed. Newton avoided Aristotle’s error by studying what happens in an artificial world in which friction had been assumed away. Ironically, ‘Aristotle’s model, though closer to everyday observations, clouded studies of the natural world for almost two millennia’ (Holland 1995, p. 146). More generally, it is often necessary to step back from our world in order to see it more clearly.

For example, Schelling’s ‘neighborhood segregation’ model made no effort to imitate actuality or to resemble an observed city. Rather, the ‘residents’ live in a highly abstract cellular world without real streets, rivers, train tracks, zoning laws, red-lining, housing markets, etc. This artificial world shows that all neighborhoods in which residents are able to move have an underlying tendency towards segregation even when residents are moderately tolerant. This hypothesis can then be tested in observed neighborhoods. Even if observations do not confirm the predicted segregation, the simulation retains its value by pointing to empirical conditions, absent from the model, that might account for the discrepancy with the observed outcome.

In short, agent-based models ‘have a role similar to mathematical theory, shearing away detail and illuminating crucial features in a rigorous context’ (Holland 1995, p. 100). Nevertheless, Holland reminds us that we must be careful. Unlike laboratory experiments, computational ‘thought experiments’ are not constrained by physical actuality and thus ‘can be as fanciful as desired or accidentally permitted.’

4.2 Unwrapping And Emergence

A second criticism of agent-based modeling addresses a problem known as ‘unwrapping.’ Unwrapping occurs ‘when the “solution” is explicitly built into the program [such that] the simulation reveals little that is new or unexpected’ (Holland 1995, p. 137). Critics charge that all computer simulations share this limitation, due to the inability of computers to act in any way other than how they were programmed to behave. This criticism overlooks the distinction between the micro-level programming of the agents and the macro-level patterns of interaction that emerge. In a logical sense these patterns are indeed ‘built in’ from the outset, yet the exercise is valuable because the logical implications of behavioral assumptions are not always readily apparent. Indeed, a properly designed computational experiment can yield highly counter-intuitive and surprising results. This theoretical possibility is based on the principle of emergence in complex systems. Biological examples of emergence include life, which emerges from non-living organic compounds, and intelligence and consciousness, which emerge from dense networks of very simple, switch-like neurons. Schelling’s (1971) neighborhood model provides a compelling example from social life: segregation is an emergent property of social interaction that is not reducible to individual intolerance.

4.3 Rationality And Adaptive Behavior

A third criticism is that simulation models, unlike mathematical models, are numerical, not deductive, and therefore cannot be used to form generalizations. Worse still, simulations of evolutionary processes can be highly sensitive to the basin of attraction in which the parameters are arbitrarily initialized (Binmore 1998). However, game-theoretic mathematical models pay a high price for the ability to generate deductive conclusions: multiple equilibria that preclude a uniquely rational solution. Equilibrium selection requires constraints on the perfect rationality of the agents. Simon appreciates the paradox: ‘Game theory’s most valuable contribution has been to show that rationality is effectively undefinable when competitive actors have unlimited computational capabilities for outguessing each other, but that problem does not arise as acutely in a world, like the real world, of bounded rationality.’ (1998, p. 38) But bounded rationality often makes analytical solutions mathematically intractable. ‘When the agents use adaptive rather than optimizing strategies, deducing the consequences is often impossible: simulation becomes necessary.’ (Axelrod 1997, p. 4) In sum, simulation requires sensitivity analysis and tests of robustness, but properly implemented, it not only offers a solution to the problem of equilibrium selection but can also tell us something about how evolutionary systems move from one equilibrium to another.
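
In practice such a robustness check can be as simple as sweeping the key parameters across a grid of values and replicating each cell with several random seeds, as in the following sketch (Python; the model function is a placeholder standing in for any agent-based simulation).

```python
import random
import statistics

# Sketch of a sensitivity analysis: re-run a simulation across a sweep of a key
# parameter and several seeds, reporting the mean and spread of the outcome so
# that conclusions are not artifacts of a single arbitrary initialization.

def run_model(tolerance, seed):
    """Placeholder for any agent-based model; returns a scalar outcome."""
    random.seed(seed)
    return tolerance + random.gauss(0, 0.05)      # stand-in dynamics

for tolerance in (0.2, 0.3, 0.4, 0.5):
    outcomes = [run_model(tolerance, seed) for seed in range(20)]
    print(f"tolerance={tolerance:.1f}  mean={statistics.mean(outcomes):.3f}"
          f"  sd={statistics.stdev(outcomes):.3f}")
```
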

5. Conclusion

From a narrowly positivist perspective, agent-based artificial worlds might seem unrealistic, unfalsifiable, and unreliable, especially when contrasted with earlier approaches to simulation that were data driven, predictive, and highly realistic (e.g., flight simulators). However, agent-based models may be essential for the study of open and complex adaptive systems, whether social, psychological, or biological. When used as an exploratory tool, ‘bottom-up’ simulation does not test theory against observation. Rather, these thought experiments suggest possible mechanisms that may generate puzzling empirical patterns.

A key theme that runs through these puzzles is that it is often very difficult to recognize the underlying causal mechanisms using conventional data-analytic methods. As Holland concludes (1995, p. 195), ‘I do not think we will understand morphogenesis, or the emergence of organizations like Adam Smith’s pin factory, or the richness of interactions in a tropical forest, without the help of such models.’

Bibliography:

  1. Axelrod R 1997 The Complexity of Cooperation. Princeton University Press, Princeton, NJ
  2. Bainbridge W 1995 Neural network models of religious belief. Sociological Perspectives 38(4): 483–96
  3. Binmore K 1998 Axelrod’s ‘The Complexity of Cooperation’. The Journal of Artificial Societies and Social Simulation 1: 1
  4. Caldwell S 1997 Dynamic Microsimulation and the Corsim 3.0 Model. Strategic Forecasting, Ithaca, NY
  5. Carley K 1991 A theory of group stability. American Sociological Review 56: 331–54
  6. Cyert R, March J G 1963 A Behavioral Theory of the Firm. Prentice-Hall, Englewood Cliffs, NJ
  7. Forrester J W 1971 World Dynamics. MIT Press, Cambridge, MA
  8. Gilbert N, Troitzsch K 1999 Simulation for the Social Scientist. Open University Press, Buckingham, UK
  9. Holland J 1995 Hidden Order: How Adaptation Builds Complexity. Perseus, Reading, MA
  10. Kitts J, Macy M, Flache A 1999 Structural learning: Attraction and conformity in task-oriented groups. Computational and Mathematical Organization Theory 5: 129–45
  11. Latane B 1996 Dynamic social impact: Robust predictions from simple theory. In: Hegselmann R, Mueller U, Troitzsch K (eds.) Modeling and Simulation in the Social Sciences from a Philosophy of Science Point of View. Kluwer, Dordrecht, pp. 287–310
  12. Macy M 1995 Natural selection and social learning in prisoner’s dilemma: Co-adaptation with genetic algorithms and artificial neural networks. Sociological Methods and Research 25: 103–37
  13. Meadows D L, Behrens W W III, Meadows D H, Naill R F, Randers J, Zahn E K 1974 The Dynamics of Growth in a Finite World. MIT Press, Cambridge, MA
  14. Nowak A, Vallacher R 1997 Computational social psychology: Cellular automata and neural network models of interpersonal dynamics. In: Read S, Miller L (eds.) Connectionist Models of Social Reasoning and Social Behavior. Erlbaum, Mahwah, NJ
  15. Nowak M, Sigmund K 1993 A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoners’ dilemma game. Nature 364: 56–7
  16. Parisi D, Cecconi F, Cerini A 1995 Kin-directed altruism and attachment behaviour in an evolving population of neural networks. In: Gilbert N, Conte R (eds.) Artificial Societies. University College London Press, London
  17. Prietula M, Carley K, Gasser L 1998 Simulating Organizations: Computational Models of Institutions and Groups. MIT Press, Cambridge, MA
  18. Schelling T 1971 Dynamic models of segregation. Journal of Mathematical Sociology 1: 143–86
  19. Simon H 1998 The Sciences of the Artificial. MIT Press, Cambridge, MA
  20. Vakas-Duong D, Reilly K 1995 A system of IAC neural networks as the basis for self-organization in a sociological dynamical system simulation. Behavioral Science 40: 275–303