Microsimulation in Demography Research Paper

Microsimulation is a computer-dependent technique for simulating a set of data according to predetermined probabilistic rules. It was originally applied primarily to problems in the physical sciences, such as gamma-ray scattering and neutron diffusion. Three characteristics of the method, highlighted in an early description (McCracken 1955), are that the problems to which it is applied depend in some important way on probability; that experimentation is impracticable; and that the creation of an exact formula is impossible. Each of these characteristics makes the method particularly useful in a variety of demographic applications.

1. The Monte Carlo Method and Microsimulation

The Monte Carlo method takes its name from the games of chance popularly associated with the resort of the same name. At its simplest, the method is akin to coin-tossing. There is a known probability—one-half, if the coin is fair—of the toss of a coin resulting in a head or, conversely, in a tail. If we toss a fair coin a number of times, we expect to get heads half the time and tails half the time.

A demographic application is the calculation of the average waiting time to conception, that is, the average number of months a woman will take to conceive. In this example, the monthly probability of conception (fecundability) might be set at 0.2 (rather than 0.5, as in the coin-tossing example). One can think of this as requiring a coin that comes up heads with probability 0.2, and tails with probability 0.8: a very unfair coin indeed. The weighted coin is tossed until it comes up heads, and the number of tosses until this happens, which represents (or simulates) the number of months it will take for an individual to conceive, is recorded. The trial is repeated until the desired sample size of women has been achieved. Then, the average waiting time to conception is calculated by summing the number of months each individual ‘woman’ took to conceive, and dividing this sum by the total number of women.
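
This procedure translates directly into a short computer routine. The following minimal Python sketch (the sample size, random seed, and function names are illustrative choices of ours, not part of any published model) simulates the waiting time to conception for a sample of women with fecundability 0.2 and reports the average; the algebraic answer, 1/0.2 = 5 months, provides a check on the simulated result.

import random
import statistics

def months_to_conception(fecundability=0.2, rng=random):
    """Simulate the number of months until conception for one woman.

    Each month is an independent Bernoulli trial: conception occurs
    with probability `fecundability` (0.2 here, as in the text).
    """
    months = 1
    while rng.random() >= fecundability:
        months += 1
    return months

def mean_waiting_time(n_women=10_000, fecundability=0.2, seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    waits = [months_to_conception(fecundability, rng) for _ in range(n_women)]
    return statistics.mean(waits)

if __name__ == "__main__":
    # The geometric waiting time has mean 1 / 0.2 = 5 months;
    # the simulated average should be close to that value.
    print(mean_waiting_time())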

Such a physical experiment—‘a simple game of chance with children’s marbles’—was carried out by de Bethune (1963, p. 1632) in his examination of the expected spacing between births. Taking two green marbles, representing the number of days in a menstrual cycle during which conception is possible, and 26 red marbles, representing the remaining days, de Bethune drew a marble at random and if it was red, returned it to the pot and drew again. His objective was to count the number of draws necessary to produce a green marble, which is equivalent to the number of 28-day cycles until conception occurred. He repeated the experiment 200 times, and tabulated the results. These two problems are so simple that they can be solved, with little effort, algebraically. But with some elaboration of either input distributions or process, or both, such problems quickly become algebraically intractable.

Mechanical solutions like de Bethune’s, while logically possible, are time-consuming and cumbersome. More elegantly, the overriding principle of random selection inherent in the toss of a coin or the selection of a colored marble can be preserved through use of a table of random numbers or, in probabilistic applications, a table of random probabilities. An experimental outcome is simulated by determining where a random probability falls on a known cumulative probability distribution. For example, given a probability of conception in a particular month of 0.18, the selection of a random probability up to and including 0.18 indicates that conception occurs in that month, while selection of a larger random probability indicates that it does not.
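
The same lookup can be written as a small, general routine. The sketch below is an illustration of ours rather than part of any published model: it selects one outcome by locating a random probability on a cumulative distribution, using the 0.18 conception probability of the example above.

import random
from itertools import accumulate

def draw_outcome(outcomes, probabilities, rng=random):
    """Return one outcome by locating a random probability on the
    cumulative distribution of `probabilities` (which must sum to 1)."""
    u = rng.random()
    for outcome, cum_p in zip(outcomes, accumulate(probabilities)):
        if u <= cum_p:
            return outcome
    return outcomes[-1]  # guard against floating-point round-off

# The text's example: a conception probability of 0.18 in a given month.
rng = random.Random(1)
print(draw_outcome(["conceives", "does not conceive"], [0.18, 0.82], rng))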

What made microsimulation a practical tool was the development of computer technology. The investigator uses a computer program that follows the paths laid out in a flow chart representing the process under investigation. At each point at which a decision must be made (such as whether a simulated individual conceives in a particular month), the program selects a random probability whose value, in comparison with a set of input probabilities, determines what will happen next. (Such computer-generated random numbers are more accurately termed pseudorandom since they are derived algorithmically, but this distinction has no practical significance here.) Information about each simulated individual is saved, and the resulting data set analyzed as though it had been derived in a more conventional way.

2. A Brief History of Microsimulation in Demography

Concerns about the possibility of massive population increase in the nonindustrialized countries after the end of World War II stimulated a great deal of demographic research into the determinants and components of high fertility. This research, united by an interest in how—and sometimes why—high fertility was brought about, took various directions: that of American demographers such as Notestein attempted to codify the forces that had produced the demographic transition of Western countries; that of French demographers such as Henry examined the patterns of childbearing of historical populations; that of Americans such as Sheps investigated the childbearing of contemporary high-fertility populations such as the Hutterites. Statistical models drawing on renewal theory appeared to hold out considerable promise, since it is a straightforward matter to envisage childbearing as a Markov renewal process (see for example Sheps and Perrin 1964). The problem was, however, that only oversimplified models were amenable to solution. As models gained in realism, they became intractable.

In the mid-1960s demographers whose investigations based on renewal theory had come to an impasse began to turn, with considerable enthusiasm, to microsimulation (see for example Hyrenius 1965). Notable among these were Sheps and Perrin in the United States, Hyrenius in Sweden, Barrett in Britain, Jacquard in France, and Venkatacharya in India. Their results were interesting and stimulating, and held out considerable promise of continuing gains in the future (see for example Barrett 1971). Undoubtedly, some of the enthusiasm for microsimulation at that time was related to enthusiasm for computer technology itself. Although it is now difficult to imagine performing empirical population analysis without the aid of a computer, this is a recent development in the history of the analysis of population data. It was only in 1964, for example, that the Population Commission of the Economic and Social Council of the United Nations ‘recommended that a study be made of the possibilities of use of electronic computers for expediting and enlarging the scope of demographic analysis’ (United Nations 1965, p. 1). The fact that the scientist commissioned to undertake this study was Hannes Hyrenius, who had a particular interest in microsimulation, may have given modeling in general and microsimulation in particular a degree of prominence that they might otherwise not have achieved.

Somehow, though, the promise held out in the 1960s that microsimulation might become a major weapon in the demographic armory has not been realized. What happened, instead, was that the demographic enterprise turned increasingly to data collection. Under the aegis of the World Fertility Survey (WFS), fertility surveys were conducted between 1973 and 1983 in more than 40 countries, some of which had never before been subject even to a census. The expansion of the global demographic database was profound, and a great deal of energy was then directed toward data analysis. Paradoxically, the continued development of computer technology, which in the 1960s had made microsimulation a viable research tool, from the 1970s made data analysis feasible on a scale previously unimagined. This was further enabled by the development and marketing of specialized computer software. Today, new surveys continue to be conducted by the successor to the WFS, the Demographic and Health Surveys (DHS). Extraordinary gains in understanding have resulted from this burgeoning of information, and from the continued expansion of computing capability; but the construction and analysis of simulated data have, as a consequence, assumed a secondary role in the demographic enterprise.

This being said, microsimulation remains an important demographic tool. The technique continues to provide insights in various demographic applications that would otherwise not be forthcoming, as is described below.

3. Applications of Microsimulation in Demography

3.1 The Simulation of Fertility, Family, and Household

The demographic problems to which microsimulation was applied originally in the 1960s and 1970s were dominated by the quest for a better understanding of the relative importance of different components of the birth process, especially in high-fertility populations, and the estimation of levels of fertility implied by different combinations of these input components. Such applications probably remain the most common. Demographers use the term ‘fertility’ to refer to reproductive performance (the term ‘fecundity’ being reserved for the innate biological capacity to conceive), and the estimation of fertility involves the calculation of such statistics as age-specific fertility rates, or the distributions of the number of children ever borne according to maternal age, or average numbers of children ever borne. All these measures can be obtained from a collection of individual fertility histories consisting of the dates of birth of each of a simulated individual’s children. In contrast to real survey-derived fertility histories, in which maternal age at each birth is determined by subtracting the mother’s date of birth from the child’s, simulated dates of birth are produced directly in terms of the mother’s age (generally in months) at the time of confinement.
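
As an illustration of such tabulation, the short Python sketch below computes age-specific fertility rates from a handful of invented histories recorded as maternal ages in months; it assumes, purely for simplicity, that every woman is observed over the whole reproductive span.

from collections import Counter

# Each history is a list of the mother's age in months at each birth,
# as produced directly by a fertility microsimulation (invented values).
histories = [
    [20 * 12 + 3, 22 * 12 + 7, 25 * 12 + 1],
    [19 * 12 + 11, 21 * 12 + 4],
    [],  # a childless woman
]

def age_specific_fertility_rates(histories, width=5, start=15, end=50):
    """Tabulate births per woman-year in five-year age groups.

    A simplified sketch: every woman is assumed to be observed over the
    whole span `start`-`end`, so exposure is n_women * width woman-years
    in each age group.
    """
    n_women = len(histories)
    births = Counter()
    for history in histories:
        for age_months in history:
            group = start + width * ((age_months // 12 - start) // width)
            births[group] += 1
    return {g: births[g] / (n_women * width)
            for g in range(start, end, width)}

print(age_specific_fertility_rates(histories))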

The starting place for a microsimulation model of fertility is a flow chart of the reproductive process and sets of real or hypothetical probability distributions which can take either empirical or functional forms. An individual woman enters the reproductive process at the age when she is first exposed to the risk of conception, such as when she marries, and progresses through ‘life’ one month, or menstrual cycle, at a time. It is generally assumed that she has passed menarche at this point. It is then convenient to assess the age at which she will become sterile, for example, through menopause, although some small proportion of women will become infecund before menopause, or even be infecund before marriage. The simulation then moves forward one month, or menstrual cycle, at a time until the individual conceives, conception being determined in the Monte Carlo manner by comparing a random probability with the individual’s probability of conception (known as fecundability) in that month. If the individual does not conceive in a particular month, then, so long as she remains fecund, she is given a chance to conceive in the next one. Once she conceives, there is a possibility that she will abort spontaneously (miscarry), undergo induced abortion, experience a stillbirth, or produce a live child.

Each of these outcomes has associated with it a duration of gestation (pregnancy), and a duration of postpartum infecundity: pregnancies are short in the case of spontaneous abortions, longer in the case of stillbirths, and longest for live births; postpartum infecundity lasts for at most a few months in the case of a nonlive outcome, but may extend for up to a maximum of about 18 months in the presence of prolonged and intense breastfeeding. Subsequently, if the individual is not deemed to have become infecund in the meantime, she is once again at risk of conception. Ultimately, she does become infecund, and her reproductive history is then brought to an end.
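
A minimal Python sketch of this month-by-month loop is given below. All input values, the set of pregnancy outcomes, and the durations attached to them are invented for illustration; a working model would replace them with empirical or functional distributions of the kind described above.

import random

# Illustrative input values only; a real model would use empirical
# or functional distributions for each of these quantities.
FECUNDABILITY = 0.2              # monthly probability of conception
OUTCOME_PROBS = {                # outcome of a conception
    "live birth": 0.80,
    "spontaneous abortion": 0.15,
    "stillbirth": 0.05,
}
GESTATION_MONTHS = {"live birth": 9, "stillbirth": 8, "spontaneous abortion": 3}
POSTPARTUM_INFECUND = {"live birth": 12,        # prolonged breastfeeding
                       "stillbirth": 2,
                       "spontaneous abortion": 1}

def draw(probs, rng):
    """Monte Carlo selection from a dict of outcome -> probability."""
    u, cum = rng.random(), 0.0
    for outcome, p in probs.items():
        cum += p
        if u <= cum:
            return outcome
    return outcome  # last outcome; guards against round-off

def reproductive_history(age_at_marriage=240, age_at_sterility=540, rng=random):
    """Simulate one woman's history; ages are in months.

    Returns the list of her ages (in months) at each live birth.
    """
    births, age = [], age_at_marriage
    while age < age_at_sterility:
        if rng.random() <= FECUNDABILITY:            # conceives this month
            outcome = draw(OUTCOME_PROBS, rng)
            age += GESTATION_MONTHS[outcome]
            if outcome == "live birth":
                births.append(age)
            age += POSTPARTUM_INFECUND[outcome]
        else:
            age += 1                                  # try again next month
    return births

rng = random.Random(7)
print(reproductive_history(rng=rng))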

There are many variations and elaborations on this basic structure. One such elaboration concerns exposure to the risk of pregnancy. In the simple example presented above, each individual’s exposure is assumed to continue from some initiating point at least until the individual is no longer fecund, which is equivalent to assuming universal marriage, no separation or widowhood while women are still fecund, and no female mortality. Alternatively, one might posit nonuniversal marriage that can be interrupted by the death of a husband or by separation, and might posit in addition a possibility of remarriage. One might also expose individuals themselves to the risk of death. This may be a useful strategy in certain applications since the simulated histories will then reflect the experience of all women and not just that of survivors as is perforce the case with histories that have been collected by retrospective interviewing.

Another complex of possible elaborations concerns child survival. Although the goal of such models is to estimate fertility, there are a number of reasons for taking account of child death. One is that the death of an unweaned child interrupts breastfeeding and thus leads to the mother’s becoming fecund sooner than if the child had survived. Another is that if the simulated population uses contraception the mother may suspend precautions in response to the death of a child in order to conceive and bear another. This introduces the possibility of another elaboration, related to contraceptive use, which can be incorporated into a simulation model as a method-specific proportional reduction in the monthly probability of conception.
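
Schematically, such a reduction can be represented as follows; the list of methods and the efficiencies attached to them are illustrative assumptions of ours, not estimates drawn from any of the studies cited.

# Contraceptive use modelled as a method-specific proportional reduction
# in the monthly probability of conception (illustrative efficiencies only).
EFFICIENCY = {"none": 0.0, "withdrawal": 0.8, "pill": 0.97, "IUD": 0.95}

def effective_fecundability(fecundability, method):
    """Monthly conception probability for a woman using `method`."""
    return fecundability * (1.0 - EFFICIENCY[method])

print(effective_fecundability(0.2, "pill"))   # 0.2 * (1 - 0.97) = 0.006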

One notable absence from all of these models, however simple or elaborate, is that of men. Microsimulation focuses on one individual at a time, which means, in a simulation of fertility, that the focus is on women. However, many parameters that appear to pertain to women only actually pertain to a couple: fecundability, for example, assumes the presence of a man.

The model developed by Hyrenius and Adolfsson (1964) follows in broad terms the flow chart described earlier, and employs rather simple input data: fecundability, for example, is treated as a constant. Even so, the model produced interesting output, and by attracting considerable attention is probably the best candidate for the title of ancestor of all subsequent microsimulation models of fertility. Many of these were directed, like their forebear, at illustrating the implications of certain constellations of input data (Barrett 1971). Santow (1978), for example, calibrated a fertility model by comparison with data on the Hutterites, a religious group resident in farming communities in North America who traditionally demonstrated high and natural fertility. The model was then modified to apply to the Yoruba of Western Nigeria, notably by incorporating long periods of postpartum sexual abstinence. Information on abstinence had been collected in a survey, but not detailed information on fertility other than the number of children borne to women of particular ages. The simulation model filled this gap by providing schedules of age-specific fertility rates.

In principle, variants of the basic model permit the modeling of household structure and kin relations. Once a woman’s childbearing history has been simulated, the simulation of kin relations and household structure is a matter only of extending the model to incorporate factors such as survival or widowhood of the mother, and survival, leaving home, and marriage of her offspring, along with assumptions about the formation of separate households when offspring marry (for examples see Ruggles 1987, Wachter et al. 1978).

3.2 Evaluation of Methods of Analysis

Microsimulation has proved useful for evaluating the validity of a method of analysis, for demonstrating that a method is invalid, for illustrating those conditions under which a method breaks down, and for demonstrating the extent to which a method is biased. It is often possible to show by mathematics whether or not a method ‘works’, but a demonstration by means of microsimulation is both simpler to present and more compelling to an audience of nonmathematicians. In addition, in the event that a method does not work, it may be difficult to quantify by mathematical means alone the extent to which its output is misleading.

In the absence of direct information on births and deaths of individual children, levels of infant and child mortality can be estimated indirectly from tabulations of the proportions of children who have died according to their mother’s age. This indirect method for the estimation of child mortality can be tested by constructing a microsimulation model of childbearing over the reproductive span, with allowance made for given levels of infant and child mortality. The output from the model, consisting of individual histories of childbearing and child survival, can be manipulated to show the total number of children ever borne and children surviving by age of mother; implied mortality probabilities can be calculated according to the indirect method, and the results compared with the known input probabilities of mortality. One such exercise (Santow 1978, pp. 144–9) demonstrated that the indirect method worked well so long as the pace of childbearing was independent of the number of children already borne, but that once measures were adopted to limit family size to a particular number the method broke down. One might have anticipated that this would be the case because the derivation of the indirect method incorporated an empirical function representing the age pattern of natural fertility, which differs from that of controlled fertility, but the simulations permitted quantification of the extent of the bias.
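
The controlled-experiment pattern itself is simple to express in code. The skeleton below is a deliberately crude sketch: the simulated 'population' and the 'indirect estimate' are trivial stand-ins of ours (a real evaluation would simulate full childbearing histories and apply Brass-type multipliers to proportions dead tabulated by age of mother), but the three steps, namely simulate under a known input, apply the method to the tabulated output, and compare, are those described above.

import random

def evaluate_indirect_method(simulate_population, indirect_estimate, true_value):
    """The controlled experiment described in the text:
    (1) simulate histories under a known input value,
    (2) apply the indirect method to the tabulated output,
    (3) compare the inferred value with the known input."""
    tabulated = simulate_population(true_value)
    inferred = indirect_estimate(tabulated)
    return true_value, inferred, inferred - true_value

# Trivial stand-ins, purely to make the skeleton runnable: every woman
# bears three children, each of whom dies before age five with the known
# probability q5, and the "estimate" is simply the proportion dead.
def simulate_population(q5, n_women=20_000, rng=random.Random(3)):
    born = died = 0
    for _ in range(n_women):
        for _child in range(3):
            born += 1
            died += rng.random() < q5
    return {"children_ever_born": born, "children_died": died}

def proportion_dead(tab):
    return tab["children_died"] / tab["children_ever_born"]

print(evaluate_indirect_method(simulate_population, proportion_dead, 0.15))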

Microsimulation has also been used to evaluate the operation of two indirect methods of detecting fertility control (Okun 1994). The methods are ‘indirect’ in the sense that the data to which they are conventionally applied—those compiled from historical or Third World sources—do not include direct measures of contraceptive use. Nevertheless, just as childbearing is affected by fertility control, so are histories of childbearing and the aggregate statistics derived from them; and the methods were devised with the intention of detecting the telltale traces of such control. As with the example of indirect estimation of mortality, the evaluation followed the form of a controlled experiment. Fertility histories were simulated under known conditions of contraceptive use, and the extent of fertility control was calculated by means of each of the indirect methods being evaluated. The inferred levels of fertility control were then compared with the known levels, which had been used as input in the microsimulation models. The exercise was valuable because it cast doubt on both the indirect methods examined. In this application, moreover, it is difficult to know how the indirect methods could have been evaluated comprehensively except by means of microsimulation.

For further examples of the use of microsimulation to evaluate methods of analysis see Bracher and Santow (1981) and Reinis (1992).

3.3 Inference of Probable Input when Outcomes are Known

The examples described thus far can be viewed as forward projections, since they deal with the implications for demographic outcomes of particular known sets of input conditions. It is possible, however, also to ‘go backwards’—to use microsimulation to assess what sort of input conditions are consistent with particular known outcomes. This is perhaps a more delicate exercise than the more usual one since it is conceivable that the same outcomes may derive from different combinations of input factors. To take a simple example, a low overall birth rate can result from high birth rates within marriage, very low ones outside marriage, and low proportions married; or from low birth rates within marriage and higher proportions married.
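
The simple example just given can itself be turned into a schematic ‘backwards’ calculation. In the sketch below, all rates are invented and the overall birth rate is obtained from a closed formula; in a full application each candidate combination of inputs would instead be fed through a microsimulation run. A grid of input combinations is searched for those consistent with an observed overall birth rate, showing that more than one combination can reproduce the same outcome.

# A schematic "backwards" exercise built on the text's own example:
# the overall birth rate is a weighted average of births inside and
# outside marriage, and several input combinations can reproduce the
# same observed value.  All numbers are invented for illustration.
OBSERVED_RATE = 0.10           # overall births per woman per year
NONMARITAL_RATE = 0.01

candidates = []
for proportion_married in [p / 100 for p in range(30, 91, 5)]:
    for marital_rate in [r / 100 for r in range(5, 41)]:
        implied = (proportion_married * marital_rate
                   + (1 - proportion_married) * NONMARITAL_RATE)
        if abs(implied - OBSERVED_RATE) < 0.002:
            candidates.append((proportion_married, marital_rate))

# Several (proportion married, marital birth rate) pairs are consistent
# with the same overall rate: the ambiguity the text warns about.
print(candidates)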

A recent example of such work sought to infer the levels of contraceptive use and the proportions of pregnant teenagers seeking abortion that were consistent with observed shifts in teenage birth rates and teenage abortion rates in Sweden (Santow and Bracher 1999). Microsimulation was indicated because abortion rates expressed in terms of women, which is the form in which official data are tabulated, can fall either because fewer women conceive or because fewer pregnant women seek abortion. The application was justified because the models demonstrated that particular combinations of the proportion using contraception and the proportion who would seek abortion if pregnant created unique pairs of birth and abortion rates.

4. Independence and Levels of Aggregation

In formulating a microsimulation exercise the analyst is always confronted with a critical question concerning the level of detail, or conversely of aggregation, that is necessary in order to represent sufficiently faithfully the process whose implications are being investigated. Hyrenius and Adolfsson (1964), for example, created useful results from simulations that incorporated a monthly probability of conception (fecundability) that was invariant not just according to age, but also over the entire population. In later applications it was found useful to vary fecundability by age (for example, Santow 1978) or between women (for example, Bracher 1992). The latter application aimed to assess the effect on the timing of the subsequent conception of various combinations of no breastfeeding, full breastfeeding, and contraception of varying efficiency adopted either at six weeks postpartum or at the appearance of the first menstrual period. It also incorporated a prospectively obtained distribution of times to first menstruation among breastfeeding women, menstrual cycles of varying lengths, and varying probabilities that ovulation precedes the first postpartum menstruation.
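
One common way of introducing heterogeneity between women, assumed here purely for illustration, is to draw each woman's fecundability from a beta distribution whose mean matches the population-level value; the mean and dispersion used in the sketch below are invented, not taken from any of the studies cited.

import random

def draw_fecundability(rng, mean=0.2, spread=10.0):
    """Draw one woman's fecundability from a beta distribution.

    The beta distribution is one conventional way of representing
    heterogeneity between women; `mean` and `spread` here are
    illustrative values only.
    """
    alpha = mean * spread
    beta = (1.0 - mean) * spread
    return rng.betavariate(alpha, beta)

rng = random.Random(11)
population = [draw_fecundability(rng) for _ in range(5)]
print([round(f, 3) for f in population])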

We might contrast this very fine disaggregation of a segment of the childbearing process with the approach typically taken by microsimulators of kinship relations in historical populations. The former problem, concerning the timing of a subsequent conception under varying patterns of breastfeeding and the adoption of postpartum contraception, does not seek to go so far as to simulate complete birth histories of women, from which conventional fertility statistics such as age-specific fertility rates and distributions of children ever born can be derived (although it would be possible to elaborate the model to produce such output). Problems of the latter type, however, typically take such conventional fertility measures, as derived from an initial, historical population, as their starting point. Since the aim of such exercises is to examine the implications of particular demographic scenarios for subsequent generations, the microsimulators quite naturally want to start with a known scenario which, if it were to be constructed from the basic flow chart described earlier, would require a great deal of trawling through the available scanty data, and a great deal of trial and error. They thus employ rather aggregated demographic parameters, such as the distribution of children ever born, so that their simulation output will mimic from the outset the properties of a known population.

The problem, as Ruggles (1993) has pointed out, is that demographic behavior is correlated between the generations. Fertility is correlated within population subgroups defined by family background, and also within families; and mortality probabilities are correlated among members of the same kin group. Failure to take such correlations into account leads to an underestimate of the heterogeneity in simulated populations of kin. Ruggles terms the assumption that the characteristics of members of a kin group are uncorrelated the Whopper Assumption, and shows that the resulting error can be large.

The heart of the problem is an assumption of independence where true independence does not exist. Yet one aspect of the power of microsimulation lies in its ability to incorporate heterogeneity in underlying characteristics (such as fecundability and the risk of child death) into the simulated population. Another aspect of the power of microsimulation is its ability to introduce dependence between events—to take account of the fact that different events lead to different subsequent pathways. (Indeed, it was precisely the failure to take account of the lack of independence of such demographic behaviors as using contraception, being infecund after a birth, and being married, that invalidated the analytical model discredited, by means of microsimulation, by Reinis [1992].) Selecting the level of disaggregation of a process, and hence of population heterogeneity, that is appropriate for the issue under examination is a critical element of a useful microsimulation model.

Bibliography:

  1. Barrett J C 1971 A Monte Carlo simulation of reproduction. In: Brass W (ed.) Biological Aspects of Demography. Taylor and Francis, London
  2. de Bethune A 1963 Child spacing: The mathematical probabilities. Science 142: 1629–34
  3. Bracher M 1992 Breastfeeding, lactational infecundity, contraception and the spacing of births: Implications of the Bellagio Consensus Statement. Health Transition Review 2: 19–47
  4. Bracher M, Santow G 1981 Some methodological considerations in the analysis of current status data. Population Studies 35: 425–37
  5. Hyrenius H 1965 Demographic simulation models with the aid of electronic computers. World Population Conference. IUSSP, Liege, Vol. 3, pp. 224–6
  6. Hyrenius H, Adolfsson I 1964 A fertility simulation model. Reports 2. Almquist & Wiksell, Stockholm, Sweden
  7. McCracken D D 1955 The Monte Carlo method. Scientific American 192(5): 90–6
  8. Okun B S 1994 Evaluating methods for detecting fertility control: Coale and Trussell’s model and cohort parity analysis. Population Studies 48: 193–222
  9. Reinis K I 1992 The impact of the proximate determinants of fertility: Evaluating Bongaarts’s and Hobcraft and Little’s methods of estimation. Population Studies 46: 309–26
  10. Ruggles S 1987 Prolonged Connections: The Rise of the Extended Family in Nineteenth-Century England and America. University of Wisconsin Press, Madison, WI
  11. Ruggles S 1993 Confessions of a microsimulator: Problems in modelling the demography of kinship. Historical Methods 26: 161–9
  12. Santow G 1978 A Simulation Approach to the Study of Human Fertility. Martinus Nijhoff, Leiden, The Netherlands
  13. Santow G, Bracher M 1999 Explaining trends in teenage childbearing in Sweden. Studies in Family Planning 30: 169–82
  14. Sheps M C, Perrin E B 1964 The distribution of birth intervals under a class of stochastic fertility models. Biometrics 20: 395
  15. United Nations 1965 Study on the use of electronic computers in demography, with special reference to the work of the United Nations. Economic and Social Council, E/CN.9/195
  16. Wachter K W, Hammel E A, Laslett P 1978 Statistical Studies of Historical Social Structure. Academic Press, New York