Identification in Life Course Criminology Research Paper


In everyday language, the word “identify” refers to the presentation of proof or convincing evidence that something is what it appears to be. This is also what identification means in the social sciences and in life-course criminology. Life-course criminologists try to understand how crime develops, changes, and evolves over the life span. Their efforts usually fall into one or both of two categories: (1) describing stability and change in some criminologically interesting characteristic or behavior and (2) discerning whether certain factors are causes or merely correlates of criminal behavior. Within each of these categories, a focus on identification leads to questions about what can be measured, what is being measured, and what the main sources of uncertainty are. This research paper provides a brief discussion of the key identification issues in each category of life-course research.



General Terminology

The most comprehensive contemporary works in the identification area are those of Charles Manski (1995, 2003). In general, Manski’s work revisits and then extends a much older literature (Koopmans 1949; Cochran et al. 1954). A key theme throughout this work is that identification is a matter of degree, so it is more useful to speak of stronger and weaker identification than to speak of an estimate as simply being identified or not identified (Manski 2003, p. 2).

Manski further notes that the social sciences have typically overemphasized the importance of point identification. Point estimation occurs when a researcher is able to combine the data with a set of assumptions that are strong enough to yield an answer that can be transmitted as a single estimate. Point estimates can only be obtained when the parameter to be estimated is point-identified. And point identification is generally possible in the social sciences only when certain (usually quite strong) “point-identifying assumptions” are met. The main problem is that well-informed researchers often disagree about the validity of the assumptions required for point identification. These disagreements create ambiguity about the accuracy of the point estimate. Manski (2003, p. 1) describes this problem as “the law of decreasing credibility: the credibility of inference decreases with the strength of the assumptions maintained.” With weaker assumptions, point identification may no longer be possible. But it will often still be possible to identify a set of values (a partial identification interval) that includes the correct answer. Manski (2003, p. 2) refers to the practice of combining weaker assumptions with data to yield set-identified parameters as “partial identification.” An important feature of this approach is that it will often be possible for well-informed scientists and policy makers to agree on the weaker identifying assumptions, thereby increasing the likelihood of achieving consensus about the validity of the set-identified estimate.

The level of uncertainty in estimates creates a tension in social science research. On the one hand, social scientists are generally trained to think about their analyses and the presentation of their results in terms of point estimates. Such estimates are often presented along with some measure of uncertainty due to sampling error (i.e., standard errors and confidence intervals). On the other hand, a careful “partial identification” analysis typically reveals much greater levels of uncertainty than what one would normally encounter with conventional point estimators (combined with estimates of sampling error). In fact, as Manski’s work (and the work of others) has shown, analyses based on weaker identifying assumptions often lead to uncertainty that dwarfs what social scientists are accustomed to seeing (see, e.g., Manski and Nagin 1998). So, a partial identification analysis will typically transmit high levels of uncertainty in exchange for weaker assumptions – assumptions which will be plausible to a wide range of observers. An emphasis on point identification often transmits small degrees of uncertainty in exchange for assumptions that are much stronger – and often not credible to many observers.

Stability And Change

Life-course criminologists are concerned with questions about what stays the same and what varies over time. Sometimes efforts to answer these questions focus on cross-sectional data. A good example of a cross-sectional analysis with life-course implications comes from Tittle and Ward (1993). In this study, the authors used survey data collected at a single time point (1972) to study the relationship between age and criminal involvement covering ages 15–94 in three states (Iowa, New Jersey, and Oregon). The purpose of the study was to examine predictors of and moderating influences on the venerable “age-crime curve” (Gottfredson and Hirschi 1990). Similarly, researchers have long used cross-sectional data from the Federal Bureau of Investigation’s Uniform Crime Reporting (UCR) program to study arrest rates stratified by age across the population (Piquero et al. 2003) – with the goal of understanding how criminal involvement covaries with age.

Unfortunately, there are ambiguities in each of these efforts. The Tittle and Ward (1993, p. 11) study states that “[i]nterviews were completed successfully with respondents from 57 % of the originally selected households (77 % of the screened subjects).” Despite Tittle and Ward’s (1993, p. 11) claim that the resulting sample matched up closely to census demographic statistics in the surveyed areas, the high rate of missing data does create some ambiguity. The UCR data have limitations of their own. It is well known that a small – but not insignificant – number of law enforcement agencies decline to participate in the UCR program in any given year. In addition, police departments can generally only arrest suspects in crimes that have been reported to them; since many victims choose not to report their victimizations to the police, those cases are quite unlikely to lead to an arrest. More importantly, if participation or reporting practices change in important ways over time, the biases in the data can change as well.

Over the past 40–50 years, criminologists have increasingly turned their focus to the analysis of longitudinal micro-data or “panel” datasets. Criminologists have relied on these studies to measure key features of what has come to be known as the “criminal career” (Piquero et al. 2003). They are distinguished from cross-sectional studies by their ability to measure how behaviors remain stable or change over time for the same individuals. There are generally two methods by which longitudinal micro-data are collected: (1) retrospective studies of administrative records that yield a sequence of dates on which particular events occurred (Tracy et al. 1990; Piquero et al. 2007) and (2) prospective studies that follow the same individuals over a sustained period of time to measure a range of life events and experiences (Elliott et al. 1989; Piquero et al. 2007; Thornberry et al. 1991; Huizinga et al. 1991; Loeber et al. 1991). These two types of studies are not mutually exclusive. In some instances, the retrospective studies have been supplemented with interviews, and in some instances the prospective studies have included detailed reviews of administrative data. The difference between them is generally a difference in emphasis or focus. Both types of studies yield a stream of information for each individual about particular events, experiences, and perceptions.

These studies also have some measurement obstacles that may impact identification. Administrative records may be incomplete for some individuals, particularly if they move out of the area or become incapacitated. And, as in cross-sectional studies, administrative records only measure what is recorded by an agency. Survey data can be compromised when individuals who are targeted to be in a study cannot be located or do not provide consent for participation. This can be a problem at the time of initial contact (enrollment bias), or it can become a problem as the study progresses when individuals drop out (attrition bias). Still another concern is that survey participants learn over time that they can shorten the survey if they answer questions in particular ways (a testing effect) (Thornberry and Krohn 2000; Lauritsen 1998). Regardless of the source of the measurement obstacle, data that are measured incompletely affect the degree of identification.

A recent example of the ambiguity created by these sorts of issues comes from a study of the National Longitudinal Survey of Youth (1997 version; NLSY97) conducted by Brame et al. (2012). This study’s main objective was to estimate the proportion of the US population that had been arrested or taken into custody at least one time for something other than a minor traffic offense by age 23. The NLSY97 was designed to be based on a nationally representative sample of households with youth between the ages of 12 and 16 years old on December 31, 1996. Two samples were surveyed – a self-weighting random sample and a minority oversample. Brame et al. (2012) focused their efforts on the 7,335 youths who were selected for inclusion in the random sample. Each person in the sample was asked an initial question about prior arrest experiences at the first interview (1997). Then, at each subsequent interview (through 2008), the respondents were asked about arrest experiences that had occurred since the last interview.

Aside from the obvious difficulties of asking individuals to self-report their own arrest experiences, the study encountered three additional problems. First, only 6,748 (91.9 %) of the originally selected 7,335 persons in the sample actually completed a first interview in 1997. Second, among those who did complete a first interview, many persons missed at least one interview by age 23. Third, even among those who were interviewed, some declined to answer questions about their arrest experiences. By age 23, Brame et al. (2012, p. 23) were able to identify 1,858 people who had been arrested at least once and 4,299 people who had not yet been arrested. For the remaining 1,178 (16.1 %) persons, the ever-arrested status could not be determined because of incomplete data.

Criminologists who study individual-level datasets like the NLSY97 often encounter these kinds of incomplete data problems. One way to address them is to assume that the missing cases are a simple random sample of the cases that were originally targeted for inclusion in the study (the missing-at-random assumption) (Manski 1995). With this assumption, the Brame et al. (2012) study would yield an estimated arrest rate of 1,858/(1,858 + 4,299) = 30.2 %. The problem is that there is no way to test the validity of this assumption. It is quite possible that the arrest experiences of the missing cases look quite different from those of the observed cases. Since the missing cases are missing, however, there is no way to verify this assumption.

An important insight offered by Manski (1995, 2003) provided the authors with a path forward, however. This way forward was based on the following question: what can the data alone reveal? The answer to this question is obtained by considering two extreme cases: (1) the arrest rate assuming that all of the missing cases were arrested ([1,858 + 1,178]/[1,858 + 4,299 + 1,178] = 41.4 %) and (2) the arrest rate assuming that none of the missing cases were arrested (1,858/[1,858 + 4,299 + 1,178] = 25.3 %). So, assuming individuals who participated accurately reported their experiences, the actual arrest rate must lie somewhere between 25.3 % and 41.4 %. In the special case where the arrest rate is the same for both the observed and the missing cases, the arrest rate for the entire 7,335 cases originally targeted for the survey would be 30.2 %.
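The arithmetic behind these bounds is simple enough to sketch in a few lines of code. The snippet below is our own minimal illustration (the function name and structure are not from Brame et al. 2012); it computes the worst-case bounds and the missing-at-random point estimate from the counts reported above.

```python
# A minimal sketch (our illustration, not code from Brame et al. 2012) of the
# worst-case bounds and the missing-at-random estimate discussed above.

def arrest_prevalence_bounds(n_arrested, n_not_arrested, n_missing):
    """Return (lower bound, upper bound, missing-at-random estimate)."""
    n_total = n_arrested + n_not_arrested + n_missing
    lower = n_arrested / n_total                      # none of the missing were arrested
    upper = (n_arrested + n_missing) / n_total        # all of the missing were arrested
    mar = n_arrested / (n_arrested + n_not_arrested)  # missing cases simply ignored
    return lower, upper, mar

lower, upper, mar = arrest_prevalence_bounds(1858, 4299, 1178)
print(f"Bounds from the data alone: [{lower:.1%}, {upper:.1%}]")  # [25.3%, 41.4%]
print(f"Missing-at-random estimate: {mar:.1%}")                   # 30.2%
```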

Some observers might look at the [25.3 %, 41.4 %] interval and find it excessively wide and therefore of little use for drawing substantive conclusions. But a careful, scientific treatment of identification leads to a different set of insights. As Manski would point out: (1) the interval before looking at the data was [0 %, 100 %], so the fact that the data allow us to reduce our uncertainty to [25.3 %, 41.4 %] means that significant progress has been made; (2) without additional information about the missing cases, we cannot say with any empirical basis that one value in the interval is more probable than another; (3) the ambiguity caused by not knowing the status of the missing cases leads to much greater uncertainty than what we normally encounter when we consider only sampling error; (4) studies often do not consider this kind of uncertainty, and if they did, the conclusions of social scientists would appear much more ambiguous and fragile than they are typically presented as being; and (5) the extreme bounds are a starting point for further analysis, not an ending point. Researchers can follow up on the extreme-bounds analysis by examining how stronger assumptions may be invoked to tighten the bounds. The advantage of this approach is that it is clear how much of the identification is based on the data alone and how much is based on fundamentally untestable assumptions about what cannot be measured.

Toward this end, Brame et al. (2012) noted that if we are willing to assume that the missing cases are at least as likely to have been arrested as the observed cases, then the bounds shrink from the extreme case of [25.3 %, 41.4 %] to [30.2 %, 41.4 %]. There are some compelling criminological reasons to make this assumption. In fact, researchers using the NLSY97 have documented higher attrition rates at later survey waves for individuals self-reporting problem behaviors in the initial wave (McCurley 2006). These findings suggest that individuals with a higher propensity for problem behaviors are less likely to remain in the sample of survey respondents. Still, if some researchers or policy analysts do not accept the assumption, the bounds revert to the extreme case. In this sense, the bounds take on a “no-free-lunch” quality. One starts by considering what can be learned from the data combined with minimal assumptions (i.e., that respondents understand the question being asked, that they are cognitively able to answer it, and that the information they provide is accurate). If one is not willing to make further assumptions, then the extreme bounds are as tight as the evidence allows. If one is willing to make further assumptions, then the argument for those assumptions can be made, and if the audience is convinced, the uncertainty will shrink. If the audience is not convinced, the extreme bounds form what Manski calls the “domain of consensus” about how much the data are able to reduce uncertainty.
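To make the effect of this monotonicity assumption concrete, the short sketch below (again our own illustration, building on the earlier snippet) shows why the lower bound rises to the observed arrest rate while the upper bound is unchanged.

```python
# Sketch of how the assumption "missing cases are at least as likely to have
# been arrested as observed cases" tightens the lower bound (our illustration).

def monotone_lower_bound(n_arrested, n_not_arrested, n_missing):
    n_total = n_arrested + n_not_arrested + n_missing
    p_observed = n_arrested / (n_arrested + n_not_arrested)
    # Under the assumption, the missing cases contribute at least p_observed
    # arrests per capita, so the overall rate is at least p_observed itself.
    return (n_arrested + p_observed * n_missing) / n_total

print(f"Tightened lower bound: {monotone_lower_bound(1858, 4299, 1178):.1%}")  # 30.2%
# The upper bound stays at 41.4%: the assumption rules nothing out at the top.
```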

A first order of business in any developmental study of criminal behavior is to measure both stability and change in criminal involvement. Yet, criminological datasets used to measure these quantities are nearly always hampered by incomplete or missing data. Criminologists are accustomed to adopting easy and convenient fixes for these problems. But these fixes often do not come fully to terms with the uncertainty created by missing data. The advantage of a comprehensive identification analysis is that it shows the audience exactly what the data alone can reveal. Then, different sets of researchers and policy audiences can see how much a specific conclusion depends on the data and untestable assumptions about the data.

Distinguishing Correlation From Cause And Effect

Of course, criminologists are often interested in moving beyond descriptive analyses in order to measure the causal effect of interventions and life experiences on criminality. Early criminologists often thought of the causes of crime in terms of events, circumstances, and experiences that occurred at about the same time as criminal involvement. But contemporary criminologists take a more nuanced position on this issue, arguing that some causes of crime are likely proximate to criminal involvement while others are rooted in the formative stages of the life course – namely, infancy, toddlerhood, and early childhood (Grasmick et al. 1993).

For example, Gottfredson and Hirschi’s (1990; see also Hirschi and Gottfredson 1983) theory of self-control maintains that the failure of parents to adequately socialize children contributes to poor development of self-control which, in turn, leads to a greater risk of criminal involvement. From their perspective, while “socialization continues to occur throughout life” (Gottfredson and Hirschi 1990, p. 92), differences in self-control between individuals of the same age are destined to remain about the same throughout the life span (i.e., someone who ranks low in self-control at age 8 will also rank low in self-control at age 58). Their position gives voice to an important axiom of contemporary criminology: there is a rival hypothesis for any apparent effect of a post-childhood life experience or event on criminality. The implications are profound and can be illustrated with a few prominent examples.

Among the most well-replicated findings in life-course criminology are (1) the positive correlation between past and future criminal behavior (Nagin and Paternoster 2000) and (2) the negative correlation between the time since one’s last offense and the risk that one commits new offenses (Maltz 1984; Schmidt and Witte 1988; Kurlychek et al. 2012). There are clear causal interpretations of these results. Prior offending may reduce one’s opportunity to live in stable housing, develop healthy relationships with family and friends, obtain an education, or secure employment, and these obstacles may, in turn, lead to increased criminality. Similarly, when an individual leaves prison, there may be a sustained effort to avoid criminality, and the actual experience of avoiding criminality each day may causally and gradually reduce the risk of recidivism over time. But, of course, it is also possible that higher-risk individuals are more likely to offend at all times, inducing both a strong positive correlation between past and future offending and a strong negative correlation between time since one’s last offense and the risk of future offending. Hence, an important source of ambiguity in these results is whether the correlations reflect causal effects or whether they are simply artifacts of a failure to measure and properly control for the influence of stable tendencies (like low self-control). Econometricians have long understood that persistent between-individual differences are capable of producing these exact patterns (Heckman 1978). The question is: when these patterns appear, are there reliable ways to discern their source?

This is a classic identification problem. If a researcher could measure the stable between-individual differences in factors like self-control, then it would be easy to identify individuals who are comparable in terms of these differences and look at the data to see whether those who have offended in the past are more likely to offend in the future, or whether variation in time since the last offense predicts the limiting risk of future offending. The problem is that researchers will typically only be able to create partial measures of the relevant between-individual differences. Econometric models can help, but these models typically require strong assumptions about the functional form of the model, the initial conditions preceding the sequence of over-time observations, the probability distribution of the error term, and the stability of measurement error over time. The ambiguity of partial measurement, combined with questions about how much of the analysis depends on these strong and untestable assumptions, can lead to intractable debates about whether causal inference is viable.
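A small simulation can illustrate the population-heterogeneity side of this argument. The sketch below is our own illustration with made-up parameters: every individual has a fixed offending propensity and there is no state dependence at all, yet both well-replicated patterns appear, because long quiet spells and clean prior records select for low-propensity people.

```python
# Our illustrative simulation: stable between-individual differences alone
# (no state dependence) reproduce both well-replicated patterns.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_periods = 50_000, 20

# Each person has a fixed, time-invariant offending propensity.
propensity = rng.beta(1, 6, size=n_people)
offend = rng.random((n_people, n_periods)) < propensity[:, None]

# Pattern 1: positive correlation between past and future offending.
past = offend[:, :10].any(axis=1)
future = offend[:, 10:].any(axis=1)
print(f"P(future offense | past offense)    = {future[past].mean():.2f}")
print(f"P(future offense | no past offense) = {future[~past].mean():.2f}")

# Pattern 2: the risk of offending in period 10 declines with time since the
# last offense, purely because long gaps select for low-propensity people.
last = np.where(past, 9 - np.argmax(offend[:, :10][:, ::-1], axis=1), -1)
for period in (9, 6, 3):
    group = last == period
    print(f"{10 - period} period(s) since last offense: "
          f"P(offend in period 10) = {offend[group, 10].mean():.2f}")
```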

One way to make progress is for researchers to become more specific about the factors they believe have important effects on criminality and then to design studies that rigorously test for those effects. Toward this end, Daniel Nagin and his colleagues (see Nagin and Odgers 2010 for a useful discussion) have embarked on an increasingly comprehensive program of research to investigate the causal effects of residential placement of juvenile offenders, gang membership, grade retention in school, imprisonment, and adolescent employment. A common theme of this work is that an explicit effort is made to ensure that comparisons between individuals are well posed so that the basis for identification of causal effects is transparent.

The first of these efforts – Manski and Nagin (1998) – considers the very difficult problem of estimating the effect of sanctions on recidivism for juvenile offenders. The study relied on a broad-ranging sample of 13,197 juveniles who were adjudicated delinquent in Utah juvenile courts at least 2 years before their 18th birthday. Each of these juveniles was characterized as receiving either a residential placement (11 %) or a community placement (89 %). All of the juveniles were then followed over a 2-year period, and the number who reappeared in the juvenile court within that period was counted for each group. This analysis revealed that 61 % of the juveniles returned to the court within the 2 years (77 % for the residential placement group and 59 % for the community placement group). The key question is what can be learned about the effects of residential versus community placement from these data. It turns out that with only minimal assumptions (i.e., no measurement bias, and each person’s outcome is unaffected by other persons’ treatments), the sign of the sanction effect is not identified. If one is willing to make two assumptions, then it is possible to reduce uncertainty about the sanction effect: (1) sanctioning practices vary between Utah judicial districts, and (2) the average outcome after receiving a given sanction does not vary between Utah judicial districts. This pair of assumptions amounts to an exclusion restriction; it has identifying power because there is no obvious reason to suppose that judicial district affects recidivism other than through the sanction imposed. Combining these assumptions with the additional assumption that the highest-risk offenders are more likely to receive residential placement (what Manski and Nagin refer to as the “skimming model”) leads to the conclusion that residential placement is criminogenic. On the other hand, combining these assumptions with the assumption that each individual receives the sanction that minimizes his or her risk of recidivism (what Manski and Nagin refer to as the “outcome optimization model”) leads to the conclusion that residential placement reduces recidivism.
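The flavor of the no-assumptions result can be sketched with the percentages reported above. The snippet below is our own illustration of worst-case bounds only, not the full Manski and Nagin analysis with exclusion restrictions: the unobserved counterfactual outcomes are allowed to take any value between 0 and 1, and the resulting bounds on the placement effect straddle zero, so even the sign is not identified.

```python
# Our illustration of no-assumptions (worst-case) bounds, using the
# percentages reported in the text for the Utah data. This is not the full
# Manski and Nagin (1998) analysis, which adds exclusion restrictions.

def placement_effect_bounds(p_residential, recid_residential, recid_community):
    """Bounds on P(recidivate if all placed residentially) minus
    P(recidivate if all placed in the community)."""
    p_community = 1 - p_residential
    # Recidivism if everyone had received residential placement: the
    # community-placed juveniles' counterfactual outcome is unknown, so it
    # can contribute anywhere from 0 to p_community.
    res_low = recid_residential * p_residential
    res_high = recid_residential * p_residential + p_community
    # Recidivism if everyone had received community placement.
    com_low = recid_community * p_community
    com_high = recid_community * p_community + p_residential
    return res_low - com_high, res_high - com_low

lo, hi = placement_effect_bounds(0.11, 0.77, 0.59)
print(f"Bounds on the placement effect: [{lo:+.2f}, {hi:+.2f}]")
# Roughly [-0.55, +0.45]: the data alone do not identify even the sign.
```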

The identification analysis of Manski and Nagin is useful on three different levels. First, it tells us what the data alone (with minimal assumptions) can reveal about the sign and magnitude of sanction effects. Second, it tells us what assumptions are required to make progress. Third, it tells us where the priority needs to be placed for future research. In this case, researchers learn that a better understanding of juvenile judges’ priorities in assigning dispositions is needed in order to better understand the sanction effects. It is noteworthy that efforts to simply condition the comparisons on a set of observed factors in the data would not have led to these insights.

Nagin’s more recent efforts have emphasized the use of semi-parametric, group-based trajectory models to understand population variation in the development of criminal behavior over time. An important by-product of these models is a means of probabilistically classifying individuals as belonging to groups or “latent classes” composed of individuals with relatively similar developmental trajectories (Nagin and Odgers 2010) prior to some intervention or experience of interest. These models have been somewhat controversial among criminologists because of concerns about “group reification” (for discussion, see Nagin and Tremblay 2005; Sampson and Laub 2005). But these concerns do not negate the fact that the method reliably identifies individuals with relatively similar developmental trajectories of criminality. The critical insight of Nagin and colleagues is that the trajectory groups can be used as the basis for creating reservoirs of comparable individuals prior to the occurrence of a treatment or experience that could affect future criminality. In practice, Nagin and colleagues have used trajectory groups in conjunction with so-called propensity scores (Rosenbaum and Rubin 1983) to identify persons who match each other on background characteristics but experience different treatments or interventions.

The identifying power of these comparisons is based on the idea that two or more people who match each other on an extensive set of pretreatment characteristics can be sufficiently comparable to support a reasonable discussion about causal inference. The advantage of the Nagin approach is that the discussion of identification does not devolve into debates about whether the modeling assumptions are correct. Instead, the discussion is based on whether the set of characteristics on which one matches is reasonable and comprehensive. Such discussions can give the field an understanding of what a comparison of reasonably well-matched cases looks like, and they can inform the priorities for variables that should be measured in subsequent research. An added benefit is that these analyses are easy to explain to nonscientific policy audiences.

A good illustration of these issues appears in Nieuwbeerta et al. (2009). In this study, Nieuwbeerta and colleagues used data from the Netherlands Criminal Career and Life-Course Study to investigate the effects of first-time imprisonment on recidivism. The first step of the analysis was to estimate a series of conviction trajectory models for varying durations beginning at age 12 (up to a maximum age of 37). Next, within each trajectory group, a propensity score was estimated for each individual in that group. The propensity score measured the estimated probability that the individual would receive a first term of imprisonment, conditional on an extensive set of offense-related, demographic, and life-circumstance characteristics measured before the sentence was imposed. The propensity score estimator is based on a theorem from Rosenbaum and Rubin (1983) which demonstrates that, as the sample size grows infinitely large, matching or stratifying treatment and control cases on the estimated propensity score will tend to balance or “equalize” the factors used to estimate the propensity score between the treatment and control groups. This is a powerful result, and it is increasingly being used in criminology and the other social sciences to estimate treatment effects in ways that are more convincing than standard regression-based estimators.

The third step of the analysis involved matching each imprisoned person to one, two, or three non-imprisoned individuals. The matching process guaranteed that matched individuals were similar in terms of their prior conviction histories and the pretreatment characteristics used to estimate the propensity score. The analysis showed that the recidivism rates of imprisoned individuals were significantly higher than those of matched non-imprisoned individuals. While this is a compelling result, the analysis had some important limitations: (1) some of the most serious imprisoned offenders in the study could not be matched with a comparable person who did not receive a confinement-based sentence, (2) the set of covariates and background characteristics available for matching, although wide-ranging, was still finite, and (3) for the sake of inferential clarity, only first-time imprisonment experiences were studied. But even with these limitations, the Nieuwbeerta study provides a useful model for criminologists to consider as they think about causal inference and identification in life-course criminology.
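A stripped-down sketch of this three-step logic appears below. It is our own illustration under simplifying assumptions (hypothetical column names, logistic-regression propensity scores, nearest-neighbor matching with replacement), not the exact Nieuwbeerta et al. (2009) procedure, but it shows how trajectory groups and propensity scores combine to build matched comparisons.

```python
# A minimal sketch of the three-step logic described above, under our own
# simplifying assumptions (hypothetical column names, logistic-regression
# propensity scores, nearest-neighbor matching with replacement). It is not
# the exact Nieuwbeerta et al. (2009) procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def matched_recidivism_gap(df, covariates, k=1):
    """Match each imprisoned person to k non-imprisoned persons on the
    estimated propensity score, separately within each trajectory group,
    and return the average recidivism difference across matched sets."""
    gaps = []
    for _, grp in df.groupby("trajectory_group"):
        if grp["imprisoned"].nunique() < 2:
            continue  # no within-group comparison is possible
        ps = LogisticRegression(max_iter=1000).fit(
            grp[covariates], grp["imprisoned"]).predict_proba(grp[covariates])[:, 1]
        grp = grp.assign(ps=ps)
        treated = grp[grp["imprisoned"] == 1]
        controls = grp[grp["imprisoned"] == 0]
        for _, person in treated.iterrows():
            nearest = (controls["ps"] - person["ps"]).abs().nsmallest(k).index
            gaps.append(person["recidivism"] - controls.loc[nearest, "recidivism"].mean())
    return float(np.mean(gaps))

# Hypothetical usage with a pandas DataFrame containing the columns named above:
# gap = matched_recidivism_gap(df, covariates=["age", "prior_convictions", "employed"])
```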

Conclusion

Criminologists are increasingly devoting attention to issues of identification as they conduct life-course research. This trend is bringing a greater focus and scrutiny to issues such as missing data and estimation of treatment and intervention effects among individuals that are suitably comparable to each other on developmentally appropriate factors and characteristics. It is all too easy for researchers to simply drop cases with missing information from their analysis and proceed with a study of the residual sample of nonmissing cases while hoping that the biases are not large. It is all too easy for researchers to estimate a statistical model-based treatment effect while hoping that the functional form assumptions of that statistical model are consistent with the data. A focus on identification moves away from a criminology of excessive “certitude” (Manski 2011) and toward a criminology of sensitivity analysis, robustness testing, and verification.

When cases have missing data, a focus on identification leads to what-if questions about those cases. What if the missing cases are more likely than the observed cases to be involved in criminal behavior? If that is true, how would the analysis results change? What assumptions are criminologists willing to make to achieve identification? What assumptions will criminologists agree are reasonable? What do the data reveal about criminologically interesting quantities if researchers are only willing to make minimal assumptions? These are all questions that have not been asked much in life-course criminology. But they are starting to be asked more and the methods to make those questions answerable are becoming more widely available.

Methods for causal inference in observational life-course studies are also becoming more sophisticated and thoughtful. Here, a focus on identification leads to equally useful questions that should always be asked in a causation-focused study: what is the empirical basis for comparing these cases to each other? Is the comparison an apples-to-apples comparison? How do the groups being compared differ from each other in terms of characteristics that are, at least in theory, measurable before an intervention (going to prison) or life experience (getting married or securing employment) occurs?

Criminologists are accustomed to thinking in theoretical terms about the causes of criminal involvement. But a focus on identification leads to broader questions like why some people go to prison while others do not. Why is it that some people get married or get a job and others do not? Does gang involvement lead to criminality? Or does criminality lead to gang involvement? These are questions that criminologists have generally not considered as much as the questions addressed by traditional crime theories. But a focus on identification means these questions are every bit as important as the traditional theoretical terrain.

Ultimately, identification is about researchers coming to terms with what is known and what remains unknown. If some of the data are missing, it is of little use to confine the analysis to the simplistic assumption that the data are missing purely at random. It is far more fruitful to present the analysis with reasonable attention to what happens if that assumption is wrong. If the analysis relies on a comparison of one group of people to another, then criminologists should make clear and convincing arguments about why the comparison is sensible and what would happen to the analysis if the assumptions behind it are wrong. In sum, the more the field thinks about identification, the farther it will advance toward a more thoughtful and rigorous science of criminal behavior.

Bibliography:

  1. Brame R, Turner MG, Paternoster R, Bushway SD (2012) Cumulative prevalence of arrest from ages 8–23 in a national sample. Pediatrics 129:21–27
  2. Cochran WC, Mosteller F, Tukey JW (1954) Statistical problems of the Kinsey Report. J Am Stat Assoc 48:673–716
  3. Elliott DS, Huizinga D, Menard S (1989) Multiple problem youth: delinquency, substance use, and mental health problems. Springer, New York
  4. Gottfredson MR, Hirschi T (1990) A general theory of crime. Stanford University Press, Stanford
  5. Grasmick HG, Tittle CR, Bursik RJ, Arneklev BJ (1993) Testing the core empirical implications of Gottfredson and Hirschi’s general theory of crime. J Res Crime Delinq 30:5–29
  6. Heckman JJ (1978) Simple statistical models for discrete panel data developed and applied to test the hypothesis of true state dependence against the hypothesis of spurious state dependence. Annales de l'INSEE 30–31:227–269
  7. Huizinga D, Esbensen F-A, Weiher AW (1991) Are there multiple paths to delinquency? J Crim Law Criminol 82:83–118
  8. Koopmans TC (1949) Identification problems in economic model construction. Econometrica 17:125–144
  9. Kurlychek MC, Bushway SD, Brame R (2012) Long-term crime desistance and recidivism patterns: evidence from the Essex County convicted felon study. Criminology 50:71–103
  10. Lauritsen J (1998) The age-crime debate: assessing the limits of longitudinal self-report data. Soc Forces 77:127–155
  11. Loeber R, Stouthamer-Loeber M, Van Kammen W, Farrington DP (1991) Initiation, escalation, and desistance in juvenile offending and their correlates. J Crim Law Criminol 82:36–82
  12. Maltz MD (1984) Recidivism. Academic, Orlando
  13. Manski CF (1995) Identification problems in social science research. Harvard University Press, Cambridge, MA
  14. Manski CF (2003) Partial identification of probability distributions. Springer, New York
  15. Manski CF (2011) Policy analysis with incredible certitude. Econ J 121:F261–F289
  16. Manski CF, Nagin DS (1998) Bounding disagreements about treatment effects: a case study of sentencing and recidivism. Sociol Methodol 28:99–137
  17. McCurley C (2006) Self-reported law-violating behavior from adolescence to early adulthood in a modern cohort. Final report to the National Institute of Justice, Washington, DC
  18. Nagin DS, Odgers CL (2010) Group-based trajectory modeling in clinical research. Annu Rev Clin Psychol 6:109–138
  19. Nagin DS, Paternoster R (2000) Population heterogeneity and state dependence: state of the evidence and directions for future research. J Quant Criminol 16:117–144
  20. Nagin DS, Tremblay RE (2005) Developmental trajectory groups: fact or a useful statistical fiction. Criminology 43:873–904
  21. Nieuwbeerta P, Nagin DS, Blokland AAJ (2009) Assessing the impact of first-time imprisonment on offenders’ subsequent criminal career development: a matched samples comparison. J Quant Criminol 25:227–257
  22. Piquero AR, Farrington DP, Blumstein A (2003) The criminal career paradigm. Crime Justice Rev Res 30:359–506
  23. Piquero AR, Farrington DP, Blumstein A (2007) Key issues in criminal career research: new analyses of the Cambridge Study in Delinquent Development. Cambridge University Press, New York
  24. Rosenbaum PR, Rubin DB (1983) The central role of the propensity score in observational studies for causal effects. Biometrika 70:41–55
  25. Sampson RJ, Laub JH (2005) Seductions of method: rejoinder to Nagin and Tremblay’s “Developmental Trajectory Groups: fact or fiction?”. Criminology 43:905–913
  26. Schmidt P, Witte AD (1988) Predicting recidivism using survival models. Springer, New York
  27. Thornberry TP, Krohn MD (2000) The self-report method for measuring delinquency and crime. In: Duffee D (ed) Measurement and analysis of crime and justice: criminal justice 2000. National Institute of Justice, Washington, DC, pp 33–83
  28. Thornberry TP, Lizotte AJ, Krohn MD, Farnworth M, Jang SJ (1991) Testing interactional theory: an examination of reciprocal causal relationships among family, school, and delinquency. J Crim Law Criminol 82:3–35
  29. Tittle CR, Ward DA (1993) The interaction of age with the correlates and causes of crime. J Quant Criminol 9:3–53
  30. Tracy P, Wolfgang M, Figlio RM (1990) Delinquency careers in two birth cohorts. Plenum, New York