Bullying Prevention Research Paper

This sample research paper on bullying prevention features 3000 words (approx. 10 pages) and a bibliography with 40 sources.

This research paper considers contemporary methodological issues in bullying prevention research. The findings of extant systematic reviews and meta-analyses of school-based bullying prevention programs are assessed and integrated, with the aim of drawing clearer and more differentiated conclusions regarding their efficacy. Conclusions are drawn from six reports: two included a systematic review but no meta-analysis, two included both a systematic review and a meta-analysis, and two were not based on systematic searches of the literature but included some level of meta-analytic assessment. Based on a careful screening of all available meta-analytic investigations, it is concluded that bullying prevention programs are effective in reducing bullying and victimization. However, research users should take care in identifying the intervention components and implementation procedures that are associated with a reduction in bullying. The paper concludes by identifying important challenges currently facing the field of bullying prevention, highlighting areas for future research, and drawing out the implications of this work for psychologists and social scientists in general. The findings from this review of reviews are intended to inform both policy and public health practice related to bullying prevention.

Introduction

Research on school bullying has expanded considerably over the past two decades. It is now acknowledged as an established international research program, with coordinated worldwide efforts to establish a common methodological terminology for describing the phenomenon (Smith et al. 2002). To a great extent, the strong scientific interest in bullying has been driven by its detrimental concurrent effects on both perpetrators and victims. Notably, previous research has also established significant long-term associations of school bullying with internalizing (e.g., depression) and externalizing (e.g., offending and violence) problems (Ttofi et al. 2011a, b, 2012).

It comes as no surprise that a great deal of research has been invested in intervention efforts targeting the school environment (e.g., Waasdorp et al. 2012). Ideally, at the level of primary research, social scientists and research users should aim to draw conclusions from the most successful intervention studies, always in line with explicit methodological quality standards. At the level of secondary research, a number of reviews of bullying prevention programs are also available. Ideally, these reviews should critically assess and synthesize the existing evaluation research, with the final aim of minimizing bias in the conclusions on which policymakers and practitioners rely. However, bias is not only the province of primary studies – it may also exist at the level of a review (Wilson 2009).

This is also true in the case of anti-bullying research (with a marked number of competing bullying prevention programs available), and it lends further weight to the importance of carefully assessing and utilizing evidence-based bullying prevention programs. It is for this reason that this research paper focuses primarily on summarizing and juxtaposing the methodologically strongest evaluation research, with the final aim of drawing conclusions about what has been learned and what needs to be done next in the bullying prevention field. These aims are accomplished by taking into account existing findings at the level of secondary research, namely, via systematic reviews and meta-analyses.

Methods

Bullying prevention can be seen both as a form of early crime prevention and as a form of early health promotion (Ttofi et al. 2012). This is true, however, only in the case of efficacious interventions. In order to make suggestions about the most scientifically sound and evidence-based bullying prevention programs, this synthesis of reviews of current evaluation studies is based on explicit inclusion criteria set in advance, namely, (a) reports presenting a systematic review of evaluations of bullying prevention programs aimed at reducing the level of school bullying perpetration and victimization (and not other outcome measures) or, ideally, (b) reports presenting both a systematic and a meta-analytic review of the relevant literature. Reports that present some level of meta-analytic synthesis are also reviewed, although they may not be based on systematic searches. Reports that assessed bullying prevention programs based on narrative reviews are excluded, because this type of research carries a high risk of bias. An extensive search was conducted in order to obtain all relevant systematic and meta-analytic reviews.

Results

To date, a growing number of school-based bullying prevention programs have been developed and evaluated, but relatively few attempts have been made to synthesize the relevant rigorous outcome-based research findings (see Table 1). An extensive search of the literature revealed just six reports that met the inclusion criteria. Of the six reports, (a) two included a systematic review but no meta-analysis (i.e., Smith et al. 2004; Vreeman and Carroll 2007), (b) two included a systematic review and a meta-analysis (i.e., Merrell et al. 2008; Farrington and Ttofi 2009; Ttofi and Farrington 2011; references referring to the same project), and (c) two were not based on systematic searches of the literature but included some level of meta-analytic assessment (i.e., Baldry and Farrington 2007; Ferguson et al. 2007; only the latter carried out a full meta-analysis calculating weighted mean effect sizes for bullying perpetration).

Consistent with previous literature (Farrington 2003, 2006; Petrosino 2003), it is clear from Table 1 that there is a marked degree of variation in the criteria employed and, to some extent, in the quality of the currently available research reviews. Four of the included reviews of bullying prevention programs were based on systematic searches of the literature, but the intensity of the searches varied considerably. For example, the reviews ranged from searching (a) 1 to 35 journals, (b) 1 to 18 databases, and (c) using 2 to 14 keywords for the online searches, and (d) from screening 321 to 622 relevant documents. Only the most recent review (Farrington and Ttofi 2009; Ttofi and Farrington 2011) coded all relevant manuscripts on a “relevance scale” in line with the aims of the systematic review.

Within each review, the timeline for carrying out searches also varied. While some authors set the beginning of their searches in the 1980s, after the first Olweus Bullying Prevention Program evaluation (i.e., Farrington and Ttofi 2009; Merrell et al. 2008; Ttofi and Farrington 2011), others set their timeline in the 1960s, perhaps because they included outcome measures other than bullying (e.g., “school violence” and “peer aggression” in the Vreeman and Carroll (2007) review). It is plausible that variations in searching strategies were also affected by the language restrictions that the reviewers set in advance. Of the four systematic reviews, two were not transparent on this issue (i.e., Smith et al. 2004; Vreeman and Carroll 2007), and one restricted its searches to studies written in English (Merrell et al. 2008). Only the Ttofi and Farrington (2011) systematic review was unrestricted, including studies written in other languages such as German, Spanish, and Italian.

What reviewers and meta-analysts defined as “systematic” varied greatly depending on the inclusion criteria set in advance – and, more importantly, the extent to which these criteria were carefully followed. It is disconcerting that two out of four meta-analytic reviews computed effect sizes from uncontrolled designs, despite the fact that their inclusion criteria explicitly specified “either controlled studies or quasi-experimental studies only” (i.e., Baldry and Farrington 2007; Merrell et al. 2008). Another meta-analytic review (i.e., Ferguson et al. 2007) included only evaluations of bullying prevention programs that were implemented using a controlled design. However, since the actual evaluations included in the meta-analysis are not listed in the relevant publication, it is not possible to verify that all studies met this criterion.

All meta-analytic reviews calculated effect sizes based on evaluations using age-cohort designs, consistent with the approach employed by Olweus (2005). However, only one of these reviews (Farrington and Ttofi 2009; Ttofi and Farrington 2011) presents not only a summary effect size across all studies irrespective of methodological design but also a separate summary effect size for each of the four types of methodological designs (i.e., an overall summary effect size plus four summary effect sizes specific to randomized experiments, before-after experimental-control studies, other experimental-control studies, and age-cohort designs).
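
To make this concrete, the following is an illustrative sketch (with entirely hypothetical effect sizes and variances, not the reviews' actual data) of how a fixed-effect, inverse-variance weighted mean effect size can be computed overall and separately per methodological design, in the spirit of the approach just described:

```python
# Illustrative sketch with hypothetical data: fixed-effect meta-analysis
# weighting each study's effect size d by the inverse of its variance.

def weighted_mean_effect(studies):
    """Fixed-effect summary effect: weight each study by 1 / variance."""
    weights = [1.0 / s["var"] for s in studies]
    total_w = sum(weights)
    return sum(w * s["d"] for w, s in zip(weights, studies)) / total_w

# Hypothetical study-level effect sizes and variances, one per evaluation.
evaluations = [
    {"design": "randomized",       "d": 0.10, "var": 0.02},
    {"design": "before-after E-C", "d": 0.25, "var": 0.03},
    {"design": "other E-C",        "d": 0.18, "var": 0.04},
    {"design": "age-cohort",       "d": 0.30, "var": 0.02},
    {"design": "randomized",       "d": 0.05, "var": 0.05},
]

# Overall summary effect across all designs.
overall = weighted_mean_effect(evaluations)

# Separate summary effect for each methodological design.
by_design = {}
for s in evaluations:
    by_design.setdefault(s["design"], []).append(s)
summaries = {k: weighted_mean_effect(v) for k, v in by_design.items()}
```

Reporting design-specific summaries alongside the overall one, as Ttofi and Farrington (2011) did, makes it possible to see whether weaker designs (e.g., age-cohort studies) are inflating the pooled estimate.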

What is also apparent from this synthesis of existing evaluation research is that the results vary greatly depending on the theoretical stance of the authors. For example, three out of six reviews included “bullying and other related antisocial behaviors” as an outcome measure (Ferguson et al. 2007; Merrell et al. 2008; Vreeman and Carroll 2007), whereas the other reviews focused specifically on bullying (Smith et al. 2004; Baldry and Farrington 2007; Farrington and Ttofi 2009). The methodological approach of the reviewers can also affect their analytic strategies. For example, Vreeman and Carroll (2007, p. 79) “… did not exclude or discount studies based on … retention rates or program intensity because these characteristics are not associated definitely with the strength of treatment effects.” In contrast, the authors of the latest systematic review (Farrington and Ttofi 2009; Ttofi and Farrington 2011) coded program intensity and duration and correlated these features with the effect sizes in order to examine their association. Ttofi and Farrington (2011) also identified concerns about retention rates – and possible differential attrition – that bear on the effect size measures; some of the effect sizes for specific evaluations in their review were based not on the published reports but on results obtained via e-mail communication with the evaluators, in order to avoid biased findings arising from differential attrition, multiple imputation methods, and similar issues (Farrington 2006; Farrington and Ttofi 2009).

As another example, the review by Ferguson et al. (2007), which was not based on systematic searches, primarily aimed to extend previous research by conducting publication bias analyses (see pp. 405–406), but restricted its searches to journal articles only (see p. 407). In contrast, other reviews (e.g., Farrington and Ttofi 2009; Ttofi and Farrington 2011) did not set language or sample size restrictions, or limit the type of manuscripts to be included, precisely because of concerns about publication bias. While there is some disagreement about this issue in the literature, it is common for researchers to extend searches beyond published reports when reviewing evaluation research at the level of primary studies (Wilson 2009).
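
One widely used publication bias analysis of the kind Ferguson et al. (2007) conducted is Egger's regression test. The sketch below (with hypothetical effect sizes and standard errors, not data from any of the reviews discussed here) shows the core computation: regressing each study's standardized effect on its precision, where an intercept far from zero suggests funnel-plot asymmetry:

```python
# Illustrative sketch with hypothetical data: Egger's regression test.
# Regress the standardized effect (d / se) on precision (1 / se);
# an intercept well away from zero suggests funnel-plot asymmetry,
# i.e., possible publication bias (small studies with small effects
# missing from the literature).

def egger_intercept(effects, ses):
    """Ordinary least-squares intercept of (d/se) regressed on (1/se)."""
    z = [d / se for d, se in zip(effects, ses)]
    x = [1.0 / se for se in ses]
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    slope = sxz / sxx
    return mz - slope * mx

# Hypothetical pattern where smaller studies (larger se) report
# larger effects - the classic small-study-effect signature.
d = [0.45, 0.30, 0.25, 0.20, 0.15]
se = [0.30, 0.20, 0.15, 0.10, 0.05]
intercept = egger_intercept(d, se)
```

In practice the intercept would be tested against its standard error; restricting searches to journal articles, as Ferguson et al. (2007) did, makes such a test especially important because unpublished evaluations are excluded by design.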

In conclusion, the existing reviews of the efficacy of bullying prevention programs varied greatly in the methodology employed, including their inclusion criteria, depth of searches, and screening of relevant documents. Consequently, the resulting meta-analyses presented summary effect sizes obtained from 13 (Merrell et al. 2008), 23 (Ferguson et al. 2007), and 44 (Farrington and Ttofi 2009; Ttofi and Farrington 2011) evaluations, respectively. These differences in methodology and in the number of studies included could explain the marked discrepancies in the summary effects obtained across reviews, as shown in Table 1. They could also explain why some previous reviews (e.g., Ferguson et al. 2007; Merrell et al. 2008) concluded that bullying prevention programs had little effect in reducing the level of bullying perpetration and victimization, while other reviews (Farrington and Ttofi 2009) were optimistic in their findings.

Open Questions And Future Research Directions

The rapid growth in research on school bullying has undoubtedly advanced scientific knowledge of this issue. However, considerable work is still needed to translate that knowledge into effective practice and policy (Swearer et al. 2010). For example, systematic reviews in the social sciences offer great promise. Unfortunately, reviews published in social science journals quite often lack methodological rigor or transparency in their methods. This is evident in the current synthesis of previous meta-evaluation studies of bullying prevention programs. When conducting research trials of prevention programs, researchers in psychology and other social sciences should follow the methodological quality criteria of the Consolidated Standards of Reporting Trials (CONSORT) statement (Altman et al. 2001), or the procedures outlined by the Cochrane Collaboration for secondary research reviews, as is common in public health and medicine. There is a steady trend in the social sciences toward greater transparency in the research process, as illustrated by increased efforts to create an equivalent CONSORT statement for the social sciences (Perry and Johnson 2008) and the creation of the Campbell Collaboration’s methodological criteria for secondary analyses (Farrington et al. 2011). These standards for evaluation research can also be used by scholars, policymakers, and the general public to assess the validity of conclusions about the effectiveness of interventions in reducing bullying at school (Ttofi and Farrington 2011). This research paper has provided clear examples of how substantial differences in the procedures followed for secondary reviews can result in marked differences in the conclusions drawn regarding the effectiveness of bullying prevention programs.

Another important guideline for future meta-evaluation research concerns conflict of interest. The importance of conducting conflict of interest (COI) analyses is well established in the fields of medicine and public health. Within criminology, recent studies have also shown that the reported effect sizes of prevention and intervention trials are larger when program developers are involved in a study than when trials are conducted by independent researchers; these differences may be due to various types of bias, including those resulting from conflicts of interest (Eisner 2009; Farrington 2006). COI analyses have not been conducted in previous meta-analytic studies of bullying prevention programs, and this is a promising new line of research. It is noteworthy, for example, that the effect sizes obtained from independent evaluations of a specific bullying prevention program are substantially smaller than those obtained from trials conducted with the developer as evaluator (Eisner 2009), although other explanations are also possible. Eisner and Humphreys (2011) developed an instrument for assessing COI in evaluation research, and initial tests of the scale in the area of family interventions are very promising. Interested scholars in the area of school bullying could begin a new line of evaluation research by testing and perhaps refining this scale.

Another highly neglected topic at the level of meta-analytic research is cost-benefit analysis (Farrington and Ttofi 2009). Cost-benefit analyses can be a major “selling point” for policymakers and potential funding agencies, especially given the scarce resources that schools often face. Admittedly, this type of analysis can also be conducted at the level of primary research. It is worth noting, however, that of the 53 different evaluations of bullying prevention programs, only the evaluation by Bagley and Pritchard (1998) included a cost-benefit analysis.
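
The core arithmetic of such an analysis is straightforward, which makes its near-absence from the literature all the more striking. The sketch below uses entirely hypothetical per-pupil figures (not values from Bagley and Pritchard 1998 or any other study) to show how up-front program costs and delayed benefits can be compared on a common footing via discounting:

```python
# Illustrative sketch with hypothetical figures: a program "pays for
# itself" when the net present value (NPV) of its cash flows is positive.

def npv(cash_flows, rate):
    """Net present value: discount each year's cash flow back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical per-pupil figures: an up-front program cost in year 0,
# followed by three years of benefits from reduced bullying-related
# harms (e.g., averted mental health and truancy costs).
flows = [-80.0, 40.0, 40.0, 40.0]
value = npv(flows, rate=0.03)      # 3% annual discount rate (assumed)
pays_off = value > 0
```

Even this simple framing forces an evaluator to state assumptions (discount rate, which harms are monetized, over what horizon) that are rarely made explicit in bullying prevention research.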

In prevention and intervention research, a distinction is made between efficacy, effectiveness, and dissemination trials (Flay et al. 2005). The importance of this distinction has been highlighted in meta-analyses examining factors that affect the size of treatment effects, with larger effect sizes found in efficacy trials (Eisner et al. 2011), most likely due to the increased researcher-imposed support and structure during implementation. This distinction is absent from the current meta-analytic studies of bullying prevention programs, despite the fact that it may be critical when translating research findings into policy recommendations. Most evaluations of bullying prevention programs would fall under the category of efficacy trials, with relatively few meeting the criteria for effectiveness research (Flay et al. 2005); additional research is needed to examine whether dissemination trials would demonstrate significant effects of bullying prevention programs. Within the area of bullying prevention, there are very few examples of programs being rigorously tested when brought to scale (e.g., Karna et al. 2011; Waasdorp et al. 2012). Future research should also try to establish the factors related to sustainability of the treatment effect when going to scale, as well as sustainability of the treatment effect at longer follow-ups. Notably, the majority of bullying trials have relatively short follow-up periods, which does not allow for contextual changes, such as changes in school climate, school culture, or leadership, that are theorized to be important impacts of effective bullying prevention programs (Bradshaw and Waasdorp 2009). Many other recommendations for future research can be made; for a more detailed discussion, interested readers should consult a paper being prepared for a special issue of American Psychologist (Bradshaw et al. submitted).

Conclusions

The mixed findings from the extant reviews of bullying prevention approaches have been particularly disconcerting to policymakers, researchers, and practitioners alike (Bradshaw and Waasdorp 2011), as several reviews have provided a rather pessimistic interpretation of the state of the science in bullying prevention programming (e.g., Ferguson et al. 2007; Merrell et al. 2008), whereas others have been more optimistic (e.g., Farrington and Ttofi 2009; Ttofi and Farrington 2011). As a result, many are unclear about where the field stands in terms of the evidence base for bullying prevention programming. The current paper aimed to provide greater transparency across the extant reviews so that various constituents can better understand the potential reasons for the discrepant findings. Given the current focus on contrasting the scientific approaches employed by the different reviews of bullying prevention studies and extracting a general conclusion regarding the efficacy of bullying prevention efforts, it is not possible to endorse any specific program or model. Rather, the current study aimed to provide some guidance for researchers, policymakers, and practitioners on the extent to which each of the reviews employed various research-based approaches, as these methodological issues likely affected the conclusions drawn from the research. Given the significance of bullying prevention for policy and prevention science, we issue a call for more longitudinal randomized controlled trials of promising programs and of programs in wide use (particularly in the USA), as well as for more rigorous systematic and meta-analytic reviews, in order to strengthen the current evidence base for bullying prevention programming. Many challenges but also many promising avenues lie ahead for future research in the area of bullying prevention and intervention. The time is ripe for a new research agenda in unexplored scientific avenues such as conflict of interest analyses and cost-benefit analyses, to mention just a few.

Bibliography:

  1. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gotzsche PC, Lang T, CONSORT GROUP (2001) The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 134(8):663–694
  2. Bagley C, Pritchard C (1998) The reduction of problem behaviours and school exclusion in at-risk youth: an experimental study of school social work with cost-benefit analyses. Child Fam Soc Work 3:219–226
  3. Baldry AC, Farrington DP (2007) Effectiveness of programs to prevent school bullying. Vict Offender 2(2):183–204
  4. Bradshaw CP, Ttofi MM, Eisner M (submitted) A public health approach to bullying prevention: translating the research to practice. Am Psychol
  5. Bradshaw CP, Waasdorp TE (2009) Measuring and changing a culture of bullying. Sch Psychol Rev 38:356–361
  6. Bradshaw C, Waasdorp T (2011) Effective strategies in combating bullying. White paper prepared for the 2011 white house conference on bullying, Washington, DC. https://www.stopbullying.gov/sites/default/files/2017-09/white_house_conference_materials.pdf
  7. Cowie H, Olafsson R (2000) The role of peer support in helping the victims of bullying in a school with high levels of aggression. School Psychol Int 21:79–94
  8. Eisner M (2009) No effects in independent prevention trials: can we reject the cynical view? J Exp Criminol 5:163–183
  9. Eisner M, Humphreys D (2011) Measuring conflict of interest in prevention and intervention research: a feasibility study. In: Bliesener T, Beelmann A, Stemmler M (eds) Antisocial behavior and crime: contributions of developmental and evaluation research to prevention and intervention. Hogrefe Publishing, Cambridge, MA, pp 165–180
  10. Eisner M, Malti T, Ribeaud D (2011) Large-scale criminological field experiments: the Zurich project on the social development of children. In: Gadd D, Karstedt S, Messner SF (eds) Sage handbook of criminological research methods. Sage, London, pp 410–424
  11. Farrington DP (2003) Methodological quality standards for evaluation research. Ann Am Acad Pol Soc Sci 587(1):49–68
  12. Farrington DP (2006) Methodological quality and the evaluation of anti-crime programs. J Exp Criminol 2(3):329–337
  13. Farrington DP, Ttofi MM (2009) School-based programs to reduce bullying and victimization (Campbell systematic reviews No. 6). Campbell Collaboration, Oslo. doi:10.4073/csr.2009.6
  14. Farrington DP, Weisburd DL, Gill CE (2011) The Campbell collaboration crime and justice group: a decade of progress. In: Smith CJ, Zhang SX, Barberet R (eds) Routledge handbook of international criminology. Routledge, New York, pp 53–63
  15. Ferguson CJ, Miguel CS, Kilburn JC, Sanchez P (2007) The effectiveness of school-based anti-bullying programs: a meta-analytic review. Crim Justice Rev 32(4):401–414
  16. Flay BR, Biglan A, Boruch RF, Gonzalez Castro F, Gottfredson D, Kellam S, Moscicki EK, Schinke S, Valentine JC, Ji P (2005) Standards of evidence: criteria for efficacy, effectiveness and dissemination. Prev Sci 6(3):151–175
  17. Karna A, Voeten M, Little TD, Poskiparta E, Alanen E, Salmivalli C (2011) Going to scale: a non-randomized nationwide trial of the KiVa antibullying program for grades 1–9. J Consult Clin Psychol 79(6):796–805
  18. Merrell KW, Gueldner BA, Ross SW, Isava DM (2008) How effective are school bullying intervention programs? A meta-analysis of intervention research. Sch Psychol Q 23(1):26–42
  19. Mueller EE, Parisi MJ (2002) Ways to minimize bullying. Unpublished Master’s thesis. Saint Xavier University, Chicago
  20. Munthe E (1989) Bullying in Scandinavia. In: Roland E, Munthe E (eds) Bullying: an international perspective. David Fulton, London, pp 66–78
  21. Olweus D (1994) Bullying at school: basic facts and effects of a school based intervention program. J Child Psychol Psychiatry 35:1171–1190
  22. Olweus D (2005) A useful evaluation design, and effects of the Olweus Bullying Prevention Program. Psychol Crime Law 11(4):389–402
  23. Orpinas P, Horne AM, Staniszewski D (2003) School bullying: changing the problem by changing the school. Sch Psychol Rev 32:431–444
  24. Pepler DJ, Craig WM, Ziegler S, Charach A (1994) An evaluation of an antibullying intervention in Toronto schools. Can J Commun Ment Health 13:95–110
  25. Perry A, Johnson M (2008) Applying the consolidated standards of reporting trials (CONSORT) to studies of mental health provision for juvenile offenders: a research note. J Exp Criminol 4(2):165–185
  26. Peterson L, Rigby K (1999) Countering bullying at an Australian secondary school. J Adolesc 22:481–492
  27. Petrosino A (2003) Standards for evidence and evidence for standards: the case of school-based drug prevention. Ann Am Acad Pol Soc Sci 587(1):180–207
  28. Pitts J, Smith P (1995) Preventing school bullying. Home Office, London
  29. Roland E (2000) Bullying in school: three national innovations in Norwegian schools in 15 years. Aggress Behav 26:135–143
  30. Smith PK, Cowie H, Olafsson RF, Liefooghe APD (2002) Definitions of bullying: a comparison of terms used, and age and gender differences, in a 14-country international comparison. Child Dev 73(4):1119–1133
  31. Smith JD, Schneider B, Smith PK, Ananiadou K (2004) The effectiveness of whole-school anti-bullying programs: a synthesis of evaluation research. Sch Psychol Rev 33:548–561
  32. Swearer SM, Espelage DL, Vaillancourt T, Hymel S (2010) What can be done about school bullying? Linking research to educational practice. Educ Res 39(1):38–47
  33. Tierney T, Dowd R (2000) The use of social skills groups to support girls with emotional difficulties in secondary schools. Support Learn 15:82–85
  34. Ttofi MM, Farrington DP (2011) Effectiveness of school-based programs to reduce bullying: a systematic and meta-analytic review. J Exp Criminol 7:27–56
  35. Ttofi MM, Farrington DP, Losel F, Loeber R (2011a) Do the victims of school bullies tend to become depressed later in life? A systematic review and meta-analysis of longitudinal studies. J Aggress Confl Peace Res 3(2):63–73
  36. Ttofi MM, Farrington DP, Losel F, Loeber R (2011b) The predictive efficiency of school bullying versus later offending: a systematic/meta-analytic review of longitudinal studies. Crim Behav Ment Health 21:80–89
  37. Ttofi MM, Farrington DP, Losel F (2012) School bullying as a predictor of violence later in life: a systematic review and meta-analysis of prospective longitudinal studies. Aggress Violent Behav 17:405–418
  38. Vreeman RC, Carroll AE (2007) A systematic review of school-based interventions to prevent bullying. Arch Pediatr Adolesc Med 161(1):78–88
  39. Waasdorp TE, Bradshaw CP, Leaf PJ (2012) The impact of School-wide Positive Behavioral Interventions and Supports (SWPBIS) on bullying and peer rejection: a randomized controlled effectiveness trial. Arch Pediatr Adolesc Med 166(2):149–156
  40. Wilson DB (2009) Missing a critical piece of the pie: simple document search strategies inadequate for systematic reviews. J Exp Criminol 5(4):429–440
