Ethical Aspects in Research Publication
Ethical aspects in research publication comprise all the moral issues and problems raised by any misuse or abuse of the system for communicating and disseminating scientific results and information within the international scientific community. That community is understood as a social body genuinely co-operating for the cultural and material improvement of mankind, regardless of any personal, political, economic, or racial interests. Such misuses and abuses include plagiarism, fraudulent and repetitive publication, guest and ghost authorship in multi-authored papers, conflicts of interest in peer review, censorship, and the unfair rejection of publications.
Scientific knowledge is community property, unceasingly increased by individuals or groups in mutual co-operation and in fair and honest competition. Even in the professional and competitive world of today, researchers are regarded as primarily engaged in the disinterested pursuit of truth and in the quest for new insights and discoveries. Honesty towards oneself and towards others is considered a fundamental condition of co-operation and fair competition, since researchers depend on each other and cannot be successful unless they are able to trust each other and their predecessors. Publications are the primary medium through which scientists give an account of their work, co-operate, contribute to the advancement of knowledge, and claim the right to be recognized for their discoveries. Dishonesty in publication therefore not merely throws research open to doubt but destroys the very foundations of science.
The publication of a fraudulent article (the outcome of a scientific fraud, consisting in the fabrication, falsification, or 'cooking' and manipulation of data) remains the most serious transgression of the ethics of scientific publication. Together with plagiarism, the deliberate presentation of another's ideas or texts as one's own, it is commonly considered an essential part of scientific misconduct. Compared to these, repetitive publication (publishing the same article in different journals), sequential publication (reporting follow-up of the same study with additional subjects but without new results), 'salami slicing' (dividing different parts of the same study for publication in different journals), and guest and ghost authorship (giving authorship credit to persons who took no part in the research, or withholding it from persons who drafted or revised the article) are all considered minor transgressions. These practices nonetheless produce a huge amount of unwieldy and polluted literature that distorts the direction of science. It has been roughly calculated that flawed papers published in the 5,600 scientific journals indexed by the Science Citation Index amount to about 10,000 each year. This raises the problem of the reliability of the peer-review system designed to assess the quality of the scientific literature, and that of the fate of the invalid ideas and data generated by fraudulent publication (Friedman 1990).
Good scientific journals today publish original articles only after they have been examined by competent reviewers for validity and originality. Antecedents of peer-review practices go back to the seventeenth century and the first scientific societies in Italy, France, and Germany, but the modern system was fully developed only after World War II, and was adopted first and mainly in the Anglo-Saxon cultural area. The system is credited with having greatly contributed to the efficiency and professional organization of the scientific enterprise, but since the 1980s it has been severely criticized not only for its enormous costs in time and money but also for a progressive loss of accuracy (Lock 1986). Reviewers do have some moral obligation to make every effort to recognize manipulations and falsifications. To expect irregularities to be reliably detected, however, would be misguided: the original data are not available to reviewers, and even if they were, reviewers would not have time to replicate experiments and observations. This is why, according to many, peer review is so vulnerable to dishonest conduct. Other flaws have been detected in negligence (inquiries found that the higher the academic status and age of the reviewer, the lower the quality of the review), in the vested interests of referees, and in many forms of bias (against particular individuals, subjects, or institutions), the worst of which is bias against innovators. Some critics maintain that the peer-review system favors mediocre basic science and fails to recognize and promote creativity, originality, and innovation (Horrobin 1990, Campanario 1996). Even the anonymity of referees has been questioned, as contrary to the moral norms of universalism, communality, disinterestedness, and organized skepticism that are thought to characterize the ethos of science.
In fact the peer-review system also holds risks for authors, because ideas, research findings, and texts unprotected by patents or intellectual property rights are submitted to persons whose identity is unknown to the authors and who may happen to be their direct competitors.
As for the fate of invalid data generated by transgressions of the ethics of scientific communication, the scientific community has yet to agree on the level of inaccuracy required to mandate a retraction rather than an erratum, or on who may retract an article. Although as early as 1987 the International Committee of Medical Journal Editors published guidelines for the format and terminology of retraction notices, articles known to be fraudulent may not be retracted at all, or may be retracted only ambiguously. In other cases, only part of the data or conclusions are retracted. Journal editors have little leeway for action, since publishing corrections is fraught with legal risks unless all authors jointly sign them. Omitted or ambiguous retractions, however, are not the only reason invalid work is not effectively purged from the scientific literature. Inquiries into US authors (Pfeifer and Snodgrass 1990) showed that the invalid information they produced continues to be used after retraction, since the majority of retractions are not indexed by the journals themselves and no reliable source exists for locating fraudulent or erroneous work.
Some ethical relevance also attaches to the soft censorship by which administrations and companies tend to control and restrict the dissemination of sensitive scientific and technological knowledge on grounds of national security or economic interest (Relyea 1994).
Such ethical problems are rooted in the very structure of the scientific enterprise as it was shaped in the seventeenth century, but they have grown in relevance and extent in recent times. Cases of plagiarism or fraudulent publication have occurred throughout the history of science, some involving such outstanding figures as Ptolemy, Luca Pacioli, Ernst Haeckel, and even Galilei and Newton. It is generally admitted, however, that the relevance and social impact, if not the number, of transgressions of the ethics of scientific publication began to rise in the 1940s. This was when modern science policy was established in view of the expected contributions of the scientific community to national defense and industrial progress. The new reward and funding system intensified the traditional competition between scientists, and it was then, in 1942, that the sociologist L. Wilson apparently coined the expression 'publish or perish.' It could hardly be said, however, that in the first 30 years competition appreciably promoted fraudulent behavior or the pollution of the scientific literature. The turning point has in fact been placed in the late 1970s, when the social structure of the scientific community underwent crucial changes, due mainly to the exponential increase in the number of people involved in research and the proportional (though not absolute) decline in the funds assigned to research activity. The ensuing economic constraint (on which the resources poured in by private companies had little or no effect) led to even stronger competition among scientists for academic rewards, positions, and research funds. One crucial decision was the adoption of publication as the main performance indicator, with productivity measured as the number of publications per unit of time.
An additional tool, citation analysis, was introduced in those very years (Garfield 1972, 1979). Based on the key concept of the 'impact factor' (the ratio of the number of citations a journal receives to the number of papers it published over a particular time period), and later extended to calculate the impact of the work of individuals, groups, or departments, it generated a race to produce, in the shortest possible time, the least publishable unit in the journal with the highest impact factor. At the same time, the growth of the scientific population implied a statistical decline in ethical standards and in the quality of production. Lotka's law of productivity states that in a given field, half of the scientific literature is produced by a population equal to the square root (√N) of all N scientists. De Solla Price pointed out that the exponential growth of the scientific population implies a proportional increase in the number of scientists able to author professional papers, but not important and innovative articles. According to a statistic compiled by the Institute for Scientific Information (ISI), 55 percent of the scientific papers published between 1981 and 1985 in journals indexed by the Institute received no citation at all (Hamilton 1990). This led some scholars to conclude that more than half, or perhaps more than three-quarters, of the scientific literature is worthless; others reacted against the notion that most uncited papers are without value. As for the use of citations as a measure of the impact and quality of a publication, it has been questioned whether this may not lead to abuses such as citation cartels. The same ISI estimated that self-citation accounts for between 5 and 20 percent of all citations.
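The two quantitative notions in the paragraph above, the impact factor and Lotka's law, can be sketched numerically. The figures below are hypothetical and serve only to illustrate the definitions as stated, not to describe any real journal or field:

```python
# Illustrative sketch only; all numbers are hypothetical.

def impact_factor(citations_received: int, papers_published: int) -> float:
    """Impact factor as defined above: citations a journal receives
    over a period divided by the papers it published in that period."""
    return citations_received / papers_published

def lotka_core(n_scientists: int) -> int:
    """Lotka's law as stated above: roughly sqrt(N) of the N scientists
    in a field produce half of its literature."""
    return round(n_scientists ** 0.5)

# A hypothetical journal: 1,200 citations to 400 papers.
print(impact_factor(1200, 400))   # 3.0

# A hypothetical field of 10,000 scientists: about 100 of them
# would account for half of the published literature.
print(lotka_core(10_000))         # 100
```

The sketch makes the asymmetry concrete: as the field grows from 10,000 to 1,000,000 scientists, the "core" producing half the literature grows only from about 100 to about 1,000.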
It is from, or in any case within, those changes that, according to current views (see, for example, Petersdorf 1986), the pathological phenomenon of scientific misconduct arose, to which the major ethical issues in scientific publishing are tightly related. Consequently, the analysis of these issues, the evaluation of the frequency and gravity of the transgressions, and the discussion of how to manage the problems they pose have been conducted as part of the general study of scientific misconduct.
Unfortunately, despite increasing publicity during the past 25 years, attempts to assess the quantitative dimensions of scientific misconduct have not led to conclusive results: its magnitude is unknown and its detection largely serendipitous. Guidelines and actions adopted to manage scientific (and, as a consequence, publication) fraud and other abuses vary according to which of two main approaches they are based on. Sociologists and the vast majority of scientists share the opinion that the frequency and incidence of misconduct did not change appreciably during the twentieth century. The impression that fraud has vastly increased may be the result of heightened self-consciousness in society, coupled with the strong moral expectation that scientists, in the pursuit of reliable knowledge, will live up to a higher standard of probity. This view seems to be substantiated both by the reports of the two most important US authorities responsible for dealing with misconduct cases, the Office of Inspector General (OIG) of the National Science Foundation (NSF) and the Office of Research Integrity (ORI) of the Public Health Service, and by those of analogous authorities in Denmark, Norway, Finland, and Australia. The ensuing strategy is first and mainly intended to preserve the scientific community's right to autonomy and privilege of self-regulation from unnecessary legal or governmental intervention. It also recommends that inquiries primarily be made by local academic bodies, while stressing the crucial role of education in improving the ethical standards of researchers.
The opposite view holds that, though there are no official data on the scale of the problem, the reports suggest that only a comparatively small proportion of misconduct cases is actually detected, and that what emerges is just the tip of an iceberg (Lock 1993). On this view, self-regulation has proven ineffective and unsuitable. Scientists are not trained in the resolution of conflicts or conflicts of interest; their intuitive response is usually wrong, and they tend to set up shaky ad hoc procedures that do not guarantee the accused notice of all the charges, the opportunity to respond to the charges and the evidence, and a decision based on rigorous standards. This view therefore recommends the setting up of national agencies rather than local committees, and focuses attention on the economic and social background. It assumes that the transgressions simply reflect the problems of a science that is today too big, too entrepreneurial, and too competitive to fulfill its very purpose, the cultural and material improvement of mankind, in due and reasonable proportion to its intellectual and economic resources and at the same rate as in the past three centuries. As far as scientific publication is concerned, this second approach led the Committee on Publication Ethics (COPE), created in the UK by 20 editors of scientific journals, to call for an independent agency to detect, handle, and prevent abuses in research publication.
There is, however, no general agreement on this view, which has led some of its proponents to recommend radical adjustments in science policy, first of all concerning the ratio between the scientific population and available funds. While universities, scientific societies, funding agencies, and governments have opposed those proposals tacitly or openly, even their advocates acknowledge the importance of improving ethical standards through the teaching of research ethics, which is currently considered by competent institutions and individual scholars besides, and before, any disciplinary, economic, or legal action. Likewise, all agree on the necessity of suitable ethical and professional guidelines and measures, which have in fact been proposed or adopted, though independently and according to different experiences and aims, by learned societies, agencies, journals, and individual scholars.
The primary importance of ethical (coupled with methodological) education emerges particularly in connection with some problems of scientific publishing that seem less directly linked to the social and economic structure of the scientific system than to the peculiarities of research areas involving cultural diversity or human and animal rights. Though science is largely transcultural, the human community has not yet developed a fully corresponding body of transcultural ethics, and in some areas this may affect both methodology and ethics. Uncritical and unwitting reliance on the researcher's own culturally received principles and values regarding personal and physical identity, health, love, suffering, death, and so on may generate distorted initial observations, which can lead to systematic divergence from the truth in anthropological, psychological, sociological, or medical inquiries (where systematic infringements of human or animal rights, or the exploitation of subjugated populations, may also occur). Scientific publications based on such inquiries are, in principle, to be considered poorly or improperly designed and performed studies. Current ethical and methodological standards do not include clear and effective rules for authors, editors, and peer reviewers, though the problems such inquiries raise have been debated in connection with some controversial cases.
A first general attempt to mitigate the pressure to publish that lies behind most deviant practices is, according to many, to place a ceiling on the number of publications that can be considered in evaluating a candidate for promotion or funding. Some experts have also suggested that the number of scientific journals be drastically reduced (Bracey 1987). Along the same lines, although the use of quantitative indicators of scientific performance seems unavoidable, it has been recommended that impact factors should not by themselves serve as the basis for funding decisions and that the quality of publications must be the primary consideration (Bucholz 1995). While no effective measures (other than severe sanctions) seem foreseeable against major authorship transgressions, such as plagiarism or the publication of fabricated data, journals have been urged to cooperate in cross-checking activities, and research institutes and universities to store the primary data on which publications are based. All sources and methods used to obtain and analyze those data should be fully disclosed when required, and the disappearance of primary data from a laboratory should be considered an infraction of the basic principles of careful scientific practice. On the other hand, since there is no universally agreed definition of authorship, specific criteria and requirements have been invoked, such as an acknowledgment or footnote in every co-authored paper stating the exact contribution of each author, or the signing of a form in which each author affirms having seen the final version and agrees to its publication. According to the proposed criteria, the award of authorship should balance intellectual contributions to the conception of studies or experiments, to the generation, analysis, and interpretation of the data, and to the preparation of the manuscript against the collection of data and other routine work.
If no task can reasonably be attributed to a particular individual, then that individual should not be credited with authorship. In principle, all authors must take public responsibility for the content of their paper, and honorary or guest authorship is in no way acceptable. Against repetitive publication, the so-called 'Ingelfinger rule' has been proposed since 1969 (Angell and Kassirer 1991), according to which papers are submitted to journals with the understanding that they (or their essential parts) have been neither published nor submitted elsewhere. Reviewers are expected to be vigilant about transgressions, and some journals' guidelines state that reviewers who suspect misconduct should write in confidence to the editor. Reviewers themselves are reminded that they are bound by a duty of confidentiality in the assessment of a manuscript, and are asked to disclose conflicts of interest and to provide speedy, accurate, courteous, unbiased, and justifiable reports. The submitted manuscript should not be retained or copied, and neither editors nor reviewers should make any use of its data, arguments, or interpretations without the authors' permission. Editors should regard their function as ensuring that authors are treated fairly, and their decisions to accept or reject a paper should be based only on the paper's importance, originality, and clarity. To avoid the suppression of originality and creativity, it has also been recommended that studies challenging previous work be given an especially sympathetic hearing and that independent ombudsmen for journals be appointed. A few suggestions have also been advanced on the subject of external pressures, for example that editorial decisions must not be influenced by advertising revenue or reprint potential; but it seems that little can be done about the secrecy imposed by industries or governments on sensitive research fields.
Envisaged actions and sanctions range from a letter of reprimand and warning as to future conduct, or the publication of a notice of redundant publication or plagiarism, to formal withdrawal or retraction of the paper from the scientific literature, informing other editors and indexing authorities, or reporting the case to authorities or organizations that can investigate and act with due process.
There is, however, no reliable insight yet into the impact those guidelines and measures have had, or may have, in the long term.
- Angell M, Kassirer J P 1991 The Ingelﬁnger rule revisited. New England Journal of Medicine 325: 1371–3
- Bracey G W 1987 The time has come to abolish research journals: too many are writing too much about too little. Chronicle of Higher Education 30: 44–5
- Bucholz K 1995 Criteria for the analysis of scientiﬁc quality. Scientometrics 32: 195–218
- Budd J M, Sievert M-E, Schultz T 1998 Reasons for retraction and citations to the publications. Journal of the American Medical Association 280: 296–7
- Campanario J M 1996 Have referees rejected some of the most-cited papers of all times? Journal of the American Society for Information Science 47: 302–10
- CIOMS 1991 International guidelines for ethical review of epidemiological studies. Law, Medicine & Health Care 19(3–4) (Appendix I): 247–58
- Committee on Publication Ethics 1998 Annual Report 1998. BMJ Publishing Group, London
- De Solla Price J D 1963 Little Science, Big Science. Columbia University Press, New York
- De Solla Price J D 1969 Measuring the size of science. Proceedings of the Israel Academy of Science and Humanities 4: 98–111
- Friedman P J 1990 Correcting the literature following fraudulent publication. Journal of the American Medical Association 263: 1416–9
- Garﬁeld E 1972 Citation analysis as a tool in journal evaluation. Science 178: 471–9
- Garfield E 1979 Citation Indexing: Its Theory and Application in Science, Technology and Humanities. Wiley, New York
- Hamilton D P 1990 Publishing by—and for?—the numbers. Science 250: 1331–2
- Horrobin D F 1990 The philosophical basis of peer review and the suppression of innovation. Journal of the American Medical Association 263: 1438–41
- LaFollette M C 1992 Stealing into Print: Fraud, Plagiarism and Misconduct in Scientiﬁc Publishing. University of California Press, Berkeley, CA
- LaFollette M 2000 The evolution of the ‘scientiﬁc misconduct’ issue: An historical overview. Proceedings of the Society for Experimental Biology and Medicine 224: 211–5
- Lock S 1986 A Difficult Balance: Editorial Peer Review in Medicine. ISI Press, Philadelphia, PA
- Lock S 1993 Research misconduct: a resume of recent events. In: Lock S, Wells F (eds.) Fraud and Misconduct in Medical Research. British Medical Journal, London, pp. 5–24
- Petersdorf R G 1986 The pathogenesis of fraud in medical sciences. Annals of Internal Medicine 104: 252–4
- Pfeifer M P, Snodgrass G L 1990 The continued use of retracted, invalid scientiﬁc literature. Journal of the American Medical Association 263: 1420–3
- Relyea H C 1994 Silencing Science: National Security Controls and Scientiﬁc Communication. Ablex, Norwood, NJ
- Serebnick J 1991 Identifying unethical practices in journal publishing. Library Trends 40: 357–72
- Shankman P 1996 The history of Samoan sexual conduct and the Mead–Freeman controversy. American Anthropologist 98(3): 555–67
- Sponholz G 2000 Teaching scientiﬁc integrity and research ethics. Forensic Science International 113: 511–4
- Stossel T P 1985 SPEED: An essay on biomedical communication. New England Journal of Medicine 313: 123–6
- Whiteley W P, Rennie D, Hafner A W 1994 The scientiﬁc community’s response to evidence of fraudulent publication. Journal of the American Medical Association 272: 170–3