Predictive Medicine Research Paper


Abstract

In the last decades of the twentieth century, predictive medicine gained currency as an important ideal in biomedical research and health care. Research into the genetic and molecular basis of disease suggested that the insights gained might be used to develop tests that predict the future health state of asymptomatic individuals. The desirability of this ideal and the technologies developed to realize it have been the subject of much ethical debate and analysis. This contribution discusses the most important ethical concerns about predictive medicine and points out how these have coevolved with scientific and technological developments in the field. The focus is on genetics- and genomics-based predictive testing of individuals for health purposes, but many issues are pertinent to nongenetic predictive testing as well. Early on, many ethical analyses asked how knowledge of one's future health state might affect an individual's quality of life, family relations, and social relations. Later discussions shifted toward the interpretation of test results, toward what should count as a benefit of predictive testing, and toward who should define this. In addition, there have been recurring worries about potential shifts in attributing responsibility for one's health and about the impact of predictive testing on society.



Introduction

Prevention has been a long-standing ideal in public health policies around the world. In the past, preventive interventions were usually low-tech and targeted at populations. Recent developments in biomedical research, however, seem to open up possibilities to predict the health risks of individuals and thus to pursue "personalized prevention." In particular, the surge of biomedical research in genetics, genomics, and molecular biology is often justified in terms of the "predictive and preventive medicine" it might enable. This ideal has invited much ethical reflection: is the practice of predictive medicine as desirable as suggested?

After sketching the historical background of predictive medicine, this contribution discusses a number of the attendant ethical issues, while also pointing out how ethical concerns have shifted with developments in science and technology. The focus is on predictive testing of individuals for health purposes, not on prediction for reproductive purposes. Although the examples mainly refer to prediction based on genetic and genomic testing, many issues also pertain to predictive testing with nongenetic molecular markers.




History And Development: Background

Predicting the future of individual patients has always been part and parcel of medicine. Providing a prognosis of a patient's future was an important part of Hippocratic medicine, and prognostication has remained a central task of medical professionals ever since. With the nineteenth-century rise of statistics and the increasing interest of states in furthering public health, a different form of prediction emerged. By collecting data about a population's health state and linking these to its other characteristics (like lifestyle or environment), subgroups "at risk" of future health problems could be identified. Such epidemiological knowledge was used to justify and design public health interventions targeting these subgroups of the population (like smokers or obese people). Prediction in prognostics is usually part of clinical care, since it focuses on the diseased individual and concerns the expected disease course in this particular person. Prediction in epidemiology, in contrast, is part of public health, because it focuses on populations and concerns health risks and the prevention of disease in general. For a long time, these two predictive practices were clearly distinct, not only in aim but also with regard to the professionals involved and the type of knowledge and tools used.

This gradually changed in the late twentieth century, with the growth of biomedical interest in genetics and the start of the Human Genome Project. The increasing awareness that genes play an important role in disease causation brought along the idea that identifying a person's genetic makeup might enable predictions about his or her future health. If population studies succeeded in linking genes to disease outcomes, genetic testing of individuals could indicate what the future has in store for a particular person. Such predictions might subsequently propel an individual to take preventive action. Thus, the preventive aim of public health merged with the orientation toward the individual that is typical of prognostics and clinical care in general.

By the beginning of the twenty-first century, the growing body of research in medical genetics had shown that the assumption that genes are linked in a clear and unambiguous way to states of health and disease was far too optimistic. The search for reliable predictors of disease continued, but attention shifted, first from single genes to combinations of genetic variations (targeted in genomics research) and subsequently to almost any bodily molecule (like RNA, proteins, enzymes, metabolites, or neurotransmitters, targeted in transcriptomics, proteomics, metabolomics, and the like). At present, huge efforts and large amounts of money are spent on the identification of so-called molecular biomarkers. These biomarkers may serve many different functions; the subset of predictive biomarkers measures a change in molecular bodily processes that is associated with the later emergence of disease. Changes in the concentration of the tau protein in cerebrospinal fluid, for example, are now thought to precede the emergence of symptoms typical of Alzheimer's disease. Testing for such biomarkers thus seems to enable individualized predictions in a way analogous to predictive DNA testing. It is also subject to many of the ethical concerns voiced in the context of predictive genetic testing.

Conceptual Clarification

Genetics and genomics are both scientific fields studying the genetic constitution of organisms. Whereas genetics investigates the functioning and composition of the single gene, genomics addresses all genes and their interactions, in order to identify their combined influence on the growth and development of the organism.

The genome is the entire set of genetic material (DNA) of an individual; the exome is the subset of this genetic material that has a coding function. Current estimates suggest that about 1–2 % of the whole genome has a coding function. A single nucleotide polymorphism (SNP) is a difference in a single DNA base pair found among individuals. SNPs are the most common type of genetic variation.

A biomarker, finally, is usually defined as a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention. A gene or a set of genetic variations can be a biomarker; however, biomarkers may also be based on other types of molecules.
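To make these definitions more tangible, the sketch below shows how a SNP genotype and a biomarker measurement might be represented as simple data records. All identifiers and values are hypothetical, chosen only for illustration.

```python
# Illustrative sketch: minimal data records for a SNP genotype and a
# molecular biomarker measurement. All values are hypothetical.

# A SNP is a single base-pair position where individuals differ.
# Each person carries two alleles (one per chromosome copy).
snp = {
    "rsid": "rs0000001",     # hypothetical identifier
    "chromosome": "17",
    "position": 43_044_295,  # base-pair coordinate (illustrative)
    "alleles": ("A", "G"),   # reference and alternative base
}
genotype = ("A", "G")        # this individual is heterozygous

# A biomarker is any objectively measured indicator of a biological or
# pathogenic process; it need not be genetic.
biomarker = {
    "name": "CSF total tau",  # protein level in cerebrospinal fluid
    "value": 450.0,           # hypothetical concentration
    "unit": "pg/mL",
}

print(snp["rsid"], genotype, "-", biomarker["name"], biomarker["value"], biomarker["unit"])
```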

Ethical Dimensions

When the rise of genetics and the start of the Human Genome Project in the late twentieth century fueled the expectation that predictive genetic testing might become a reality, most ethical questions centered on its potential impact. Is it truly desirable to learn what the future has in store for you with regard to health and disease? What good is to be expected from knowing that somewhere in the future one will contract bowel cancer, sickle cell anemia, or cystic fibrosis? On the one hand, such anticipatory knowledge might help the person tested to prepare for the disease. On the other hand, particularly if nothing can be done, such knowledge may be very depressing and have a huge negative impact on a person's life and the lives of relatives. Moreover, the information that particular individuals are prone to a specific disease may invite other parties to use that knowledge for their own purposes, leading to discrimination and exclusion. Such worries were colored, moreover, by the (mis)use of genetic knowledge for eugenic purposes in the nineteenth and first half of the twentieth century, when in many countries whole families with purportedly undesirable hereditary characteristics were sterilized or even exterminated to "breed" a better human race (Kevles 1985).

Predictive Testing For Monogenetic Disease

One of the earliest predictive genetic tests that became available targeted the gene for Huntington's disease, a progressive neurological disorder associated with motor and cognitive problems as well as personality, mood, and behavioral changes. Symptoms in the majority of cases become manifest around the age of 40, and patients die relatively young. The disease is clearly hereditary; individual members of a "Huntington family" usually know they are at risk of the disease, but in the past they had to wait and see whether or not it would strike them personally. The identification of the gene for Huntington's disease (in 1993) enabled predictive testing of healthy individuals. It was expected that the test result would have a positive impact on the quality of life of those proven not to carry the mutation, relieving them of worry. In contrast, the quality of life of those who do carry a mutation might decrease, as they would learn that they would become a patient and probably die early. Similar concerns were voiced about predictive testing for other diseases, like breast/ovarian cancer, thalassemia, or hypercholesterolemia, even though the availability of effective treatment for some diseases might tip the scales in favor of testing. In such cases, however, predictive testing followed by treatment might still have far-reaching consequences for the person involved. Women carrying a mutation for breast/ovarian cancer might have to subject themselves to regular screening or opt for preventive removal of breasts and ovaries. In the case of hypercholesterolemia, individuals might have to take drugs for the rest of their lives. And in the case of thalassemia, the knowledge that both parents are mutation carriers may influence the decision whether or not to start a family.

In view of its potentially far-reaching psychosocial impacts, both genetic professionals and ethicists argued that predictive testing should be embedded in extensive counseling procedures before and after testing. Such counseling, moreover, should aim for autonomous decision-making by the client, not for testing as many individuals as possible. Individuals have a "right not to know" or, as some have formulated it, a "right to an open future," unbothered by predictive knowledge about what will happen later on (Chadwick et al. 1997). This approach fitted with the professional ethos of most clinical geneticists, in which nondirective counseling had been promoted as a means to stay away from the specter of eugenics. The international guidelines regulating predictive genetic testing for Huntington's disease, drawn up in 1994 by the International Huntington Association and the World Federation of Neurology, clearly reflect this ethical reasoning. They stressed the importance of autonomous, informed decision-making and of nondirective counseling to facilitate this (MacLeod et al. 2013). Even though there is much debate as to whether nondirectiveness is really possible, these guidelines soon became a model for guidelines concerning other predictive tests and for good clinical practice regarding predictive genetic testing more generally. They were recently updated to accommodate evidence and novel technologies that have emerged since 1994, but the ethical principles informing the guidelines have not really changed (MacLeod et al. 2013).

Predictive Genetic Testing, Autonomy, And Family

One of the problems with an ethical approach that stresses autonomy, however, is that the meaning of “autonomy” is not straightforward in the context of genetic testing. Since genetic diseases run in the family and the test targets a gene that may be (or have been) passed on to next generations, both the decision-making process and the test result have implications for family members. If most of your relatives decided to test, it is probably harder to stick to your desire not to know. And if a parent knows he/she is a carrier, should he/she inform his/her children, and if so, when? What if an (adult) child wants to be tested, but the parent does not want to be informed of his or her carrier status? After all, if the child is a mutation carrier, the parent knows he/she is a carrier too.

In particular, the question of whether children should be tested for late-onset diseases has received a lot of attention (Borry et al. 2006). Even if we grant that adults should have the right to decide for themselves whether or not they want to be tested, the principle of autonomy is harder to apply in the case of minors. After all, they are (supposedly) not yet able to make well-considered decisions. Moreover, the impact of predictive knowledge may be bigger at a young age. The internationally endorsed view has therefore been that children should be tested only for diseases for which treatment or prevention is available and that all other testing should be postponed until the child can decide for himself or herself. This view is based on the idea that testing should be allowed only if the child clearly benefits and on the view that a child's future options should remain open until he/she has matured. There is more controversy regarding testing for childhood-onset diseases for which no intervention is available. In these cases, after all, it is unclear whether the child actually benefits (Borry et al. 2006).

Even in the case of predictive testing of adults, however, putting autonomy center stage does not solve all ethical issues. Leaving the decision to the individual does not help resolve potential tensions between personal values and those of family members or between the desire (not) to know and the value of good family relations (Rehmann-Sutter and Muller 2009). As several ethicists have argued, the isolated individual that is so often assumed in autonomy discourse does not seem to be a fitting point of reference for the ethics of genetic testing. Hermeneutical, communitarian, and care ethics views pointing out that the self-respect and development of an individual are always dependent on his/her relations to others may be more to the point (Rehmann-Sutter and Muller 2009). Even if counseling aims for individual autonomy, it seems wise not to interpret this as an invitation to relegate all deliberations to the private sphere. Individuals deliberating whether or not to test or how to deal with results might profit from firsthand stories about how genetic testing is dealt with in other families. Such stories show that actual deliberations and ways of doing include a broad array of values and that people often have inventive ways to accommodate different, seemingly conflicting values and interests within a family.

Social Impacts Of Predictive Genetic Testing

In addition to psychological impacts, predictive testing may have broader social impacts for the tested individual. Knowledge of carrier status (e.g., for Huntington's disease but also for other serious diseases) may be relevant for employers and insurance companies, among others. This has engendered fear of genetic discrimination: "the differential treatment of asymptomatic individuals or their relatives on the basis of their real or assumed genetic characteristics" (Otlowski et al. 2012). Mutation carriers might, for example, be refused a job or life insurance, or the costs of health insurance might be increased. The use of genetic risk knowledge by insurance companies and employers may sometimes be justified by their legitimate interests, for example, when a client who knows he will die young wants to contract life insurance for a very large amount of money, or when the risk concerns a disease that is closely associated with the job at hand. In most cases, however, many ethicists agree that treating mutation carriers differently is unjust and, in the case of insurance, violates the value of solidarity and risk sharing that underlies the idea of insurance in general. Several countries have drawn up some form of regulation to prevent genetic discrimination in the public and sometimes also in the private sector. Nonetheless, recent research has shown that genetic discrimination is a common and global phenomenon (Otlowski et al. 2012).

Predictive genetic testing may also affect society as a whole. Several critics have pointed out that predictive genetic testing might contribute to the "geneticization" of society: the tendency to interpret all individual differences in terms of genes (Hedgecoe 2009). This concept and the critique it puts forward are reminiscent of earlier, more broad-ranging critiques of the "medicalization" of society, which targeted the practical, cultural, and political implications of labeling a phenomenon as a "disease." In comparison to the medicalization critique, geneticization discourse focuses less on the power shifts entailed by the labeling processes; it mainly addresses identity issues and their political implications. Geneticization is considered a problematic phenomenon for two reasons. First, labeling differences as "genetic" suggests that these are innate and therefore hard to change; second, a genetic explanation (like a disease label) invites biological remedies where social solutions might be more fitting. Attributing health differences to genetic variety, for example, may divert attention from inequalities in income or housing and thus depoliticize health problems. The geneticization thesis has been hotly disputed, in particular by social scientists who question its empirical adequacy (Hedgecoe 2009). The thesis seems to ignore the active role of individuals themselves in assuming a genetic identity and the way such an identity invites them to take responsibility for their own and others' lives (Novas and Rose 2000). In practice, members of families with Huntington's disease, for example, take an active role in the way they deal with the (possibility of) predictive testing. In doing so, they shape life strategies that they think are responsible, develop their own expertise, and start co-shaping science, for example, by lobbying for specific types of research (Novas and Rose 2000). Finally, the geneticization thesis hinges on the idea that genes are viewed as automatically causing disease (and other phenotypic traits). This view has been put into perspective since the concept of geneticization was coined.

Predictive Testing For Multifactorial Disease

As indicated above, predictive genetic testing for Huntington's disease has been a paradigm case for ethical reflection on predictive testing in general. Since its emergence in the 1990s, however, it has become clear that the "one gene, one disease" model that makes genetic testing for Huntington's so informative does not apply to most other diseases. A distinction is now made between "monogenetic disease," caused by a mutation in one gene, and "multifactorial disease," caused by the interactions of (possibly multiple) genes and nongenetic factors like lifestyle and environment. The majority of diseases are now thought to be multifactorial. This implies that most predictive genetic tests indicate an increased (or decreased) risk of disease, not a certain fate. This disenchantment with genetics brought about a shift in the search for predictive markers. Hopes were now set on genomic and on nongenetic biomarkers. Genomic markers try to predict future health by combining many small variations in the genome. Nongenetic markers are usually based on molecular changes implicated in "real-time" (but presymptomatic) disease processes. In practice, however, these technologies also indicate increased or decreased risk only. Predictive testing, then, whether focusing on genetic, genomic, or other molecular biomarkers, produces probabilities, not certainty.

As a result, many of the ethical issues raised at the dawn of predictive genetic testing seem to lose some of their urgency in the case of multifactorial diseases. If test results do not announce a dramatic, certain fate, their impact is probably less severe than anticipated. This does not mean, however, that predictive testing for multifactorial diseases is ethically unproblematic. Novel, thorny issues abound, often closely related to the interpretation of test results: How to make sense of the probabilities offered, and how to relate this knowledge to existing identities, ways of life, and the values embedded in these? How to deal with the uncertainties surrounding the knowledge produced? Such questions require attention to the details of predictive testing.

BRCA-testing is an instructive example here, even though it is usually seen as a test for monogenetic disease. BRCA1 and BRCA2 are two genes known to be implicated in hereditary breast and ovarian cancer. Women carrying a mutation in either gene have a significantly increased, but not 100 %, chance of both diseases. Moreover, the two BRCA-genes explain only a limited share of all hereditary breast and ovarian cancers (suggesting that other, unknown genetic factors may be involved). This poses a problem when healthy women from families with a clear pattern of hereditary breast and ovarian cancer apply for predictive testing. If no mutation is found, it is not clear what this means: the woman may be free from a mutation that runs in the family, but she (and her relatives) may also have mutation(s) in one or more other genes that were not tested or have not been linked to breast/ovarian cancer. In both the UK and the Netherlands, this uncertainty is dealt with by asking healthy women who opt for a predictive BRCA-test to engage their diseased family members in the testing process (Boenink 2011). Testing one or more (former) patients first helps to interpret a subsequent test result of the healthy person. This procedure, however, turns predictive testing into a family affair and assumes that healthy women are willing and able to involve their relatives. In the USA, in contrast, BRCA-testing is offered to healthy individuals directly (even the involvement of medical professionals may be quite limited). This means, however, that the test result is often much less informative, because of the uncertainties at stake (Boenink 2011). It seems, then, that there is a trade-off between the value of individual autonomy and the value of informative results and that the actual trade-off depends on the "testing architecture": the way technical and social elements hang together in a testing practice. This example also shows that speaking of "the" BRCA-test (or of "the predictive test for X" in general) may be misleading; depending on the architecture, predictive testing can mean different things and therefore have different impacts in different settings.

Dealing with uncertainty is a major issue in predictive tests for multifactorial diseases in other ways as well. Since the test produces probabilities, individuals tested have to decide when (if ever) a probability is high enough to justify action. Angelina Jolie apparently considered a lifetime risk of 87 % for breast cancer and 50 % for ovarian cancer sufficient to justify preventive mastectomy and ovariectomy. In cultures where the presence of breasts is experienced even more strongly as an integral part of female identity, the "options" offered by BRCA-testing may cause serious moral dilemmas. However, it is important to be aware that in many cases the probabilities associated with genetic mutations are much lower than the ones mentioned above, in both an absolute and a relative sense. A slippery slope may emerge in which ever smaller risks are considered relevant, in particular since many people are prone to "anticipated regret": the thought that it is better to act now to avoid a small chance of an adverse event in the future than to be sorry when the event actually occurs. It has been suggested that, instead of focusing on "risk" and striving to reduce all risks to zero, it may be more fruitful to accept the continuous vulnerability of human health as a given and to strengthen resilience (Palmboom and Willems 2010).
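The distinction between absolute and relative risk invoked here can be made concrete with a small worked example. The sketch below uses hypothetical numbers chosen only to illustrate the arithmetic; it is not based on any particular test.

```python
# Worked example (hypothetical numbers): a doubled relative risk
# can still correspond to a small absolute risk.

baseline_risk = 0.015  # assumed lifetime risk in the general population (1.5 %)
carrier_risk = 0.030   # assumed lifetime risk for carriers of some variant (3.0 %)

relative_risk = carrier_risk / baseline_risk      # 2.0: "twice the risk"
absolute_increase = carrier_risk - baseline_risk  # 1.5 percentage points

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute increase: {absolute_increase:.1%}")

# Contrast this with the BRCA figures cited above: an 87 % lifetime risk
# is high in absolute terms, which is why preventive surgery was
# considered at all. Most SNP-based risk differences are far smaller.
```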

Direct-To-Consumer Testing

When genetic research boomed, the issue of gene patenting attracted a lot of ethical and legal discussion. Several companies, but also academic institutions, started to file for patents on specific genetic tests and even on genes or genetic sequences themselves. The American company Myriad, for example, for years sought to patent both the BRCA1 and BRCA2 genes, as well as several elements of BRCA-analysis, in both the USA and Europe. This led to huge controversies and a series of conflicting legal judgments, which seems to have finally ended (at least for the USA) in June 2013, when the US Supreme Court ruled that segments of DNA making up the human genome are not eligible for patenting because they are a product of nature (Kesselheim et al. 2013). Reviewing the full debate on gene patenting goes beyond the scope of this research paper. In practice, the opportunities for profit turned out to be smaller than expected anyway because, as discussed above, most predictive genetic tests were less informative than anticipated. Hopes to profit commercially from genetics (or rather genomics) were nonetheless refueled when, in the late 1990s, science changed track and started to perform so-called genome-wide association studies (GWAS). Instead of focusing on single genes, these studies determine the correlation between a large number of genetic variations, usually single nucleotide polymorphisms (SNPs), and the occurrence of disease in large populations. The idea is that a large number of variants, each with a very small effect on disease risk on its own, might in combination explain the difference between diseased and disease-free groups and thus predict individual risks. Chips were developed that screen individual samples of blood or saliva for sometimes more than a million SNPs. With the knowledge produced by GWAS, it has become possible to test an individual sample and produce a "risk profile," indicating an individual's increased or decreased risk for a number of diseases at the same time.
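The logic of turning many small SNP effects into one risk profile can be pictured as a weighted sum, as in the minimal sketch below. All effect sizes, SNP identifiers, and genotypes are invented for illustration; real GWAS-based scores involve thousands of variants and careful calibration to a reference population.

```python
import math

# Hypothetical per-SNP effect sizes (log odds ratios per risk allele),
# of the kind a GWAS might estimate. Real scores use many more SNPs.
effects = {"rs0001": 0.05, "rs0002": 0.02, "rs0003": -0.03, "rs0004": 0.08}

# One individual's genotype: the number of risk alleles (0, 1, or 2) per SNP.
genotype = {"rs0001": 1, "rs0002": 2, "rs0003": 0, "rs0004": 1}

# Polygenic score: the weighted sum of risk-allele counts.
score = sum(effects[snp] * genotype[snp] for snp in effects)

# Translate the score into a risk estimate relative to an assumed baseline
# disease probability, via the odds scale (a common simplification).
baseline_prob = 0.10
baseline_odds = baseline_prob / (1 - baseline_prob)
odds = baseline_odds * math.exp(score)
prob = odds / (1 + odds)

print(f"Polygenic score: {score:.3f}")
print(f"Estimated risk: {prob:.1%} (baseline {baseline_prob:.0%})")
```

Note that in this toy calculation the score shifts the baseline risk only modestly (from 10 % to roughly 12 %); this is exactly the kind of probabilistic, hard-to-interpret output discussed in the previous sections.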

This possibility was taken up by newly emerging companies like deCODE Genetics, 23andMe, and Navigenics, which from 2007 onward started offering direct-to-consumer (DTC) predictive testing via the Internet. Customers can order a "spit kit," send in saliva, and access their results online a number of weeks later. Besides testing SNPs related to health conditions, many companies offer tests for other phenotypic traits (like alcohol flush or eye color), drug response, ancestry, and/or paternity. Providers claim that this information empowers users to make their own choices and take responsibility for their own health (or even life). DTC-testing has caused a huge ethical controversy and is the subject of ongoing regulatory attempts in many countries. Partly as a result, this is a highly volatile market. Although many commercial initiatives have emerged, success is in no way guaranteed. Providers frequently change their products, business models, or marketing strategies, and rapidly emerge and disappear, sometimes to reappear in a different country (Tutton 2014).

The ethical concerns related to DTC-testing are twofold. First, there are several issues with the validity and interpretation of results, suggesting that customers may be deceived for profit's sake. In addition, there are worries about the impact of providing results without a professional intermediary (Tutton 2014). With regard to the first, there are doubts concerning companies' claims about the clinical validity and utility of their tests. The commercial interests of the companies offering such testing often seem to lead to inflated promises. Many critics point out that knowledge about the meaning of SNPs is limited and that the interpretation of the variation found is therefore challenging. GWA studies have as yet not produced many results that are clinically useful, and many have not been reproduced. Moreover, the vast majority of populations studied in GWAS are of white European descent, so results may not be applicable to people with a different ancestry or ethnicity (thus adding to the injustice caused by the fact that most predictive testing predominantly targets diseases prevalent in affluent societies). In practice, different companies look at different sets of SNPs and use different GWAS and different sets of reference populations to interpret test outcomes. As a consequence, the same client may receive very different results from different companies (Tutton 2014).

Questions have also been raised regarding the impact of the test results and the associated health advice. Customers might be unnecessarily frightened or erroneously relieved about their risk when no professional counseling is provided. It might even cause them to use health-care services without clear clinical need. Several commentators have argued that to prevent harm, tight regulation of DTC-testing is warranted. They suggest that the norms used to regulate clinical genetic testing should apply. Others argue that DTC-testing should not be put on a par with clinical genetic testing, because the utility of such tests goes beyond the medical domain. They can provide self-knowledge and entertainment, in addition to empowering individuals to make their own life choices. Whether or not a test is useful, this argument goes, can and should be decided by individual clients alone. In response to attempts at regulation (e.g., by the American FDA), many companies have indeed started downplaying claims about medical utility, stressing the “personal utility” of their tests. Others have started to involve medical professionals (Tutton 2014).

Framing DTC-testing as "medical" or as "personal" indicates the relative weight given to the ethical principles of "protection against harm" and "individual choice." The usefulness of both principles has been questioned, however, because they presuppose boundaries between medical and nonmedical information, experts and lay people, patients and donors, and academic research, health care, and commercial companies that are rapidly eroding (Prainsack et al. 2008). Increasingly, DTC companies offer their services in combination with options to actively participate in research. The company 23andMe, but also noncommercial organizations like DIY Genomics, encourage clients/participants to provide medical and lifestyle data in addition to their genetic profile, to set up large databases that may enable further research. Their claim is that clients will personally profit from the diagnostic and/or therapeutic applications these biobanks may generate and contribute to the public good at the same time. In "crowdsourced" projects, participants may be offered a say in what should be investigated and in the interpretation of results. In such cases, decisions about the meaning and utility of results are no longer the prerogative of a limited number of experts, nor left to individuals; they are becoming a collective effort of (both commercial and academic) geneticists, bioinformaticians, clients, and (web) communities (Prainsack et al. 2008).

Whole Genome Analysis

The most recent development of note is that sequencing the whole genome is becoming so cheap that it can be applied almost routinely. The “thousand-dollar genome” may become a reality soon. Many ethical issues raised before are relevant to whole genome sequencing and analysis (WGS/WGA) as well. However, there are also ethical questions more specific to WGS/WGA. First of all, sequencing in principle opens up all “raw” genetic data of an individual. In practice, however, filters are often applied, sequencing only those parts of the genome that have a coding function (the “exome”) or that are most relevant to the knowledge aimed for. To make sense of the raw data, moreover, the sequence has to be analyzed. Such analysis, again, can be performed on all data or on filtered data only. The choice whether or not to apply filters (and if so, which ones) is crucial for the knowledge gained and the potential impact of this knowledge (Bredenoord et al. 2011). This is all the more relevant since WGA may reveal information with regard to health and disease but also about other characteristics, like personality or cognition. It is crucial, then, who decides how broad or targeted analysis and subsequent disclosure of results should be. For research settings, it has been suggested that enabling participants to choose among different “packages” of genetic information with different clinical utilities and/or different significance might be a good way to steer between the values of autonomy and clinical utility or actionability (Bredenoord et al. 2011). A similar procedure could of course be used outside research settings.
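The filtering step described above can be pictured as a simple selection over the full variant list, as in the sketch below. The variant records, gene names, and panel are hypothetical; real pipelines operate on standardized formats such as VCF, but the principle is the same: which filters are applied determines what can be learned and disclosed.

```python
# Illustrative sketch of filtering whole-genome variants before analysis.
# All variant records and gene names are hypothetical.

variants = [
    {"gene": "GENE_A", "region": "exon",   "change": "c.123A>G"},
    {"gene": "GENE_B", "region": "intron", "change": "c.456+12C>T"},
    {"gene": "GENE_C", "region": "exon",   "change": "c.789G>T"},
]

# Filter 1: restrict analysis to coding regions (an "exome" filter).
exonic = [v for v in variants if v["region"] == "exon"]

# Filter 2: restrict further to a panel of genes deemed relevant to the
# question at hand, e.g., an agreed "package" of actionable genes.
panel = {"GENE_A"}
in_panel = [v for v in exonic if v["gene"] in panel]

print(len(variants), "raw ->", len(exonic), "exonic ->", len(in_panel), "in panel")
```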

WGA, when it is not targeted at a specific disease or the explanation of a specific problem, can be considered a form of screening (Dondorp and de Wert 2013). The basic idea is that WGA results may inform choices of lifestyle, prevention, and treatment, as well as reproductive decisions. As in other cases of screening, potential drawbacks of WGA are that one may receive unwanted or unsought-for information, that it may increase worries about both negative and uncertain outcomes, and that it may lead to unnecessary use of health-care services. A complicating factor is, again, that WGA is likely to reveal much information of uncertain significance. Typical moments to perform a WGA could be at birth or in early adulthood. At present, there seems to be broad international consensus that it should not be performed in newborns unless it is used to clarify a disorder, hence for diagnostic purposes. Although some have argued that predictive knowledge about late-onset disease may be of value to (parents of) children, most ethical bodies judge that the knowledge gained would infringe on the child's anticipated future autonomy (Dondorp and de Wert 2013). As for WGA in adults, some argue that the value of such an analysis will soon be evident because of the anticipated possibilities for tailoring prevention and treatment to one's genomic profile. Others suggest that, regardless of the quality of the information gained, individuals have a right to know about their genetic constitution. For the moment, however, many seem to agree that in the absence of valid, reliable, and actionable predictors, such a right is rather premature (Dondorp and de Wert 2013).

Finally, analysis of the genome is of course dependent on the state of knowledge regarding the link between genetic variations and health status (or other characteristics, for that matter). Since this knowledge is evolving, any interpretation of sequencing results has a temporary character: future insights may change the meaning of what was revealed. This raises the issues of where and how to store one's genetic sequence and of who (if anyone) is responsible for communicating novel insights that might affect its meaning (Dondorp and de Wert 2013).

Individualizing Responsibility And The Erosion Of The Common Good

Besides the myriad issues related to the interpretation of the predictive information that different technologies produce, there are ethical and social issues regarding the responsibility to act on this information. If risk of disease is increasingly framed as a bodily characteristic, how does that affect thinking about responsibility for health? Whereas a genetic mutation for a monogenetic disease may be interpreted as an excuse for one's health state (since the fault is in your genes, you cannot be blamed), probabilistic predictions seem to increase one's personal responsibility. Now that you can know which diseases you are prone to, you can also be expected to take precautionary measures. Increasing "empowerment" in that case means increasing individual responsibility. Whether or not individual responsibility for one's health gains in strength as a social norm, and whether this is associated with a concurrent decrease in collective responsibility for health, is as yet unclear. It should at least be noted that these are not automatic, unavoidable effects of the availability of predictive tests (Juengst et al. 2012). The conditions under which such tests are implemented can make a lot of difference. And even if the architecture of a predictive test invites individual "responsibilization," users may respond in unanticipated, creative ways, for example, by bringing together those with similar risks and taking collective action to improve health care for this group.

It has been pointed out, finally, that the central role currently assigned to (genetic and nongenetic) biomarker testing in predictive medicine shifts the aim and meaning of prevention. Whereas prevention in the past was seen as a crucial pillar of collective policies to improve public health, current thinking about prevention tends to focus on improving the health of individuals (Dickenson 2013). This shift may encourage a stance toward prevention in which personal interest, not the common good, is paramount. This in turn could contribute to an erosion of the notion of public health and decrease support for and willingness to participate in interventions targeting public health (e.g., in vaccination policies). In view of the frequently limited benefits of individualized prediction and prevention, moreover, spending public funds on predictive testing may be much less cost-effective than spending them on collective measures like environmental infrastructure, vaccination, and health education (Juengst et al. 2012; Dickenson 2013).

Conclusion

Biomedical research into genetics, genomics, and molecular medicine in the late twentieth century has brought along a vision of predictive medicine. In this vision, testing of an individual's molecular functioning enables predictions about his or her future health state. Such predictions, it is thought, facilitate disease prevention and early treatment but also help to prepare for the future and to make life choices. Fueled by the high expectations about genetics and the Human Genome Project, ethicists early on were concerned about the potentially negative impacts of such predictive knowledge on an individual's quality of life, family relations, and social relations. The right to an open future emerged as an important argument to balance against the potential benefits of predictive testing. Moreover, autonomous decision-making was put forward as an important condition for ethically acceptable predictive testing, even though it may be hard to achieve in the case of genetic information. When it became clear that most predictive testing (whether based on genetic, genomic, or other biomarkers) results in probabilistic information, ethical attention increasingly focused on issues related to the interpretation and understanding of test results. Awareness has grown that the conditions under which such tests are offered, used, and interpreted are crucial to their actual impact. Moreover, it is now acknowledged that, because the knowledge base in molecular medicine evolves rapidly, interpretations of test results evolve as well, complicating the assessment of their impact. Next to these concerns about testing practice, worries remain about potential shifts in thinking about responsibility for health, as well as about broader impacts of predictive testing on society and culture.

Bibliography:

  1. Boenink, M. (2011). Unambiguous test results or individual independence? The role of clients and families in predictive BRCA-testing in the Netherlands compared to the USA. Social Science & Medicine, 72(11), 1793–1801.
  2. Borry, P., Stultiens, L., Nys, H., Cassiman, J.-J., & Dierickx, K. (2006). Presymptomatic and predictive genetic testing in minors: A systematic review of guidelines and position papers. Clinical Genetics, 70(5), 374–381.
  3. Bredenoord, A. L., Onland-Moret, N. C., & van Delden, J. J. M. (2011). Disclosure of individual genetic data to research participants: The debate reconsidered. Trends in Genetics, 27(2), 41–47.
  4. Chadwick, R., Levitt, M., & Shickle, D. (Eds.). (1997). The right to know and the right not to know. Aldershot: Avebury.
  5. De Vries, G. H., & Horstman, K. (Eds.). (2008). Genetics from the laboratory to the clinic. Basingstoke: Palgrave Macmillan.
  6. Dickenson, D. (2013). Me medicine vs we medicine: Reclaiming biotechnology for the common good. New York: Columbia University Press.
  7. Dondorp, W. J., & de Wert, G. M. W. R. (2013). The ‘thousand-dollar genome’: An ethical exploration. European Journal of Human Genetics, 21, S6–S26.
  8. Hedgecoe, A. M. (2009). Geneticization: Debates and controversies. In Encyclopedia of the life sciences. Chichester: Wiley.
  9. Juengst, E. T., Settersten, R. A., Fishman, J. R., & McGowan, M. L. (2012). After the revolution? Ethical and social challenges in “personalized genomic medicine”. Personalized Medicine, 9(4), 429–439.
  10. Kesselheim, A. S., Cook-Deegan, R. M., Winickoff, D. E., & Mello, M. M. (2013). Gene patenting – The Supreme Court finally speaks. The New England Journal of Medicine, 369(9), 869–875.
  11. Kevles, D. J. (1985). In the name of eugenics: Genetics and the uses of human heredity. Cambridge: Harvard University Press.
  12. MacLeod, R., Tibben, A., Frontali, M., Evers-Kiebooms, G., Jones, A., Martinez-Descales, A., Roos, R. A., et al. (2013). Recommendations for the predictive genetic test in Huntington’s disease. Clinical Genetics, 83, 221–231.
  13. Novas, C., & Rose, N. (2000). Genetic risk and the birth of the somatic individual. Economy and Society, 29(4), 485–513.
  14. Otlowski, M., Taylor, S., & Bombard, Y. (2012). Genetic discrimination: International perspectives. Annual Review of Genomics and Human Genetics, 13(1), 433–454.
  15. Palmboom, G. E. R., & Willems, D. (2010). Risk detection in individual health care: Any limits? Bioethics, 24(8), 431–438.
  16. Prainsack, B., Reardon, J., Hindmarsh, R., Gottweis, H., Naue, U., & Lunshof, J. (2008). Personal genomes: Misdirected precaution. Nature, 456(7218), 34–35.
  17. Rehmann-Sutter, C., & Muller, H. (Eds.). (2009). Disclosure dilemmas: Ethics of genetic prognosis after the “right to know/not know” debate. Surrey: Ashgate.
  18. Tutton, R. (2014). Genomics and the reimagining of personalized medicine. Dorchester: Ashgate.