Decision Support Systems Research Paper

1. Introduction

A decision is a choice made by some agent between competing beliefs about the world or between alternative courses of action to achieve the agent’s goals. The ability to reason flexibly about problems and to make good choices is fundamental to any kind of intelligence, and doing both well is highly valued in society. Consequently, there is a growing demand for decision support systems that can help human decision makers make important choices more effectively. This demand has stimulated research in many fields, including the behavioral and cognitive sciences (notably psychology, artificial intelligence, and management science) as well as various mathematical disciplines (such as computer science, statistics, and operations research).

Behavioral scientists have traditionally drawn a distinction between two kinds of theory that address problems in decision making. Prescriptive theories set out formal methods for reasoning and decision making, and criteria for ‘rational’ inference. Descriptive theories typically seek to explain how people make decisions and account for the errors they make that may result in violations of rational norms. Countless empirical studies of human judgement, in the laboratory and in real-world settings, have demonstrated that human decision making is subject to a variety of systematic errors and biases compared with prescriptive decision models.

There are many reasons for this. Mistakes are clearly more likely when someone is tired or overloaded with information, or when a decision maker lacks the specialist knowledge required to make the best choice. Even under ideal conditions, however, with full information, people still make ‘unforced errors.’ One of the major reasons is that we do not seem to be very good at managing uncertainty and complexity.

In the 1970s and 1980s cognitive scientists came to the view that people are not only rather poor at reasoning under uncertainty, but that they also revise their beliefs by processes that bear little resemblance to formal mathematical calculation, and it is these processes that seem to give rise to the characteristic failures in decision making. Kahneman and Tversky developed a celebrated explanation: people use various heuristics when making decisions under uncertainty, for example judging things to be highly likely because they come easily to mind or are typical of a class, rather than by properly calculating the relative probabilities. Such heuristic methods are often reasonable approximations, but they can also lead to systematic errors.

Psychological research on ‘deductive reasoning’ has looked at categorical rather than uncertain inferences, as in syllogistic reasoning, but with similar questions in mind: how do people carry out logical tasks, and how well do they do them by comparison with prescriptive logical models? Systematic errors and failures to recognize and avoid logical fallacies have also been found. This is probably because people do not arrive at a conclusion by applying inference rules the way a logician might; instead, they appear to construct a concrete ‘mental model’ of the situation and manipulate it to determine whether a proposition is supported in the model.

In summary, many behavioral scientists now take the view that human cognition is fundamentally flawed (e.g., Sutherland 1992). Good reviews of different aspects of research on human reasoning and decision making can be found in Kahneman et al. (1982), Evans and Over (1996), Wright and Ayton (1994), Gigerenzer and Todd (1999) and Stanovich and West (2000).

2. Mathematical Methods And Decision Support Systems

If people demonstrate imperfect reasoning or decision making, then it would presumably be desirable to support them with techniques that avoid errors and comply with rational rules. There is a vast amount of research on decision support systems that are designed to help people overcome their biases and limitations, and make decisions more knowledgeably and effectively. If we are to engineer computer systems to take decisions, it would seem clear that we should build those systems around theories that give us some appropriate guarantees of rationality. In a standard text on rational decision making, Lindley summarizes the ‘correct’ way to take decisions as follows:

… there is essentially only one way to reach a decision sensibly. First, the uncertainties present in the situation must be quantified in terms of values called probabilities. Second, the consequences of the courses of actions must be similarly described in terms of utilities. Third, that decision must be taken which is expected on the basis of the calculated probabilities to give the greatest utility. The force of ‘must’ used in three places is simply that any deviation from the precepts is liable to lead the decision maker to procedures which are demonstrably absurd. (Lindley 1985, p. vii)

The above viewpoint leads naturally to Expected Utility Theory (EUT) which is well established and very well understood mathematically. If its assumptions are satisfied and the expected utilities of alternative options are properly calculated, it can be argued that the procedure will reliably select the best decision. If people made more use of EUT in their work, it is said, this would result in more effective decision making. Doctors, for example, would make more accurate diagnoses, choose better treatments, and make better use of resources. Similar claims are made about the decision making of managers, politicians, and even juries in courts of law.
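As a minimal illustration of Lindley’s three steps, the sketch below computes expected utilities for a hypothetical two-option treatment decision and selects the maximum. All options, probabilities, and utilities are invented for illustration; in practice they would have to be elicited or estimated.

```python
# Minimal sketch of the three-step expected-utility procedure described by
# Lindley. The options, outcome probabilities, and utilities are invented.

def expected_utility(outcomes):
    """Step 3's summation: utility weighted by probability over outcomes."""
    return sum(p * u for p, u in outcomes)

# Step 1: quantify uncertainty as probabilities.
# Step 2: quantify consequences as utilities.
options = {
    "treat":          [(0.7, 90), (0.3, 20)],   # (probability, utility) pairs
    "watch_and_wait": [(0.4, 80), (0.6, 50)],
}

# Step 3: choose the option with the greatest expected utility.
for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")
best = max(options, key=lambda name: expected_utility(options[name]))
print("chosen:", best)   # -> "treat" (EU 69.0 vs. 62.0)
```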

3. Limits To Theory

As living and moving beings, we are forced to act … [even when] our existing knowledge does not provide a sufficient basis for a calculated mathematical expectation. (John Maynard Keynes, quoted by Bernstein 1996).

Many people think that the practical value of mathematical methods of decision making like EUT, and the ‘irrationality’ of human decision makers, are both overstated. First, an expected-utility decision procedure requires that we know, or can estimate reasonably accurately, all the required probability and utility parameters. This is frequently difficult in real-world situations, since a decision may be urgently required even when precise quantitative data are not available. Even when it is possible to establish the necessary parameters, the cost of obtaining good estimates may outweigh the expected benefits.

Furthermore, in many situations a decision is needed before the decision options, or the relevant information sources, are fully known. The complete set of options may only emerge as the decision-making process evolves. Neither logic nor decision theory provides any guidance on this evolutionary process. Lindley acknowledges this difficulty:

The first task in any decision problem is to draw up a list of the possible actions that are available. Considerable attention should be paid to the compilation of this list [though] we can provide no scientific advice as to how this should be done.

In short, the potential value of mathematical decision theory is limited by the frequent lack of objective quantitative data on which to base the calculations, the limited range of functions that it can be used to support, and the problem that the underlying numerical representation of the decision is very different from the intuitive understanding of human decision makers.

There are also many who doubt that people are as ‘irrational’ as the prescriptive theories appear to suggest. Skilled professionals may find it difficult to accept that they do not make decisions under uncertainty as well as they ‘should,’ and that under some circumstances their thinking can actually be profoundly flawed. Most of us have no difficulty accepting that our knowledge may be incomplete, that we are subject to tiredness and lapses of attention, and even that our abilities to recall and bear in mind all relevant factors are imperfect. But we are less willing to acknowledge an irremediable irrationality in our thought processes (‘people complain about their memories but never about their judgement’).

A more optimistic school of thought argues that many apparent biases and shortcomings are actually artefacts of the artificial situations that researchers create in order to study reasoning and judgement under controlled conditions. When we look at real-world decision making, we see that human reasoning and decision making are in fact far more impressive than the research suggests. Herbert A. Simon observes that ‘humans, whose computational abilities are puny compared with those of modern supercomputers or even PCs, are sometimes able to solve, with very little computation, problems that are very difficult even by computer standards—problems having ill-defined goals, poorly characterized and bounded problem spaces, or which lack a strong and regular mathematical structure.’ (Simon 1995)

Shanteau (1987) has investigated ‘factors that lead to competence in experts, as opposed to the usual emphasis on incompetence,’ and he identifies a number of important positive characteristics of expert decision makers. First, they know what is relevant to specific decisions and what to attend to in a busy environment, and they know when to make exceptions to general rules. Second, experts know a lot about what they know, and can make decisions about their own decision processes: they know which decisions to make and when, and which to skip, for example. They can adapt to changing task conditions, and are frequently able to find novel solutions to problems. Classical deduction and probabilistic reasoning do not capture these metacognitive skills. Consider medical decision making as an example. Most research in this area has viewed clinical decision making primarily in terms of deciding what is wrong with a patient (determining the diagnosis) and what to do based on the diagnosis (selecting the best treatment).

In practice a doctor’s activities and responsibilities are extremely diverse. They include:

(a) recognizing the possibility of a significant clinical problem

(b) identifying information that is relevant to under-standing the problem

(c) selecting appropriate investigations and other procedures

(d) deciding on the causes of clinical abnormalities and test results

(e) interpreting time-dependent data and trends (e.g., blood pressure)

(f) setting out clinical goals

(g) formulating treatment plans over time

(h) anticipating hazards

(i) creating contingency plans

(j) assessing the effectiveness of treatment

Each of these tasks involves different types of patient information and requires many different types of knowledge in order to interpret the information and act upon it appropriately. We must understand this complexity if we are to develop theories sophisticated enough to capture the diversity of clinical work, or to build decision support systems that can emulate and improve upon human capabilities.

It has also been strongly argued that people are actually well adapted for making decisions under adverse conditions such as time pressure and lack of detailed information and knowledge. Gigerenzer, for instance, has suggested that human cognition is rational in the sense that it is optimized for speed at the cost of occasional, and usually inconsequential, errors (Gigerenzer and Todd 1999).

4. Effects Of Tradeoffs On Effectiveness Of Decision Making

… cognitive mechanisms capable of successful performance in a real world environment do not need to satisfy the classical norms of rational inference: the classical norms may be sufficient, but are not necessary, for a mind capable of sound reasoning. (Gigerenzer and Goldstein, ‘Reasoning the fast and frugal way,’ Psychological Review, 1996)

Trading quantitative precision for simplicity may in practice entail only modest costs in the decision maker’s accuracy and effectiveness. This possibility has been studied quite extensively in the field of medical decision making. In the prediction of sudden infant death, for example, Carpenter et al. (1977) attempted to predict death from a simple linear combination of eight variables. They found that the weights could be varied across a broad range without decreasing predictive accuracy.
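The sketch below illustrates the kind of robustness question Carpenter et al. studied, using a simple linear score over eight binary indicators. The indicators, weights, threshold, and cases are all invented; the point is only to show how weight perturbations can be compared against a baseline classification.

```python
# Sketch of a linear risk-scoring rule with invented weights and cases.
# It compares which cases are flagged as high risk under the original
# weights and under randomly perturbed weights.
import random

random.seed(0)
WEIGHTS = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]   # hypothetical weights
THRESHOLD = 1.5                                       # hypothetical cutoff

def score(case, weights):
    """Simple linear combination of eight binary risk indicators."""
    return sum(w * x for w, x in zip(weights, case))

cases = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
baseline = {i for i, c in enumerate(cases) if score(c, WEIGHTS) >= THRESHOLD}

for trial in range(3):
    # Rescale every weight by a random factor between 0.5 and 1.5.
    noisy = [w * random.uniform(0.5, 1.5) for w in WEIGHTS]
    flagged = {i for i, c in enumerate(cases) if score(c, noisy) >= THRESHOLD}
    overlap = len(flagged & baseline) / max(len(flagged | baseline), 1)
    print(f"trial {trial}: agreement with baseline = {overlap:.2f}")
```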

In diagnosing patients suffering from dyspepsia, Fox et al. (1980) found that giving all pieces of evidence equal weight produced the same accuracy as a more precise statistical method (and also much the same pattern of errors). Fox et al. (1985) developed a system for the interpretation of blood data in leukemia diagnosis, using the EMYCIN expert system software. EMYCIN provided facilities to attach numerical ‘certainty factors’ to inference rules. Initially a system was developed using the full range of available values (-1 to +1), though later these values were replaced with just two: if a rule made a purely categorical inference, its certainty factor was set to 1.0, while if there was any uncertainty associated with the rule, it was set to 0.5. The effect was to increase diagnostic accuracy by 5 percent.
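The sketch below shows the two-level scheme in action. The rule conditions and evidence names are invented; the combination function is the standard EMYCIN formula for two positive certainty factors.

```python
# Sketch of EMYCIN-style certainty-factor combination under the two-level
# scheme described above: 1.0 for categorical rules, 0.5 for any uncertain
# rule. Rule conditions and evidence are invented for illustration.

def combine(cf1, cf2):
    """EMYCIN combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

# Each rule: (evidence required for the rule to fire, two-level certainty factor).
rules = [
    ({"blast_count_high"}, 1.0),             # purely categorical rule
    ({"auer_rods_present"}, 0.5),            # rule with some uncertainty
    ({"weight_loss"}, 0.5),                  # rule with some uncertainty
]

evidence = {"blast_count_high", "auer_rods_present"}

cf = 0.0
for required, rule_cf in rules:
    if required <= evidence:                 # the rule fires if its evidence holds
        cf = combine(cf, rule_cf)
print(f"combined certainty: {cf:.2f}")       # 1.00 here: a categorical rule fired
```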

In a study of whether or not to admit patients with suspected heart attacks to hospital, O’Neil and Glowinski (1990) found no advantage of a precise decision procedure over simply ‘adding up the pros and cons.’ Pradhan et al. (1996) carried out a similar comparison in a diagnosis task and showed a slight increase in accuracy of diagnosis with precise statistical reasoning, but the effect was so small that it would have no practical clinical value.

In a recent study of genetic risk assessment for 50 families, the leading probabilistic risk assessment software was compared with a simple procedure made up of if … then … rules (e.g., if the client has more than two first-degree relatives with breast cancer under the age of 50, then this is a risk factor). Despite using only a simple weighting scheme for each rule, the rule-based system produced exactly the same risk classification for all cases as the probabilistic system (Emery et al. 2000).
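A rule-based classifier of this general kind is sketched below. The first rule paraphrases the example given in the text; the remaining rules, the point weights, and the referral threshold are hypothetical and do not reproduce the published system’s actual criteria.

```python
# Sketch of a rule-based genetic risk classifier of the kind compared by
# Emery et al. (2000). Only the first rule comes from the text; the other
# rules, weights, and threshold are invented for illustration.

def assess_risk(client):
    risk_points = 0
    # Rule from the text: more than two first-degree relatives with
    # breast cancer under the age of 50 is a risk factor.
    if client["first_degree_bc_under_50"] > 2:
        risk_points += 2
    if client["ovarian_cancer_relatives"] >= 1:    # hypothetical rule
        risk_points += 1
    if client["male_breast_cancer_in_family"]:     # hypothetical rule
        risk_points += 2
    return "refer for genetic assessment" if risk_points >= 2 else "routine care"

client = {"first_degree_bc_under_50": 3,
          "ovarian_cancer_relatives": 0,
          "male_breast_cancer_in_family": False}
print(assess_risk(client))   # -> "refer for genetic assessment"
```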

While the available evidence is not conclusive, a provisional hypothesis is that, at least for certain kinds of decision, such as clinical diagnosis and patient management, the strict use of quantitatively precise decision-making methods may not add much practical value to the design of decision support and artificial intelligence systems.

5. Nonclassical Methods For Decision Making

In artificial intelligence (AI) the desire to develop versatile automata has stimulated a great deal of research into new methods of decision making under uncertainty, ranging from sophisticated refinements of probabilistic methods, such as ‘Bayesian networks’ (Pearl 1988), to nonprobabilistic methods such as fuzzy logic and possibility theory. Good overviews of the different approaches and their applications are Krause and Clark (1993) and Hunter and Parsons (1998). These approaches are similar to probability methods in that they treat uncertainty as a matter of degree. However, quantitative approaches in general have also been questioned in AI, because they require large amounts of data and do not capture varied human intuitions about the nature of ‘belief,’ ‘doubt,’ and natural justifications for decision making.

Consequently, interest has grown in the use of nonnumerical methods for reasoning under uncertainty that seem to have some ‘common sense’ validity. Attempts have been made to develop qualitative approximations to quantitative methods, such as qualitative probability (Wellman 1990). In addition, new kinds of logic have been proposed, including:

(a) nonmonotonic logics, which express the everyday idea of changing one’s mind, as though the probability of some proposition being true is 1.0 at one point but becomes zero at some later point;

(b) default logic, a form of nonmonotonic logic which formalizes the idea of assuming that something is true until there is reason to believe otherwise;

(c) defeasible reasoning, in which one line of reasoning can ‘rebut’ or ‘undermine’ another line of reasoning (a minimal sketch of this style of reasoning follows the list).
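The following sketch illustrates the shared intuition behind (a)–(c): a default conclusion is drawn in the absence of contrary information and withdrawn, nonmonotonically, when an exception becomes known. The ‘birds fly’ rules are the textbook illustration, not drawn from any particular formal system.

```python
# Minimal sketch of default/defeasible reasoning: a default conclusion is
# drawn in the absence of contrary information and withdrawn when an
# exception becomes known.

def flies(facts):
    if "penguin" in facts:          # an exception rebuts the default
        return False
    if "bird" in facts:             # default: assume birds fly
        return True
    return None                     # unknown

facts = {"bird"}
print(flies(facts))                 # True: the default conclusion

facts.add("penguin")                # new information arrives
print(flies(facts))                 # False: the earlier conclusion is withdrawn
```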

Cognitive approaches, sometimes called ‘reason-based’ decision making, are also gaining ground. These include the use of informal ‘endorsements’ for alternative decision options, and logical formalizations of everyday strategies for reasoning about competing beliefs and actions based on ‘argumentation.’ Models of argumentation are reviewed by Krause and Clark (1993), and the role of argumentation techniques in decision support systems is surveyed by Girle et al. (2001).

Some advocate an eclectic approach to the formalization of uncertainty that sanctions the use of different representations under different circumstances. Fox and Das (2000, Chap. 4) discuss the possibility that many techniques for reasoning under uncertainty, from quantitative and qualitative probability to default logic and the representation of uncertainty in natural language, capture different intuitions about ‘belief’ and decision making but can all be viewed as different technical specializations of the informal notion of argumentation.
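As a schematic illustration of argumentation-based decision making (an informal sketch, not Fox and Das’s actual formalism), the example below collects invented arguments for and against two options, discards arguments that are rebutted, and recommends the option with the strongest net support.

```python
# Schematic sketch of 'reason-based' decision making by argumentation:
# arguments for and against invented options are collected, rebutted
# arguments are discarded, and surviving arguments are tallied.
from collections import defaultdict

# Each argument: (option it bears on, '+' for or '-' against, its ground).
arguments = [
    ("treat", "+", "symptoms match the condition"),
    ("treat", "-", "known drug allergy"),
    ("wait",  "+", "symptoms are mild"),
    ("wait",  "-", "condition can worsen quickly"),
]
# Rebuttals: (attacking ground, ground of the argument it undermines).
rebuttals = {("condition can worsen quickly", "symptoms are mild")}

undermined = {target for _attacker, target in rebuttals}
support = defaultdict(int)
for option, sign, ground in arguments:
    if ground not in undermined:             # keep only surviving arguments
        support[option] += 1 if sign == "+" else -1

print(dict(support))                         # net support per option
print("recommended:", max(support, key=lambda o: support[o]))
```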

Finally, as noted above, human decision makers often demonstrate ‘metacognitive’ capabilities, showing some ability to reason about the nature of the decision, the relevant information sources, the applicable forms of argument, and so forth. Metacognition may be at the heart of what we call ‘intelligence.’ Designers of decision support systems now have available a range of formalisms based on mathematical logic which can emulate such ‘meta-level reasoning’ (Fox and Das 2000), while conventional algorithmic decision procedures seem to be confined to reasoning about the input data rather than the decision problem itself.

It appears from the foregoing that the claim that decision support systems should not be modeled on ‘irrational’ human cognitive processes is less compelling than it first appeared. This offers greater flexibility for the designers of decision support systems; they can adopt different decision theoretic frameworks for different applications. Indeed, they may also be able to apply various metacognitive strategies, as people appear to do, to select the most effective representations and reasoning methods in light of the demands and constraints of the current task.

Bibliography:

  1. Bernstein P 1996 Against the Gods. Wiley, New York, p. 185
  2. Carpenter R G, Gardner A, McWeeny P M, Emery J L 1977 Multistage scoring system for identifying infants at risk of unexpected death. Archives of Disease in Childhood 52: 606–12
  3. Emery J, Walton R, Murphy M, Austoker J, Yudkin P, Chapman C, Coulson A, Glasspool D, Fox J 2000 Computer support for interpreting family histories of breast and ovarian cancer in primary care: Comparative study with simulated cases. British Medical Journal 321: 28–32
  4. Evans J St B, Over D E 1996 Rationality and Reasoning. Erlbaum, London
  5. Fox J, Barber D C, Bardhan K D 1980 Alternatives to Bayes? A quantitative comparison with rule-based diagnosis. Methods of Information in Medicine 19(4): 210–15
  6. Fox J, Das S K 2000 Safe and Sound: Artificial Intelligence in Hazardous Applications. American Association for Artificial Intelligence and MIT Press, Cambridge, MA
  7. Fox J, Myers C D, Greaves M F, Pegram S 1985 Knowledge acquisition for expert systems: Experience in leukaemia diagnosis. Methods of Information in Medicine 24(1): 65–72
  8. Gigerenzer G, Todd P M 1999 Simple Heuristics That Make Us Smart. Oxford University Press, Oxford
  9. Gigerenzer G, Goldstein D G 1996 Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review 103: 650–69
  10. Girle R, Hitchcock D, McBurney P, Verheij B 2001 Decision support for practical reasoning: A theoretical and computational perspective. In: Reed C, Norman T (eds.) Proceedings of the Bonskeid Symposium on Argument and Computation
  11. Hunter A, Parsons S (eds.) 1998 Applications of Uncertainty Formalisms. Lecture Notes in Computer Science 1455. Springer, Berlin
  12. Kahneman D, Slovic P, Tversky A (eds.) 1982 Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, UK
  13. Krause P, Clark D 1993 Representing Uncertain Knowledge. Kluwer Academic Publishers, Norwell, MA
  14. Lindley D V 1985 Making Decisions, 2nd edn. Wiley, London
  15. O’Neil M J, Glowinski A J 1990 Evaluating and validating very large knowledge-based systems. Medical Informatics 15(3): 237–51
  16. Pearl J 1988 Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, Palo Alto, CA
  17. Pradhan M, Henrion M, Provan G, Del Favero B, Huang K 1996 The sensitivity of belief networks to imprecise probabilities: An experimental investigation. Artificial Intelligence 85: 363–97
  18. Shanteau J 1987 Psychological characteristics of expert decision makers. In: Mumpower J (ed.) Expert Judgement and Expert Systems. NATO ASI Series, Vol. F35
  19. Simon H A 1995 Artificial intelligence: An empirical science. Artificial Intelligence 77(1): 95–127
  20. Stanovich K E, West R F 2000 Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences 23(5)
  21. Sutherland N S 1992 Irrationality: The Enemy Within. Constable, London
  22. Wellman M P 1990 Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence 44(3): 257–303
  23. Wright G, Ayton P (eds.) 1994 Subjective Probability. Wiley, Chichester, UK