Methodology of Laboratory Experiment Research Paper


Experiments are a special class of empirical investigation and differ from nonexperimental research methods; this research paper examines that difference (see Sect. 1).



Many researchers feel that only the experimental method offers the advantage of specifying causes and effects. This advantage is based on the unique possibility of controlling extraneous variables that can invalidate the empirical results (see Sect. 2).

By means of this control, an attempt is made to meet the various quality criteria of an experiment. The criteria of internal and external validity were developed within the context of an inductivistic theory, which has been criticized. Sect. 3 presents an alternative deductivistic theory of experiments.




Books dealing with the experimental method refer to the ideal control experiment. Results of empirical studies show the experiment to be a social interaction between the experimenter and the subject and thus to differ greatly from the ideal experiment. Sect. 4 focuses on these differences and their consequences for experimental research in the area of social and behavioral sciences. The experiment has been criticized for various other reasons (see Sect. 5).

1. Experimental vs. Nonexperimental Research

Let us use an example to illustrate the difference between experimental and nonexperimental research. The example concerns whether a specific film documenting the life of Jews in the Warsaw Ghetto affects the attitude of viewers towards Jews. A first study measures the attitude of subjects who had already seen the film and compares it to the attitude of subjects who had not. Imagine that this comparison shows that those who had seen the film have a more positive attitude towards Jews. A second study comes to the same conclusion, but in this study the investigator randomly assigns the subjects either to a ‘film’ (experimental) condition or to a ‘no film’ (control) condition and then measures their attitudes.

Both studies include two variables. The first variable is dichotomous (‘film viewed’ vs. ‘film not viewed’), whereas the second (social attitude) has more than two values. How can the result of the first study be interpreted? Does the film positively influence attitude, or did those who watched the film already have a more positive attitude towards Jews prior to the study? This question cannot be answered. It is therefore impossible to classify the variables as independent and dependent. In the second study, the investigator manipulates the first variable and randomly assigns the subjects to the ‘film’ and ‘no film’ conditions using what is, in essence, a coin toss. In this way, the investigator creates a temporal sequence between the manipulated conditions (independent variable) and the dependent variable ‘social attitude.’ The subjects are randomly assigned to the conditions in order to prevent pre-experimental attitudes towards Jews from differing between the two groups of participants. The result of this study can therefore be interpreted as showing that viewing the film had a positive causal effect on the attitude.
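To make the randomization step concrete, the following sketch shows one way such a random assignment could be carried out in Python. It is a minimal illustration only; the participant identifiers, the condition labels, and the function name are hypothetical and not taken from the studies described above.

```python
import random

def randomly_assign(participants, conditions=("film", "no film"), seed=None):
    """Randomly assign each participant to one of the experimental conditions.

    Shuffling the participant list and dealing it out across the conditions
    is equivalent to a fair 'coin toss' per participant while keeping the
    group sizes (nearly) equal.
    """
    rng = random.Random(seed)
    shuffled = list(participants)   # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    return {person: conditions[i % len(conditions)]
            for i, person in enumerate(shuffled)}

# Hypothetical participant identifiers
participants = [f"subject_{i:02d}" for i in range(1, 21)]
print(randomly_assign(participants, seed=42))
```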

The second study is an example of experimental research. An experiment can be defined by the following criteria. The experimenter creates the conditions, systematically varies them, and applies the principle of randomization. Experiments exhibit at least one independent and at least one dependent variable. The same also applies to quasiexperiments, though here the subjects are, by definition, not assigned to conditions according to the principle of randomization (Cook and Campbell 1979). By contrast, the first study is an example of a nonexperimental, correlational study. The variable ‘film vs. no film’ was not manipulated (varied) but selected by the investigator. The characteristic of ‘randomization’ is also missing. ‘Selected’ means that the investigator selected subjects for the study who had already assigned themselves to the conditions.

The terminology just outlined is widely accepted. However, sometimes the term ‘experiment’ is used with an adjunct. In these cases, the study being referred to is not always an experiment as defined above. This applies, for example, to the term ‘ex post facto experiment’ (Chapin 1947), where the conditions are not manipulated but selected by the investigator. The ex post facto experiment differs from a correlational study in that the former includes subsequent controls in order to permit the inference of a cause-and-effect relationship. A criticism of this procedure can be found in Campbell and Stanley (1963). On the other hand, the terms ‘laboratory experiment’ and ‘field experiment’ refer to true experiments. The adjuncts refer to different settings in which the experiments take place.

The present research paper deals with true (randomized) laboratory experiments and excludes other research methods. It describes the principles of experimentation in ways that are independent of any given discipline within the social and behavioral sciences.

2. Experimental Error And Experimental Validity

Not all research questions can be experimentally examined. Certain variables such as sex and age cannot be experimentally manipulated. Thus, if the relationship between age and intelligence is of interest, there is no opportunity for experimentation. However, this opportunity did exist for studying whether a specific film influences attitudes towards Jews. In the opinion of many researchers, such an opportunity should be taken because the experiment is superior to other methods for defending causal interpretations (e.g., Carlsmith et al. 1976). Random assignment permits us to conclude that viewing the film is the cause of the more positive attitude.

Experiments are subject to errors that jeopardize interpreting the relationship between an independent and dependent variable as unequivocally causal. A differentiation must be made between systematic and random errors (Cox 1961). An internal systematic error is present if the subjects who are observed under various experimental conditions also differ with respect to other variables. In this case the independent variable is confounded with extraneous variables. The criterion of internal validity (Campbell and Stanley 1963) has not then been met.

An external systematic error is present if the conditions of the experiment deviate ‘from the conditions under which it is proposed to apply the conclusions from the experiment’ (Cox 1961, p. 44). Such an error would exist if the experiment in the example given in Sect. 1 was performed using college students, but the conclusions regarding the effect of the film were not limited to persons with a higher education. In cases of external errors, the criterion of external validity (Campbell and Stanley 1963) has not been met.

Although random errors are not correlated with the independent variable, their control is important. Random errors affect the variation of the dependent variable within the experimental and control conditions. The attitude values of the subjects within the experimental condition ‘film’ deviate more or less from a mean, and the same applies to the attitudes under the control condition ‘no film.’ These dispersions are the result of so-called random errors. The smaller these dispersions are, the greater the precision of the experiment will be (Bredenkamp 1969). Since the relationship between the independent and dependent variable is statistically analyzed, the reduction of the dispersion resulting from random errors is very important. The concept of a random error differs from that of an error of measurement in classical test theory. The dispersion of the random errors within the experimental and control conditions includes not merely the variance of the measurement errors, but also that of the true scores as defined in classical test theory.
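The point that the within-condition dispersion reflects both true-score variance and measurement-error variance can be illustrated with a small simulation. This is only a sketch: the condition means, standard deviations, and sample sizes below are invented for illustration and are not taken from any study.

```python
import random
import statistics

random.seed(1)

TRUE_SCORE_SD = 1.0    # dispersion of true attitudes within a condition (illustrative)
MEASUREMENT_SD = 0.5   # dispersion of the measurement error (illustrative)

def observed_scores(n, condition_mean):
    """Observed score = condition mean + true-score deviation + measurement error."""
    return [condition_mean
            + random.gauss(0.0, TRUE_SCORE_SD)
            + random.gauss(0.0, MEASUREMENT_SD)
            for _ in range(n)]

film = observed_scores(10_000, condition_mean=5.0)
no_film = observed_scores(10_000, condition_mean=4.0)

# The within-condition variance approaches TRUE_SCORE_SD**2 + MEASUREMENT_SD**2,
# i.e., the 'random error' comprises more than the measurement error alone.
print(round(statistics.variance(film), 2), round(statistics.variance(no_film), 2))
```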

2.1 Internal Validity

Controls to ensure internal validity promote an unequivocal causal interpretation of the relationship between the independent and dependent variables. In order to avoid the confounding of a known extraneous variable with the independent variable, the control techniques of ‘elimination’ and ‘constancy of conditions’ are employed. An internal error exists, for example, if a female experimenter measures the attitude of the subjects under the experimental condition, while a male experimenter measures the attitude under the control condition. The sex of the experimenter may have an effect on the dependent variable. To control this error, a single experimenter could collect all the data (constancy of conditions). An extraneous variable is eliminated, for example, if background noise that might reduce the audibility of speech is removed.

Unknown extraneous variables can be controlled by randomization. Randomization ensures that the expected values of the extraneous variables are identical under different conditions. Specific instructions exist concerning the random assignment of the subjects to the experimental conditions (e.g., Keppel 1973; see Random Assignment: Implementation in Complex Field Settings).

Despite these controls, there remains the possibility that a factor is present that jeopardizes the internal validity of the experiment. Thus, for example, simply watching a film—regardless of its content—may have an effect on the social attitude. An experimenter who compares the attitudes under the ‘film’ and ‘no film’ conditions will overlook this possible error, even if the method of randomization was employed to control internal errors. There should therefore be at least one further condition included under which subjects view a film that is neutral with regard to its attitude toward Jews.

2.2 External Validity

In their highly influential article on the evaluation of experimental and quasi-experimental designs, Campbell and Stanley (1963) also introduced the criterion of external validity. External validity asks the question of generalizability: To what populations, variables, and settings can the experimental effect be generalized? Holding an extraneous variable constant in order to control internal errors always brings along the risk of reducing the external validity of an experiment. If a single experimenter performs the experiment on social attitudes towards Jews, the observed effect might then exist for this experimenter only. In order to examine this question, a second independent variable must be added to the experiment as a control factor so that its effects can be separated from the effects of the experimental independent variable. The control factor is introduced in such a way that it is not confounded with the experimental independent variable.

Of critical importance to external validity is the case in which the more positive attitude under the ‘Ghetto film’ condition is found with some experimenters but not with others. This is an example of a so-called ‘disordinal interaction’ between two independent variables (Bracht and Glass 1968). An interaction is called ordinal if the more positive attitude under the ‘Ghetto film’ condition occurs with all experimenters, but to varying degrees. In this case, the experimenter modulates the size of the effect, but the direction (causal sign) of the difference remains the same across all experimenters. The analysis of variance is frequently used for the statistical analysis of multifactorial experiments (e.g., Keppel 1973), and it can determine whether an interaction between the independent variables is present. However, it cannot identify the type of interaction involved.

Only disordinal interactions are problematic for the external validity of an experiment, because they identify situations in which the sign of the causal effect of the independent variable varies: in the example chosen, it is positive with some experimenters but negative with others. Thus, identifying the type of interaction is important, and statistical procedures for doing this are presented in Bredenkamp (1982).
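As a concrete illustration of the distinction, the sketch below classifies an interaction from a table of cell means by comparing the sign of the ‘film’ minus ‘no film’ difference across experimenters. The cell means are invented for illustration only; with real data, the presence of an interaction would first have to be established statistically (e.g., by analysis of variance), as noted above.

```python
def simple_effects(cell_means):
    """'Film' minus 'no film' difference for each level of the control factor."""
    return {exp: means["film"] - means["no film"] for exp, means in cell_means.items()}

def interaction_type(cell_means):
    """Classify the interaction pattern in a table of cell means."""
    effects = simple_effects(cell_means)
    if len({round(e, 10) for e in effects.values()}) == 1:
        return "no interaction"           # identical simple effects everywhere
    same_sign = len({e > 0 for e in effects.values()}) == 1
    return "ordinal" if same_sign else "disordinal"

# Invented attitude cell means for two experimenters
ordinal_example = {
    "experimenter A": {"film": 6.0, "no film": 4.0},   # effect +2.0
    "experimenter B": {"film": 5.0, "no film": 4.5},   # effect +0.5, same sign
}
disordinal_example = {
    "experimenter A": {"film": 6.0, "no film": 4.0},   # effect +2.0
    "experimenter B": {"film": 4.0, "no film": 5.0},   # effect -1.0, sign reverses
}
print(interaction_type(ordinal_example))      # -> ordinal
print(interaction_type(disordinal_example))   # -> disordinal
```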

Questions about the population to which an experimental effect can be generalized often concern disordinal interactions (Bracht and Glass 1968). If only persons with college education participate in an experiment, then its external validity is endangered. It would be better to include variation in the level of education in the experiment as a control factor.

The question concerning the variables to which an experimental effect can be generalized presumes that these variables are fallible indicators of a theoretical construct. ‘Social attitude’ is a construct that can be operationalized in diverse ways. If different dependent variables, all assumed to be indicators of the same construct, are included in the investigation, it is called a multivariate experiment. This type of experiment allows us to examine whether the effects observed for a specific dependent variable also apply to other indicators of the same construct. Indeed, one could also compare several experiments using different indicators of the same construct (Bredenkamp 1980). In order to avoid a mono-operation bias, wherever it is feasible, a given construct should be operationalized in different ways (Cook and Campbell 1979).

Whether an experimental effect can be generalized to experimenters other than those used in a study touches on only one aspect of the setting in which a cause–effect relationship is examined. With regard to generalizability, field experiments are more externally valid than laboratory experiments whose validity for ‘the real world’ is questionable. Although laboratory experiments offer the advantage of controlling internal errors, they have the disadvantage of limited external validity. However, such considerations presuppose an inductivistic theory of experimental validity, and this has been criticized (see Sect. 3).

2.3 Precision

Increasing the precision of an experiment is important since statistics are used to make the decision about whether there is any relationship between the independent and dependent variable in a given experiment. Two errors can arise in this statistical decision. If there is no relation between the independent and dependent variable (i.e., the statistical null hypothesis is valid), but the statistical analysis suggests such a relationship, then a type I error has been made whose probability is called α. The second error occurs if a relationship between the independent and dependent variable exists (i.e., the statistical alternative hypothesis is valid), but the statistical test fails to detect this relationship (type II error). The probability of this error is called β. The greater the experimental precision, the lower is the probability β. In other words, the greater the precision, the lower is the risk of retaining a false null hypothesis.
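The link between precision, β, and sample size can be made concrete with a rough power calculation. The sketch below uses a normal approximation for a two-group comparison of means; the effect size, sample sizes, and α level are arbitrary illustrative choices rather than values from the text.

```python
from math import sqrt
from statistics import NormalDist

def approx_power(effect_size_d, n_per_group, alpha=0.05):
    """Approximate power (1 - beta) of a two-sided, two-sample comparison of means.

    Uses the normal approximation: the larger the power, the smaller the risk
    beta of retaining a false null hypothesis.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)                 # two-sided critical value
    noncentrality = effect_size_d * sqrt(n_per_group / 2)
    return 1 - z.cdf(z_crit - noncentrality)          # upper tail only (approximation)

# Illustrative: a medium effect (d = 0.5) in the 'film' vs. 'no film' comparison
for n in (20, 50, 100):
    print(f"n per group = {n:3d}, approximate power = {approx_power(0.5, n):.2f}")
```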

Controls reducing the dispersion of random errors serve to increase the precision of an experiment. One method consists of adding a control factor to the experiment. As mentioned, this measure is also employed in order to check external validity. If, for example, different experimenters collect the experimental data, taking them into account as a control factor can contribute to reducing the variance of random errors (error variance). The effects of the experimenters and their interaction with the treatment (see Sect. 2.2) can then be separated from the effect of the experimental independent variable. The larger the effect of the control factor and the interaction, the higher is the degree to which the error variance is reduced.
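The following small simulation illustrates how treating the experimenter as a control factor can reduce the error variance. All numbers (the experimenter effects, the residual standard deviation, and the group sizes) are invented purely for illustration.

```python
import random
import statistics

random.seed(7)

# Hypothetical scenario: each experimenter shifts the attitude scores by a constant
experimenter_effect = {"E1": +0.8, "E2": -0.8}
scores, labels = [], []
for exp, shift in experimenter_effect.items():
    for _ in range(500):
        scores.append(5.0 + shift + random.gauss(0.0, 1.0))
        labels.append(exp)

# Error variance when the control factor is ignored (deviations from the grand mean) ...
grand_mean = statistics.fmean(scores)
var_ignoring = statistics.fmean((s - grand_mean) ** 2 for s in scores)

# ... versus the error variance after removing each experimenter's own mean
exp_means = {e: statistics.fmean(s for s, l in zip(scores, labels) if l == e)
             for e in experimenter_effect}
var_controlled = statistics.fmean((s - exp_means[l]) ** 2 for s, l in zip(scores, labels))

print(round(var_ignoring, 2), round(var_controlled, 2))   # the controlled variance is smaller
```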

In some cases, this statistical procedure is also employed to increase precision when pretests are available. It is conceivable that one’s attitude towards Jews could be influenced by a test carried out prior to the application of the treatments. To deal with this, subjects are first ranked according to their pretest values. Subjects with the first k values are then randomly assigned to the k experimental conditions. The same applies to subjects with ranks k+1 to 2k, etc. This method is called matching. The degree to which precision is increased depends on the magnitude of the correlation between pretest and dependent variable (Feldt 1958). Under the aspect of external validity, the performance of a pretest has the disadvantage that the result may be generalizable only to pretested subjects (Campbell and Stanley 1963).
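A minimal sketch of the matching procedure just described: subjects are ranked by their pretest scores, and within each successive block of k ranks they are randomly assigned to the k conditions. The subject identifiers, pretest values, and condition labels are hypothetical, and the sketch assumes the number of subjects is a multiple of k.

```python
import random

def matched_assignment(pretest_scores, conditions, seed=None):
    """Matching: rank subjects by pretest score, then randomly assign the subjects
    within each successive block of k = len(conditions) ranks to the k conditions."""
    rng = random.Random(seed)
    k = len(conditions)
    ranked = sorted(pretest_scores, key=pretest_scores.get)   # subjects ordered by pretest
    assignment = {}
    for start in range(0, len(ranked), k):
        block = ranked[start:start + k]
        shuffled_conditions = list(conditions)
        rng.shuffle(shuffled_conditions)
        for subject, condition in zip(block, shuffled_conditions):
            assignment[subject] = condition
    return assignment

# Hypothetical pretest attitude scores for six subjects and three conditions
pretest = {"s1": 12, "s2": 7, "s3": 15, "s4": 9, "s5": 11, "s6": 8}
print(matched_assignment(pretest, conditions=("film", "neutral film", "no film"), seed=0))
```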

Still other procedures for increasing the precision of an experiment exist. Among these is the possibility of applying all treatments to each subject in a random sequence. In this case, the inter-individual variation of random errors is controlled. The effect of the experimental independent variable is then tested against the intraindividual error variance that is generally smaller than the interindividual error variance. A possible disadvantage of this procedure is the risk of decreasing internal validity due to repeated testing (e.g., carryover effects, fatigue).

3. A Deductivistic Theory Of The Experiment

Campbell and Stanley’s theory of internal and external validity (1963) is based on inductive inference. This is particularly apparent with regard to external validity. Since no conclusive principle of induction exists, yet statements about internal and external validity presuppose just such a principle, the theory has been criticized by Gadenne (1976).

An alternative, deductivistic theory of the experiment has been proposed by various authors (Bredenkamp 1980, Gadenne 1984, Hager and Westermann 1983, Erdfelder and Bredenkamp 1994). According to Erdfelder and Bredenkamp (1994), various methodological rules of experimentation can be derived from the general principle of maximizing the fairness and strength of an experiment. Fairness is defined as the probability that a scientific hypothesis will be confirmed in an experiment if it is actually true. Strength is defined as the probability that a false scientific hypothesis will not be confirmed. If scientific hypotheses are tested statistically, and the acceptance of the statistical null hypothesis is the result that is in accordance with the scientific hypothesis, then Eqns. (1) and (2) apply (see Erdfelder and Bredenkamp 1994):

Fairness = (1 − α)(1 − h0) + α·h0   (1)

Strength = (1 − β)(1 − g0) + β·g0   (2)

If acceptance of the statistical alternative hypothesis is the result that supports the scientific hypothesis, then Eqns. (3) and (4) hold:

Fairness = (1 − β)(1 − h1) + α·h1   (3)

Strength = (1 − α)(1 − g1) + β·g1   (4)

α and β are the probabilities of type I and type II statistical errors (see Sect. 2.3), g0 is the probability that the statistical null hypothesis is valid if the scientific hypothesis is not true, and h0 is the probability that the statistical null hypothesis is not valid if the scientific hypothesis is true. Accordingly, g1 is the probability that the statistical alternative hypothesis is valid if the scientific hypothesis is not true, and h1 is the probability of the statistical alternative hypothesis being untrue if the scientific hypothesis holds.

Further discussion will be restricted to Eqns. (3) and (4); analogous considerations apply to Eqns. (1) and (2). If the scientific hypothesis and the statistical alternative hypothesis are logically equivalent, then g1 = 0 and h1 = 0. In this case, fairness and strength correspond to the probabilities of correct statistical decisions, 1 − β and 1 − α. This equivalence seldom applies in psychology. Psychological hypotheses are often related to processes occurring within individuals, while statistical hypotheses refer to populations. Nevertheless, the statistical analysis does make sense provided that the scientific hypothesis (S) implies a statistical hypothesis, for example, the alternative hypothesis (H1). Randomization is indispensable for establishing such an implication. If the implication S→H1 holds, then h1 = 0. In this case, fairness is completely determined by the probability of a correct decision in favor of the statistical alternative hypothesis. The strength remains unknown because the value of g1 is not known. However, the falsification theory put forth by Popper (1982) contains methodological rules that can be applied to reduce g1. The smaller g1 is, the greater is the strength for a given pair of values of α and β. If g1 < 1 and S→H1 holds, then a reduction of the statistical error probability α at a fixed value of experimental fairness will increase the strength.
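The following short numerical sketch illustrates the relationships just described: when S implies H1 (so that h1 = 0), fairness reduces to 1 − β, and lowering α increases the strength as long as g1 < 1. The particular values of α, β, and g1 are arbitrary illustrations.

```python
def fairness(alpha, beta, h1):
    """Probability of confirming the scientific hypothesis when it is true (Eqn. 3)."""
    return (1 - beta) * (1 - h1) + alpha * h1

def strength(alpha, beta, g1):
    """Probability of not confirming the scientific hypothesis when it is false (Eqn. 4)."""
    return (1 - alpha) * (1 - g1) + beta * g1

beta, g1 = 0.10, 0.30          # illustrative values only
for alpha in (0.05, 0.01):
    print(f"alpha = {alpha}: fairness = {fairness(alpha, beta, h1=0.0):.2f}, "
          f"strength = {strength(alpha, beta, g1):.2f}")
# Lowering alpha leaves fairness (= 1 - beta) unchanged but increases strength,
# provided g1 < 1, in line with the argument above.
```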

Up to this point, we have ignored the fact that scientific hypotheses usually refer to theoretical constructs that must be operationalized. This operationalization leads to the empirical hypothesis that is actually tested. Obviously, the result of a statistical test cannot be used to assess a scientific hypothesis if its logical relation to the empirical hypothesis is unknown. This problem can be largely solved by stochastic modeling techniques. The stochastic model comprises a structural and a measurement model. The structural model contains the central statements of the scientific hypothesis, while the measurement model formalizes the relationships between the latent constructs and empirical variables (Erdfelder and Bredenkamp 1994).

Within the context of the deductivistic theory, some of the problems originally formulated by Campbell and Stanley (1963) need to be reformulated (Bredenkamp 1980). In psychology, scientific hypotheses are interpreted as deterministic or statistical universal hypotheses that refer to an unlimited number of individuals. Each subject belonging to the population the hypothesis refers to is fully representative for the purpose of the experiment. If an experiment is performed with more than one subject, this is done merely to control internal errors by randomization and not for generalization, since random samples cannot be taken from an unlimited population. In this view, random assignment serves to justify the use of statistical tests and has nothing to do with generalization (Edgington 1995). Indeed, no generalization of the experimental results to extra-experimental settings is attempted. Experiments serve the single purpose of testing scientific hypotheses.

Practical problems are solved by means of technological statements derived from a scientific hypothesis that has been confirmed experimentally. Technological hypotheses refer to treatments that must be applied in order to have specific, desired effects (Gadenne 1976). The technological statement can be the subject of empirical studies in field settings. On the other hand, controlled tests must be performed in order to test scientific hypotheses. The laboratory offers the best conditions for controlling extraneous variables.

Such a deductivistic theory of the experiment combines aspects of Popper’s falsification theory (1982), Lakatos’ methodology of scientific research programs (1970), and Neyman and Pearson’s theory of statistical hypothesis testing (Neyman 1950). The latter theory permits the (compulsory) control of the statistical error probability, β. The book by Cohen (1988) is very useful for social and behavioral scientists in applying this theory.

Recently, Westmeyer (1998) has criticized the deductivistic theory of experiments. He doubts the compatibility of the theories of Popper, Lakatos, and Neyman and Pearson. In addition, he demands single-subject experiments to test hypotheses that refer to individuals. This latter demand does not contradict principles of the deductivistic theory. However, for vast areas of experimental psychology, internally valid single-subject experiments, which require repeated measurements, cannot be carried out (see Sect. 2.3). In these cases, the logical link between scientific and statistical hypotheses and the resolution of the resulting problems are of great significance.

4. The Ideal And The Real Experiment

Textbooks contain rules for conducting the ideal experiment. Ultimately, all of these rules aim to distinguish the experiment from other procedures by controls that make it possible to trace variation in the dependent variable back to the independent variable while excluding all extraneous factors from the interpretation.

Empirical studies seem to violate these requirements. According to Friedman (1967) the ideal experiment is a non-social situation. The view that all extraneous variables are controllable assumes that both the experimenter and the subjects are conceived as nonpersons whose behavior towards one another remains constant. The suspicion that this is not the case can be inferred from the fact that textbooks concerning the experimental method set forth no rules for the experimenter regarding his or her behavior during the pre-experimental phase (greeting the participants, questions regarding the individual). An experimenter can induce experimental errors in many different ways. Friedman (1967) discovered variations in behavior during the pre-experimental phase (how often the experimenter smiles at the subject, looks at the subject, etc.) both between and within experimenters. These differing behaviors can be effective extraneous variables (Friedman 1967).

If the experiment is viewed as a social situation in which the experimenter and subject interact, various extraneous factors on the part of the experimenter and the subject could well come into play. Detailed information concerning this can be found in the book by Rosenthal and Rosnow (1969). On the part of the experimenter, it is the expectations that can influence the results if he or she communicates his or her hypothesis to the subjects in a nonverbal manner (experimenter bias). In addition, the experimenter’s personality traits can influence the results (experimenter effect).

On the other side of the dyadic relation stands the participant, who has taken his or her role as subject either voluntarily or involuntarily, who is searching for the hypotheses of the experiment, who aims at producing results that will confirm or disconfirm the hypothesis, and who is more or less apprehensive about evaluation. All these factors can become effective extraneous variables that endanger the clear interpretation of an experimental result. For this reason, various controls have been suggested to minimize these errors.

Does the fact that the experiment is actually a social situation affect internal and external validity in the inductive theory, or strength and fairness in the deductivistic approach? It is obvious that not all potentially disruptive factors in an investigation can be controlled. The brightness of the room in which a social psychology experiment is performed usually remains uncontrolled because the experimenter assumes no influence of brightness on the results. However, the stakes are quite different in experiments on visual perception. Potential sources of experimental error that cannot be controlled by randomization have to be controlled by other techniques if there is a well-founded assumption that they could produce artificial results. This also applies to the control of possible artifacts produced by the experimenter or subject. Only the confirmation of general artifact hypotheses (e.g., ‘in most studies, the results can be traced back to the experimenter’s expectations’; ‘in many studies, the results can be traced back to the subject’s attempts to produce findings that confirm the hypothesis,’ etc.) would make general control of the artifacts necessary. To date, general artifact hypotheses have not been confirmed (e.g., Bredenkamp 1980). Of course, this does not preclude the possibility of artificial results in single investigations.

5. Objections To The Experiment

Among all empirical research methods, laboratory experiments have most often been criticized. The controls carried out in the experiment serve the single purpose of deciding about a scientific hypothesis with as little bias as possible. The objection to this is that results obtained under conditions of strict control are artificial and cannot be generalized to extra-experimental settings where many factors that were experimentally controlled now come into play. Section 3 has already discussed this problem. It should be reiterated that laboratory experiments test scientific hypotheses, and that they must employ control techniques in order to avoid false conclusions. They serve the scientific goal of explanation. The solution of practical problems poses other questions that must be answered within the context of technological research.

A second objection refers to the fact that the real experiment is a social situation that differs significantly from the ideal experiment. It has been argued that for this reason all advantages of the experiment refer to a method that does not exist in reality. However, there is no basis for arguing that social processes generally bias experimental results (see Sect. 4).

A final objection posits that experimentation with humans is simply morally unacceptable. It cannot be reconciled with human dignity because, in the interest of reaching scientific goals, the human is reduced to an ‘object.’ However, a similar objection can also be raised to nonexperimental investigations. If a ban on empirical research using human subjects is derived from this argument, behavioral and social sciences become impossible. Indeed, in many experimental studies, human dignity does not suffer to any greater extent than in everyday situations in which, for example, children are expected to cope with the effects of educational reforms or changes in social relationships resulting from a move to a different place. When carrying out empirical studies, principles No. 9 (research with human participants) and No. 10 (care and use of animals) of the American Psychological Association (1990) must in all cases be observed. Of particular importance is that ‘the investigator protects the participant from physical and mental discomfort, harm, and danger that may arise from the research procedures’ (American Psychological Association 1990, p. 395).

The problem cannot be resolved by simply relinquishing experimentation. For example, the application of certain therapeutic procedures whose effectiveness has not been demonstrated by controlled experiments or quasi-experiments appears questionable. Thus, while experiments with scientific goals can pose ethical problems, so too can the failure to perform such experiments. Deciding not to experiment may pose ethical problems of equal weight to deciding to do them and must be justified every bit as clearly and cogently.

Bibliography:

  1. American Psychological Association 1990 Ethical principles of psychologists. American Psychologist 45: 390–5
  2. Bracht G H, Glass G V 1968 The external validity of experiments. American Educational Research Journal 5: 437–74
  3. Bredenkamp J 1969 Experiment und Feldexperiment [Experiment and Field Experiment]. In: Graumann C F (ed.) Sozialpsychologie (Handbuch der Psychologie, Band 7.1) [Social Psychology (Handbook of Psychology, Vol. 7.1)]. C. J. Hogrefe, Goettingen, Germany
  4. Bredenkamp J 1980 Theorie und Planung psychologischer Experimente [Theory of Psychological Experiments and their Planning]. Steinkopff, Darmstadt, Germany
  5. Bredenkamp J 1982 Verfahren zur Ermittlung des Typs der statistischen Wechselwirkung [Procedures for identifying the type of statistical interaction]. Psychologische Beitraege 24: 56–75, 309
  6. Campbell D T, Stanley J C 1963 Experimental and quasiexperimental designs for research on teaching. In: Gage N L (ed.) Handbook of Research on Teaching. Rand McNally & Company, Chicago, IL
  7. Carlsmith J M, Ellsworth P C, Aronson E 1976 Methods of Research in Social Psychology. Addison-Wesley Publishing Company, Reading, MA
  8. Chapin F S 1947 Experimental Designs in Sociological Research. Harper, New York and London
  9. Cohen J 1988 Statistical Power Analysis for the Behavioral Sciences. 2nd edn. L. Erlbaum Associates, Hillsdale, NJ
  10. Cook T D, Campbell D T 1979 Quasi-Experimentation—Design & Analysis for Field Settings. Rand McNally College Pub. Co., Chicago
  11. Cox D R 1961 Design of experiments: The control of error. Journal of the Royal Statistical Society Series A 124: 44–8
  12. Edgington E S 1995 Randomization Tests, 3rd rev. and expanded edn. M Dekker, New York
  13. Erdfelder E, Bredenkamp J 1994 Hypothesenpruefung [Hypothesis Testing]. In: Herrmann T, Tack W H (eds.) Methodologische Grundlagen der Psychologie (Enzyklopaedie der Psychologie, Band B I 1) [Methodological Foundations of Psychology (Encyclopedia of Psychology, Vol. B I 1)]. Hogrefe, Goettingen, Germany
  14. Feldt L S 1958 A comparison of the precision of three experimental designs employing a concomitant variable. Psychometrika 23: 335–53
  15. Friedman N 1967 The Social Nature of Psychological Research: The Psychological Experiment as a Social Interaction. Basic Books, New York
  16. Gadenne V 1976 Die Gueltigkeit psychologischer Untersuchungen [The Validity of Psychological Investigations]. Kohlhammer, Stuttgart, Germany
  17. Gadenne V 1984 Theorie und Erfahrung in der psychologischen Forschung [Theory and Experience in Psychological Research]. JCB Mohr, Tuebingen, Germany
  18. Hager W, Westermann R 1983 Planung und Auswertung von Experimenten [Planning and Analyzing Experiments]. In: Bredenkamp J, Feger H (eds.) Hypothesenpruefung (Enzyklopaedie der Psychologie, Band B I 5) [Hypothesis Testing (Encyclopedia of Psychology, Vol. B I 5)]. Hogrefe, Goettingen, Germany
  19. Keppel G 1973 Design and Analysis: A Researcher’s Handbook. Prentice-Hall, Englewood Cliffs, NJ
  20. Lakatos I 1970 Falsification and the methodology of scientific research programs. In: Lakatos I, Musgrave A (eds.) Criticism and the Growth of Knowledge. Cambridge University Press, Cambridge, UK
  21. Neyman J 1950 First Course in Probability and Statistics. Holt, New York
  22. Popper K R 1982 Logik der Forschung. Mohr, Tuebingen (1959 The Logic of Scientific Discovery. Hutchinson & Co., London)
  23. Rosenthal R, Rosnow R L (eds.) 1969 Artifact in Behavioral Research. Academic Press, New York
  24. Westmeyer H 1998 Psychologische Methoden und soziale Konventionen [Psychological Methods and Social Conventions]. In: Klauer K C, Westmeyer H (eds.) Psychologische Methoden und soziale Prozesse [Psychological Methods and Social Processes]. Pabst, Lengerich, Germany