Quantitative Approaches to Communication Research Paper


Quantitative, social scientific communication research involves the application of a set of social scientific methods for testing defensible knowledge claims about human communication based on empirical data, statistical description, and statistical inference. As the name implies, quantitative approaches to communication research use numbers—more specifically, quantitative data—to draw conclusions about communicative phenomena. Various aspects of communication are quantified to assess their prevalence or to show systematic relationships between variables.



Common examples of quantitative communication research include survey research, content analysis, and experimental research. Quantitative methods are used to investigate all types and aspects of communication, and they are widely used in research on interpersonal communication, mass media, new technology, cross-cultural communication, and organizational communication, to name a few areas.

The term quantitative approach implies a contrast between quantitative research and qualitative research. The former seeks to quantify constructs of interest, whereas the latter does not. Qualitative research is sometimes portrayed as more exploratory, being useful in generating new ideas and understandings, while quantitative research is often seen as involving more formal hypothesis testing. Both quantitative and qualitative research, however, can serve either function. Generally, quantitative research is useful when the phenomena of interest can be classified either as present or absent or when the phenomena have measurable attributes that vary in degrees or amounts. If something can be measured or counted, a quantitative approach can be used. A primary advantage of quantitative research is that statistical evidence can be used to enhance confidence in a knowledge claim. A second advantage is that many quantitative methodologies offer mechanisms to control nuisance variables, ruling out rival explanations and enhancing confidence in knowledge claims.

More useful than the quantitative/qualitative distinction, however, is a broader one between social scientific approaches and nonscientific approaches such as rhetorical criticism, postmodern analysis, feminist scholarship, and critical scholarship. What makes an approach social scientific or not rests on issues other than whether or not numbers are involved. Science-based and nonscientific modes of research reflect very different understandings about the nature of knowledge, how knowledge is generated, what is useful to know, and how we can have confidence in what we know. The quantitative-qualitative distinction need not involve these deeper philosophical differences about the nature of knowledge and knowledge generation. Nevertheless, most social scientists find it useful to use quantitative methods at least some of the time, and the use of quantitative methods usually implies a social scientific approach to knowledge generation, whereas qualitative research may or may not be social scientific in character.

Philosophical Underpinnings

Most quantitative social scientific research adopts the philosophical approach of scientific realism (Pavitt, 2001, chap. 1). The presumption is that there is a real world that exists beyond our perceptions that is potentially, and at least partially, knowable. The goal of research is to get our understanding more closely aligned with this objective reality. The word verisimilitude describes this idea of closeness to reality. In quantitative research, we want our theories and findings to have verisimilitude, and the extent to which we can make a case that our theories and findings have verisimilitude is the bottom line in quantitative communication science.

Sometimes quantitative social science approaches are mistakenly equated with logical positivism or operationalism, but these problematic philosophical perspectives have long been out of favor (Meehl, 1986) and never held much sway in quantitative communication research anyway (Miller & Berger, 1999). Logical positivism was a philosophical view holding that the only meaningful knowledge claims are those that can be verified by objective observation or established by logical proof. Operationalism is a view of measurement that equates attributes of things with their measures. For example, to an operationalist, communication apprehension is a score on a communication apprehension scale. The idea from Karl Popper (1959, chap. 1) that hypotheses and theories need to be falsifiable, however, is both useful and widely accepted. That is, for ideas to be scientifically useful, they must be testable.

Quantitative social scientific research is usually empirical, meaning that knowledge claims are based on data and the data stem from observation. Quantitative social scientific research also strives for objectivity. Complete objectivity is impossible to obtain, but methods are designed and evaluated by the extent to which the data are likely free from bias and the personal idiosyncrasies of the researcher. Finally, quantitative social scientific research strives to be self-correcting. Confidence in a finding or conclusion is enhanced through replication, and it is presumed that incorrect conclusions will ultimately be rejected because they fail to replicate. A replication is essentially a retest. If a finding has verisimilitude, other researchers should be able to produce the finding under conditions similar to the original research. For the social scientist, objective empirical observation, coupled with replication, provides the best path to verisimilitude over time.

Research predictions and knowledge claims in quantitative communication research are usually probabilistic in nature, general within some specified conditions, and contextualized to those specified conditions. Knowledge claims are usually probabilistic in that they are often based on statistical inferences that provide estimates of how likely or unlikely the data are given some set of assumptions. Findings and conclusions are general in that they tell us what is usual or typical within a situation or context. For example, a finding might tell us that people tend to be truth biased and are more likely to believe other people whether or not the other people are actually honest (see Chapter 52, "Deception," for a review of research on deception). Such a finding does not imply that people believe everything they hear or that they never think others are lying; it is just that this tends to be the case on average. Finally, knowledge claims are contextualized in that they have boundary conditions, that is, conditions (specified or unknown) under which they do and do not apply. For example, truth bias is not expected to hold in situations where the person whose message is being judged has a strong motive to lie and that motive is known to the person doing the judging.

Types of Quantitative Research

Quantitative communication research can take many forms, and there are many ways to distinguish between different types of research. One common distinction is between basic and applied research. The main purpose of basic research is to advance knowledge and understanding. This knowledge may have practical implications, but that is of secondary concern. Rather, the primary purpose of basic research is to develop or test theory or to answer some question suggested by informed curiosity. The bottom line in basic research is learning something new and enhancing understanding. Applied research, in contrast, seeks to solve some real-world problem or test the utility of some real-world solution. Applied research is often divided into formative research and evaluation research. Formative research is used to generate knowledge that will aid in developing an application, and evaluation research seeks to test the effectiveness of something. For example, in health communication campaigns, one might do formative research in developing the campaign and evaluation research after the campaign has been implemented to assess its impact in terms of effectiveness and unintended consequences.

Another distinction that is often made is between theoretical research and exploratory research. Truly theoretical research seeks to pit different theories against one another in order to test which one applies in some context, test hypotheses that are deduced from a theory, test the boundary conditions of a theory, or develop a theory. The primary advantages of theory include helping a researcher prioritize among variables and hypotheses and providing an avenue for generalization that cannot be achieved empirically.

Exploratory research, in contrast, is guided by informed curiosity rather than formal theory. Exploratory research simply tries to assess whether there are reliable differences or associations. Although exploratory work is sometimes devalued relative to theoretical research, many important discoveries have been stumbled upon by accident.

Other ways to classify types of quantitative research rest on the type of methodology used. Typology formation studies and content analyses seek to classify communication phenomena and study frequency and prevalence. Mass survey research investigates public opinion using surveys given to random or representative samples. Paper-and-pencil questionnaire research often seeks to assess the correlations among communication concepts. Both experimental and quasi-experimental studies are common in communication journals. Communication research sometimes involves physiological measurement, such as brain scans. Archival data can be analyzed with quantitative methods. In short, quantitative communication research is topically and methodologically diverse and can be applied to anything that can be quantified.

The Basic Elements of Quantitative Research

In this section, the basic elements of quantitative research are summarized. The basic elements include constructs, variables, variance, and how variables are related to one another.

Constructs and Variables

Quantitative research involves variables. Variables are symbols to which numerals or numbers are assigned. Variables are also observable things that vary or that can take on different values. In this sense, variables are contrasted with both constants and constructs. Constants are things whose values are fixed; they do not vary. Constructs are conceptual or theoretical entities that exist in the mind of researchers, whereas variables are observable. For example, the idea of communication apprehension is a construct, while the score on a communication apprehension scale is a variable.

Quantitative researchers are interested in constructs, usually how two or more constructs are related to each other. Constructs are ideas that are the topic of study. To research constructs, they must be measured and values must be assigned. The resulting values comprise variables, and relationships among variables can be tested, often with statistical analyses. When the variables are found to be statistically related in some manner, then it is inferred that the constructs are likewise related in a similar manner. Thus, constructs are (albeit imperfectly) measured, and when values are assigned to represent these constructs, we call the resulting collection of values a variable. Variables are tested for statistical association or relationship, and inferences are made about how constructs are related based on observed relationships among the corresponding variables.

When statistical analyses are used to test the relationships among variables and some variables are conceived of as predictors or causes of other variables, the variables that are the predictor or cause variables are called independent variables, while the variables that are predicted effects or outcomes are called dependent variables. Often, the notation x is used to refer to the independent variable and y to the dependent. When graphing the relationship between x and y, x is plotted on the horizontal axis and y on the vertical.

Independent variables can be further classified as active or measured variables. The values of an active independent variable are set by the researcher. That is, they are induced or manipulated. This is not true for measured variables, whose values are not under the researcher's control. Dependent variables are always measured and never active.

Variance

The extent to which a variable varies is called variance. The more scores differ from one another, the more they vary, and hence the greater the variance. Statistically, variance has a more precise meaning. Variance refers to the average squared amount by which scores differ from the average score.
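
To make the definition concrete, here is a minimal Python sketch that computes the variance of a small set of made-up scores as the average squared deviation from the mean (when estimating a population's variance from a sample, the sum is divided by n - 1 rather than n):

```python
# Variance: the average squared amount by which scores differ
# from the average score (made-up scores, for illustration only).
scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = sum(scores) / len(scores)                  # the average score
squared_deviations = [(s - mean) ** 2 for s in scores]
variance = sum(squared_deviations) / len(scores)  # use n - 1 for a sample estimate

print(mean)      # 5.0
print(variance)  # 4.0
```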

Variance may be the single most important concept in quantitative research. Obviously, not all people are the same. Situations, too, differ from one another, and messages vary as well. Quantitative researchers want to know why, when, and how much things vary. This is often done by seeing if and how the variable we are interested in varies systematically with something else. That is, much, if not most, quantitative communication research seeks to predict and/or explain how some variable of interest is related to another variable or variables of interest. This involves demonstrating that the variance in one variable is systematically related to the variance in another variable. When variables are related, that is, when one predicts, causes, or is associated with another according to some specified function, the variables are said to covary. If x is an independent variable and y a dependent variable, we can say that y is a function of x. Symbolically, y = f(x). The trick, of course, is to know the function. Nevertheless, regardless of the specific function, it is a fundamental law of quantitative research that variance is required for covariance. That which does not vary cannot covary. In short, most quantitative research is about understanding variance (e.g., why people differ from one another in some way), and understanding variance requires having variance to observe.
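
The rule that variance is required for covariance can be demonstrated directly. In the following illustrative Python sketch (all values are invented), a variable that varies covaries with y = f(x), while a constant has zero covariance with everything:

```python
# "That which does not vary cannot covary": a constant has zero
# covariance with any variable (all values invented for illustration).
def covariance(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

x = [1, 2, 3, 4, 5]      # an independent variable that varies
y = [2, 4, 6, 8, 10]     # y = f(x) = 2x, a simple linear function
c = [3, 3, 3, 3, 3]      # a constant

print(covariance(x, y))  # 4.0 -> x and y covary
print(covariance(x, c))  # 0.0 -> the constant cannot covary with x
```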

The flip side of variance is constancy. Constants are also extremely important in quantitative research because they are central to the idea of control. Because that which does not vary cannot covary, constants cannot be related to anything. Hence, holding something constant is a surefire way to control its impact. What researchers try to do is to assess the variance and covariance of the variables of interest while holding constant as much else as possible. Because constants never affect other things, they provide the best mechanism for the “control” of nuisance variance in research. This is the best way to rule out rival explanations so we can understand what really leads to what.

These principles of constancy and covariance provide the conceptual basis for experimentation. If some variable x has a causal impact on some other variable y, then changes in x will systematically produce changes in y. In an experiment, the researcher systematically alters the values of x and observes values of y. If values of y systematically change when x is changed but y stays constant when x is constant, then evidence that x leads to y is obtained. Other potential causes of y are held constant so that they cannot have an impact on y and so that the impact of x can be isolated. The tighter the control over nuisance variables, the stronger the inference that is obtained from observing y vary as a function of induced or created variance in x.

Constancy, variance, and covariance are also central in nonexperimental quantitative research. In nonexperimental research, variance is observed rather than created, and statistical analyses are used to document differences or association. Again, variance is essential because variance is required for covariance. Kerlinger and Lee (2000) offer a nice basic introduction to variables and variance in quantitative social science.

Relationships Among Variables

Variables can be related to each other in a variety of different ways. Given that the goal of quantitative communication research is usually to document and explain how variables are related, knowing about different types of relationships between variables is essential.

One possibility is that no relationship exists. That is, the variables are completely unrelated, and there is no covariance. Statistics cannot be used to prove the lack of a relationship, but statistical techniques such as meta-analysis or equivalence tests can show that a relationship is not strong or substantial (Levine, Weber, Park, & Hullett, 2008).

If variables are related, the simplest possibility is that variance in one variable causes variance in the other. This situation is called a direct causal relationship. Documenting a direct causal relationship requires showing that (a) the variables covary, (b) the cause variable precedes the effect variable in time, and (c) the effect is not explainable by some other variable called a spurious cause. If some other variable causes both the independent and dependent variables, then it will look like there is a direct relationship when there is not. The relationship is said to be spurious. A well-known example is that towns with more churches tend to have more bars. It would be a mistake, however, to conclude that churchgoing and alcohol consumption are causally related based on such an association. Obviously, both are related to population size. Larger towns tend to have more of everything. Cook and Campbell (1979) offer an excellent discussion of the concept of causation.

Sometimes direct causal relationships are strung together, so that variable x leads to variable y, and y, in turn, leads to z. This is called a mediated relationship, and y is said to mediate the relationship between x and z. Mediated relationships are sometimes confused with moderated relationships. A moderated relationship exists when the effect of an independent variable on a dependent variable varies as a function of a third variable. That is, the strength or direction of the focal relationship of interest itself varies. For example, if the relationship between self-disclosure and liking is stronger for women than for men, then sex moderates the effect of self-disclosure on liking. Evidence for a moderator is reflected in a statistical interaction effect. In the example, there is a two-way interaction between self-disclosure and sex on liking. Baron and Kenny (1986) is the most cited reference on mediated and moderated relationships.
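
One common way to test for moderation is to include an interaction term in a regression model. The sketch below uses Python's statsmodels formula interface; the data frame and its column names (liking, disclosure, sex) are invented for illustration:

```python
# Moderation as a statistical interaction: does the effect of
# self-disclosure on liking depend on sex? (invented data)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "liking":     [4, 5, 6, 7, 3, 4, 6, 8, 5, 7],
    "disclosure": [1, 2, 3, 4, 1, 2, 3, 4, 2, 4],
    "sex":        ["F", "F", "F", "F", "M", "M", "M", "M", "F", "M"],
})

# "disclosure * sex" expands to disclosure + sex + disclosure:sex;
# a significant disclosure:sex coefficient is evidence of moderation.
model = smf.ols("liking ~ disclosure * sex", data=df).fit()
print(model.summary())
```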

Quantitative Research Design and Measurement

Research Design

Good research requires planning, and the research plan is called the research design or the method. Research designs can be classified into experimental, quasi-experimental, and nonexperimental designs. Experiments require (a) at least one active independent variable, (b) at least one comparison or control group, and (c) that participants are randomly assigned to experimental conditions. Designs that meet the first two conditions but lack random assignment are called quasi-experiments. Nonexperimental research includes only measured variables. A primary advantage of experimental designs is that they tend to provide better evidence of causal relationships. Because experiments and quasi-experiments involve active independent variables, time ordering is known. Also, the use of comparison and control groups helps control and rule out spurious nuisance variables. Finally, the random assignment that is required for true experiments offers an additional degree of control over nuisance variables.

The quality of a research design is typically assessed in terms of internal and external validity. Internal validity refers to how much confidence we have that variation in the dependent variable is really attributable to the independent variable and not some spurious, nuisance variable. External validity refers to the extent to which findings can be generalized to other people, situations, and times. Obviously, without internal validity, external validity is moot. Also, theory-based quantitative research is often aimed at testing generalizations rather than making generalizations (Mook, 1983). Campbell and Stanley (1963) is the classic work on design validity, and an updated treatment is offered by Cook and Campbell (1979). (See also Brewer, 2000; Smith, 2000.)

Measurement

To quantify a construct and enable observation, constructs must be measured. Measurement is the act of assigning numbers or numerals to represent attributes of people, objects, or events.

Measurement can be either categorical or continuous. The distinction between categorical and continuous measurements is important for determining both what type of statistical analyses make sense and how to interpret results. Sometimes measurement is discussed as forming four levels: nominal, ordinal, interval, and ratio.

Categorical measurement can be either binary, that is, present or absent, or involve placing things into categories. Content analysis often involves categorical measurement. Categorical measurement is also called nominal measurement. In nominal measurement, everything that is the same gets the same value, and things that are different must get different values, with each category getting its own value. In nominal measurement, the values assigned to represent categories do not have numerical meaning, and the values do not imply quantity or ordering.

In continuous measurement, scores reflect at least rank ordering. That is, values reflect more or less of something. Self-report scales are a common type of continuous measure.

Regardless of the type of measure, reliable and valid measurement is essential for good quantitative research. Measurement reliability has several meanings in quantitative research. Perhaps the most common is the extent to which a measure is free from random response error. When researchers report Cronbach's alpha, for example, the higher the value, the less random response error there is in the measure. Researchers want random response error to be as small as possible because random errors create variance that cannot be explained by other variables, making observed relationships artificially small. This type of reliability is most often encountered when researchers are using multiple-item scales to measure a construct, and it is sometimes called internal consistency reliability.
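
For readers who want to see the computation, here is an illustrative Python sketch of Cronbach's alpha for a four-item scale; the response matrix is made up:

```python
# Cronbach's alpha: internal consistency of a multiple-item scale.
# Rows are respondents, columns are items (made-up responses).
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))  # close to 1.0 here because the items track together
```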

Reliability also sometimes refers to repeatability. In this sense, a measurement is reliable to the extent to which it is stable over time. This is sometimes called test-retest reliability. Finally, there is intercoder reliability.

Intercoder reliability assesses the extent to which two or more coders or judges agree in rating or classifying something, adjusted for chance agreement. Cohen's kappa, Scott's pi, and Krippendorff's alpha are common measures of intercoder reliability. Intercoder reliability is often encountered in content analysis and observational research.
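
As an illustration, Cohen's kappa can be computed with scikit-learn; the two coders' judgments below are invented:

```python
# Cohen's kappa: agreement between two coders, corrected for chance.
# The codes below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["lie", "truth", "truth", "lie", "truth", "lie", "truth", "truth"]
coder_b = ["lie", "truth", "lie",   "lie", "truth", "lie", "truth", "truth"]

# Raw agreement is 7/8 = .875; kappa adjusts for chance agreement.
print(cohen_kappa_score(coder_a, coder_b))  # 0.75
```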

Measurement validity refers to how closely the values produced by a measure reflect the thing being measured. That is, a measure is valid to the extent that there is fidelity between scores and that which the scores are meant to represent. There are many subtypes of measurement validity. Researchers talk of face validity, content validity, construct validity, structural validity, convergent validity, and discriminant validity. Each of these reflects different ways of getting at the question “Are we really measuring what we think we are measuring?” Obviously, the results of research cannot be any more valid than the measures used to generate the findings. Thus, measurement validity is essential.

Nunnally and Bernstein (1994) and Pedhazur and Schmelkin (1991) offer especially good treatments of measurement in social science research.

Statistical Analysis

Statistics play an essential role in quantitative communication research, and virtually all quantitative communication research uses some form of statistical analysis to help understand data. Statistical analyses can be divided into descriptive statistics and inferential statistics. The goal of descriptive statistics is to characterize and summarize existing data. Inferential statistics, on the other hand, are used to move beyond the results of a specific study and make more general claims. Abelson (1995) offers a well-written and accessible treatment of statistical analysis in social science.

Descriptive Statistics

Common uses of descriptive statistics fall under at least five different categories. The first is raw counts and percentages, which are often used in conjunction with nominal data. These tell us about the frequency or prevalence of something and are very common in content analysis or public opinion survey work. For example, we might want to know how much violence there is on television. This type of analysis can also be used to show relative frequency by breaking down percentages within different categories. For example, a researcher might report separate frequencies for children's shows and prime-time dramas.

A second common category of descriptive statistics conveys information about central tendency. Central tendency measures include the mean, the median, and the mode. The mean is an average obtained by summing scores and dividing by the number of scores. The median is the middle score when scores are ranked from highest to lowest. The mode is simply the most frequently occurring score(s). In highly skewed distributions, the median is usually preferred over the mean because the mean is sensitive to extreme scores. A skewed distribution is asymmetrical, with extreme scores tending to fall in one direction. An example is how often people lie per day. According to an unpublished national survey, the average number of lies told per day is about 1.6. Sixty percent of people, however, reported telling no lies at all. Because 4% of the people surveyed reported telling more than 10 lies per day, the mean of 1.6 lies per day departs from the median and mode, which were 0. In this case, the median and mode tell us more about the average person.
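
The lying example can be reproduced in miniature. In the sketch below, the counts are invented but loosely patterned on the survey described above:

```python
# Central tendency in a skewed distribution: lies told per day.
# Counts are invented, loosely patterned on the survey described above.
from statistics import mean, median, mode

lies_per_day = [0] * 60 + [1, 1, 1, 2, 2, 3] * 6 + [12, 15, 20, 25]

print(mean(lies_per_day))    # 1.32 -> pulled upward by a few heavy liars
print(median(lies_per_day))  # 0   -> a better summary of the typical person
print(mode(lies_per_day))    # 0
```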

The third type of descriptive statistics is measures of dispersion. Measures of dispersion tell us about how much variability there is in the data, and the two most common ways to assess variability are the variance and the standard deviation. The variance was described earlier. The standard deviation is the square root of the variance and can be thought of as an approximation of the average amount that scores differ from the mean.

A fourth type of descriptive statistics conveys information about the shapes of distributions. When scores are lumped toward the low or high end rather than being symmetrical, the data are said to be skewed. Kurtosis refers to the steepness or flatness of a distribution. Frequency distributions, histograms, and stem-and-leaf plots are common ways to show the shape of the distribution of a single variable. As the example about lying above demonstrated, describing the distribution of scores is often essential to understanding what is going on with some data. When examining how two variables interrelate, a scatterplot is used to show how they covary. In a scatterplot, scores on the dependent variable are plotted on the vertical axis, scores on the independent variable are plotted on the horizontal axis, and each data point appears as a dot.

A final common category of descriptive statistics is called measures of effect size. Measures of effect size tell us how strongly two variables covary. The correlation, the squared correlation, the multiple correlation, d, and eta squared (η²) are common measures of effect size.
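
Two of these measures are simple enough to compute by hand. The following sketch computes a correlation and Cohen's d from made-up scores:

```python
# Two common effect sizes (made-up scores for illustration).
import numpy as np

# Correlation r: strength of linear association between two variables.
x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([2, 1, 4, 3, 6, 5])
print(np.corrcoef(x, y)[0, 1])

# Cohen's d: a standardized difference between two group means.
group1 = np.array([5, 6, 7, 8])
group2 = np.array([3, 4, 5, 6])
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
print((group1.mean() - group2.mean()) / pooled_sd)
```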

Inferential Statistics

The other major type of statistics is inferential statistics. Inferential statistics are used to make claims that go beyond one’s current data. They can be used to make inferences about a population based on sample data or to rule out chance factors as rival explanations for findings. The two most common types of inferential statistics in communication research are null hypothesis significance tests (significance testing for short) and confidence intervals.

By far, the most common use of inferential statistics in the social sciences and communication research is the null hypothesis significance test. The purpose of significance testing is to test a hypothesis against chance. It is called null hypothesis significance testing (NHST) because the researcher’s hypothesis is pitted against its negation, called the null hypothesis. If the observed results differ from what is expected under the null hypothesis with some specified degree of confidence (usually 95%), then support for the researcher’s hypothesis is inferred.

In conventional significance testing, there are two mutually exclusive and exhaustive statistical hypotheses, the null (H0) and the alternative (H1). The alternative hypothesis reflects a researcher’s predictions and is usually stated in a research article. The null hypothesis is the negation of the alternative hypothesis. For example, if a researcher predicts a difference between two means, the alternative hypothesis is that the two means are different, and the null is that the means are exactly equal. The null hypothesis is seldom stated in research reports, but its existence is always implied in NHST. Usually, the null hypothesis is simply that there is no difference or no association.

A researcher selects a single arbitrary alpha level up front, usually the conventional α = .05. The smaller this alpha level, the more confidence one can have in a result that is "significant." With α = .05, 95% confidence is claimed. Once data are collected, a test statistic (e.g., t, F, χ²) is computed, and its corresponding p value is calculated, most often by computer. The p value indicates the probability of obtaining a value of the test statistic that deviates as extremely (or more extremely) as it does from the null hypothesis prediction if the null hypothesis were true for the population from which the data were sampled. If the p value is less than or equal to the chosen alpha, then the null hypothesis is rejected on the grounds that the observed pattern of the data is sufficiently unlikely conditional on the null being true. That is, if the data are sufficiently improbable if the null were true, it is inferred that the null is likely false. Because the statistical null hypothesis and the statistical alternative hypothesis are written so that they are mutually exclusive and exhaustive, rejection of the null hypothesis provides the license to accept the alternative hypothesis reflecting the researcher's substantive prediction. If, however, the obtained p value is greater than alpha, the researcher fails to reject the null, and the data are considered inconclusive. Null hypotheses are never accepted. Instead, one makes a binary decision to reject or to fail to reject the null hypothesis based on the probability of the test statistic conditional on the null being true.

In short, a statistically significant result is one that is unlikely to be obtained by chance if the null is true. So when a research report describes a finding as "significant at p < .05," that means we can have 95% or better confidence that the true effect is not exactly zero, presuming that the test was done correctly.
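
The full NHST sequence, from data to a reject or fail-to-reject decision, can be expressed in a few lines. The following Python sketch uses SciPy's independent-samples t test on made-up scores:

```python
# NHST in miniature: an independent-samples t test (made-up scores).
from scipy import stats

group1 = [4, 5, 6, 5, 7, 6, 5, 4]
group2 = [3, 4, 3, 5, 4, 3, 4, 2]

alpha = 0.05                            # chosen before looking at the data
t, p = stats.ttest_ind(group1, group2)  # test statistic and its p value

if p <= alpha:
    print(f"t = {t:.2f}, p = {p:.3f}: reject the null hypothesis")
else:
    print(f"t = {t:.2f}, p = {p:.3f}: fail to reject the null (inconclusive)")
```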

Many people find the logic of significance testing confusing, and critics find much fault with the approach (Levine, Weber, Hullett, Park, & Lindsey, 2008). Nevertheless, significance testing is the statistical approach most often taught in research methods classes; it is the approach used by the major statistical software packages such as SPSS, and it is what most communication researchers use to test their hypotheses.

Statistical hypothesis testing in communication research most often takes one of two basic forms. One form tests for differences between two or more means or percentages, and the other tests for a linear association between two or more variables. Chi-square tests, t tests, and analysis of variance (ANOVA) test for differences, while correlation and regression test for association. Which specific type of statistical test is used depends on whether a researcher is interested in documenting differences or association, the number of variables involved, and whether the variables are categorical or continuous. When testing for differences, if the variables are count data or percentages, chi-square can be used. To test the difference between two means, t tests are used. ANOVA is used to look for differences among three or more means or when there is more than one categorical independent variable and a single continuous dependent measure. Correlations test the association between two continuous variables. Multiple regression is used when there are two or more independent variables. When multiple dependent variables are tested, multivariate analyses such as MANOVA or canonical correlation are needed.
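
As an illustration of this matching of question to test, the sketch below runs a chi-square test, a one-way ANOVA, and a correlation with SciPy; all data are made up:

```python
# Matching questions to tests (all data made up for illustration).
from scipy import stats

# Counts in categories -> chi-square test of independence.
observed = [[20, 30], [35, 15]]
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

# Differences among three means -> one-way ANOVA.
f, p_anova = stats.f_oneway([4, 5, 6], [6, 7, 8], [8, 9, 10])

# Association between two continuous variables -> correlation.
r, p_corr = stats.pearsonr([1, 2, 3, 4, 5], [2, 4, 5, 4, 6])

print(chi2, f, r)
```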

The second main approach to inferential statistics is the confidence interval. Confidence intervals are used to make inferences about a population based on sample data. A population is the entire collection of units under consideration, and a sample is some subset of that population. For example, all registered voters in the United States are a population of interest, and we might study voters' opinions by taking a random sample of voters. Confidence intervals are used to estimate a range of values within which the population value is likely to fall given some sample data.

Readers are likely to be familiar with news coverage of opinion polls. Poll reports typically state that some percentage of people think such and such, with some margin of error. For example, 24% (±2%) of Americans surveyed might think that Congress is doing a good job. That plus or minus, the margin of error, reflects the confidence interval. Roughly speaking, it means we can be 95% confident that if all Americans were surveyed, the percentage obtained would fall within that range.
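
The arithmetic behind such a margin of error is straightforward. Here is an illustrative Python sketch of a 95% confidence interval around a sample percentage; the approval figure and sample size are invented:

```python
# A 95% confidence interval for a poll percentage (invented values).
import math

p_hat = 0.24  # 24% approval in the sample
n = 1000      # number of respondents

standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
margin_of_error = 1.96 * standard_error  # 1.96 is the z value for 95%

lower, upper = p_hat - margin_of_error, p_hat + margin_of_error
print(f"24% +/- {margin_of_error:.1%} -> ({lower:.1%}, {upper:.1%})")
```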

The validity of confidence intervals rests directly on the quality of the sampling, because samples need to be representative of the population for good inference. As a consequence, survey research that aims to be informative about populations has developed sophisticated sampling methods designed to ensure the representativeness of the sample.

Other Statistical Analyses

A number of other types of statistical analyses are evident in quantitative communication research, and most researchers have a wide statistical repertoire. Readers of research are likely to encounter factor analysis, path analysis, structural equation modeling, network analysis, and meta-analysis, to name a few. Factor analysis is used to find patterns in correlations and is most often encountered in measurement validation research. Exploratory factor analysis and principal components analysis are used to see if variables can be collapsed into a more parsimonious set, while confirmatory factor analysis is used to test whether items designed to measure a construct intercorrelate in the way intended. Path analysis and structural equation modeling are used to test causal models. Network analysis is used to assess linkages between people or other entities and is therefore very useful in communication research.

Another very useful analysis is meta-analysis. Meta-analysis is essentially a study of studies. It is a set of statistical analyses used to cumulate findings across studies. In meta-analysis, each study in a literature becomes a data point, so meta-analysis is very valuable in summarizing large literatures.

A Research Example

A research study that exemplifies many of the ideas presented here is McCornack and Levine's (1990) investigation of the effects of suspicion on deception detection accuracy among heterosexual college dating couples. A previous study found that as relationship closeness increased, people became less accurate in detecting their partners' lies because trust in a relational partner tended to make people believe their partners more often and, consequently, mistake lies for honest messages (McCornack & Parks, 1986). The McCornack and Levine (1990) experiment tested whether induced suspicion might overcome this bias. It was predicted that suspicion would improve accuracy up to a point but that too much suspicion would be counterproductive. That is, it was anticipated that moderate levels of suspicion would yield higher accuracy than either low or high suspicion. The reasoning was that when people lack suspicion, they miss the lies, but if they are too suspicious, they may mistake honesty for deception. Moderate suspicion might be just right.

The primary independent variable was state suspicion, which was conceptually defined in terms of information from an outside source suggesting that another person might not be honest. The dependent variable was deception detection accuracy, which referred to the extent to which people were able to correctly distinguish truths from lies. In all, 107 dating couples were recruited to participate in the experiment. On arriving at the communication interaction lab, the partners were separated. One partner from each dating couple was interviewed on videotape. They answered 12 questions about beliefs they held and were instructed to lie on 6 of the questions and give honest answers on the other 6. The videotape was then shown to their respective partners, who made truth or lie judgments about each of the 12 answers. Accuracy was calculated as the percentage of judgments that were correct. Thus, the actual percent correct score was the operational definition of accuracy.

State suspicion was experimentally varied by the instructions given, with one third of the participants assigned at random to each of the high-, moderate-, and low-suspicion conditions. Participants in the high-suspicion condition were told that their partner was definitely lying on some of the answers and that their job was to guess which ones were lies. In the moderate-suspicion condition, it was casually mentioned that their partner might not be completely honest. In the low-suspicion condition, no mention was made of lying, and the participants did not know that the study was about deception. Thus, the instructions served as the operational definition of state suspicion. This study was a true experiment because state suspicion was an active independent variable under experimenter control, participants were randomly assigned to condition, and the low-, moderate-, and high-suspicion conditions provided a basis for comparison.

The results were tested with ANOVA and were consistent with the hypothesis. The highest accuracy was observed in the moderate-suspicion condition (64.6%), and this value was statistically greater than the accuracy in either the low-suspicion (53.2%) or the high-suspicion (57.2%) condition. The differences were statistically significant at p < .05. These findings, however, have not since been replicated, so we can have only limited confidence in the results. A replication of the findings is under way.

Conclusion

Quantitative approaches to communication research use numbers to help us understand how people communicate. Quantitative communication research applies a set of social scientific methods for testing defensible knowledge claims about human communication based on empirical data, statistical description, and statistical inference.

Although quantitative approaches are usually contrasted with qualitative work, a more meaningful distinction is between science-based and nonscientific approaches to communication. Quantitative research almost always implies a scientific approach to understanding communication.

Quantitative research encompasses a diverse collection of topics and methods. Quantitative research can be applied to all topic areas in communication. It can be used in basic and applied research and in theoretical and exploratory research. Any aspect of communication that can be quantified can be studied with a quantitative approach.

The key skills for a quantitative researcher include research design, measurement, and statistical analysis. All three are essential. The goal of scientific research is to get our understanding closer to the truth of how things really are, and the quality of design, measurement, and analysis is what allows quantitative communication researchers to make defensible knowledge claims and increase our collective understanding of how humans communicate.

Bibliography:

  1. Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ: LEA.
  2. Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.
  3. Boster, F. J. (2002). On making progress in communication science. Human Communication Research, 28, 473–490.
  4. Brewer, M. B. (2000). Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in personality and social psychology (pp. 3–16). Cambridge, UK: Cambridge University Press.
  5. Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
  6. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.
  7. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation. Boston: Houghton Mifflin.
  8. Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–301.
  9. Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research. Fort Worth, TX: Harcourt.
  10. Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196–217.
  11. Kirk, R. E. (1995). Experimental design. Pacific Grove, CA: Brooks/Cole.
  12. Krippendorff, K. (2004). Content analysis. Thousand Oaks, CA: Sage.
  13. Lakatos, I. (1978). The methodology of scientific research programmes. Cambridge, UK: Cambridge University Press.
  14. Levine, T. R., Weber, R., Hullett, C. R., Park, H. S., & Lindsey, L. (2008). A critical assessment of null hypothesis significance testing in quantitative communication research. Human Communication Research, 34, 171–187.
  15. Levine, T. R., Weber, R., Park, H. S., & Hullett, C. R. (2008). A communication researcher's guide to null hypothesis significance testing and alternatives. Human Communication Research, 34, 188–209.
  16. Maxwell, S. E., & Delaney, H. D. (1990). Designing experiments and analyzing data. Pacific Grove, CA: Brooks/Cole.
  17. McCornack, S. A., & Levine, T. R. (1990). When lovers become leery: The relationship between suspicion and accuracy in detecting deception. Communication Monographs, 57, 219–230.
  18. McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication yearbook 9 (pp. 377–389). Beverly Hills, CA: Sage.
  19. Meehl, P. E. (1986). What social scientists don't understand. In D. W. Fiske & R. A. Shweder (Eds.), Meta-theory in social science (pp. 315–338). Chicago: University of Chicago Press.
  20. Miller, G. R., & Berger, C. R. (1999). On keeping the faith in matters scientific. Communication Studies, 50, 221–231. (Original work published 1978)
  21. Miller, G. R., & Nicholson, H. E. (1976). Communication inquiry. Reading, MA: Addison-Wesley.
  22. Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379–387.
  23. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York: McGraw-Hill.
  24. Pavitt, C. (2001). The philosophy of science and communication theory. Huntington, NY: Nova.
  25. Pedhazur, E. J., & Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Mahwah, NJ: LEA.
  26. Phillips, D. C. (2000). The expanded social scientist's bestiary: A guide to fabled threats to, and defenses of, naturalistic social science. Oxford, UK: Rowman & Littlefield.
  27. Popper, K. R. (1959). The logic of scientific discovery. New York: Routledge.
  28. Rozin, P. (2001). Social psychology and science: Some lessons from Solomon Asch. Personality and Social Psychology Review, 5, 2–14.
  29. Smith, E. R. (2000). Research design. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in personality and social psychology (pp. 17–39). Cambridge, UK: Cambridge University Press.
