Published in Marketing Letters, 12 (2), 2001, 171-187.

Hypotheses in Marketing Science: Literature Review and Publication Audit

J. Scott Armstrong
The Wharton School, University of Pennsylvania

Roderick J. Brodie and Andrew G. Parsons
University of Auckland, New Zealand

Abstract

We examined three approaches to research in marketing: exploratory hypotheses, dominant hypothesis, and competing hypotheses. Our review of empirical studies on scientific methodology suggests that the use of a single dominant hypothesis lacks objectivity relative to the use of exploratory and competing hypotheses approaches. We then conducted a publication audit of over 1,700 empirical papers in six leading marketing journals during 1984-1999. Of these, 74% used the dominant hypothesis approach, 13% used multiple competing hypotheses, and 13% were exploratory. Competing hypotheses were more commonly used for studying methods (25%) than models (17%) and phenomena (7%). Changes in the approach to hypotheses since 1984 have been modest; there was a slight decrease in the percentage of competing hypotheses, to 11%, which is explained primarily by an increasing proportion of papers on phenomena. Of the studies based on hypothesis testing, only 11% described the conditions under which the hypotheses would apply, and papers using dominant hypotheses did so less often than those using competing hypotheses. Marketing scientists differed substantially in their opinions about what types of studies should be published and what was published. On average, they did not think dominant hypotheses should be used as often as they were, and they underestimated their use.

1. Introduction

Some researchers have criticized the rate of progress in marketing science. Leone and Schultz (1980) concluded that few generalizations could be drawn from research in marketing. Hubbard and Armstrong (1994) found that few marketing findings were replicated or extended, thus limiting the ability to generalize; furthermore, when studies were replicated or extended, the findings often differed from those in the original studies. Anderson (1994) criticized the ability of marketing science to deliver solutions to business problems. Additional concerns have been expressed by the AMA Task Force on Marketing (1988), by Bloom's (1987) review of the quality of research on marketing, in a special issue of the Journal of the Academy of Marketing Science in 1992, in Wells' (1993) assessment of progress in consumer research, and in Bass' (1993) assessment of marketing science.

Why is it that progress in some sciences is more rapid than in others? Chamberlin addressed this issue in an 1890 paper (reprinted in Chamberlin, 1965). He concluded that the formulation of hypotheses has important effects on progress. His contention was that the more successful sciences used the method of multiple competing hypotheses. Platt (1964) examined scientific progress in molecular biology and concluded that Chamberlin was correct (he used the term "strong inference" for such studies). In contrast, McDonald (1992) argued that strong inference offers no advantages over testing single hypotheses.

We provide an exploratory study that first summarizes evidence on different approaches to the use of hypotheses. We then provide results from an audit of marketing science publications. Our purpose is to provide a basis for discussion of these topics. As we show, marketing scientists differ substantially in their beliefs about what should be published and what was being published.

2. Empirical Studies on the Use of Hypotheses

Following the scheme in Armstrong (1979), we discuss three approaches to the formulation of hypotheses: exploratory (inductive), a dominant (single) hypothesis, and multiple competing hypotheses. We searched for evidence on the effectiveness of these approaches. Because little evidence related solely to marketing studies, this search covered various areas of science.

2.1. Exploratory Approach

Exploratory (inductive) studies start with no formally stated hypotheses. This is appropriate when one has little explicit knowledge about a phenomenon. The purpose is to develop hypotheses as a result of the study. It might also be relevant for discovering alternative explanations in areas where much previous work has been done. This approach allows for a broad search for hypotheses and theories, and it may serve to aid a researcher's objectivity. However, this is not to say that it ensures objectivity, because a researcher might have unstated or subconscious beliefs that affect the search for evidence and its interpretation. Also on the negative side, the exploratory approach can be inefficient because it may not be clear what data to collect or how to do the analysis.

2.2. Dominant Hypothesis

A hypothesis provides a structure for the collection and analysis of data. It also aids in the cumulative development of knowledge by summarizing evidence from previous studies. One could argue that the use of a dominant hypothesis might be appropriate under the following conditions: (a) after the exploratory phase, to help refine a plausible hypothesis on a topic; (b) when it may not be feasible to develop competing hypotheses; (c) when it may be too costly to test alternatives; (d) when an efficient "market" for ideas exists, such as when parallel teams pursue solutions to the same problem at the same time, with well-established criteria for evaluation and good communication among teams; and (e) when the task is to clarify the conditions under which an accepted hypothesis holds.

The dominant hypothesis approach, designed to rule out a null hypothesis, often becomes a search for evidence to support a favored hypothesis. Null hypotheses are often selected to represent the absence of a relationship (or the ineffectiveness of a model or method) even when this is unreasonable. For example, studies have tested the null hypothesis that the purchase of automobiles is unrelated to the income of consumers. Cohen (1994) calls such an unreasonable null hypothesis a "nil hypothesis." That said, there are many occasions when a traditional null hypothesis is reasonable.

Our consideration of the dominant hypothesis includes variations on a theme, such as minor variations in a model. We also include the use of conditional nested hypotheses, by which a researcher tests the conditions under which the dominant hypothesis holds. Dunbar (1993) and Klayman and Ha (1989) show that comparisons of conditional nested hypotheses can be misleading: one might find that a hypothesis is more appropriate under certain conditions, while an excluded hypothesis might be superior under other conditions.

People often have difficulty making valid generalizations from data. One reason for the difficulty is that subjects pick a hypothesis and then look only for confirming evidence.
Bruner and Potter (1964) demonstrated this in their well-known experiment in which subjects tried to describe the image on a poorly focused slide. The clarity of the initial slide was varied: one group received slides that were in very poor focus, another group received slides in moderately poor focus, and the third received slides in medium focus. The experimenter then gradually improved the focus until it reached a level at which pretest subjects could identify the image 75% of the time. The group that started with the poorest focus tended to cling to their initial impressions, such that by the end of the study they could recognize only 23% of the items. Those with moderately poor focus raised their scores to 45%, and those with medium focus raised theirs to 60%. Note that all three groups were hindered by their prior hypotheses, because the pretest subjects were correct for 75% of the slides. When subjects were given more time to study the slides, those who started with the very blurred slide benefited least. One might think of this as an analogy to the discovery process in science: one starts with little evidence, develops a hypothesis, and then obtains better evidence. But if the additional evidence is subject to interpretation, one might cling to the original hypothesis.
Wason (1960, 1968) conducted experiments in which subjects had to determine the rule that an experimenter was using to generate sets of three numbers. The subjects tried "experiments" whereby they proposed new sets of three numbers and then received feedback on whether the data agreed with the rule. Subjects typically tried to confirm a favored hypothesis and were generally unsuccessful at learning the rule. Wason concluded that bias is introduced by the strategies people use to examine evidence. These studies, known as the "2-4-6" studies, have been widely replicated and extended.

Mynatt, Doherty, and Tweney (1978) showed that subjects seldom sought disconfirmation of their favored theories, and they often ignored information that falsified their theories. Other studies have shown that the use of a single hypothesis leads to a bias in the way that people evaluate evidence (Lord, Ross, and Lepper 1979; Jones and Russell 1980; Chapman and Chapman 1969).

The preceding studies in this section were done primarily with students. However, as shown below, many studies have been done using professionals and researchers.

Ben-Shakhar et al. (1998) provided clinicians with a battery of psychodiagnostic tests from the records of a Jerusalem hospital. The clinicians were asked to identify which patients were suffering from Paranoid Personality (PP) and which from Borderline Personality (BP). Most of the clinicians required several hours to make their judgments, and they were paid about 200 to do so. Half of the clinicians received a suggestion that the patient was suffering from PP, and half were told that it was BP. The battery of tests was designed to be neutral. The experts' conclusions showed strong agreement with the suggestion they had been given earlier.

Elaad, Ginton, and Ben-Shakhar (1994) found that prior expectations affected conclusions by polygraph examiners. As might be expected, this occurred only when the results contained some ambiguity.

Goldfarb (1995) concluded that the dominant hypothesis has detrimental effects on research in economics. He used an example from industrial organization to illustrate the problem. Before 1974, many researchers had concluded that industrial concentration raised profits. Demsetz (1974) published an influential attack and hypothesized that the superior efficiency of large firms was the cause of high profits. After this, the published regression results changed; they supported Demsetz, who called this phenomenon "believing is seeing."

Studies of scientific practice also suggest that the dominant hypothesis does not effectively promote objectivity. Greenwald et al. (1986) reviewed evidence from psychology and concluded that researchers display a confirmation bias that leads them to revise their procedures until they achieve a desired result. In some notorious cases, such as that of Cyril Burt, eminent researchers altered data to support their hypotheses (for a review, see Broad and Wade 1982).
The success of the dominant hypothesis approach in contributing to scientific generalizations rests on the assumption that other scientists will develop competing hypotheses and that the "marketplace of ideas" will, over time, lead to the selection of the best hypothesis. Mitroff (1972) studied procedures used by space scientists and concluded that the dominant hypothesis was beneficial to scientific advancement because the best idea would win in this market. However, Armstrong (1980) argued that Mitroff's conclusion did not follow logically from his evidence. The "marketplace" may not operate effectively. For example, Pollay (1984) concluded that no consensus arose after a decade of research using the Lydia Pinkham data to investigate the duration of the carry-over effect of advertising.

Rodgers and Hunter (1994) found that researchers investigating a favored hypothesis selectively deleted studies from a meta-analysis. Studies in medical research by Begg and Berlin (1989) and in marketing by Rust, Lehmann, and Farley (1990) show how such a bias can adversely affect meta-analyses. Coursol and Wagner's (1986) analysis suggests that researchers in psychology are less likely to submit (and journal editors less likely to publish) studies that do not support a favored hypothesis.

2.3. Competing Hypotheses

With competing hypotheses, the researcher examines evidence on two or more plausible hypotheses. This enhances objectivity because the role of the scientist changes from advocating a single hypothesis to evaluating which of a number of competing hypotheses is best. Of course, in practice, researchers might start with a favorable view of one of the hypotheses, or they may reach a premature conclusion. Sawyer and Peter (1983) claimed that competing hypotheses are useful in marketing, citing studies by Cialdini et al. (1978), Burger and Petty (1981), and Bettman et al. (1975) as successful illustrations.

Research on the scientific method shows that the method of competing hypotheses can aid objectivity. In laboratory studies, Klayman and Ha (1987) and Dunbar (1993) found that subjects who thought of explicit alternatives to their best guess were most successful at discovering generalizations. Farris and Revlin (1989), in a laboratory study, found that competing hypotheses aided the discovery of correct hypotheses. Gorman and Gorman (1984) found that when subjects actively searched for and obtained more disconfirming information, they were more likely to discover correct explanations.

By structuring alternative hypotheses, researchers may be better able to judge how the evidence relates to each; McKenzie (1998) found support for this when he reviewed relevant studies.

Dunbar (1995) studied the behavior of scientists in four biology laboratories. He found that scientists were quick to modify their hypotheses when they received inconsistent evidence. However, when the evidence called for major changes in a hypothesis, they tended to be resistant when working alone; peer review proved useful in getting them to consider such changes.