10.1 Basic Problems in Research Ethics

Occasionally, reputable newspapers report on “science scandals,” which often involve completely fabricated research results or plagiarism. These are, of course, very serious cases in which it is made quite clear that such behavior is absolutely unacceptable from an ethical (and often also legal) perspective. In the practice of empirical marketing research, ethical issues arise at various stages of the research process. These issues can be less serious and less clear-cut; sometimes it is merely negligence, but even this can have considerable consequences for the scientific process of knowledge generation. Especially in the last 10 years or so, not least because of some prominent cases, sensitivity to such ethical aspects has grown significantly. For this reason, the first section of this chapter briefly outlines key aspects of research ethics. Section 10.2 characterizes and discusses questions of research ethics that occur during the typical phases of the research process.

First, let’s look at some conflicts that can lead to ethical questions for researchers. On the one hand, hardly anyone doubts the necessity of ethical principles for scientific research; on the other hand, the pressure on scientists has grown so much in recent years (e.g. Honig et al. 2013) that the danger of violating these principles has increased:

  • For a scientific career, even if it is only a matter of remaining in a scientific profession at all, outstanding publication successes in the leading international journals of the respective discipline are required today.

  • In the past, mainly scientists from the US and some European countries published in these few leading journals; the competition for publication opportunities has increased, as more and more authors from around the world try to publish in these journals.

  • There is intense competition between journals for reputation and attention (measured primarily by the number of citations of published articles), which results in publishers and editors being most likely to accept particularly clear and substantial research results for publication.

  • In some countries, the pressure factor of “grants” has been added in recent years. In many cases, funders of grants (e.g. interest groups, companies, political institutions, etc.) attach great importance to the fact that the respective projects lead to clear (or seemingly clear, see below) results in a limited amount of time—if possible with the a priori expected results. Otherwise, the chances of successful applications for grants could decrease in the future.

Daniele Fanelli, in several studies, has examined the changes in publication behavior and the possible causes. In one of these studies (Fanelli 2012, p. 891), he found that the proportion of published non-significant results, which are “negative” with regard to the confirmation of a hypothesis, has decreased over time:

“Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data. This study analysed over 4600 papers published in all disciplines between 1990 and 2007, measuring the frequency of papers that, having declared to have ‘tested’ a hypothesis, reported a positive support for it. The overall frequency of positive supports has grown by over 22% between 1990 and 2007, with significant differences between disciplines and countries.”

One of the reasons for preferring “positive” results may be that they are cited more frequently. Another study by Fanelli (2013, p. 701) confirmed this assumption:

“Negative results are commonly assumed to attract fewer readers and citations, which would explain why journals in most disciplines tend to publish too many positive and statistically significant findings. This study verified this assumption by counting the citation frequencies of papers that, having declared to ‘test’ a hypothesis, reported ‘positive’ (full or partial) or ‘negative’ (null or negative) support. Controlling for various confounders, positive results were cited on average 32% more often.”

In many cases, empirical research results are not as smooth and clear as the researchers had hoped for:

  • Many measurement problems can affect the results.

  • In the behavioral sciences, the interaction of a large number of variables is particularly complex, and strong effects of single variables are less common.

  • Innovative projects carry a higher risk than the continuation of well-known paths.

In this situation, in which the aim is to arrive at clear and original research results, the research process can be very difficult and complex. In some cases (more or less consciously), it may happen that the research process is influenced in order to produce “desirable” results (“verification bias”). This is possible because many details of the research process (e.g. the selection of subjects, the measurements and data preparation) can be verified only to a limited extent by outsiders (e.g. reviewers and readers of the publication).

In one of the biggest social science scandals, which centered on the Dutch social psychologist Diederik Stapel, several Tilburg University (Netherlands) committees investigated the numerous data fabrication cases and the methods used and summarized the findings in a comprehensive report (Levelt Committee et al. 2012). This also includes (on p. 48) the following characterization of the so-called verification bias:

“One of the most fundamental rules of scientific research is that an investigation must be designed in such a way that facts that might refute the research hypotheses are given at least an equal chance of emerging as do facts that confirm the research hypotheses. Violations of this fundamental rule, such as continuing to repeat an experiment until it works as desired, or excluding unwelcome experimental subjects or results, inevitably tend to confirm the researcher’s research hypotheses, and essentially render the hypotheses immune to the facts.”

It is important to note that ethics by no means refers only to the extreme cases of fabrication of results or plagiarism (e.g. Martinson et al. 2005). Rather, in the research process, there are many situations—from a research question to a publication—that involve minor or major ethical issues, such as the elimination of certain data (“outliers”), incomplete or selective presentation of results or incorrect information regarding the contribution of several authors in a publication (see Sect. 10.2). Fortunately, the major science scandals uncovering completely fabricated studies or extensive plagiarism rarely occur. Nevertheless, there is evidence of a significantly wider spread of “minor” faults and manipulations in the research process. Table 10.1 shows the results of a survey of more than 2000 psychologists at US universities, who indicated whether they had already used certain questionable approaches in their research practice and to what extent they consider such practices justifiable.

Table 10.1 Dissemination of questionable research practices (Source: John et al. 2012, p. 525)

Why have research ethics become so important in science? One might first think of general ethical principles in society, which by all means also apply to scientists and science, namely the rejection of lies, fraud, damage to others and so on. The field of science, however, has some additional specific aspects:

  • First of all, science is free and not subject to any external control; that is, the correctness of processes and results should be evaluated internally, not least through the ethically acceptable behavior of scientists. External control would also be difficult in many areas because of the lack of insight into research processes and the lack of specific expertise.

  • The central task of science is the search for truth and the avoidance of errors (Resnik 2008). How can this be ensured if the research process is significantly under the influence of negligence and manipulation?

  • It should also be remembered that many fields of research (e.g. life sciences) have far-reaching consequences for many people and society at large. Careless work—or even fabricated results—would obviously be completely unacceptable in this regard.

  • For science, the exchange of results has central relevance and it would be unthinkable if current research could not be based on past results. In this respect, trust and reliability in science are indispensable.

  • Ultimately, it is also about the existence of the scientific system itself, which is largely funded by society (public budgets, foundations, etc.). Sloppy research, fake results and unethical practices would, of course, rightly lead to at least questioning this funding.

In its “Recommendations on Academic Integrity” (Wissenschaftsrat 2015, p. 7), the German Council of Science and Humanities identifies the importance of observing ethical principles for science:

“Honesty, a sense of responsibility and truthfulness are prerequisites in all areas of society and work. Why does science in particular have to make certain of this ethical foundation and continually ensure its stability? Misconduct, fraud and negligence, which can occur in other areas of life, are also possible in science; nonetheless, science has a particular ethical responsibility that compels it to carry out continuous self-monitoring. Science’s claim to autonomy—in terms of the freedom of persons and institutions in science—reinforces this ethical responsibility.”

Figure 10.1 summarizes key aspects that constitute the area of tension in which scientists are concerned with ethical behavior.

Fig. 10.1 Ethical requirements and incentives for unethical behavior

What are the essential ethical principles for scientific research? Resnik (2008, pp. 153ff., 1998, pp. 53ff.) develops some principles that are concretely applicable to the respective research practice. Here are the most important of these science-specific principles:

  • Honesty: “Scientists should practice honesty in research and publication, and in their interactions with peers, research sponsors, oversight agencies, and the public” (Resnik 2008, p. 153).

    Without a doubt, this rather general point concerns almost all ethical requirements for research and it is applicable to the entire research process and the publication of results.

  • Carefulness: “Scientists should avoid errors in research, especially in presenting results. They should minimize experimental, methodological, and human errors and avoid self-deception, bias, and conflicts of interest.” (Resnik 1998, p. 56).

    Carefulness is essential to serve the purpose of research, which is the search for meaningful and true statements. In addition, when using results for further research or for practical applications, it is assumed, of course, that they have been produced with the utmost care.

  • Objectivity: “Scientists should strive for objectivity in research and publication, and in their interactions with peers, research sponsors, oversight agencies, and the public.” (Resnik 2008, p. 153).

    Researchers are sometimes exposed to certain interests (e.g. expectations of success at their home university), which can lead to pressure to obtain certain (“desired”) results. However, the goal of objectivity does not only concern the research process in the narrow sense; it also applies to reviewers (e.g. in the review process for journals).

  • Openness: “Scientists should share data, results, ideas, methods, tools, techniques, and resources.” (Resnik 2008, p. 153).

    This is about the significant aspect that science can develop only if access to previous knowledge is comprehensively secured. However, openness is often limited in practice in military or commercial research (e.g. market research, pharmaceutical research, etc.). The rising competition in science is another problem.

  • Freedom: “Scientists should be free to conduct research without political or religious intimidation, coercion, or censorship.” (Resnik 2008, p. 154).

    Freedom has been a central “success factor” of scientific research for centuries. Religiously or ideologically influenced research could never have led to the tremendous progress of the past. In Western countries, the freedom of science is largely guaranteed today; however, there are certain limitations, because the allocation of grants and funds can reflect the interests of the respective funders.

  • Fair credit allocation: “Scientists should give credit, but only when credit is due.” (Resnik 2008, p. 154).

    Such fairness is a prerequisite for scientific cooperation, not least because the recognition of contributions is of central importance for the professional existence of scientists. Plagiarism, the unacknowledged appropriation of other scientists’ achievements, is an extreme example of a violation of this principle. Naming authors in a publication who did not have a significant share in the research in question also contradicts this principle. Power relations in the science system can play a role as well: “A few decades ago in Germany, it was not uncommon for a professor to publish an article that had been written by an assistant.” (Albers 2014, p. 1153).

  • Respect for human subjects: “Scientists should respect the rights of human subjects and protect them from harm and exploitation.” (Resnik 2008, p. 157).

    Adequate behavior toward study subjects has also become an issue in the social sciences and has led to a number of broadly accepted principles. “Informed consent” has become the standard; it allows the subjects to make a voluntary decision on participation in a study on the basis of appropriate information. In the social sciences, it has also become common for subjects to be protected against damage to their physical or mental health and to be guaranteed the confidentiality of the data collected (see also Sect. 10.2.3).

In addition, Resnik (2008, p. 154, p. 156) incorporates the following ethical principles, which are less specific to research practice (but not unimportant), in his compilation:

  • Respect for colleagues

  • Respect for property

  • Respect for laws

  • Stewardship of research resources

  • Social responsibility

Table 10.2 lists some collections of principles on research ethics from several research organizations that are relevant to marketing researchers:

Table 10.2 Statements and principles of research ethics

Ernst-Ludwig Winnacker (2015, p. 23), former president of the German Research Foundation (DFG), outlines the consequences of fraud in the science system for the fraudsters:

“Of course, there will always be people who are players who take the risk. Then their career is over, and they start a business, living off the money of their parents or their spouse. In any case, a return to the scientific system, where trust is important, will hardly be possible. Anyone who cheats must know that.”

The following section illustrates and discusses ethical problems in the research process in more concrete terms. It follows (roughly) the typical steps of the research process.

10.2 Ethical Issues in the Research Process

10.2.1 Research Topics and Research Questions

The beginning of each empirical study is the determination of the study’s topic and the appropriate research questions. Doubts may arise regarding the ethical justifiability of certain research objectives. The following example may illustrate this. Consider a market research study designed to develop influencing techniques for children between the ages of 5 and 8 to increase their consumption of (caries-promoting) sweets (e.g. SAGE Editors 2013). Can responsible scientists contribute to “seducing” relatively vulnerable children into harmful behavior? This question is usually answered with “no.” But what about more ambiguous cases? Where are the limits?

In 2014–2015, the American Psychological Association (www.apa.org) experienced a fierce controversy over research ethics, because this organization had been involved in the development of “enhanced interrogation techniques” (e.g. waterboarding and mock executions) by psychologists commissioned by US intelligence agencies and the US military. Hardly anyone will want to ethically justify psychological research into the development of such methods. It is noteworthy that the motives for working with the Department of Defense were quite opportunistic, because the military sphere is a major and important employer of psychologists (see Hoffman et al. 2015).

Now, in marketing research, ethical issues are generally not as acute as in some other disciplines. Just think about the very serious discussions on human genetic research, gene modification of agricultural seeds and consequences of nuclear research. Nevertheless, there may also be topics in marketing research in which one should at least ask questions about the ethical justification. Here are some (hypothetical) examples:

  • Market entry strategies in international marketing that exclude poor countries from technical or medical progress

  • Development of strategies for misleading consumers’ price perception

  • Impact-maximizing design of misleading advertising

  • Questioning the autonomy of consumers through neuromarketing

Often it is not only the avoidance of unethical behavior, but social responsibility that is also explicitly required. Resnik (2008, p. 156) formulates this principle in the following way: “Scientists engage in activities that enhance or promote social goods, such as human health, public safety, education, agriculture, transportation, and scientists therefore should strive to avoid harm to individuals and society.” This principle can be effective, for example, in scientific opinions on public affairs or warnings about risks from economic developments (e.g. influence of advertising on nutritional behavior). Resnik (2008) cites three arguments that justify the demand for the socially responsible behavior of scientists:

  1. Moral obligations that apply in general, and thus also to scientists

  2. Scientists receive so much support from the public that they should also give something back to society

  3. Socially responsible science makes it easier to receive further support from society

Research and teaching at universities are largely funded by public money. In this respect, it is obvious that not only the perspective of companies can play a role, but also the interests of employees and consumers. Meanwhile, some science organizations have formulated principles of social responsibility for themselves. As an example, see below for the main goals of the University of Bremen. Another example is the Association for Consumer Research, which has a special section titled “Transformative Consumer Research” (TCR): “TCR is a movement within our association that seeks to encourage, support, and publicize research that benefits consumer welfare and quality of life for all beings affected by consumption across the world” (www.acrwebsite.org, accessed July 23, 2018).

Bremen University (Germany) offers an example of positive determination of research goals and the exclusion of certain fields of research (e.g. military research) with its “guiding objectives,” from which the following determinations are taken:

“Instructors and students of the University of Bremen are guided by the basic values of democracy, human rights and social justice, which are also the subject of research and teaching in many areas. They will continue to look at the consequences of science in economics, politics and culture and the opportunities for socially and environmentally responsible use of research results (for example, forward-looking technology and economic policy, no military research). The University of Bremen is committed to peace and pursues only civilian purposes.” (Source: www.uni-bremen.de/universitaet/profil/leitbild.html, accessed July 23, 2018).

Connections to companies and their associations are obvious and meaningful for marketing research. In some cases, however, there may be attempts to use the special authority of science (see Sect. 1.1) for the interests of individual companies or lobby groups via the allocation of third-party funds, advisory and expert services, company-paid doctoral students and so on, thereby influencing the results of scientific research accordingly. The problem of the one-sidedness of paid reports and evaluations is common in scientific, legal and political life. The effort for objectivity of scientific work can be impaired if the preparation of a report for a certain client is associated with considerable payments. If conflicts of interest are possible due to the influence of funders and others, then at least their disclosure in a publication is necessary, which is now a requirement for most scientific journals.

New York University’s “Sponsored Research Guidelines” regulate public access to research results that have been funded:

“The University does not conduct or permit its faculty to conduct secret or classified research. This policy arises from concern about the impact of such restrictions on two of the University’s essential purposes: to impart knowledge and to enlarge humanity’s store of knowledge. Both are clearly inhibited when open publication, free discussion, or access to research are limited. For the same reasons, the University requires that investigators be able to publish the results of their research without prior approval of a sponsor. Agreements may, however, permit sponsors a brief period to review proposed publications and presentations to identify (1) proprietary information that may require patent or copyright protection, or (2) information confidential to the sponsor that must be removed. In general sponsors are granted review periods of 30 to 45 days prior to submission for publication, but review and delay periods should total no more than 90 days”. (Source: https://www.nyu.edu/about/policies-guidelines-compliance/policies-and-guidelines/sponsored-research-guidelines.html, accessed July 23, 2018).

The most important criterion for decision-making in such cases is the principle formulated by Schurz (2014, p. 42) that in the case of scientific knowledge, the context of justification should be free from external influences (see Sect. 1.1). Nevertheless, in the context of discovery and exploitation, in many cases the influences of various interest groups (including the private sector) cannot be completely avoided.

10.2.2 Study Design

The focal point of this phase is the definition of a study design and the development of measurement instruments. There are usually many options that can significantly influence the results. Of course, this can create a temptation to achieve the most substantial and clear results for favorable publication opportunities (see Sect. 10.1). As an example of the strong influence of the research methodology on results, we can refer to the frequently used survey method in data collection. Numerous studies have shown that even seemingly minor changes in question formulation or questionnaire design can lead to significant differences in results (e.g. Schwarz 1999). The same applies in a similar way to the field of sampling. In the present section, we select and outline some cases that are of widespread importance in research practice.

Ensuring the Validity of Measurements

In the context of this book, the problem of validity of measurements is discussed extensively (see Sect. 6.3). This illustrates the central importance of this aspect. What significance can a study have when it uses data that only insufficiently reflect the theoretically interesting concepts (see Sect. 2.1)? With regard to the lack of validity of erroneous data in the testing of theories, Sect. 3.2 also refers to the discussion of “measurement error underdetermination.”

Against this background, a certain amount of evidence for the validity of a study is required for its publication in a reputable journal. If validation is a (gradual) exclusion of alternative explanations for the results found (Jacoby 2013, p. 218), then this already suggests that this is a process in which successive tests increasingly provide more certainty of the occurrence of validity. Not all of these tests are reflected in corresponding measures; some are more logical (e.g. in terms of content validity). In addition, there is no established “canon” of validity tests that must be “processed” in each study.

An example of the misuse of validity tests relates to the measurement of “Cronbach’s α”, which stands for the internal consistency of a multi-item scale, and thus allows statements about the reliability (as a necessary condition of validity) of such a scale (see Sect. 6.3). There are some cases in which the process of scale development is such that the items used are extremely similar (or almost identical). Although this contradicts the established principles of scale development, according to which the items should reflect different facets of the measured concept (e.g. Churchill 1979), it favors high α-values and thus increases the chances of publication of a study. Such an approach would be ethically problematic, because the ultimate goal of science, the search for true and meaningful statements (Schurz 2014, p. 19, see also Sect. 1.2) is deliberately disregarded, just to improve the publication chances.
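To make this mechanism concrete, the following minimal sketch (in Python, with invented data and a hypothetical sample size) computes Cronbach’s α once for a scale whose items reflect different facets of a construct and once for a scale of nearly identical items; it is an illustration of the inflation effect, not a depiction of any particular study.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) data matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(42)
n = 200
trait = rng.normal(size=n)  # hypothetical latent construct

# Scale A: four items reflecting different facets (more unique variance per item)
facets = np.column_stack([trait + rng.normal(scale=1.0, size=n) for _ in range(4)])

# Scale B: four nearly identical items (barely rephrased, hardly any unique variance)
near_duplicates = np.column_stack([trait + rng.normal(scale=0.2, size=n) for _ in range(4)])

print(f"alpha, distinct facets:      {cronbach_alpha(facets):.2f}")
print(f"alpha, near-identical items: {cronbach_alpha(near_duplicates):.2f}")
# The second alpha is far higher, although the scale adds little information
# beyond a single item -- a high alpha alone does not demonstrate validity.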

With regard to the validation of measurement instruments, the following principles should at least apply to a responsible research practice:

  • Due to the central importance of the validity of study methods, a comprehensive and critical review (and, if appropriate, adaptation) of these methods should be carried out before they are applied in a study (see also Chap. 6).

  • The results of a validity check of the methods that are actually used should be fully documented in a publication and not limited to a selection of favorable results.

Abusive Use of Pretests

Pretests, above all for checking and improving the measurement methods used (e.g. questionnaires), are today regarded as a standard procedure in empirical research. However, there are also possibilities of abuse insofar as pretests and corresponding changes in the data collection can be made until the desired results come out (again a variant of the “verification bias”). Peter (1991, p. 544) comes to a rather skeptical assessment: “It is common practice in some areas of social science to not report such things as how many ‘pretests’ were done before the desired result was obtained, how many subjects were dropped (…) in order to make the results come out favorably, or how many different manipulations were tried before finding one that worked.” Closely related to this is the incorrect practice of including the results of pretests in the publication, depending on whether these results “fit” or not (Laurent 2013).

Lack of Openness in Qualitative Studies

From the perspective of the present book, qualitative studies are most relevant to theory building (see Sect. 4.3.3), and the theories developed become the subject of theory tests (see Chap. 5). In a few areas of marketing research, results of qualitative studies are also regarded and published as independent research contributions. Qualitative methods are characterized by great openness and freedom in the research process, so that they can support the creative process of theory building (e.g. Creswell 2009; Yin 2011). But if, at the beginning of the qualitative research process, the researcher already has more or less fixed ideas about the (desired) results, then one must expect that the freedom of the research process makes it relatively easy to arrive at these results. If researchers are no longer open-minded about a qualitative research project due to prior theoretical commitments, worldviews or orientation toward the interests of third-party funders, then systematically distorted results, whose causes are hardly recognizable to outsiders, are likely.

10.2.3 Data Collection

This part of the study procedure refers mainly to the process of data collection (e.g. conducting interviews) until the existence of a (still unedited) data set. Central to this is the fair and careful treatment of respondents and test persons. This aspect is of outstanding importance in medical or pharmacological research, but it is by no means a marginal problem for marketing research. In addition, the correct implementation of sampling is important at this stage.

Protection of Respondents or Study Participants

For the participants in empirical studies, who, for instance, complete questionnaires or participate in laboratory experiments, different types of burdens can arise, especially the time required, stress and possible disadvantages from disclosing personal information. In the methodology literature (e.g. Shadish et al. 2002, pp. 279ff.; Groves et al. 2009, pp. 375ff.; Rosnow and Rosenthal 2013), there is agreement that the burdens and risks for the study participants must be minimized.

A milestone in the development and implementation of ethical standards for conducting empirical research on humans was the “Belmont Report” (www.hhs.gov), named after the conference venue (Belmont Conference Center, near Baltimore), where in 1978 the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research set out appropriate policies and guidelines. The impetus came from experiences from the Nazi era and from the post-war period with unscrupulous experiments on humans, which led to severe damage to the test subjects. Empirical studies in marketing research are usually not associated with such risks; nevertheless, the principles developed there also apply to studies in which, at most, relatively small disadvantages for the participants may arise. The Belmont Report refers to them as “Basic Ethical Principles”:

  1. Respect for Persons: “Respect for persons incorporates at least two ethical convictions: first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection. The principle of respect for persons thus divides into two separate moral requirements: the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy.”

  2. Beneficence: “Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being. Such treatment falls under the principle of beneficence. The term ‘beneficence’ is often understood to cover acts of kindness or charity that go beyond strict obligation. This document presents beneficence in a stronger sense, as an obligation. Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms.”

  3. Justice: “Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of ‘fairness in distribution’ or ‘what is deserved.’ An injustice occurs when some benefit to which a person is entitled is denied without good reason or when some burden is imposed unduly.”

It is also important to note that not all three principles have the same significance for marketing research. Some aspects are more relevant in other contexts, such as medical research (e.g. with regard to new therapies or medicines).

In the Belmont Report, the three “Ethical Principles” are linked to three more concrete requirements:

  1. Informed consent: The participants agree to take part in the study on the basis of appropriate information about the research objectives, possible burdens and data protection. The requisite “respect for persons” thus means leaving the decision about participation to the participants themselves.

  2. Assessment of risks and benefits: This aspect corresponds to the principle of “beneficence”, because the benefits of a study (which are to be maximized) have to be weighed against the associated (and to be minimized) burdens. This relation and the possibilities for improving it should be considered carefully before the study is carried out.

  3. Selection of subjects: “The principle of justice gives rise to moral requirements that there be fair procedures and outcomes in the selection of research subjects.”

To establish and ensure ethical standards in human research, institutional review boards (IRBs) have existed at US universities and other scientific institutions since 1974; they must approve the conduct of studies. In many other countries, there are now comparable institutions.

Here is an example of an Institutional Review Board (IRB) at Northwestern University (irb.northwestern.edu):

“About the IRB

The protection of research subjects at Northwestern University is a shared responsibility, with the institution, researchers, IRB committees, and the IRB Office working together toward this common goal.

The IRB Office is primarily responsible for developing and directing the University’s Human Subject Protection Program (HSPP), which also involves other offices at Northwestern University. The HSPP mission is to be a model program of excellence in protecting the rights and welfare of human subjects involved in research.”

Manipulation of Sample Size and Response Rate

Because of the voluntary nature of the participation of respondents or test subjects, a 100% response rate is virtually unattainable in social science studies. Particularly in the academic field (e.g. in studies by doctoral students), resources are typically very limited. This aggravates the problem, because often there are no incentives for participation, and repeated contact attempts or reminders are frequently too expensive. For example, a study by Collier and Bienstock (2007) showed that even in studies published in leading international marketing journals, the response rates were usually below 50%. It is well known that low response rates can lead to biased results if there are systematic differences between participants and non-participants. Here, in the context of research ethics, it is important to critically examine practices that manipulate sample size or response rates to achieve the desired results.

  • As we all know, a weak correlation between two variables or a small difference between different groups may be statistically significant if the sample is sufficiently large (see Chap. 7). One tactic for achieving significant results is to increase the sample size accordingly or to combine the data set with other data sets (Levelt Committee et al. 2012; Laurent 2013).

  • Even by consciously refraining from higher response rates, one can manipulate results. Laurent (2013, p. 327) formulates a “rule” for such manipulation: “Checking, after the collection of each observation, whether the result is significant (at 5%) and then stopping the data collection immediately, for fear that the result might no longer be significant after additional observations”. Of course, he means this as a warning.

Against this background, it is required that the sample size be determined before data collection. “Authors must decide the rule for terminating data collection before data collection begins and report this rule in the article.” (Simmons et al. 2011, p. 1362).
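The statistical consequence of violating this rule can be illustrated with a small simulation sketch (Python; the sample sizes, the number of simulated studies and the helper function are hypothetical choices for illustration): even when there is no true effect at all, testing after every additional observation and stopping at the first p < 0.05 yields far more than the nominal 5% of “significant” studies.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def significant_with_optional_stopping(n_min=10, n_max=200, alpha=0.05):
    """Simulate one 'study' with no true effect (population mean = 0),
    re-testing after every added observation and stopping at the first p < alpha."""
    data = list(rng.normal(size=n_min))
    for _ in range(n_min, n_max + 1):
        _, p = stats.ttest_1samp(data, 0.0)
        if p < alpha:
            return True   # stop early and report a "significant" result
        data.append(rng.normal())
    return False

n_studies = 1000
false_positives = sum(significant_with_optional_stopping() for _ in range(n_studies))
print(f"'Significant' studies despite a true null effect: "
      f"{false_positives / n_studies:.1%} (the nominal rate would be 5%)")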

10.2.4 Data Preparation and Data Analysis

Typically, after the data collection, a phase of data preparation is required, for example to identify erroneous records or outliers. Such changes in the data set can be problematic and allow manipulation toward desired results. Furthermore, statistical data analysis is not as “objective” and independent as it sometimes seems. For example, the choice of significance level (p = 0.01, p = 0.05 or p = 0.1) determines the type and number of “significant” results. An analysis of p-values in leading management journals showed a peculiar accumulation of values just below the usual thresholds of 0.05 and 0.1, respectively, indicating that data have been “processed” just enough to reach those levels of significance (see Albers 2014). An empirical analysis in sociology showed that significantly more p-values were just under than just above the 5% threshold (Gerber and Malhotra 2008), although one would actually expect an approximately even distribution. In the study by Banks et al. (2016), 11% of respondents said they had already manipulated p-values. This is where a relationship with “publication bias,” mentioned in Sect. 9.3, becomes apparent: scientific knowledge is systematically distorted if an attempt is made to obtain significant results whenever possible in order to improve the publication chances of a study. In this context, the study by Fanelli (2012), cited in Sect. 10.1, comes to mind.

Ray Fung (2010), from Harvard University, summarizes the results of his research on reported levels of significance in leading management journals:

“Researchers may be dredging through their data to push p-values below levels of significance deemed necessary to publish. We examine a random sample of papers from top management articles and compare their hypotheses and p-values to a simulated distribution of results that should occur if no data dredging bias exists. Our analysis reveals that data dredging may be occurring. The distribution of p-values shows suspicious and statistically significant upswellings preceding the common levels of significance of 0.05 and 0.1. Not a single paper found more than half of its hypothesized results to be nonsignificant, which is statistically infeasible.”
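The basic logic behind such analyses of reported p-values can be sketched as follows (a simplified, caliper-style comparison in the spirit of Gerber and Malhotra 2008; the p-values, the window width and the function name are purely hypothetical): one counts how many reported p-values fall just below versus just above a threshold, since a strong surplus just below it is suspicious.

import numpy as np

def caliper_counts(p_values, threshold=0.05, width=0.005):
    """Count reported p-values just below vs. just above a threshold."""
    p = np.asarray(p_values)
    below = int(np.sum((p >= threshold - width) & (p < threshold)))
    above = int(np.sum((p >= threshold) & (p < threshold + width)))
    return below, above

# Hypothetical p-values, as if extracted from a set of published articles
reported_p = [0.049, 0.047, 0.046, 0.051, 0.048, 0.044, 0.053, 0.049, 0.045, 0.062]

below, above = caliper_counts(reported_p)
print(f"Just below 0.05: {below}, just above 0.05: {above}")
# Absent manipulation, two narrow windows of equal width should contain roughly
# similar counts; a large surplus just below the threshold is a warning sign.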

Data Manipulation

Without a doubt, the invention or fabrication of data is completely unacceptable and usually (upon discovery) leads to harsh sanctions, often including the loss of the professional position in the science system. Albers (2014) describes corresponding cases. Again, there is a “gray area” of behaviors in which data are not faked, but in which manipulations are made, which may be partly justified and useful, but sometimes also problematic.

On the one hand, the unadulterated reproduction of observations collected in a study is the basis for meaningful empirical results. On the other hand, it may also be useful to eliminate or edit individual records; otherwise, the results would be corrupted. Thus, correlation coefficients or least squares estimates, for example, are influenced in sometimes misleading ways by individual cases with values well beyond the usual range, the so-called “outliers” (e.g., Fox 1984, pp. 166–167). The elimination of data also leaves scope for excluding observations from the analysis that “disturb” the desired outcomes (for a full discussion of the problem, see Laurent 2013). After all, in a survey of 344 management researchers by Banks et al. (2016), 29% of respondents said that they had already eliminated cases from data sets to achieve “better” significance values.
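A minimal numerical sketch (Python, with invented data) shows how strongly a single outlier can distort a correlation coefficient, and thus why eliminations must be justified and documented rather than used silently to “improve” results.

import numpy as np

rng = np.random.default_rng(1)
n = 30

# Hypothetical data with essentially no relationship between x and y
x = rng.normal(size=n)
y = rng.normal(size=n)
print(f"correlation without outlier:  {np.corrcoef(x, y)[0, 1]:+.2f}")

# Add one extreme observation far outside the usual range of both variables
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
print(f"correlation with one outlier: {np.corrcoef(x_out, y_out)[0, 1]:+.2f}")
# A single case can turn a near-zero correlation into a seemingly strong one
# (and vice versa), which is why any elimination must be justified and reported.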

Gilles Laurent (2013, p. 326) formulates a general principle for the elimination of outliers:

“In practice, whenever researchers eliminate observations, they should include an appendix that describes precisely their argument for the elimination, as well as the full distribution of observations before and after elimination (…).”

“HARKing”

The term “HARKing” (“Hypothesizing After the Results are Known”, Kerr 1998) is comparable to the term “fishing through a correlation matrix” (Peter 1991, p. 544) and describes a behavior in which the researcher calculates a large number of correlation coefficients, significance tests and so on after the data are available. Then, with such seemingly “significant” results, it is possible to come up with “fitting” hypotheses to “enrich” a publication. This simulates a theoretical basis for these results that does not actually exist. In the previously mentioned study by Banks et al. (2016), about 50% of surveyed researchers acknowledged such behavior. The comments in Sect. 7.4 refer to the very limited validity of results obtained in this way.

Nevertheless, it should also be remembered that unforeseen outcomes should not go unacknowledged. There is nothing against an interpretation or discussion of these findings, but the appearance of a theoretically developed and then statistically “successfully” tested hypothesis would be misleading in this case.
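The statistical problem behind such “fishing” can be illustrated with a small simulation sketch (Python; the numbers of respondents and variables are hypothetical): among many pairwise correlations of purely random, unrelated variables, roughly 5% will appear “significant” at p < 0.05 by chance alone and could then be dressed up with after-the-fact hypotheses.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_respondents, n_variables = 100, 20

# Purely random data: by construction, no variable is related to any other
data = rng.normal(size=(n_respondents, n_variables))

significant, total = 0, 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        total += 1
        if p < 0.05:
            significant += 1

print(f"{significant} of {total} pairwise correlations are 'significant' at p < 0.05 "
      f"({significant / total:.1%}), despite the absence of any true relationship.")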

Adjustment of Significance Levels and Applied Statistical Methods

Another method to obtain empirical results that (seemingly) confirm the hypotheses that have been theoretically developed is a change in significance levels. Thus, a correlation coefficient or a statistical test at a significance level of p = 0.05 may not lead to a significant result but could at p = 0.1. One also finds the practice of performing tests with multiple significance levels (e.g. p = 0.01, 0.05, and 0.1, respectively). Of course, this increases the proportion of results that are “somehow” statistically significant.

A similar approach is the use of different statistical tests for a particular relationship between variables (Laurent 2013). Since different tests have different properties, the results are usually not identical; and if ethical principles are disregarded, it is often possible—according to the verification bias—to report at least a “suitable” result.

Storage of Data

With regard to the verifiability of test results, today it is increasingly required that the data and documents be kept for a longer period and made accessible as needed. This aspect is important not only in terms of the ability to detect unfair behavior of researchers, but also in terms of performing replication studies and meta-analyses. Chapter 9 outlines their essential importance for the process of generating scientific knowledge.

The storage of data to secure access to it for a certain period of time is not only the task of the authors, but some scientific journals now also take responsibility for this (e.g. in marketing, Marketing Science and the International Journal of Research in Marketing) and keep this data available to other researchers for replication.

Here is the guideline from the journal Marketing Science (pubsonline.informs.org/journal/mksc) regarding the data used in an article that is submitted for publication:

“Marketing Science announces its replication policy. Broadly speaking, the policy will require that upon acceptance of a paper by Marketing Science, the author(s) of the paper will submit the data and estimation codes used in the paper. The journal will make these files available on its website to scholars interested in replicating accepted paper’s results.”

10.2.5 Interpretation and Presentation of Results

Between data analysis and publication, the interpretation and presentation of the study results take place, although these steps certainly overlap. In research ethics, the focus is on one problem area: the omission of results that do not fit into the overall picture, or the selection of “fitting” results. As a consequence, the reported results of the study are incomplete and, in many cases, biased.

Laurent (2013, p. 326) speaks in this context of “hidden experiments” or “best of” tactics, meaning the omission of results that do not confirm the central statements of a publication. There is evidence that in some publications, only about half of the original partial studies are reported. In the study by Banks et al. (2016), about 50% of surveyed management researchers stated that they report (or not) on hypothesis testing, depending on significance levels. This may occasionally correspond to the preferences of some reviewers, who are, so to speak, the “gatekeepers” on the way to publication and can exercise corresponding power. Sometimes clear results and short articles are desired, and partial results that do not fit the picture should be left out. However, this is associated with limitations in the search for scientific truth, which Laurent (2013, pp. 326–327) characterized as follows: “If an effect is so weak that it is significant in only four experiments out of eight, this is informative and should be reported. If the effect appears only with certain manipulations, measures, populations, experimental settings, and so forth, this too is informative and should be reported.”

Against this background, it is important in a report or publication to fully document the key steps of a study—from the development of measurement instruments and sampling to statistical analysis. This makes it possible for readers and reviewers to comprehend how the results came about and to reflect on them critically. There are certainly limits, set by the scarcity of publication space and the patience of readers. However, additional information can be offered on the Internet or in appendices to larger publications.

Ralph Rosnow and Robert Rosenthal (2013, p. 45) give some advice concerning the transparency, informativeness, precision, accuracy, and groundedness of reporting methods and results:

“By transparency, we mean here that the quantitative results are presented in an open, frank, and candid way, that any technical language used is clear and appropriate, and that visual displays do not obfuscate the data but instead are as crystal clear as possible.

By informativeness, we mean that there is enough information reported to enable readers to make up their own minds on the basis of the primary results and enough to enable others to re-analyze the summary results for themselves.

The term precision is used not in a statistical sense (the likely spread of estimates of a parameter) but rather in a more general sense to mean that quantitative results should be reported to the degree of exactitude required by the given situation.

Accuracy means that a conscientious effort is made to identify and correct mistakes in measurements, calculations, and the reporting of numbers.

Groundedness implies that the method of choice is appropriate to the question of interest, as opposed to using whatever is fashionable or having a computer program repackage the data in a one-size-fits-all conceptual framework.”

10.2.6 Publications

Section 10.1 referred to the great, and probably growing, importance of publications in the science system. These are crucial for the opportunity to enter a scientific career and for its further development; they significantly influence the chances of success in applying for grants and third-party funding and, in some cases, are the basis for academic honors. The central standards in this respect are the number of publications of a scientist and the quality (degree of innovation, substance, relevance, etc.) of the publications, which is often (simply) assessed on the basis of the status (ranking, reputation, “impact factor” as an indicator of citation frequency) of the journals in which the articles are published. Against this background, it is easy to see that scientists make great efforts and compete intensely to achieve publication success. “In the heat of the moment”, so to speak, this can lead to behaviors and practices that are ethically problematic. In the following section, some aspects are addressed from the perspective of the target group of this book (doctoral students, advanced students), all of them potential and future authors. Albers (2014) and Honig et al. (2013) provide further information on the problems of the scientific system and the publication process.

Opportunistic Citation Behavior

Every scientific publication in marketing research is based on an appropriate evaluation of the literature relevant to the respective research subject; in empirical work, this concerns in particular the development of the theoretical basis and the presentation and justification of the methodological approach. The bibliography and the corresponding reference list serve to situate the current project and to integrate its results into the development of the field of research, to adequately acknowledge the achievements of other scholars (see Sect. 10.1), to justify one’s own considerations and chosen course of action, and to facilitate access to relevant literature for the readers of the publication. Against this background, the reference list should, of course, focus on sources that are material and somewhat representative of the content of the publication. It appears that there are occasional deviations from this behavior with the aim of increasing publication chances, by citing additional sources that are appreciated by editors and reviewers of the journal to which the article is submitted. Here are two related practices:

  • Adding citations of publications by members of the “editorial board” of the journal, whose expertise makes it likely that they will be considered as reviewers for the submitted article.

  • Adding citations of articles from the journal to which a paper is submitted for publication, even though these are not material to the argumentation in the paper. In this way, the author expresses his or her appreciation of the journal and may gain the goodwill of editors and reviewers. Regardless of this, it is not uncommon that, when an article is submitted to a thematically highly specialized journal (e.g. the Journal of Product Innovation Management), this journal is cited relatively frequently simply because of the thematic focus.

In both outlined cases, the authors would mislead the readers to opportunistically increase the publication chances of their article.

Another opportunistic practice is the “citation ring”, in which scientists within a group (e.g. representatives of a particular research field) cite each other with disproportionate reciprocity, thus increasing one another’s fame in the academic world and driving up citation indices. Numerous self-citations can also play a role here. In such cases, other important sources may not be adequately considered, and this information is withheld from readers.

Plagiarism

In recent years, plagiarism in PhD dissertations by prominent German politicians has attracted a good deal of attention from the general public. Even though this attention was due to the prominence of the wrongdoers, the fact remains that more or less covertly copying without adequate reference to the sources is, according to a very broadly shared view, completely unacceptable behavior (not only in science). Essentially, the problem is that in such cases the use of ideas, results and statements of others in a publication is not adequately identified.

In its “Code of Ethics”, the Academy of Management (2006) gives some advice with regard to avoiding plagiarism:

“1. AOM members explicitly identify, credit, and reference the author of any data or material taken verbatim from written work, whether that work is published, unpublished, or electronically available.

2. AOM members explicitly cite others’ work and ideas, including their own, even if the work or ideas are not quoted verbatim or paraphrased. This standard applies whether the previous work is published, unpublished, or electronically available.”

The sharp rejection of plagiarism in the scientific community is mainly due to the grave violation of the principles of trust, reliability, honesty and fairness (see Sect. 10.1).

“Slicing”

The aforementioned publication pressure on scientists can also lead to attempts to generate as many publications as possible from a larger study. The literature somewhat ironically speaks of “the highest number of publishable units” (Albers 2014, p. 1555) when the results of a project are divided into a larger number of narrowly focused publications. However, the scarce space in the leading journals can also be a reason for keeping publications as short as possible, so that extensive studies can no longer be presented comprehensively in a single article. In such cases, however, all results must be original, without repetition of already published results.

What are the ethical problems in this context? First, the question arises as to whether editors as well as reviewers and then the readers of a journal know that several publications have been published or have appeared on various aspects of the project. If not, one gets a distorted impression of the author’s contributions in terms of scope and substance. For this reason, it is necessary to state each time an article is submitted whether the results of the respective data have already been published elsewhere. Furthermore, an extensive use of “salami tactics” leads to a waste of scarce space in scientific journals and thus limits the publication possibilities of other studies.

Appropriate Mentioning of the Authors

In view of the already explained relevance of publications for a scientific career, the correct information on authorship is also highly relevant. Appropriate mentioning indicates who is responsible for the published study and has provided the corresponding contribution. The usual rules for naming authors are generally recognized and clear:

  • The scientists who have made a significant contribution (and only those) should be mentioned as authors. Persons without a contribution should not be named as author. If the contribution of individuals is limited to minor administrative or technical assistance, this can be communicated in a footnote. Sometimes publications refer to co-authors who have not made a direct and significant contribution or worked directly on the project. This can be justified if these individuals have made substantial contributions to enable the project. For example, one could think of scientists who have provided intellectual and administrative contributions to a successful third-party funding application (and thus have designed subprojects) but have not fully cooperated in each subproject. On the other hand, the position of a supervisor at a scientific institution or the supervisor status during a PhD phase does not justify the claim of co-authorship in a publication.

  • Normally, the order of authors reflects the relative size of the authors’ contributions to the publication. If all authors have contributed to approximately the same extent, an alphabetical (or random) order of names and a corresponding note are common. The hierarchical position does not matter for the order of authors.

  • There is no justification for so-called “ghostwriting”. This refers to scientists who exploit the dependency of others in order to publish those persons’ work under their own names. These are cases of plagiarism (see above), because the “author” indicated on the publication uses a contribution by another person and pretends that it is his or her own achievement.