Introduction

Typically, states and organizations have used their sanctioning powers to deter people from knowingly engaging in wrongful conduct, such as breaching contracts or otherwise shirking their duties. By and large, the enforcement tools used in both law and management assume that, in most cases, people consciously choose to engage in unethical behavior, and that certain techniques, such as communicating through state laws, ethical codes that focus on corporate values (Somers 2001), and incentives (Gneezy et al. 2011; Camerer and Hogarth 1999; Weaver 1995), can be used to change such decisions. A large portion of the intervention mechanisms used by government agencies, courts, and organizations rely on these assumptions: they create incentive mechanisms, increase enforcement efforts, and add new regulations to enhance transparency (Stapenhurst and Kpundeh 1999). In contrast, many theories within the behavioral approach to human judgment and decision making have challenged the basic assumptions of the neoclassical economic doctrine of rational choice (Feldman 2011). Among these, the literature on the rising role of non-deliberative choice in people’s behavior stands as a central and dominant alternative. A common theme in these paradigms is the view that many of the undesirable behaviors traditionally targeted by rational-choice prevention mechanisms involve “good people” who do not necessarily engage in a fully deliberative process before performing a “bad” action (Banaji and Greenwald 2013). The ability of current explicit mechanisms to curb unethical behavior may therefore be limited and needs to be reexamined. For example, organizational sanctions, which are prevalent in many codes of conduct (Adams et al. 2001; Schwartz 2002), might not stop people from behaving unethically if they do not perceive their action as one that raises an ethical problem.

The Rise of “Good People”

The focus in recent literature on “good people” reflects a growing recognition that many ethical decisions made by normative citizens are the result of implicit rather than explicit choices. Simply reviewing the titles of recent papers shows how central the theme has become (e.g., Banaji and Greenwald 2013; Bereby-Meyer and Shalvi 2015; Bersoff 1999; Hollis 2008; Mazar et al. 2008; Pillutla 2011; Shalvi et al. 2015). This theme of good people suggests a growing recognition that many ethically relevant behaviors that were previously assumed to be choice-based, conscious, and deliberate decisions are in many cases the product of automatic processes that prevent people from recognizing the wrongfulness of their behavior, an idea dubbed by several leading scholars an ethical blind spot (e.g., Chugh et al. 2005; Bazerman and Tenbrunsel 2011).

A deeper understanding of good people can be achieved through the concept of dual reasoning, the assumption of two distinct systems of reasoning. This concept gained popular recognition in Kahneman’s book Thinking, Fast and Slow (2011). Generally speaking, the concept, which stands at the core of much of the research in behavioral law and economics, differentiates between an automatic, intuitive, and mostly unconscious process (labeled System 1) and a controlled and deliberative process (labeled System 2) (see also Stanovich and West 2000; Evans 2003; for a review, see Evans 2008). Although this paradigm has been criticized by many scholars (e.g., Kruglanski and Gigerenzer 2011), the recognition of the role of automaticity in decision making has played an important part in the emergence of behavioral economics (e.g., Halali et al. 2013, 2014; Sanfey et al. 2003) and behavioral law and economics (e.g., Jolls et al. 1998), and it is the foundation for a new understanding of, and approach to, self-interest (see Gigerenzer and Goldstein 1996).

Good People and Conflict of Interest

Given the growing focus on good people in psychology and management, and the rise of dual-reasoning research, the limited attention to its implications for the enforcement of ethics is puzzling (see Feldman 2014 for a review). Motivated reasoning is the main theoretical paradigm supporting the view that individuals’ self-interest changes their understanding of reality (Kunda 1990). Behavioral ethics and dual-reasoning research add to motivated reasoning the perspective that moral judgments and decisions are the result of reasoning and deliberation, whereas self-interest is an automatic, primary motive that needs to be constrained by appropriate inhibitory mechanisms (e.g., Moore and Loewenstein 2004). Along those lines, honesty has been found to require the availability of cognitive-control resources (Gino et al. 2011; Mead et al. 2009) and time (Shalvi et al. 2012), demonstrating that behaving ethically requires more deliberative resources than behaving unethically. Halali et al. (2013) have shown a similar effect with regard to fairness considerations in dictator game settings: fairness considerations seem to require much more deliberation than self-interest considerations, while the latter are more intuitive (see also Achtziger et al. 2015; Uziel and Hefetz 2014; Xu et al. 2012, for the same pattern of results). Furthermore, Moore et al. (2010) showed that people truly believe their own biased judgments and have only a limited ability to recognize that their behavior was affected by self-interest. Thus, not only has the motivated reasoning literature shown that self-interest affects people’s understanding of the world around them, but behavioral ethics tells us that self-interest has this effect despite the individual’s limited awareness of its existence.

This study explores how the implications of this literature can be applied to enforcing ethics in an organization. While the classical debate over enforcement interventions, among both states and organizations, has typically compared the efficacy of deterrence and morality (for a review see Feldman 2011), the good people paradigm suggests (1) that even garden-variety, normative people might engage in unethical behavior without fully recognizing that their behavior is unethical, and (2) that they might be less likely to react to the traditional interventions aimed at curbing unethicality. Consequently, organizations might require a new way of dealing with wrongdoing. In other words, what seems to be missing from current models of enforcement intervention, according to the good people paradigm, is a consideration of their relevance in situations that fall within employees’ moral blind spots. If indeed most unethical behavior in organizations occurs within such moral blind spots, it may be necessary to find new types of top-down interventions to change individuals’ ethical judgment and behavior.

From an applied perspective, the unaware, unethical effect of self-interest on the behavior of a broad and normative segment of the population, described above as good people, poses an important challenge for creating new tools to discourage such unethical behavior (see Feldman et al. 2013). The ability of incentives and deterrents to affect non-deliberate behavior has been questioned by scholars such as Bazerman and Tenbrunsel (2011), who suggest that “such measures simply bypass the vast majority of unethical behaviors that occur without the conscious awareness of the actors, who engage in them” (p. 111). Following the same line of thought, Banaji and Greenwald (2013) have challenged the classic enforcement approach that focuses on external measures and incentives to control unethical behavior, arguing that such an approach assigns an unjustifiably central role to self-control, autonomy, and responsibility for one’s actions.

Yet, although in various situations “good people” may not be affected by incentives, it is not clear to what extent employees will entirely ignore their existence. For example, even if self-deception blocks one’s full awareness of the unethicality of one’s own behavior, introducing sanctions may make people more aware of their behavior and lessen the effect of automaticity. Therefore, traditional intervention techniques that target awareness should not be disregarded, but rather reexamined in light of the behavioral ethics literature.

This challenge of reexamining traditional regulatory approaches, and possibly creating new ones, is especially important for dealing with subtle conflicts of interest in organizations and in government. According to various scholars, this type of ethical challenge is created largely by people whose behavior breaks no specific law (Lessig 2011).

Subtle Conflict of Interest

In attempting to understand how to intervene and lead people to behave ethically, the current study focuses on the concept of subtle conflict of interest. Conflict of interest is the basic paradigm that lies at the heart of most organizational misconduct and receives special attention in almost every type of ethical code (Stevens 1994). It also contributes to the spread of corruption across numerous government and business contexts. The medical field, for example, offers fertile ground for conflicts of interest: even scientists and doctors who believe they are doing what is best for public health might ultimately engage in behaviors that favor the entities who compensate them (Feldman et al. 2013). Another common example can be found in clinical studies financed by pharmaceutical companies, an arrangement that gives physician-researchers an incentive to reach results that would benefit those companies (e.g., Friedberg et al. 1999; Hillman 1987; Rodwin 1989, 2012). A further example is the transition of professionals from the public to the private sector, a common phenomenon, referred to as the “revolving door,” that undermines the ability of public sector employees to focus only on the public interest (Che 1995). However, the problem is much greater than these examples suggest. One hypothesis suggests that the anticipation of future opportunities in regulated firms may cause regulators to be less aggressive when administering regulatory policy, even without full awareness that this anticipation may alter their behavior (Che 1995). In many other areas, including those of lawyers vis-à-vis their clients, executives vis-à-vis shareholders, prosecutors in plea bargains, and academics involved in the promotion of their colleagues, most good people may believe that the option that promotes their self-interest is also the correct one. In such situations, there might be only limited wisdom in threatening people with punishment for corruption.

The behavior of people in all of these situations cannot be understood from current behavioral ethics studies of dishonesty in which subjects sit in the lab and are asked, for example, to report how many assignments they have solved (e.g., Mazar et al. 2008). In those settings, the decision to cheat is clear and unambiguous, which is not the case when employees behave unethically in conflict of interest situations. This difference is especially salient in contexts where people’s motivated reasoning might lead them to feel that their choice to prioritize their self-interest is, in fact, also the right solution for the organization they work for (see also Zamir and Sulitzeanu-Kenan 2016). Thus, conflicts of interest, especially subtle ones, differ from the focus of most dishonesty studies: both the interest and the behavior are subtle and could be seen as almost legitimate. In contrast, in most classical dishonesty studies the lie is clear to the participant. For example, participants know that for every matrix they over-report, they get more money. Those who behave dishonestly know that what they do is wrong; they do it for a profit and find various excuses to justify their behavior (Ayal and Gino 2011). In many real-life organizational settings, by contrast, people constantly evaluate and judge whether a certain employee is good, a certain program is good, or a certain company is worth buying.

Some existing studies have taken this approach, such as Cain et al.’s (2005) study on conflict of interest in counting jellybeans; but even there, participants knew that the numbers they provided (in their advice) were false. A different study by Pittarello et al. (2015) focused on more ambiguous situations in which participants had to identify the location of blurred stimuli. In that case, subjects could easily justify dishonesty in reporting the location of the stimulus simply because there was no single correct answer. As a result, subjects may not have been consciously aware of their dishonesty.

We believe that the experimental approach we present in the current work contributes to the literature in a number of ways. First, we use Amazon Mechanical Turk (MTurk) workers who were hired to evaluate a research center. This task is relatively similar to tasks many employees perform: rather than making an ultimate up-or-down decision about the research center, workers evaluate a certain proposal, which closely resembles the type of evaluation assignments people perform in their regular jobs. Second, we focus on an ambiguous context, which we believe to be the most common setting in which good people engage in motivated reasoning and self-deception regarding their unethicality (see Pittarello et al. 2015). Third, our study could contribute to the hundreds of enforcement studies that have documented numerous factors affecting the efficacy of deterrence and morality in changing people’s behavior. We believe that learning how different interventions operate when dealing with subtle conflicts of interest will also contribute to the nudge literature, which has been used to curb the unethicality of good people (Shu et al. 2012) and is now a leading approach to non-deliberative choices in general and in psychology in particular. Our project’s assumption is that, in the rush to adopt nudges and implicit interventions to regulate people’s implicit behavior, we need to test the efficacy of more traditional interventions before abandoning them, even when dealing with good people who supposedly do not fully recognize the wrongdoing in their behavior. In other words, the aim of our study is to test the complexity of the good people argument in ethical decision making: at one end of the spectrum, people fail to recognize the wrongness of their own behavior; at the other, people draw a self-imposed ethical line that they do not cross.

The Current Work

In the current work, we examined how material interests imperceptibly affect decision making, and how effective classical and new intervention approaches are at negating self-interests that conflict with organizational duty. Many questions in this field remain open, as most of the new research on decision making and behavioral ethics has not yet found expression in the research and practice of conflict of interest. Little is known about what should be done to effectively counter the putative influences outlined above. Understanding the process by which self-interest operates is naturally key to understanding how to curb these influences and to determining which intervention methods legal policy makers should focus on in various contexts.

To answer these questions, we designed two studies that focus on people’s behavior in conflict of interest situations. In the first study, we focused on identifying the mindset, intuitive/automatic or analytical/deliberative, under which conflicts of interest affect people more. Following the vast majority of the research on bounded ethicality (see Bereby-Meyer and Shalvi 2015 for a review), we hypothesized that, compared to an analytical/deliberative mindset, an intuitive/automatic mindset increases unethical decisions in response to subtle conflicts of interest. In the second study, we focused on pinpointing the best intervention method to curb such behaviors, deterrence or morality, and the best way to implement it, explicitly or implicitly. Deterrence and morality are very common intervention practices used by the authorities: deterrence serves as a traditional function of the law and relies on extrinsic motivation to shape behavior, whereas morality focuses on changing an individual’s intrinsic motivation. Extrinsic motivation refers to actions driven by external commands or incentives and can be targeted through rewards and fines aimed at the potential offender’s financial status. Conversely, intrinsic motivation refers to behavior driven from within the individual, usually out of a sense of moral or civic duty (Deci et al. 1999; Kasser and Ryan 1996), and it is affected by targeting the sense of morality (see Feldman 2009, 2011).

As far as we know, the current research provides a first look into the ability of organizations to curb the unethical behavior of “good” people in subtle conflicts of interest. The focus on subtle conflicts of interest attempts to replicate situations in which it would be very easy for people to behave in a self-interested way without fully acknowledging that their behavior is unethical. In obvious situations, where people are bribed to act against their duty of loyalty to the state or to the corporation, the now common “blind spot” argument is tenuous and less likely to apply.

In our subtle conflict of interest scenario, we used a case study in which the gap between ethical demands and self-interest is minimal. Specifically, we placed our participants in a conflict of interest between what they were hired to do (to evaluate a specific research center in an objective way) and their personal interest (to write good things about the research center so they might be invited to participate in an additional study for additional compensation). In addition, we included other components to make this experimental setting suitable for examining the behavior of good people from two different angles.

First, instead of actual bribes or anything else overtly unethical, we introduced subtle conflicts of interest, which leave more room for implicit corruption. This distinction appears in the conflict of interest literature (Moore and Tanlu 2010). For the most part, it has been argued that it is more instructive to investigate the behavior of good people facing a subtle conflict of interest because they are more likely to be unaware of the influence that such a conflict has on them (Chugh et al. 2005). Hence, consistent with the research on the contribution of ambiguity to dishonesty discussed above, we created a situation in which the incentive to shirk was presented in an ambiguous way, with no direct link between the behavior of the MTurk worker and the reward. Rather, the worker’s evaluation of the research center only slightly increases the odds of beneficial rewards in the future, but does not guarantee them. This ambiguity is not only theoretically interesting but also practically relevant: many revolving door conflicts occur in areas where there is no clear link between one’s level of favoritism in the public sector and one’s likelihood of being hired by a relevant regulated party upon exiting the public sector (Cornaggia et al. 2016; Gormley 1979).

Second, we created two levels of bias participants could express to increase their chances of getting a future reward (i.e., issues regarding the research conducted in the research center and issues regarding the researchers working there). These two levels allow another examination of good people’s behavior in conflict of interest situations. According to existing studies (Mazar et al. 2008), part of the concept of good people is that they choose not to lie as much as rational choice accounts would predict. By creating two possible levels of biased evaluation, we extend those insights to the behavior of people in conflict of interest situations and their self-imposed limits.

In both experiments, we recruited participants from the online labor market Amazon Mechanical Turk (MTurk), in which employers can employ workers to complete short tasks (generally fewer than 10 min) for relatively small amounts of money (generally up to $1). Workers receive a baseline payment (i.e., a show-up fee) and can be paid an additional bonus depending on their performance. Importantly, while the reputation of most in-lab participants is usually unknown, as most behavioral labs do not (or cannot) use a reputation system to create lasting, publicly available reputations, MTurk tracks participants’ reputations. The reputations of both employers (i.e., requesters) and workers can therefore suffer if either one behaves unfairly (for further description of Mechanical Turk sampling, see Buhrmester et al. 2011; Paolacci and Chandler 2014; Rand et al. 2012). Considering this characteristic, we reasoned that a sample of MTurk participants would better represent the real-world relationship between employers and employees.

In both studies, we measured participants’ objectivity in evaluating the research institute and its scientists as described in the questionnaire. We presented participants with various conditions that created opportunities to advance their manipulated self-interest by shifting their judgments in favor of the described institution. If participants in the experimental groups provided a biased evaluation of the research institution relative to a control group with no financial interest in the evaluation, we could identify the deviation of their evaluations from the control group’s objective evaluations. On their own, such deviations could not be defined as corrupt or unethical; as suggested above, we intentionally focused on these ambiguous contexts to account for the blind spot argument (e.g., Banaji and Greenwald 2013; Chugh et al. 2005; Sezer et al. 2015). Participants were then randomly assigned to groups in which they learned, either implicitly or explicitly, that they were under either a regime of penalty or an appeal to morality. Next, participants answered two questionnaires regarding the research institute. The first included items on the research conducted at the institute and on the scientists working there; the second focused strictly on the research and assessed participants’ agreement with different statements and their willingness to actively engage along the lines delineated in those statements. Lastly, we assessed participants’ sense of objectivity regarding the research institute when answering the previous questionnaires.

Experiment 1

Participants

Ninety-nine participants (52.5% males, 47.5% females) completed the experiment online through MTurk in exchange for $1. Additional collected demographics included race (74.7% White, 7.1% Black, 4.0% Hispanic, 10.1% Asian, 4.0% Other), age (18.2% 18–24 years old, 44.4% 25–34 years old, 23.2% 35–44 years old, 9.1% 45–54 years old, 5.0% 55 years old and over), and level of education (1.0% less than high school, 10.1% high school/GED, 26.3% some college, 8.1% 2-year college degree, 43.4% 4-year college degree, 8.1% master’s degree, 3.0% PhD/MD/JD). Participants were all US residents with a previous HIT approval rate of 80% or better. We excluded responses from one participant who attempted to complete the study multiple times. All participants signed an informed consent form before participating in the study. We randomly assigned participants to one of two experimental mindset conditions (intuitive/analytical).

Procedure

After signing the consent form, participants went through a mindset manipulation. Next, they read a paragraph describing the Edmond J. Safra Research Center and were introduced to the conflict of interest. Subsequently, participants received three questionnaires: (1) an 18-item questionnaire, (2) a binomial questionnaire, and (3) an objectivity questionnaire, which aimed to evaluate, respectively, their opinion of the research institute, their support for it, and their sense of objectivity with regard to the research institute when answering the questionnaires. Finally, participants answered a demographic questionnaire.

Materials

Mindset Manipulation

Relying on Shenhav et al. (2012; also used by Rand et al. 2012), we manipulated mindset by asking participants to write a paragraph of 8–10 sentences recalling an episode from their life in which their intuition/first instinct (intuitive/automatic mindset) or carefully reasoning through a situation (analytical/deliberative mindset) led them in the right direction and resulted in a good outcome. Then, prior to the 18-item questionnaire, participants were asked to rely on intuition/reasoning when making their responses. Lastly, to further strengthen reliance on reasoning in the analytical condition, we told participants in this condition that, following their responses to the 18-item questionnaire, they would be required to describe in writing the reasons for their evaluations (Wilson and Schooler 1991).

The Conflict of Interest

We created a potential conflict of interest (COI) by telling participants that “currently, the Edmond J. Safra Center is running an important additional online experiment (with additional, higher relative pay) to examine modes of…,” and that “participants for this experiment will be selected based on their answers to the current survey.” Following this statement, we asked participants to indicate whether they would like to be considered for this additional experiment. Eleven participants indicated that they did not want to be considered (five in the intuitive mindset condition and six in the analytic mindset condition). The goal of the conflict of interest manipulation was to place participants in a conflict between what they were hired to do (i.e., to evaluate the research center in an objective way) and their personal interest (i.e., to write good things about the research center so they might be invited to participate in additional studies for additional compensation). Because participants who did not want to be considered for the additional experiment had no opportunity to earn more money through their evaluations, their evaluations were not subject to a conflict of interest; they had no material temptation to write good evaluations. Since the current work focuses on conflict of interest, we excluded these participants from further analyses. Every participant who wanted to be invited to the additional study was given a link to that study at the end of the experiment and received an additional bonus for it.

The 18-Item Questionnaire

The 18-item questionnaire included nine items that focused on the research conducted by the institute, such as “Research conducted by this center is more important than most other research I’m familiar with in the social sciences,” and eight items that focused on the scientists working at the institute, such as “The salaries of scientists at this center should be higher than other scientists’ salaries” (see “Appendix” for the full questionnaire). Participants were asked to indicate their answers “as objectively as possible” on a scale of 1 (strongly disagree) to 6 (strongly agree). Agreement on all items indicated an evaluation favorable to the research institute. To make sure participants read each item before answering, one of the items (number 10) required participants to provide a specific rating (“2”) and did not include any question regarding the research or the scientists of the institute. Except for one participant who missed this item, all participants responded correctly.

The Binomial Questionnaire

The binomial questionnaire included four binomial questions regarding each of the following three statements about the research institute: (a) “Research conducted by the Safra Center is crucial for the well-being of society,” (b) “The Safra Center’s research will change the way we look at public institutions,” and (c) “The Safra Center’s mission is the first attempt ever to deal with one of our most important problems.” For each statement, participants had to indicate whether (a) it is accurate/inaccurate, (b) they agree/disagree with it, (c) they would/would not make this statement to a potential donor, and (d) they would/would not be willing to sign a petition.

The Objectivity Questionnaire

The objectivity questionnaire included the following yes/no questions assessing participants’ sense of objectivity with regard to the research institute when answering the questionnaires: (a) “Do you think you were influenced by anything while you were answering the questions?” (b) “Were you completely objective during this study?” and (c) “Did you consider anything besides your best judgment while answering these questions?”

Results and Discussion

The average survey completion time was 11.14 min (SD = 8.7) in the intuitive mindset condition and 12.68 min (SD = 7.2) in the analytic mindset condition. We excluded three outlier participants (two in the intuitive mindset condition and one in the analytic mindset condition) from all analyses because their completion times (41 and 46 min in the intuitive condition and 44 min in the analytic condition) were more than three standard deviations above the mean of their condition (Z = 3.34, 3.90, and 4.45, respectively). Therefore, including all aforementioned exclusions, we excluded a total of 15 participants from the following analyses. The pattern of the reported results was similar when all of these participants were included.
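A minimal sketch of this exclusion rule (Python/pandas; the column names are hypothetical stand-ins, not the study’s actual variables):

```python
import pandas as pd

def exclude_time_outliers(df: pd.DataFrame, time_col: str = "minutes",
                          cond_col: str = "condition",
                          cutoff: float = 3.0) -> pd.DataFrame:
    """Drop participants whose completion time lies more than `cutoff`
    standard deviations from the mean of their own condition."""
    z = df.groupby(cond_col)[time_col].transform(
        lambda t: (t - t.mean()) / t.std(ddof=1))
    return df[z.abs() <= cutoff]
```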

The 18-Item Questionnaire

For each participant, we calculated a separate mean score for the research-related items (mean = 3.9, median = 3.9, SD = .77) and for the scientist-related items (mean = 3.2, median = 3.3, SD = 1.0). Cronbach’s alpha reliability of these items was .88 and .90, respectively. Next, we entered the mean scores into a mixed-model analysis of variance (ANOVA), with mindset condition as a between-participants variable and the item issue (research, scientists) as a within-participants variable.
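For concreteness, this reliability-plus-ANOVA pipeline can be sketched as follows (a hypothetical Python illustration using the pingouin package on synthetic stand-in data; the column names pid, condition, r1–r9, and s1–s8 are our own assumptions, not the original materials):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic stand-in data: one row per participant, 9 research items (r1-r9)
# and 8 scientist items (s1-s8) on a 1-6 scale, plus a mindset condition.
rng = np.random.default_rng(1)
n = 84
research_cols = [f"r{i}" for i in range(1, 10)]
scientist_cols = [f"s{i}" for i in range(1, 9)]
wide = pd.DataFrame(rng.integers(1, 7, size=(n, 17)),
                    columns=research_cols + scientist_cols)
wide["pid"] = range(n)
wide["condition"] = rng.choice(["intuitive", "analytic"], size=n)

# Internal consistency of each item set.
alpha_research, _ = pg.cronbach_alpha(data=wide[research_cols])
alpha_scientist, _ = pg.cronbach_alpha(data=wide[scientist_cols])
print(f"alpha research: {alpha_research:.2f}, scientists: {alpha_scientist:.2f}")

# Mean score per participant for each item issue, reshaped to long format.
wide["research"] = wide[research_cols].mean(axis=1)
wide["scientists"] = wide[scientist_cols].mean(axis=1)
long = wide.melt(id_vars=["pid", "condition"],
                 value_vars=["research", "scientists"],
                 var_name="issue", value_name="score")

# Mixed ANOVA: mindset condition between participants, item issue within.
aov = pg.mixed_anova(data=long, dv="score", within="issue",
                     subject="pid", between="condition")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared
```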

A significant main effect for item issue, F(1,82) = 86.88, p < .001, ηp² = .51, indicates that participants’ mean evaluation of the research conducted at the institute (M = 3.98, SD = .79) was more positive than their mean evaluation of the scientists working at the institute (M = 3.22, SD = 1.07) across all conditions. Importantly, and as expected, participants’ mean evaluation in the intuitive mindset condition (M = 3.81, SD = .83) was significantly more positive than in the analytic mindset condition (M = 3.44, SD = .86), F(1,82) = 3.99, p < .05, ηp² = .05, with no condition × item issue interaction (F < 1, n.s.).
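For reference, the partial eta squared values reported throughout follow the standard definition and can be recovered from each F test and its degrees of freedom, e.g., for the item-issue effect:

$$
\eta_p^2 \;=\; \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
\;=\; \frac{F \cdot df_1}{F \cdot df_1 + df_2}
\;=\; \frac{86.88 \times 1}{86.88 \times 1 + 82} \approx .51 .
$$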

The Binomial Questionnaire

The Kuder–Richardson formula 20 (KR-20; Kuder and Richardson 1937) reliability of the 12 items in the binomial questionnaire was .84. For each participant, we calculated the proportion of answers in favor of the research institute and entered it into a one-way ANOVA with mindset condition as a between-participants variable. As in the 18-item questionnaire, participants’ favoritism toward the research institute was higher in the intuitive (M = .72, SD = .20) than in the analytic (M = .66, SD = .30) mindset condition. This difference, however, did not reach statistical significance, F(1,85) = 1.23, p = .27, ηp² = .02.
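For reference, the KR-20 coefficient for k dichotomous items (here k = 12) is

$$
r_{KR20} \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k} p_i\, q_i}{\sigma_X^{2}}\right),
$$

where \(p_i\) is the proportion of participants answering item i in favor of the institute, \(q_i = 1 - p_i\), and \(\sigma_X^2\) is the variance of participants’ total scores.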

The Objectivity Questionnaire

Because the objectivity questionnaire included only three binomial items, we entered participants’ answers (0: objective, 1: non-objective) into a generalized estimating equation for binomial data with a probit link, with mindset condition (intuitive, analytic) as a between-participants independent variable, question (1, 2, 3) as a within-participants independent variable, and participant as a random factor. A significant main effect for question, Wald χ²(2) = 13.60, p = .001, revealed that more participants indicated non-objective behavior on question 1 (20.2%) than on question 2 (13.1%), Wald χ²(1) = 3.36, p = .067, and more on question 2 than on question 3 (4.8%), Wald χ²(1) = 7.29, p = .007. Yet the main effect for condition and the question × condition interaction were not significant, all Wald χ² < 1, n.s., indicating that although participants in the intuitive mindset condition favored the research institute more than those in the analytic condition, they did not feel less objective, or at least did not report being less objective.
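A rough Python/statsmodels approximation of this model is sketched below; the data frame and its columns (pid, condition, question, nonobjective) are synthetic stand-ins we introduce for illustration, not the study’s data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant x question.
rng = np.random.default_rng(0)
n = 84  # roughly the number of participants retained in Experiment 1
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 3),
    "condition": np.repeat(rng.choice(["intuitive", "analytic"], size=n), 3),
    "question": np.tile([1, 2, 3], n),
})
df["nonobjective"] = rng.binomial(1, 0.15, size=len(df))  # 0 = objective, 1 = non-objective

# Probit-link GEE for repeated binary responses, clustered by participant.
model = smf.gee(
    "nonobjective ~ C(condition) * C(question)",
    groups="pid",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Probit()),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.wald_test_terms())  # Wald chi-square test for each model term
```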

Thus, the results of the first study show that an intuitive/automatic mindset (i.e., lack of deliberation) is related not only to dishonesty, on which much of the literature has focused, but also to unethical behavior in subtle conflict of interest situations. They thereby broaden the horizons of the current behavioral ethics literature by shifting the focus away from pure dishonesty. Under a subtle conflict of interest between what participants were hired to do (i.e., to evaluate the research center in an objective way) and their personal interest (i.e., to write good things about the research center so they might be invited to participate in additional studies for additional compensation), an intuitive/automatic mindset increased the likelihood that participants would provide favorable reviews of the Safra Center relative to participants placed in an analytical/deliberative mindset. However, this study offers no guidance on how to regulate people’s behavior, given that people are more likely to behave non-objectively in an intuitive mindset: with an explicit intervention or an implicit one? Experiment 2 addresses this question.

Experiment 2

Participants

Three hundred and twenty participants (51.6% males, 48.4% females) completed the experiment online through MTurk in exchange for $1. Additional collected demographics included race (76.2% White, 7.5% Black, 4.7% Hispanic, 7.5% Asian, 4.1% Other), age (21.3% 18–24 years old, 42.5% 25–34 years old, 18.4% 35–44 years old, 8.1% 45–54 years old, 9.7% 55 years old and over), and level of education (.9% less than high school, 9.4% high school/GED, 27.5% some college, 8.8% 2-year college degree, 40.0% 4-year college degree, 10.9% master’s degree, 2.5% PhD/MD/JD). Participants were all US residents with a previous HIT approval rate of 80% or better. All participants signed an informed consent form before participating in the study. Participants were randomly assigned to one of six experimental conditions: four conditions with a conflict of interest and different forms of deterrence or morality interventions, one control condition with a conflict of interest and no intervention, and one control condition with no conflict of interest and no intervention. The six groups were as follows:

1. Conflict of interest with explicit deterrence (n = 52).

2. Conflict of interest with explicit morality (n = 55).

3. Conflict of interest with implicit deterrence (n = 54).

4. Conflict of interest with implicit morality (n = 54).

5. Control group: conflict of interest with no intervention (n = 56).

6. Control group: no conflict of interest and no intervention (n = 49).

Procedure

After signing the consent form, participants read the same paragraph as in Experiment 1, describing the Edmond J. Safra Research Center. Next, they were introduced to the conflict of interest, followed by one of the various forms of morality or deterrence interventions. Subsequently, participants received the same 18-item questionnaire, binomial questionnaire, objectivity questionnaire, and demographic questionnaire as in Experiment 1. In the control condition with no conflict of interest, participants answered the set of questionnaires immediately after reading the paragraph describing the research institute (i.e., they were not exposed to the conflict of interest and did not go through any intervention). In the control condition with a conflict of interest, participants answered the questionnaires after being exposed to the description of the conflict (i.e., they did not go through any intervention).

Materials

The Conflict of Interest

We created a potential conflict of interest (COI) using the same manipulation as in Experiment 1. Fifteen participants in the COI conditions indicated that they did not want to be considered for the additional experiment (three in explicit deterrence, four in explicit morality, three in implicit deterrence, two in implicit morality, and three in no intervention). As in Experiment 1, we excluded these participants from further analyses, because participants who did not want to be considered for the additional experiment could not have experienced the conflict of interest during the current experiment. Every participant who wanted to be invited to the additional study was given a link to that study at the end of the experiment and received an additional bonus for it.

Explicit Deterrence

We manipulated explicit deterrence by asking participants to read a paragraph about governments cracking down on conflicts of interest, which stated that “In accordance with this worldwide trend, we believe that people who let their conflict of interest affect their objectivity and integrity when completing this survey should be penalized. Hence, participants who let their conflicting interests affect their judgment might lose some of their compensation for the work they do for us.” Next, we asked participants to answer a three-item questionnaire to verify their understanding of the explicit deterrence intervention (see “Appendix” for a full display of the manipulation and the follow-up questions). Three participants in the explicit deterrence condition failed to answer the three deterrence comprehension items correctly and were excluded from further analysis.

Explicit Morality

We manipulated explicit morality by asking participants to read a paragraph explaining why, in a situation of conflicting interests, acting based on one’s self-interest is immoral, and stating that “In accordance with this worldwide trend, we believe that people who let their conflict of interest affect their objectivity and integrity when completing this survey are not acting in a moral and ethical way. Hence, participants who will let their conflicting interests affect their judgment might harm the public good.” Next, we asked participants to answer a three-item questionnaire to verify their understanding of the explicit morality intervention (see “Appendix” for a full display of the manipulation and the follow-up questions). Nine participants in the explicit morality condition failed to answer the three morality comprehension items correctly and were excluded from further analysis.

Implicit Deterrence

We manipulated implicit deterrence using a 35-item word completion test in which 11 of the items were words related to deterrence (e.g., punishment, subpoena, indictment), intended to prime participants with concepts of deterrence. Each of these 11 prime words was tested and found to have several hundred thousand Google results shared with the word deterrence (see “Appendix” for a full list of these words). Methodologically, in priming participants using lists of words, we followed a rationale similar to that of other studies in which priming words are used to induce a state of mind (e.g., Srull and Wyer 1979; Norenzayan and Shariff 2008). In contrast to the scrambled sentences used in those papers, we used a word completion task to get people to think about the two modes of compliance motivation, deterrence and morality. Further, this method was recently used as a dependent measure in a very influential paper on bounded ethicality (Shu et al. 2012). We excluded two participants from further analysis because they failed to identify five or more of the 11 deterrence prime words.

Implicit Morality

The implicit morality intervention was the same as the implicit deterrence intervention, except that the 11 prime words were related to morality (e.g., integrity, morality, honesty) rather than to deterrence. As in the implicit deterrence intervention, each of these 11 prime words was tested and found to have several hundred thousand Google results shared with the word morality (see “Appendix” for a full list of these words). The remaining 24 neutral items were the same as in the implicit deterrence intervention. We excluded from further analysis two participants who failed to identify six or more of the 11 morality prime words.

The 18-Item Questionnaire

The 18-item questionnaire was the same as in Experiment 1.

The Binomial Questionnaires

The three statements and the binomial questionnaires were the same as in Experiment 1.

The Objectivity Questionnaire

The objectivity questionnaire was the same as in Experiment 1.

Results and Discussion

Table 1 displays the average survey completion times and standard deviations in all experimental conditions. We excluded three outlier participants (one in the explicit deterrence condition, one in the explicit morality condition, and one in the control-COI condition) from all analyses because their completion times (48, 45, and 188 min, respectively) were more than three standard deviations above the mean of their condition (Z = 3.63, 4.04, and 6.85, respectively). Therefore, including all aforementioned exclusions, we excluded a total of 34 participants from the following analyses. The pattern of the reported results was similar when all of these participants were included.

Table 1 Average time and standard deviations for survey completion in all experimental conditions in Experiment 2

The 18-Item Questionnaire

For each participant, we calculated a separate mean score for the research-related items (mean = 4.1, median = 4.1, SD = .85) and for the scientist-related items (mean = 3.1, median = 3.1, SD = .95). Cronbach’s alpha reliability of these items was .89 and .83, respectively. Next, we entered the mean scores into a mixed-model analysis of variance (ANOVA) with condition as a between-participants variable and the item issue (research, scientists) as a within-participants variable.

A significant main effect for item issue, F(1,280) = 443.41, p < .001, ηp² = .61, indicates that participants’ mean evaluation of the research conducted at the institute (M = 4.07, SD = .86) was more positive than their mean evaluation of the scientists working at the institute (M = 3.07, SD = .95) across all conditions. The main effect of condition was significant, F(5,280) = 5.22, p < .001, ηp² = .09, as was the condition × item issue interaction, F(5,280) = 3.15, p < .01, ηp² = .05.

As shown in Fig. 1, subsequent analyses of the condition × item issue interaction revealed that participants’ evaluations of the research conducted at the institute in the control-COI condition and the two implicit intervention conditions (deterrence, morality) were significantly higher than those in the control-no COI condition and the two explicit intervention conditions (deterrence, morality), F(1,462) = 20.52, p < .001, ηp² = .07. These results indicate that the opportunity to earn extra money by participating in another experiment of the research institute led participants to behave less ethically: they showed a more favorable disposition toward the research conducted at the institute in the control-COI condition than in the control-no COI condition. As for the different forms of intervention, the explicit deterrence and explicit morality interventions were both effective, that is, they resulted in evaluations similar to those in the control-no COI condition, whereas the implicit deterrence and implicit morality interventions were not: participants’ evaluations in these groups did not differ from those in the control-COI condition.

Fig. 1: Participants’ evaluations toward Safra as a function of condition and item issue. Error bars represent 95% confidence intervals.

In contrast to the effect of the COI on participants’ evaluations of the research (as illustrated by the difference between the control-COI and control-no COI conditions), the COI appears not to have affected participants’ evaluations of the scientists working at the institute: evaluations in the control-COI condition were not significantly different from those in the control-no COI condition (F < 1, n.s.). Interestingly, as shown in Fig. 1, while participants’ evaluations of the scientists in the COI conditions with implicit interventions (deterrence, morality) did not differ from the two control groups (COI, no COI), in the COI conditions with explicit interventions (deterrence, morality) evaluations were significantly lower than in the four former conditions (control-no COI, control-COI, implicit deterrence, implicit morality), F(1,280) = 21.81, p < .001, ηp² = .07. This pattern of results suggests a chilling effect (Craswell and Calfee 1986), that is, evaluations even more “objective” than those in the control group with no conflict.

The Binomial Questionnaires

The Kuder–Richardson formula 20 (Kuder and Richardson 1937) reliability of the 12 items in the binomial questionnaires was .85. For each participant, we calculated the proportion of answers in favor of the research institute and entered it into a one-way ANOVA with condition as a between-participants variable. The main effect of condition was significant, F(5,280) = 2.31, p < .05, ηp² = .04. Subsequent analysis revealed results similar to those observed for participants’ evaluations of the research conducted at the institute as measured by the 18-item questionnaire. Specifically, participants’ favoritism toward the research institute in the control-COI condition and the two implicit intervention conditions (deterrence, morality) was significantly higher than in the control-no COI condition and the two explicit intervention conditions (deterrence, morality), F(1,280) = 9.86, p < .002, ηp² = .03 (see Table 2). These results replicate the 18-item questionnaire results regarding the research conducted at the institute, indicating that the opportunity to earn extra money by participating in another experiment made participants more favorably inclined toward the research conducted at the institute, and that only the explicit forms of the deterrence and morality interventions were effective.

Table 2 Proportion of answers in favor of the research institute in all experimental conditions in Experiment 2

The Objectivity Questionnaire

Because the objectivity questionnaire included only three binomial items, we entered participants’ answers (0: objective, 1: non-objective) into a generalized estimating equation for binomial data with a probit link, with condition (the six groups described above) as a between-participants independent variable, question (1, 2, 3) as a within-participants independent variable, and participant as a random factor. A significant main effect for question, Wald χ²(2) = 55.61, p < .001, revealed that more participants indicated non-objective behavior on question 1 (24.1%) than on question 2 (13.6%), Wald χ²(1) = 20.96, p < .001, and more on question 2 than on question 3 (6.3%), Wald χ²(1) = 12.27, p = .001. Moreover, the main effect for condition was marginally significant, Wald χ²(5) = 10.40, p = .065 (see Table 3), while the question × condition interaction was not significant, Wald χ²(10) = 5.48, p = .857.

Table 3 Proportion of non-objective answers in all experimental conditions in Experiment 2

Subsequent analysis of the condition effect revealed that more participants indicated non-objective behavior in the control-COI condition and the two morality intervention conditions (explicit, implicit) than in the control-no COI condition and the two deterrence intervention conditions (explicit, implicit), Wald χ²(1) = 9.17, p = .002. Note that the two morality intervention conditions (explicit, implicit) behaved differently in the main analysis of the research conducted at the institute, as measured by the 18-item and binomial questionnaires. Specifically, under the implicit morality condition, participants were less objective than in the control-no COI condition (more in favor of the Safra Center), whereas under the explicit morality condition they were not. It is possible, therefore, that under these two morality conditions, participants’ higher proportion of self-reported non-objective behavior is the result of self-justification following the morality intervention, and not of a true sense of non-objectivity during the study. In contrast, participants’ higher proportion of self-reported non-objective behavior in the control-COI condition, compared to the control-no COI condition, cannot be explained by self-justification, as these participants did not go through any intervention. Further, their higher proportion of self-reported non-objective behavior is consistent with their exaggerated favorable evaluations of the research conducted at the institute (i.e., their actual unethical behavior). It appears, therefore, that these participants not only behaved unethically but also were aware of it.

Summary of Findings and Discussion

In the current study, we examined how different intervention techniques affected people faced with a subtle conflict of interest (i.e., the opportunity to earn additional money if invited to participate in a future study) and an opportunity to engage in a subtle behavior whose ethicality is ambiguous (i.e., rating the Safra Center somewhat more favorably could be attributed to other motives). First, by manipulating participants’ self-interest relative to a control group that could not earn an additional fee for an additional experiment, we found a substantial “corrupting” effect: the control-COI group reported a more favorable view of the target stimuli than did the control-no COI group. Although the potential effect of money on behavior is neither new nor surprising, the fact that the opportunity to earn such a small amount of extra money in future research, subtly mentioned to participants, inflated their evaluation of the research institute, even though they were explicitly asked to evaluate it objectively, reveals the corrupting potential of subtle conflicts of interest. Further, compared to the control-COI condition, the explicit interventions (both deterrence and morality) had a significant constraining effect on participants’ judgments, whereas the implicit interventions (again, with similar effects for deterrence and morality) did not affect participants’ judgments. These patterns were obtained with two different dependent variables, an 18-item Likert-type questionnaire and a binomial questionnaire. Thus, raising participants’ awareness of the possibility of their unethical behavior, using rather simple explicit interventions, was sufficient even in the context of subtle conflicts of interest to prevent some of those behaviors.

Taken together with the results of Experiment 1, according to which unethical behavior in subtle conflict of interest situations is more pronounced under intuitive/automatic than under analytical/deliberative mindsets, Experiment 2’s results support the claim that unethical behavior is associated with automatic, System 1 processing (e.g., Mead et al. 2009; Shalvi et al. 2012). Furthermore, consistent with the dual-model suggestion that System 2 can override or inhibit default responses emanating from System 1 (Stanovich 1999), the type of intervention used (deterrence/morality) appears to be less important, as long as it is conducted explicitly so that it triggers deliberate System 2 processing. By contrast, at least in the context studied here, implicit interventions had no effect on participants’ unethical behavior, suggesting that interventions based on System 1 processing might be less effective in overriding System 1 unethical behavior.

Yet, in the current work we used a word completion task to manipulate the implicit morality and deterrence interventions. While this method was recently used as a dependent measure in a very influential paper on bounded ethicality (Shu et al. 2012), it might be less effective as an independent implicit manipulation. The current null effects for this method should therefore not be interpreted as evidence that implicit interventions in general cannot prevent unethical behavior in subtle conflicts of interest. Specifically, it is possible that different implicit methods, or even the same one with a higher proportion of prime words (in the current work we had only 11 primes out of 35 words), might be more effective in constraining unethical tendencies. In line with this, Gino and Desai (2012) recently found that exposure to childhood cues, a different form of implicit intervention, reduced unethical behavior.

Nevertheless, taken together, the current findings strengthen our claim that traditional intervention techniques, which assume awareness, should not be washed away by the more innovative nudge-like techniques advocated in the recent behavioral ethics literature. Instead, the classic deterrence literature should be modified in light of the research on behavioral ethics, not abandoned altogether, even when dealing with “good” people. It might be the case, as Chugh et al. (2005) argue, that incentives will not change the behavior of those who engage in implicit corruption. However, since people’s level of awareness often cannot be anticipated ex ante, the importance of incentives should not be dismissed without further empirical examination in specific organizational and legal contexts.

These results improve our understanding of the approach organizations should adopt to deal with the unethicality of good people. Understanding the combination of the two possible evaluations, substantive (evaluating the center) and personal (evaluating the scientists working in the center), contributes to our understanding of good people’s unethicality. Participants were biased in their responses about the research center due to the mere possibility of participating in additional, more profitable research. However, they did not provide biased evaluations of the scientists. This can be explained by the fact that the latter judgment was based on far less information than the former, which gave participants less room to justify a biased evaluation.

The fact that it was more difficult to “corrupt” participants on the scientist measure is consistent with previous findings that people behave unethically to the extent that they can justify their actions (Schweitzer and Hsee 2002; Shalvi et al. 2011). Participants in our study showed self-restraint against corrupting influences in situations in which they could not produce a justifiable reason for changing their judgment. People could feel good about themselves for expressing favorable views about research on ethics, but it may have been more difficult to find justifiable reasons for expressing favorable views about the scientists themselves when they were given no information that would help them make this judgment. Thus, our findings support the focus on “good” people, as from a rational choice perspective there were fewer justifications for expressing views more favorable to the scientists than to the Safra Center generally. Interestingly, however, in contrast to Experiment 2’s results, in Experiment 1 an intuitive/automatic mindset, compared to an analytical/deliberative one, increased unethically favorable judgments in response to a subtle conflict of interest regarding both the research conducted at the institute and the scientists working there. These findings are consistent with the results reported by Shalvi et al. (2012) and highlight how potentially calamitous people’s intuitive/automatic self-serving tendency can be, as it seems to evoke unethical behavior even in settings in which people typically refrain from behaving unethically (when self-justifications are lacking).

Another important finding that emerges from the study has to do with participants' self-awareness as to whether they were objective in their evaluations. As suggested in "Introduction," this is an important area of research for the interaction between behavioral ethics and the law because of the centrality of awareness to legal theory and practice. In this regard, both explicit and implicit priming of morality appear to have led people in our manipulated conflict of interest condition to rate their objectivity lower than did members of the control-no COI group, regardless of their actual level of objectivity, as measured by their evaluations of the research institute. More importantly, however, participants in the control-COI group, who indeed were less objective in their evaluations than their peers, reported being less objective than participants in the control-no COI group. This finding suggests that members of the control-COI group were aware of their unethical behavior, raising the question of how "good" the people who do these "bad things" really are.

Limitations

The findings of this exploratory study suggest several potential implications for theory regarding the interplay between behavioral ethics and law. However, before proceeding with the substantive discussion, we must draw attention to some of the limitations of the study. Naturally, given the exploratory nature of this research, its results should be treated with some skepticism. Nevertheless, our main findings, namely the effect of the manipulated conflict of interest on participants' evaluations of the research conducted at the institute and the relative effectiveness of the different forms (deterrence vs. morality) and methods (explicit vs. implicit) of intervention, were replicated with both the 18-item and the binomial questionnaires.

The main limitation concerns the comparison between intervention through morality and intervention through the likelihood of sanctioning. There is a limit to what can be learned by comparing deterrence and morality when the concepts are manipulated in an online context. The true effect of deterrence is usually measured in the field or in a lab, where participants' overall bonuses are at stake. In this experiment, because of various IRB restrictions, our threat was relatively mild. This limitation did not apply to the morality-based intervention, which was naturally less problematic from an IRB perspective. Nevertheless, and despite these limitations, the fact that deterrence was, overall, more efficacious than morality strengthens the results we obtained, which seemed robust across all the experimental conditions. More importantly, the purpose of this study, as stated in "Introduction," was not to determine which intervention is stronger; as a result, our findings are limited to the particulars of the designs we used. Hence, the current study mostly attempts to draw the attention of legal scholars to the need to revisit regulation and enforcement mechanisms in light of the research on behavioral ethics. That being the case, the mere fact that we found effects, and some consistency in the effects of certain interventions, should lay the groundwork for further research across different contexts as to which types of intervention work better in each context.

Another important limitation is that this research has offered only one way to look at conflict of interest and is far from exploring all relevant contexts and factors. Our findings are likely to change with the number of players, social and organizational norms, and many other factors. However, given the differences between this experimental approach to business ethics and traditional dishonesty studies, we hope that more studies will follow this research and explore these and many other factors. Such future studies will help build the body of literature needed to help organizations plan their ethical strategy.

An additional limitation is that our paradigm did not allow us to measure what effects might have emerged had our participants not been paid, since all MTurk studies are paid. Nevertheless, this effect could easily be measured in another study. In any case, ours remains a conflict of interest paradigm, pitting people's job requirements against the possibility of participating in a future study. Furthermore, the situation that we attempted to replicate (the revolving door, where people perform one task while thinking about how it might improve their ability to gain another job) could also be seen as involving not just money but also self-actualization.

Finally, the current work did not attempt to explore the potentially interesting role of individual differences in bounded ethicality, specifically differences in responses to subtle conflicts of interest. Future research should put some effort in this direction and examine demographic variables such as gender and education, which the current design was underpowered to explore, as well as personality variables that could expand our knowledge of the good people paradigm. In this respect, it is worth mentioning that while the vast majority of our samples asked to be considered for the additional experiment (and were therefore subject to the manipulated conflict of interest), between 5 and 10% of our participants in both studies did not (and therefore avoided the conflict of interest situation). Interestingly, none of the demographic variables we collected in the current work explained participants' choice on this matter. It would be interesting to identify personality characteristics that may help individuals better resist conflict of interest situations.

Policy Implications

The design and findings presented above have several normative implications. First, the realization of how little is needed to change people's behavior should be both alarming and comforting for policy makers. We have seen that it is easy to cause people to abandon their objectivity: a subtle promise to hire participants for an additional experiment, which might benefit them by a mere $1, had a substantial effect on their evaluations. Thus, where objectivity is valued, policy makers should think carefully about where promises of that kind should be allowed to occur. This is of great significance for situations such as revolving door conflicts and token gifts.

Second, another important component of the paradigm of "good people doing bad things" is that most people are unaware of the lack of objectivity in their own evaluations. Nevertheless, it seems that with sufficiently strong explicit communication regarding a potential wrongdoing, many people would immediately change their behavior, regardless of the nature of the intervention (e.g., deterrence, morality). The same communication would not change the behavior of "bad" people, who would engage in further cost–benefit calculations to assess the wisdom of engaging in bad behavior. In other words, much of the concern about the inability to deter the unaware individual (e.g., Chugh et al. 2005) might prove premature. People might indeed not be fully aware of the unethicality of their behavior, but traditional explicit reminders of both deterrence and morality might be sufficient to cause them to at least begin correcting for it.

Third, recognizing the role of traditional regulatory tools in shaping implicit behavior is also relevant to one of the most important regulatory changes in recent years: the BIT (Behavioral Insights Team) revolution, which is based on the influential nudge approach (Alemanno and Sibony 2015; Thaler and Sunstein 2008). BIT advises governments on how to use knowledge from psychology and behavioral economics to shape people's behavior in socially desirable ways (Feldman and Lobel 2015). Generally speaking, while the approach has gained increasing popularity in areas such as pensions and energy saving, BIT has been less dominant in attempts to regulate ethical behavior. The current BIT approach does not address the ability of traditional explicit intervention methods to curb the various automatic processes related to corruption or to intolerance of and discrimination against minorities. Future research should examine how to combine traditional explicit interventions with implicit interventions when attempting to shape the ethical behavior of people in organizations and beyond.