Introduction

Managers are often confronted with apparent moral conflicts between their managerial responsibilities and the requirements of ordinary morality. In this paper, I examine what managers ought to do in the case of one such kind of conflict, one that is especially relevant for managers in a global context. This is the case when the institutional environment, coupled with a manager’s occupational responsibilities, exerts pressures on managers to contribute to another party’s wrongdoing. By ‘institutional environment’, I mean the political and social framework in which businesses operate and which is promulgated through the use of law and custom. This includes formal regulations and policies, but may also include the mechanisms through which policies are made and promulgated, as well as informal governmental and cultural norms.

One common view of this kind of conflict is that it presents managers with an irresolvable conflict or a moral dilemma, in which ‘you’re damned if you do, damned if you don’t’, leaving the manager with dirty hands.Footnote 1 For instance, in his analysis of Google’s decision to establish search engine operations in China and to abide by the government’s censorship requirements, Brenkert (2009) has encouraged business ethicists to take seriously what he calls ‘moral compromise’ in non-ideal contexts. In doing so, he invokes the literature on moral remainders and irresolvable conflicts. According to Brenkert, Google’s leadership was complicit in restricting the right to free expression, but by not entering the Chinese market, they would have failed to honour other important considerations, including a duty to develop Google as a business. No matter what Google’s leadership did, they would have violated an important value or principle.

In this paper, I point out the problems with a view of morality that allows for genuine moral dilemmas and propose an alternative account of managerial responsibility that does not rely on such a view, appealing instead to the importance of the manager’s intention in creating the possibility of permissible complicity in someone else’s wrongdoing. By discussing Google’s cooperation in the Chinese government’s censorship regime, I argue for two conditions for permissible managerial complicity. The first is that the manager must intend and act in such a way as to minimize the firm’s complicity in the other actor’s wrongdoing. In many cases, this will imply a duty for the manager to take actions that aim at changing the institutional context that supports this wrongdoing. The second condition, which fulfills an expressive function, is that the manager must communicate to the firm’s constituents that she recognizes important interests are at stake and that she is committed to minimizing the firm’s complicity. I leave open the possibility that these conditions are non-exhaustive, as they are not meant to justify any kind of complicity in questionable practices. As such, these conditions are necessary, but perhaps not sufficient, depending on the circumstances faced by managers in problematic institutional environments.

In "Google’s China Controversy" section, I offer a summary of Google’s decision to establish search engine operations in China. "One Possibility: Moral Compromise and Irresolvable Moral Conflicts" section outlines the problems with invoking moral dilemmas and moral compromise as a way of understanding the decision facing Google. These problems point to the need for an alternative account. In "Clarifying the Nature of Complicity" section, I argue that intentions may help to establish permissibility in cases where managers are pressured by the institutional environment to be complicit in questionable practices. In "Intentions and Conditions for Permissible Complicity" section, I offer my account, with its two conditions. Finally, in "Clarifications" section, I offer responses to some questions or concerns my account might raise.

Google’s China Controversy

In early 2006, Google China set off a firestorm of controversy by launching a new version of its Chinese search engine that censored search results considered to be potentially objectionable to the Chinese government. From 2000 to 2002, Google had made available a version of Google.com with Chinese-language characters, powered exclusively by servers located outside of China, and by 2002 it had built up a 25 % market share of Chinese search engine traffic. Since Google did not have offices or search engine servers within China, it was not subject to the Chinese government’s censorship requirements. As a result, although the ‘Great Firewall of China’ would prevent users from connecting to the webpages themselves, a search on Google.com from within China would provide links to banned webpages (Thompson 2006).

In September 2002, the Chinese government began to block and slow down residents’ access to Google.com, causing Google to lose market share to its main Chinese competitor, Baidu. By 2005, Baidu had gained about 50 % market share while Google’s had stagnated at less than 30 % (Thompson 2006). Google’s executives were torn about how to respond given the government’s Internet censorship policies. Ultimately, they decided that if they wanted to remain competitive in China, they would have to launch a locally hosted search engine service, Google.cn, which meant complying with Chinese censorship guidelines. Google’s executives were also driven by the desire to expand access to information. In their words, ‘[f]iltering our search results clearly compromises our mission. Failing to offer Google search at all to a fifth of the world’s population, however, does so far more severely’.

In January 2006, the company established search engine servers in China. While agreeing to abide by the censorship requirements, they decided not to offer certain services, such as Gmail and Blogger, so as to protect their users’ privacy and avoid having to share users’ personal information with the government. In addition to the locally hosted Google.cn, they would retain the uncensored Google.com with Chinese-language characters. Lastly, when a search was subject to censorship, they would notify users that results had been removed in compliance with Chinese regulations (McLaughlin 2006; Thompson 2006). The Google Blog called this move ‘a hard compromise’, which suggests that company executives felt something of importance was sacrificed in their actions (McLaughlin 2006).Footnote 2

One Possibility: Moral Compromise and Irresolvable Moral Conflicts

One way to understand the decision facing Google’s managers would be to invoke the idea of a genuine moral dilemma or irresolvable conflict.Footnote 3 A genuine dilemma is a case in which a moral agent is, in fact, required to perform each of two or more courses of action, but cannot perform them all (Gowans 1987; McConnell 2010; Sinnott-Armstrong 1988).Footnote 4 Taken together, the following three propositions express the most general form of a moral dilemma (Williams 1965, p. 118):

(1) I ought to do A.

(2) I ought to do B.

(3) I cannot do both A and B.

In such cases, a moral actor faces a conflict between two moral obligations or values for which there is no easy resolution because neither obligation (or value) cleanly overrides the other. Instead, when facing this type of conflict, even when choosing the best course of action all-things-considered, one has nevertheless infringed upon or violated the unmet obligation. Therefore, one is doomed to commit a wrong despite doing what one ought. In the wake of such a choice, one likely experiences feelings such as guilt, violation, loss, remorse or regret, and these feelings seem appropriate (Williams 1965). These feelings are referred to as ‘moral residues’ or ‘moral remainders’.Footnote 5 Some thinkers have called attention to the experience of these feelings and argued that their existence and apparent appropriateness count as evidence for moral dilemmas. Others have gone even further, arguing that these feelings indicate that agents have residual obligations to act in ways that adequately acknowledge the inevitable failure occasioned by the dilemma, such as offering an apology or explanation to those affected (Marcus 1980).

In developing his account of moral compromise in the context of the decision facing Google’s leadership, Brenkert invokes the literature on moral residues and dilemmas.Footnote 6 For instance, he references the arguments of Stuart Hampshire, Isaiah Berlin, Bernard Williams and Martin Benjamin against theories of morality that do not admit moral dilemmas. Brenkert also endorses the idea of moral residues, which are the basis of the phenomenological argument in favour of moral dilemmas (2009, pp. 463–464).Footnote 7 Having rejected non-dilemmatic theories of morality, Brenkert argues that Google’s managers must engage in moral compromise, a result of situations in which ‘one cannot fulfill all the values or principles upon which one operates’ (2009, p. 463). As a result, one violates an important value or principle no matter what one does.Footnote 8

To make his case, Brenkert points out that Google’s managers face several competing responsibilities (2009, pp. 466–467). He acknowledges the following duties that make up one horn of the dilemma: the duty to develop Google as a sustainable business; the duty to fulfill fiduciary duties to shareholders; the duty to uphold the laws of the countries in which Google does business, although he grants that this duty may be overridden in instances of significantly unjust laws; the duty to develop new jobs and protect existing ones; the duty to protect Chinese employees from danger if Google were to refuse to adequately censor its search results.Footnote 9 Lastly, as Brenkert acknowledges, an important mission for Google is to expand access to information (Thompson 2006).Footnote 10 According to Brenkert, these considerations all weigh heavily in favour of entering the Chinese market and abiding by the government’s censorship regime.Footnote 11

At the same time, Brenkert acknowledges there are also important considerations weighing against entering the Chinese market and the self-censorshipFootnote 12 that such a move would seem to necessitate, thereby forming the other horn of the dilemma. Amongst them is that, in abiding by the government’s censorship requirements, Google is engaging in ‘obedient complicity’, which ‘[occurs] when a business follows laws or regulations of a government to act in ways that support its activities that intentionally and significantly violate people’s human rights’ (Brenkert 2009, p. 459). In this case, the human rights threatened by the censorship requirements are the freedoms of information, expression and speech. In addition, Brenkert cites Google’s informal motto, ‘Don’t be evil!’, as another potential consideration weighing against self-censorship, since acting according to one’s values would seem to be important for reasons of integrity.Footnote 13

On Brenkert’s account, Google is all-things-considered justified in censoring its search results, although ‘in doing so, they will have indeed morally compromised their values and infringed on the human right to freedom of expression and information’ (Brenkert 2009, p. 462). He also suggests that Google has a responsibility to mitigate the harm caused by the human rights infringement, for example, by disclosing filtered search terms and naming those who have requested that it do so (2009, pp. 470–472). Moreover, while Google’s self-censorship of search results is not morally required, he describes it as ‘the best decision they can make, even if it is not the most desirable situation one might imagine’ (2009, p. 462), because either choice would lead to an infringement or violation of important values or obligations.Footnote 14

This conclusion is noteworthy for a few reasons. Take, for instance, Brenkert’s definition of obedient complicity, which turns out to be a rights violation or infringement. As Brenkert (2009, p. 459) himself admits, filtering the search results in the same way, but through the company’s own initiative and without the mandate of the state, ‘would not violate human rights!’ Complicity typically refers to actions that contribute to, further, or play a part in bringing about another agent’s wrong.Footnote 15 In deciding to enter China, Google’s managers are complicit because they are furthering the wrong of the Chinese government’s human rights violations.

However, on the definition of ‘obedient complicity’ Brenkert gives, Google is charged with contributing to the government’s violation, a contribution which itself constitutes a human rights violation. I will return to Brenkert’s characterization of Google’s actions as obedient complicity in a later section, but first I wish to explore the issue of moral compromise in more detail, as one of the main aims of Brenkert’s paper is to persuade business ethicists to take the idea of moral residues and moral compromise more seriously. Two important questions arise from a close examination of any account that attempts to explain apparent conflicts between a manager’s occupational responsibilities and her responsibilities qua ordinary moral agent by invoking moral dilemmas. The first is whether there are genuine dilemmas in the first place. Second, even if we were to grant that moral dilemmas do exist, does the nature of managerial responsibility give rise to an irresolvable moral conflict in Google’s case (and in other relevantly similar business situations)? I turn to these questions, respectively, in the following sections.

A. Genuine Moral Dilemmas?

The view that moral dilemmas are a feature of morality is controversial in moral philosophy. For starters, there is something puzzling about the implication of moral dilemmas; namely, that even if you have done something all-things-considered justified, you may still have committed a wrong. How can it be that a moral agent is justified in committing a wrong?

The literature on moral dilemmas and remainders is voluminous. My purpose is not to provide an exhaustive review of the arguments for and against the existence of dilemmas here. Instead, my aim is simply to respond to some of the arguments from the proponents of dilemmas that Brenkert invokes as a basis for his account of moral compromise and to outline some of the undesirable implications of any theory of morality that includes dilemmas.

Consider first Bernard Williams’ argument, in which he points to moral residues as evidence for the existence of moral dilemmas. Standard ethical theories, namely utilitarianism and deontology, deal with the resolution of apparent conflicts between two or more moral oughts by ‘[eliminating] from the scene the ought that is not acted upon’ (Williams 1965, p. 113). If we assume standard ethical theories are correct, it is irrational to agonize over one’s choice, as long as one chooses the best or right course of action.Footnote 16 Williams argues that standard ethical theories do not pay due heed to feelings of regret or distress the agent may have in resolving difficult moral choices. They cannot accommodate the regrets agents have in the face of certain difficult moral conflicts stemming from the belief that the agent ‘has done something that he ought not to have done, or not done something that he ought to have done’, even if he did the best thing (1965, p. 111). Williams concludes that mainstream ethical theories are mistaken.

However, as Foot (1983) points out, instead of arguing for the existence of dilemmas, this argument assumes the appropriateness of the feelings associated with an apparent dilemma. Foot argues that the mere experience of regret, guilt and so on cannot by itself tell us that the agent is correct to feel that way. It cannot even tell us whether the agent himself actually believes he has done something wrong. For example, Foot cites feeling guilty over giving away the belongings of someone who has recently passed away and maintains it would be mistaken to say that wrongdoing has taken place simply because one experiences such feelings, even if those feelings seem appropriate given the circumstances. The argument from regret and other related feelings, according to Foot, does not demonstrate that the agent has, in fact, done something wrong despite doing the best thing.

Therefore, according to Foot, in order to argue for the existence of genuine moral dilemmas, it is not enough for a moral conflict to leave us with a sense of loss once we have resolved it as best we can. Rather, we must gain clarity as to what we feel bad about or regret.Footnote 17 Consider the following apparent moral conflict: on the one hand, we ought to fulfill our promise to meet a friend and, on the other hand, we ought to help someone in need, say by driving them to the hospital because they have been in an accident (Foot 1983). Foot points out it seems appropriate to claim that one ought to take the second person to the hospital, even if it means breaking our promise to our friend. She asks whether we regret the act of breaking the promise because we believe we did something wrong or whether we merely regret the consequence of not being able to fulfill the promise, i.e. that our friend was inconvenienced. As she points out, surely it is the latter. Imagine that your friend had the same thing happen to him on his way to meet you, so that the broken promise did not cause him any inconvenience. In this case, there would be nothing for you to regret.

Certainly, from the phenomenological point of view, there is merit to the claim that something of value is sacrificed when making trade-offs between important values. For instance, Brenkert writes about a mother who sacrifices her career in order to dedicate herself more fully to her family as an example of moral compromise (2009, p. 465). However, even if we grant that some important value has been lost or sacrificed, it is unclear that this kind of conflict between the goods of a career and family life underwrites the kind of unavoidable wrongness Williams has in mind and that the Google case is meant to represent. For unavoidable wrongness to be applicable in this case, there must be two (or more) oughts that cannot be overridden. For instance, as Brenkert himself points out, we are likely to think that the mother who gives up her career is going beyond what morality demands from her, so it certainly is not the case that she is under an obligation to quit her job in order to more fully dedicate herself to her family. Furthermore, there is no potential wrongdoing here with respect to her career obligations, unless one believes that women commit a wrong by abandoning their careers in favour of their family, which would constitute an extreme position.Footnote 18 In cases of permissible choice between two goods, it seems strange to say that the moral actor is engaging in wrongdoing, although we may concede—and the agent herself may feel—that something of value has been sacrificed or lost (Foot 2002).

Thus far, I have considered phenomenological arguments in favour of moral dilemmas. At this point, I turn to consider a more general problem for the view of morality that accepts the existence of genuine dilemmas.Footnote 19 Recall the most general form of a moral dilemma (Williams 1965, p. 118):

(1) I ought to do A.

(2) I ought to do B.

(3) I cannot do both A and B.

Amongst others, Williams (1965, p. 118) and Donagan (1984, pp. 296–297) have pointed out that the statements above imply a contradiction, as long as one believes both of the following two principles to be true:

‘Ought implies can’: If I am under an obligation to do something, this implies that I can do it.

‘Agglomeration principle’Footnote 20: If I ought to do A and I ought to do B, then I ought to do both A and B.

If one wants to argue in favour of the existence of moral dilemmas, one needs to jettison either ‘ought implies can’ or agglomeration. For his part, Williams (1965) argues that we should abandon agglomeration, while Walter Sinnott-Armstrong (1988) has advanced arguments in favour of abandoning both principles. However, as Donagan (1984) points out, abandoning either principle would imply sacrificing something foundational to our understanding of morality.Footnote 21
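To make the implication explicit, the contradiction can be sketched in a minimal deontic notation (the symbolization is mine rather than Williams’ or Donagan’s), writing O(·) for ‘I ought to do’ and ◇(·) for ‘I can do’:

\[
\begin{aligned}
&(1)\;\; O(A) \qquad (2)\;\; O(B) \qquad (3)\;\; \neg\Diamond(A \wedge B) && \text{(the dilemma)}\\
&(4)\;\; O(A \wedge B) && \text{from (1) and (2), by agglomeration}\\
&(5)\;\; \Diamond(A \wedge B) && \text{from (4), by `ought implies can'}\\
&(6)\;\; \text{contradiction} && \text{(5) is inconsistent with (3)}
\end{aligned}
\]

Rejecting agglomeration blocks the step from (1) and (2) to (4), while rejecting ‘ought implies can’ blocks the step from (4) to (5); this is why the defender of genuine dilemmas must give up at least one of the two principles.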

B. Managerial Responsibility and Moral Dilemmas

In the light of the above discussion, there is good reason to doubt the plausibility of moral theories that acknowledge genuine moral dilemmas. However, let us assume for the moment that I am mistaken and dilemmas exist. On Brenkert’s account, fulfilling the managerial responsibilities in the Google case involves the violation of others’ human rights. If managerial duties are to override the duty not to violate human rights, we would need a highly demanding account of managerial responsibility. Therefore, for an argument based on moral dilemmas to work, one needs a view of managerial responsibility that elevates the manager’s duties so that they are at least as forceful as the obligation not to infringe or violate others’ human rights. Such a stringent view of managerial duties, however, is questionable. Why should we think that managerial duties actually compel the manager to do wrong in order to ensure their fulfillment?

In response, one might object that I am not adequately recognizing the importance of the property rights implicit in the demanding kind of account of managerial responsibility that I have so far dismissed. If one endorses Milton Friedman’s (1970) account of managerial responsibility, then managers have an occupational responsibility to respect their employers’ (i.e. shareholders’) wishes, which will generally be to maximize corporate profits. Refusing to establish search engine operations in China puts shareholder value at risk in the long run. Therefore, if one takes seriously the occupational duty of managers to maximize the profits of the firm, then perhaps the decision yields a genuine dilemma, assuming dilemmas exist.Footnote 22

There are two ways to reply to this line of argument. One would be to challenge the notion that shareholders have ownership rights giving them rightful control over what managers ought to do. To do so, one might invoke Stout’s (2002, 2012) attack on the notion that shareholders are the owners of the corporation. For instance, she attacks this notion by distinguishing between ownership of the corporation and ownership of shares of stock. Whereas shareholders hold shares of stock, the corporation owns itself (Stout 2012, p. 37). Owning shares of stock merely means that shareholders have a contractual relationship with the corporation, not unlike other stakeholders such as debtholders and employees. This contractual relationship does not amount to ownership of the corporation.Footnote 23 Stout also presents evidence that corporate law does not recognize shareholder rights to control managers’ behaviour. For instance, shareholders’ rights to sue for breaches of fiduciary duty are significantly limited by the business judgment rule. Their voting rights are also quite restricted (Stout 2012, pp. 42–43). Her arguments call into doubt the idea that shareholders’ property rights imply the kind of exceedingly demanding duty to advance shareholders’ interests—even at the cost of human rights violations—necessary to generate a moral dilemma.

A second reply would be to grant that shareholders own the corporation and that managers have a duty to pursue profits, but to push back on the idea that such a duty entails that managers must pursue all means practically possible to fulfill it, as opposed to all means practically and morally possible. To understand this argument, consider the case of promises. The making of a promise to someone is widely thought to generate a binding and overriding reason to do something. However, this fact alone does not mean that there is no such thing as justified promise-breaking. Recall Foot’s example about breaking the promise to one’s friend in order to drive an accident victim to the hospital. This shows we can be justified in breaking a promise when there are more urgent matters to which we must attend.

To frame the Google case in terms of a dilemma between human rights and managerial responsibilities is to hold that the duty ‘do not fail in one’s managerial duties’ is without exception, in that failing to live according to the precept constitutes a wrongdoing, no matter the circumstances. However, if it is wrong to say that promises ought to be kept no matter what one has to do to keep them (e.g. drive an accident victim to the hospital), then it is equally wrong to say that managerial duties ought to be fulfilled no matter what other moral precepts one has to ignore (e.g. respect others’ human rights). In our common sense understanding of promises, if we reasonably acknowledge there are moral limits a promise-keeper ought to respect to make good on his word, then likewise we can acknowledge there are moral limits managers ought to respect in fulfilling their occupational duties. Therefore, occupational responsibilities do not compel managers to engage in wrongdoing. Managers cannot be said to have failed in their duties if they do not pursue wrongful means in order to fulfil them. Otherwise, it would seem there are no real limits to managerial responsibilities.

In the Google case, if what is really at stake is the choice between the duty to secure the long-term survival of the company (and all the duties that depend on the firm’s long-term survival) and a human rights violation, the claim that Google is all-things-considered permitted to violate human rights is wrongheaded. As long as one believes that the duty not to infringe human rights is grave and difficult to compete with, making such a recommendation entails an implausibly demanding account of managerial responsibility.

C. The Corporation as a Political Actor: One Source of Managerial Dilemmas

Even if one endorses moral dilemmas in the abstract, there is another reason to doubt that the nature of managerial responsibility gives rise to moral dilemmas. In order to understand this argument, first it is important to note that the possibility of dilemmas is frequently raised in the context of the political realm. For instance, Berlin (1969) writes about the tension between negative and positive liberty in the public sphere. Consider also how widespread talk of ticking time bomb scenarios has been, both in popular culture and in political philosophy. This should not be a surprise, since we believe politicians bear a direct responsibility for the public good, and that responsibility may conflict with the requirements of private morality, the latter being roughly equivalent to what is also called ordinary or common morality. Theorists point to two key differences between public and private morality. One is that we expect public officials to act with a greater degree of impartiality than we would an ordinary moral agent or private individual (Nagel 1978, p. 84). Another is that, as the conflict between consequentialist considerations and deontic constraints in the ticking time bomb scenario reflects, public morality seems to be characterized by a greater emphasis on consequences (Nagel 1978; Scanlon 1978).

Recently, there have been calls in business ethics to view corporations as political actors (e.g. Néron 2010; Scherer and Palazzo 2011; Wettstein 2009). Accordingly, if corporations can be said to have political responsibilities, then perhaps managers could be faced with dilemmas, just as politicians are. One might argue that managerial responsibility is more like the kind of public morality associated with our governmental institutions, which would make managers political or quasi-political actors. If one wants to argue for the possibility of moral dilemmas due to the nature of business and managerial responsibility, this might seem like a promising starting point.

In response to this argument, even if corporations are governed by a public or quasi-public morality, this by itself does not guarantee that managerial responsibility will give rise to moral dilemmas. In the first place, many of the calls for the politicization of corporations have been characterized by the demand that corporations play a greater role in securing individual rights, not to override them in the hope of securing the economic viability of the firm.Footnote 24 Therefore, the motivations for promoting corporate political responsibility cut against the argument that managers ought to commit wrongs in the name of their managerial responsibility and long-term interests of the firm. As such, there is reason to doubt that the strategy of politicizing the corporation would underwrite the existence of moral dilemmas in a way that would make it permissible for managers to commit rights violations.Footnote 25

Clarifying the Nature of Complicity

Thus far, I have argued that we should not understand Google’s decision as a moral dilemma, given the various problems associated with the view that genuine dilemmas exist. I have also argued that even if dilemmas were genuine, it would be mistaken to invoke them in order to understand cases such as this one. Nonetheless, it may be that Google is, in fact, permitted to engage in self-censorship. I argue this is not because Google faces a moral dilemma, but because we should reject Brenkert’s characterization of Google’s complicity. In contrast to Brenkert, I argue that Google is not engaging in human rights violations or infringements, even though its actions may further those violations or infringements.

According to the United Nations Global Compact, ‘direct complicity’ occurs ‘… when a company knowingly assists a state in violating human rights’ (qtd. in Brenkert 2009, p. 459).Footnote 26 Brenkert argues this does not apply to Google because, by highlighting forced relocation as an example, the Global Compact implies that direct complicity is itself a human rights violation. Filtering search results is different, according to Brenkert, because all search results must be filtered in some way or another. Consequently, he proposes we understand Google as engaging in ‘obedient complicity’, which ‘requires only that a business engage in actions mandated by a state that significantly and knowingly violate human rights—even though similar actions (in this case, filtering) undertaken simply by the business itself would not violate human rights!’ (2009, p. 459). Obedient complicity involves obeying governmental laws or regulations that infringe or violate human rights.

Brenkert is correct to point out the difference between what Google is doing and forms of direct complicity such as forced relocation. Nevertheless, there are three reasons to reject the characterization of Google’s actions as obedient complicity. First, it is a very serious wrongdoing to violate or infringe upon someone else’s human rights. The fact that the wrong is defined as a human rights infringement is what necessitates an implausibly demanding account of managerial responsibilities giving rise to moral compromise in this case, since only a very serious moral consideration could be said to compete with a duty not to infringe on others’ human rights. Second, if it is such a serious wrong, there is reason to doubt that mitigating the harm of the infringement after the fact makes it permissible to engage in the wrongdoing in the first place, as Brenkert argues it does. Third, Brenkert concedes that the relevant rights do not directly apply to private entities responsible for disseminating important information, such as newspapers or search engines. This is why, on Brenkert’s view, Google has not been directly complicit in violating anyone’s freedoms of speech, information or expression (Brenkert 2009, pp. 458–459). And yet, on his view, the company is guilty of violating the human rights to these freedoms, despite the fact that, if Google were to do the same kind of filtering absent governmental regulations, it would not be guilty of a rights violation (Brenkert 2009, pp. 459, 462).

Brenkert states that human rights are primarily the responsibility of the government because: (1) it is governments who are parties to the United Nations Universal Declaration of Human Rights (UDHR) and to the International Covenant on Civil and Political Rights (ICCPR); and furthermore, (2) governments are the entities with the power and authority to prevent the realization of the right (Brenkert 2009, p. 455). Since it is the responsibility of the Chinese government to ensure an environment where human rights are respected, its efforts to restrict freedom of speech, information and expression constitute human rights violations. However, if the assurance of freedoms of speech, information and expression is the primary responsibility of the government, then for Google to be guilty of violating the human rights of the Chinese, it must in the first place have the duty to promote the relevant rights. Therefore, the mistake in Brenkert’s ‘obedient complicity’ is that it equates contributing to another party’s human rights violation with being guilty of a human rights violation. For Google to be guilty in this way, the Chinese search engine users must have claim-rights directly against Google, which Brenkert denies in assigning human rights primarily to the Chinese government.

There are those who would attach human rights obligations directly to corporations. Wesley Cragg, for example, has argued that corporations have both indirect and direct human rights obligations. Their direct obligations stem from their power to institutionalize respect for human rights (Cragg 2012, p. 22). This argument is echoed by Wettstein (2009), who argues that corporate power makes multinational corporations quasi-governmental institutions with direct human rights obligations, including the duty to protect and promote human rights. In proposing his Fair Share theory of corporate responsibility for human rights, Santoro (2009) also argues for the indirect and direct human rights obligations of corporations. Santoro contends that search engine providers operating in China have the duty to promote the realization of the relevant freedoms of the Chinese, although he agrees with Brenkert that by entering China, ‘all of the Internet companies to a greater or lesser extent have dirtied their hands by directly and actively participating in some form of censorship’ (2009, p. 95).

However, if one attaches human rights obligations directly to corporations because of their power to promote those rights, it would imply that Chinese search engine users have a claim-right directly against Google to provide uncensored or less censored search results. More fundamentally, if such obligations are grounded in corporate power, then for corporations to discharge them we would have to give corporations even more power than they currently enjoy. As Patricia Werhane (2012) points out in her review of Wettstein’s (2009) work, if one is already concerned about the alarming amount of corporate influence over political matters, one ought to try to curb their political power, instead of expanding it.

At this point, it will help to turn to the Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework (the Principles), proposed by John Ruggie and endorsed in 2011 by the UN Human Rights Council (United Nations Office of the High Commissioner for Human Rights 2011). This framework seeks to clarify the nature of corporate human rights responsibilities without making businesses directly responsible. The Principles define non-legal complicity as occurring ‘when a business enterprise contributes to, or is seen as contributing to, adverse human rights impacts caused by other parties’ (Ruggie 2011, p. 17). The document speaks to the obligation of businesses to respect human rights, which entails: (1) publicly committing to respect for human rights; (2) engaging in human rights due diligenceFootnote 27; and, (3) engaging in remediation when negative impacts have occurred. In situations where local laws and human rights requirements conflict, businesses should strive to respect the human rights requirements and to ‘treat the risk of causing or contributing to gross human rights abuses as a legal compliance issue’ (Ruggie 2011, p. 21).

In the case at hand, it is not clear what the Principles imply beyond the need to conduct human rights due diligence and to have a corporate policy statement. Google is faced with a conflict between human rights requirements and local laws, but I would not consider Google’s decision to engage in self-censorship to be contributing to or causing a gross abuse, even though its actions do have negative human rights impacts.

I do not mean to imply that managers can do whatever they wish in cases such as this. Rather, I wish to recast the nature of the problem in terms of furthering another party’s wrongs in institutional contexts that make it difficult for us to avoid doing so. I propose we ask the question whether there are cases in which managers can permissibly further another’s wrongdoing. In doing so, I hope to avoid the problems outlined above with defining complicity in terms of responsibility for human rights violations.

Intentions and Conditions for Permissible Complicity

In order to explore whether there is any such thing as permissible complicity, it is helpful to re-examine the nature of the managerial decision in the Google case. Consider the differences between the intentions and plans of the two parties in this case whose actions make censorship possible. The government’s intention, as stated previously, is to restrict its citizens’ freedom of information, expression and speech. The government carries this out by preventing information providers, like Google, from disseminating sensitive political information to its citizens.Footnote 28

In contrast, given what we know from Google’s blog entries and news reports, the intentions of its managers could be understood in starkly different terms. One could make the argument that Google’s aim is to provide more and better quality information to Chinese citizens and to change the political environment in China over the long run. For instance, co-founder Sergey Brin has spoken publicly about how difficult it was for him to come to terms with Google’s compliance with censorship, given his experience living in the Soviet Union as a young boy (Enlightenment man 2008).Footnote 29 At the time that the decision to enter China was announced, the company blog evidenced a good deal of hand-wringing amongst the executives. Their blog entries made it very clear that a big motivation for their entry into the Chinese market was to try to improve the flow of information over the long run. The company also declined to offer certain services that might compromise the safety of their users, such as Blogger or Gmail (McLaughlin 2006).

If I am correct about Google’s intentions and goals being starkly different from, or even opposed to, those of the Chinese government, then perhaps their intentions offer a possibility for permissible complicity in difficult institutional environments. Whereas Brenkert’s account casts obedient complicity as a human rights violation, I propose that we think of institutionally driven complicity in terms of an actor being pressured to play a part in another agent’s failure to live up to their responsibilities. Examining an agent’s contributory acts in light of intentions creates the possibility of conditionally permissible complicity in a way that a human rights violation cannot, since the latter seems to entail an intention to do evil, in contrast to a contribution to someone else’s moral failure, which need not entail such an aim. It also escapes the problems accompanying any view of morality that presupposes genuine moral dilemmas.

Recasting the case in this way grants that corporations such as Google are not the primary party responsible for promoting human rights. Instead, I would argue it is sufficient to recognize the notion in common sense morality that it is pro tanto objectionable to aid someone in carrying out a wrong, which is what happens when managers implement Internet search filters that further the government’s aim to stifle citizens’ freedoms of expression, speech and information. In order to argue that managers’ part in this is permissible, one must provide additional information about the nature of the action. For example, one might argue managers intend to minimize their complicity and to work to ameliorate the kinds of conditions leading to the complicity in the first place. Depending on the facts at hand, such an argument might open the possibility of permissibility.

This kind of thinking is not altogether dissimilar to the pattern of thinking behind the doctrine of double effect (DDE), which states that acts pursued for the sake of some good end may be permissible even if they cause harm as a side effect, as long as one merely foresees the harm without intending it. In Thomas Aquinas’ classic formulation, DDE also contains a proportionality condition: ‘And yet, though proceeding from a good intention, an act may be rendered unlawful, if it be out of proportion to the end. Wherefore if a man, in self-defense, uses more than necessary violence, it will be unlawful: whereas if he repel force with moderation his defense will be lawful’ (Summa Theologiae, II–II, 64, 7). If DDE is correct in highlighting the importance of an agent’s intentions, there may be other ways in which intentions create the possibility that a course of action is permissible.

If an actor’s intentions offer the possibility of permissibility when managers find themselves under institutional pressures to further a government’s questionable aims, what conditions might we impose on those managers? I propose two necessary conditions: one concerns the intention itself, and the other fulfills an expressive function towards the company’s constituents. I do not claim that these two conditions are sufficient in all cases in which companies contribute to another party’s failure.

The first condition is that the actor must hold and act on a certain intention, namely, to work to minimize his or her complicity in another’s moral failure, which in many cases will imply a commitment to improve the conditions that necessitate complicity in the first place. It is not enough for managers of search engine companies to enter China with the intention of maximizing profits, creating jobs or securing the long-term survival of the firm. The managers must have the aim to minimize their complicity and take steps to do so. Since in many contexts, this will include taking steps over time to improve the conditions that necessitate complicity, Google’s managers must both intentionally take steps to minimize the extent to which they further the aims of the Chinese government and work towards a fuller eventual realization of freedom of expression, speech and information for Chinese citizens.

The second condition is that managers must find a way to express their intention to bring about a more just environment over the long run. The second condition fulfills an expressive function, sending the message to the company’s constituents that it recognizes the importance of the interests at stake in the complicity in another agent’s failure.Footnote 30 Expressing the intention to minimize complicity shows that the company is not only aware of those important interests, but also that it respects its constituents by being open and transparent with them. For instance, they may seek to reassure their constituents by communicating their intentions. In the Google case, this generates a duty for managers to communicate to the company’s constituents that they recognize the importance of the interests that are being harmed by the government’s censorship regime. Such a condition might imply a duty to communicate and work with NGOs and other organizations that monitor human rights issues in China.

Against such a view, one might argue that intentions should not matter at all when considering the permissibility of an action. Consider two companies doing business in China. The managers of one company have the intention to minimize its complicity in the government’s failure to live up to its human rights responsibilities, whereas the other’s managers do not. Despite this difference in their intentions, both sets of managers may take steps that minimize their complicity. The managers without the intentions required by my account do it for other reasons, e.g. to avoid bad publicity and guard the long-term reputation of their firm. If they are both engaging in the same actions, why should we care whether one set of managers has the required intentions, whereas the other does not?

There are two responses to this objection. The first is to challenge whether managers without the stipulated intentions are actually likely to engage in the actions that minimize complicity. There may be some overlap between the actions of the two groups of managers, but it is unlikely that managers without the requisite intentions would undertake the same scope of initiatives as those managers that do intend to minimize complicity. Furthermore, if they do not have the intention to minimize complicity, it seems unlikely they would fulfill the second condition, which is to express said intention to appropriate constituents.

A second possible response is to invoke considerations of integrity and moral character. A manager who harbours the intention to minimize complicity can be said to have the requisite moral insight to recognize the important interests that are at stake in the situation, e.g. she recognizes the importance of the rights and freedoms at stake in the Google case. As a result, it seems that the manager who intentionally acts to minimize complicity has cultivated a certain character that is aware of the relevant important moral interests and aims to respect them, whereas the manager whose actions are not guided by such an intention has failed to cultivate a good moral character.

Clarifications

In this section, I wish to address three questions that my account might raise. The first is whether there is any significant difference between Brenkert’s condition that companies in situations like Google’s have a duty to mitigate harm and the condition that companies have a duty to minimize their complicity in another’s failure to fulfill their responsibilities. The second question has to do with whether my account is simply an application of the doctrine of double effect. The third is the objection that my account is too permissive with respect to companies’ contributions to others’ failures to live up to their responsibilities.

With respect to the first issue, it may be true that, in the context of the Google case, the requirements of my account overlap with the requirements of Brenkert’s conditional duty to mitigate the harm stemming from the human rights infringement. Under his account, Google ought to undertake some or all of the following initiatives, some of which they, in fact, undertook: notify users of the censorship of search results; provide the public with a list of censored terms; name those who have demanded that the censorship take place; work with other companies to draft a common code of conduct governing best practices; provide circumventing information; continue to monitor the local situation; leave Google.com with Chinese characters, run by external servers, so that users can compare its search results to those available on Google.cn.

Despite the similarity between Brenkert’s duty to mitigate the harm and the two conditions I lay out above, there are essential differences. The first is that actions a company’s managers undertake to minimize complicity are not necessarily the same as those they undertake with the purpose of mitigating the harm from a human rights infringement. One might imagine that actions undertaken to mitigate harm would also minimize complicity, but actions undertaken to minimize complicity may not be sufficient to mitigate the harm from a true human rights violation, because it is such a serious wrong. Furthermore, as a way of minimizing complicity, company managers may be expected to work to address the conditions that make the complicity necessary in the first place, which is not the same as mitigating harm. Mitigating a moral harm requires managers to somehow lessen their wrongdoing, whereas addressing the conditions that give rise to complicity aims at solving the root causes of the problem.

In addition, since the second condition in my account requires managers to pursue activities that express their intention to minimize their complicity, it also means they ought to communicate and work with constituents who may not be directly impacted by the company’s operations in the problematic environment. It is unclear that communicating effectively and working with NGOs would satisfactorily mitigate the harm from a human rights infringement. Therefore, doing so might not be a necessary feature of the requirement to mitigate harms in environments such as China on Brenkert’s account.

A second question that may arise is whether my account of permissible complicity is actually different from DDE. Recall the aforementioned definition of DDE: acts that are pursued for the sake of some good end may be permissible even if they cause harm as a side effect, as long as one does not intend that harm, but merely foresees it. In the traditional version of DDE, the agent must not have the intention to harm (i.e. to produce the bad side effect) in order for the action to be considered permissible.Footnote 31 In other words, in order for killing in self-defence to be rendered permissible, one must intend to save one’s own life and not intend to murder the assailant.

There are two reasons why the account proposed here does not constitute a straightforward or simple application of DDE. The first reason is that my account requires the manager to act according to a more narrowly defined intention; in other words, the manager must do more than not intend harm. She must have the right kind of intention; namely, to minimize the company’s complicity in another’s failure to live up to their responsibilities. This required intention is more complex than what is stipulated by DDE. Furthermore, in applying DDE, the analysis of intention focuses on one actor and how their action causes harm. In contrast, my account consciously incorporates the backdrop of institutional forces that encourage or push the agent to act in a way that contributes to another entity’s moral failures.

This brings me to the second difference between my account and DDE. If DDE only requires that agents not have the intention to harm, then companies like Microsoft and Yahoo!, who simply want to protect their market position or fulfill other desirable business objectives, might also be justified in engaging in self-censorship in China. By requiring that managers intend to minimize complicity over time, my account avoids the conclusion that any good aim underwrites a justification for contributing to any moral wrongdoing or failure as long as one does not intend the wrongdoing itself. Thus, the first condition captures the intuition that there is a moral difference between an actor who contributes to another’s failure while intentionally minimizing their complicity and another who contributes to another’s wrongdoing without taking such steps.Footnote 32

Lastly, one might raise an objection to the two conditions in the account. Specifically, one might object that the conditions allow for too much or yield permissibility too easily. Consider, for instance, a country that sets out to torture its enemies and seeks to purchase instruments of torture from a company. Imagine that the company’s managers spoke out against the country’s torture policies, expressing their concern over the treatment of those being detained and tortured. They also minimized their complicity by working to develop instruments that made the torture more efficient or less painful than their competitors’ instruments would. Wouldn’t my account lead one to the conclusion that the company’s managers were engaging in permissible complicity? This conclusion is horrific, so it may be objected that my account must be setting the bar too low.Footnote 33

This is perhaps the most significant challenge for my account, but there are three possible responses to it. The first is that the account presents two necessary conditions for determining permissible complicity. As such, it does not by itself imply permissibility in the case of the company selling instruments of torture. However, to challenge this response, one might point out that it is therefore impossible to know if Google’s managers are off the hook for their complicity in the Chinese government censorship regime.

A second response would be to argue that, when confronted with one’s contributions to others’ failures to live up to their responsibilities, some failures are worse than others, so that the possibility of permissibility might in part be a function of the failure in which one is complicit. When considering the rights to freedom of information, expression and speech and comparing them to the right not to be tortured, it may be that violating the former set of rights is bad, but that violating the latter right is much worse. Therefore, one may be permissibly complicit in the government’s violations of its citizens’ rights to freedom of information, expression and speech, whereas it would be impermissible to be complicit in violating the right not to be tortured. Developing a fuller justification of these distinctions for the purpose of extending the account I have proposed here represents an important area of future research.

A third response to the challenge of the company selling instruments of torture would be to challenge the idea that a company can ever permissibly sell instruments of torture at all, while pointing out that a company like Google can make a myriad of legitimate decisions as to how to filter its search results. In other words, one can plausibly argue that selling instruments of torture could never be permissible, whereas search engine operations constitute a permissible business venture.

Conclusion

The aim of this manuscript is to offer a way for managers to successfully navigate situations in which their managerial duties seem to conflict with ordinary moral precepts, where such conflicts are driven by the institutional environment. Working through the Google case, I have advanced an account of managerial responsibility in contexts where a company’s activities may contribute to the host government’s failure to live up to its responsibilities. In offering this account, I have sought to counter the view that being successful in business requires getting one’s hands dirty from time to time. The aim has been to offer advice to managers on how to maintain their integrity when operating in difficult institutional environments, while preserving the view that the correct theory of morality is non-dilemmatic. In doing so, I have argued it is not necessary to abandon the idea that there are morally permissible plans of action available to managers operating in non-ideal contexts, but preserving this idea requires examining how managerial intentions and the expression thereof can determine the permissibility of their actions.Footnote 34