Neuroethics, and the neuroscience of sociality and morality generally, have warmed to “moral enhancement” as a vague idea deserving attention, and perhaps promotion. The morality of modifying morality is an issue as significant as weighing the practicalities of technologically enhanced morality. Indeed, rightly evaluating those practicalities should contribute heavily to overall moral judgments upon this opportunity. Means do factor into evaluating ends; means also shape what proposed ends even mean. Defenses of moral enhancement in the abstract too often proceed as if morality’s nature were already understood, and its improvement would be easily confirmed and approved. That seemingly smooth path towards consensus is both conspicuous and suspicious.

Moral enhancement, if achievable (though few techniques show much promise), is by no means automatically moral in every sense. There is plenty of room for reservations towards potential moral enhancers. Skepticism starts from noticing that “enhancement” and “morality” should not be separately explicated and then simply conjoined; their ambiguity allows for imprecision, and their combination affects how one conceives both terms. A primary purpose of this discussion is to deny that “moral enhancement” could be straightforwardly defined. Instead, the reader is invited to consider why the context of morality requires problematizing the notion of enhancement, and why the connotations of “enhancement” require questioning our understanding of morality.

One’s sense of “enhancement,” typically enough, may start from a vague notion of improvement, but no one’s view of enhancement should stop there, and no dictionary provides a precise definition of enhancement itself. Any rigid conception of enhancement in isolation is an intellectual trap, unfit for philosophical premising. More pliable conceptions of enhancement only multiply complexities. Semantically, enhancement can indicate a marked improvement in addition to just an incremental increase [1], so “moral enhancement” is doubly normative, eliciting divergent views about what is, and what should be, morally commendable. Physiologically, alterations to brain processes guiding morality will likely affect more than a person’s ability to moralize and might cause unacceptable side-effects for one’s character and conduct. Clinically, if moralizing could be verifiably improved without side-effects, that improvement may reach effectiveness only within bounded contexts and suitably controlled conditions. Theoretically, an impressive clinical improvement in a few subjects should not be mistaken for a sure way to enhance anyone’s morality to any desired degree. Ethically, in light of our highest standards for such things as personal integrity and social harmony, enhanced moral performance may yet fall short.

Enhancement, despite its basic meanings, resists simplistic explication for normative evaluations. The word needn’t be abandoned entirely, however. My deeper analyses of performance enhancements and their applicability across real-world settings are available elsewhere [2], and this essay’s conclusion yields my basic criteria for moral performance enhancement. The broader lesson is that moral enhancement can appear moral in the deed, yet prove non-extendable even in theory and immoral upon ethical reflection. Neuroethics bears a special responsibility for understanding that difference, and for ensuring that scientifically-advised ethics has an opportunity to be decisive over intuition and convention. A modest and realistic approach to moral performance enhancement, grounded in sound neurophilosophy, can emerge from exploring these issues.

Whose Morality Needs Enhancing?

Where some moral consensus happens to prevail, both subjects and observers can confirm how much moral improvement is experimentally reached – but only up to that measure of consensus, and not beyond. Advocating moral enhancement for severely immoral behavior, such as unprovoked physical harm, takes advantage of civic consensus to acquire plausibility [3]. However, that plausibility is gained by restricting its scope. What counts as unwarranted aggression varies from culture to culture, while the reduction of maniacal violence (something few cultures tolerate) is less like paradigmatic moral enhancement and more like remedying moral incapacity [4]. Enhancement, whatever it may be, would not be mainly for very bad people whom no one wants around. We shall therefore pursue the idea of improving morality, not treating its absence.

Enhancement retains enough operational meaning to indicate a reach for expectations beyond therapeutic goals [5], even if therapy is also hard to define. Ambiguity can be managed by distinguishing contexts for separate definitional precision. Useful conceptions of enhancement require careful elaboration beyond either reducing it to therapy or dichotomizing it apart from therapy [6–10]. Although enhancement does not automatically begin where remedial therapy stops, restoring or rehabilitating the capacity for commonly expected moral conduct deserves separate philosophical consideration.

Offering enhancements to morality arouses hopes that whatever we take ourselves to be morally doing today, we could have more of that tomorrow. Morality is surely good, if anything is. Why wouldn’t more of a good thing be moral? Enhancement could be tautologically connected to the good, too. John Harris has done so: “In terms of human functioning, an enhancement is by definition an improvement on what went before. If it wasn’t good for you, it wouldn’t be enhancement.”Footnote 1 If more morality is good, and enhancing is good, wouldn’t more morality be truly enhancing? Logic forbids that conclusion (the fallacy of the undistributed middle), since the ‘good’ of morality might not be the same as the ‘good’ of enhancing. Identifying them can’t be accomplished by asserting yet another tautology. Furthermore, this argument relies on an ambiguity in “more morality” – does this refer to improving upon morality, or increasing the number of moral people? If morality sets the standard for what is best, trying to surpass what is best only deviates away from morality. If morality is instead something that we want more people to have, we will be looking around at our neighbors before we look into a mirror at ourselves.
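
Rendered schematically (this formalization is my own illustration, not Harris’s wording), the tempting inference has the classic invalid form in which the middle term ‘good’ is never distributed:

```latex
% Schematic rendering of the inference criticized above (an illustrative
% formalization, not a quotation). "Good" links both premises but is never
% distributed, so the conclusion does not follow.
\begin{align*}
\text{P1:}\quad & \forall x \,\bigl(\mathrm{AddsMorality}(x) \rightarrow \mathrm{Good}(x)\bigr)\\
\text{P2:}\quad & \forall x \,\bigl(\mathrm{Enhancing}(x) \rightarrow \mathrm{Good}(x)\bigr)\\
\text{C:}\quad  & \forall x \,\bigl(\mathrm{AddsMorality}(x) \rightarrow \mathrm{Enhancing}(x)\bigr)
\qquad \text{(invalid)}
\end{align*}
```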

An audience receptive to the enhancement of their own morality may turn out to be quite small. People would rightly question the methods and results, as they already have close familiarity with morality’s improvement. Cultures put vast effort into the moral improvement of their members towards meeting or exceeding social expectations through entirely conventional and non-technological means. But cultures cannot be faulted for falling short of homogenous moral conformity. Even societies with high success rates must tolerate disagreements over moral priorities, disputes over ethical problems, and discrepancies between ethical theories. These clashes over morality and moral values would not be erased or overridden by any putative moral enhancer [12–15]. Anyone would be wise to wonder how moral enhancement could fit into that complicated moral landscape.

That wonder about morality’s complexities is not dispelled by noting how plenty of people think that they know what is best for everyone. Moral cacophony is not an occasion for imposing some moral order, despite Guy Kahane and Julian Savulescu’s assurances: “In numerous discussions on enhancement, a recurring objection is that we do not know what is good, or what would constitute an improvement in human well-being or moral dispositions. This is, perhaps, the best diagnosis for the status quo bias that infects so many protagonists in the debate – since we don’t know what would be better, we should remain where we are.”Footnote 2 But a practical motivation to depart from the status quo is not the same thing as knowing a good moral direction to go. If I lack sufficient reasons for prioritizing my moral agenda, I do not immunize it from ethical criticism by noting how no one else appears to have sufficient reasons for their moral priorities either. Selecting one direction that seems sensible, even if only a few people go there while other people go elsewhere, is still a social experiment in ethics requiring the closest scrutiny. Nor is one’s preference immunized from scrutiny by pointing out how critics might lean towards a different direction, or none at all for now. Thinking that the justificatory burden is heavier on the cautious than on the bold is a more worrisome bias here.

Neuroethical proposals to apply unconventional means for adjusting common morality can turn out to be culture-bound, tacitly parochial, or openly partisan. No matter how moral a proposal to enhance morality in the brain may appear to a few neurophilosophers or a large public, people will still use their heads to fully evaluate such a potentially divisive or even deleterious goal.

Brains Won’t Be Enough

People using their brains to think about moral matters for themselves can seem like an unhelpful intrusion upon the world of brain research into morality. However, the objectivity offered by the brain sciences cannot elevate the achievement of moral bioenhancement (through physiological alterations to the biological functioning of the nervous system) or moral technoenhancement (through indirect adjustments of neurological functions using technological means) to any similarly objective status. Whether a procedure is invasive or not, and whether it is reversible or not, makes no difference to this objectivity deficit, a deficit which makes a huge difference to any practical and moral evaluation of that procedure.

Due to that objectivity deficit, simplistic chartings of the territory of moral enhancement would be misleading. A supposition that the meaning of enhancement can be clarified by linking it to therapy’s seeming objectivity is a tempting start, but ultimately unsatisfactory. Allowing enhancement to be just a subjective matter is a dead-end, too. Supposing that moral enhancement must either be defined objectively in terms of neurophysiological improvements to some natural human capacity, or defined relatively in terms of some preferred normative standard for human conduct [17], wrongly presumes that the first option is achievable. Discerning measurable improvements to brain functioning for morality cannot be accomplished without consulting what is regarded as sound moral judgment and conduct. Moral enhancement won’t be defined so objectively, so moral enhancement needn’t be defined so subjectively, either. That objective-relative dichotomy collapses – allowing the organic and the normative to be understood as interrelated and even interwoven, and permitting more complex options to come into view.

Normative matters are not eliminable here, nor are they subsidiary. Objective knowledge about the brain’s functioning does not automatically count as objective knowledge about moral psychology, or about what is moral. Labelling the introduction of, for example, a pharmaceutical or transcranial modification to the brain as a reliable “moral bioenhancer” just because that modification targets some type of neurological activity contributory to a certain mode of moral cognition is misleading at best. Likewise, labelling the application of, for example, real-time EEG or fMRI neurofeedback as a reliable “moral technoenhancer” simply because the subject can hit upon some type of psychological activity positively correlated with a certain mode of moral cognition, is similarly rash. Furthermore, any limited manifestation of genuinely effective moral enhancement will not easily fulfill higher ethical expectations, and may fail to be fully moral in that elevated sense. The literature on moral enhancement does not fail to occasionally mention such neurophilosophical worries, but detailed justifications for such worries are rare, and insights into overcoming them are rarer.

We shall accordingly pursue two main agendas. First, serious issues are raised about prevalent presumptions, stated or tacit, rampant in the moral enhancement literature. One common presumption is that morality has, and can rightly be treated as having, a fairly unified core of morals/virtues accepted across a society, so moral pluralism is ignored. Another common presumption is the tendency to think that what counts as suitably moral conduct is fairly stable across social situations, so that moral contextualism is dismissed. A further presumption is the way that discerning a neurological process contributing to moral cognition is mistaken for identifying a moral neurological process that can be manipulated with predictable results, so that moral systematicity is overlooked. This article does not directly argue for moral pluralism, moral contextualism, or moral systematicity, although these alternatives receive favorable consideration here. Their plausibility is strengthened by this article’s second agenda. Valid philosophical concerns about the “Does-Must Dichotomy” and “Factor-Cause Plurality,” as I label them, forbid easy leaps from views about morality on to conclusions about ways to enhance morality, and then further on to ethically justifying those enhancements.

Taken together, these two agendas help to advance investigations into this key question: How can neuroscientific realities and ethical theories work together to help decide the morality of enhancing morality? Nowhere is “morality” explicated in advance, since my point is not to comfort familiar certainties about morality, but rather to compel philosophical doubts about what is meant by morality in neuroethical discussions. Should people try to make some brains more moral? And if that attempt is made, whose morality will judge the results?

Various sorts of concerns have been raised about clearly identifying what is to be improved when morality gets enhanced. Ethics can inadvertently fuel skepticism towards successful clarification. If ethicists cannot agree on what concretely counts as morality and morally correct judgment, in some conceptual respect or actual realization, how could ethics help decide the morality of enhancing morality? Skepticism may also inadvertently arise from scientific inquiry into morality. If scientists, to be scientific, must avoid pre-defining morality too narrowly ahead of the evidence gathered, many varieties or types of moralities may be observed during inquiry. Scientists who distinguish among moralities, finding some commonalities along with many differences, are doing nothing novel – cultural anthropologists and social psychologists have long done so [18]. Any scientist speaking of moral behavior or morality must explain what is specifically meant while describing a discovery.

Regardless of skepticism arising from ethical disputations or scientific discriminations, ethicists or scientists failing to elaborate what is meant by “moral” or “morality” while discussing moral enhancement leave confusion in their wake [19–21]. Generic factors of morality in general can be identified by common sense and moral psychology, clarifying a few psychological matters practically needed for whatever is regarded as morality [22]. However, expecting these factors taken together or separately to enjoy systematic connections deeper than their common relation with morality is an unwise demand. Moral holism or moral universalism may remain an ethicist’s dream. Various generic factors involved with moral conduct – such as moral sensitivities, sentiments, values, intentions, or beliefs – cannot be neatly identified with processes at neuronal levels, which have been explored only to a superficial degree. Debates over whether moral psychology is more reliant on “system 1” or “system 2” or some other system will not resolve this deeper issue. It appears likely that key features of human behavior receiving the honorific label of “morality” in one culture or another actually have no common neurological basis and share no normative essence [23, 24]. This moral pluralism can still acknowledge how basic factors involved with the human ability to moralize are genuine enough, even if they are not all involved to the same degree or directed towards the same moral norms. That plurality of factors conducive to varieties of morality must confound overconfident justifications for moral enhancement that seemingly rest secure on solid brain science.

The next sections offer a series of three scenarios to further illustrate why it cannot be a simple matter to specify what counts as a genuine moral enhancement, or to determine when practical moral enhancement is ethical to utilize. The first scenario yields an opportunity to reflect on the issue of whether some core to morality enjoys society-wide approval.

Moral Scenario One: Make Our Child Moral

The scene is a pregnancy clinician’s office, in the not too distant future. A diagnostic report is clutched in a young couple’s hands, as they listen to the clinician. “At four months, we can see that your baby’s physical health will be within all the usual parameters, so no worries there.” The mother relaxed, but her partner asked, “What about the moral indications?” The clinician found more good news on the charts. “No chance of sociopathy or narcissism or anything like that. There’s a tendency towards aggression and bullying, but that will get modified along with the five-month infusion at your next visit.” The partner frowned, repeating half-remembered diagnostics: “That cognitive infusion for math, right?” “Yes,” the clinician said, “to ensure good math skills, as you requested. The math infusion isn’t legally required, but the civility infusion for anti-aggression is the law, as you know.” Seeing an opportunity, the clinician suggested in a helpful voice, “There are a few moral infusions available, all proven quite effective.” The mother sharply asked, “Did you see something wrong?” The clinician replied, “No, nothing is wrong. From these numbers, I see compliance around authority, enjoying conformity, and playing it safe. I’d say that your child will grow up to respect the rules and expect others to do the same. I also see tendencies for disapproving of people who won’t live a respectable lifestyle and support themselves.” The mother seemed relieved. “So our child will be a moral person, then.”

After the pleased couple left the office, the clinician picked up another report, but his mind was dwelling on the previous case. With different fetal numbers, that couple might have picked the Proteneo brand as the moral infusion best for their child. There are moralities to fit many predilections and budgets. Few parents wanted their children to stand out for any saintly, heroic, or radical kind of life, though. Their questions made them sound mostly worried about whether their offspring will fit in, stay out of trouble, have lots of friends, and be able to keep a job. If there were any worries there, simple sociable infusions always work, he’d reassure them. He refocused on the report in his hands. “Next couple. What are the chances they’ll want a moral infusion?” His eyes widened. It looked like a truly rare case. “Empathy numbers are very high, and lots of openness to difference. This one’s going to hate any rules leaving people out. Plus low self-prioritization. The non-conformist and generosity indicators are all there, too.” Now it was the clinician’s turn to smile. The last couple wouldn’t have liked that kind of news. They would have asked for that Proteneo moral infusion to reverse those numbers. But he knew his next clients pretty well. He could almost hear their reaction: “So our child will be a moral person, then.”

What is the moral to this story? In this futuristic scenario, fetal “moral infusions” are occasionally selected by the public, along with legally mandated civic infusions which the public has accepted as necessary, and cognitive and social infusions that have proven popular. As far as this fetal clinician can see, with a catalog of moral infusions to select or turn down, the customer can always be right. So long as the product isn’t illegal or dangerous, of course. That’s not a worry on this clinician’s mind – how could morality be illegal, or dangerous?

It is not necessary to assume that some singular morality prevails so that genetic moral enhancement has a unique target [25]; multiple moralities could each be upheld by a segment of a future society, much like today. People don’t suppose that what they think is truly moral is best kept illegal – morality must be beneficial, according to common opinion. With this in mind, any purveyor of moral infusions would suppose that business will stay legal. After all, the public of this future scenario approves sociable adjustment; indeed, most people apparently regard civility and sociability to be sufficient for good morals. Broad moral tolerance and available moral enhancement could be compatible with legal enforcement of civility and general concern for conforming sociability.

Are any of today’s societies heading in the direction of that hypothetical society? Some may easily drift in that direction, depending on the social worth attached to types of moral enhancement. The next hypothetical scenario further illustrates how moral enhancement is one thing, while the evaluation of such enhancement within different social contexts is quite another.

Moral Scenario Two: The Right Morality for the Job

The next scene takes place in a large investment firm’s personnel department, at some future date. Two characters discuss a chemical compound called talcapone, a catechol-O-methyl transferase inhibitor that selectively results in a dopaminergic augmentation. Subjects given a dosage of talcapone show egalitarian tendencies during interactions with others, indicating that talcapone heightens their aversion to being responsible for inequitable distributions of resources [26].

A job candidate awaited the final interview, after the profiling and drug testing were completed. The employment officer and a vice president were deciding whether to make the hire for a position managing investments for clients. “There’s more than a trace amount of a talcapone variant in his system,” the officer said. The VP’s eyebrows went up, recalling how talcapone was just the first dopaminergic to cause generously fair behavior, and later variants had few side-effects. Over-the-counter dosages labeled as Benevolium became legal in many countries. “You remember what happened when we started testing for that, right?” The officer’s face turned grim, recalling how several investment managers had to be fired. “Would he agree to stop taking Benevolium?” he asked. “Perhaps,” the VP replied, “We have the right to ask him.” The officer agreed, saying, “After all, we wouldn’t ask him to take alcapone instead.” Startled, the VP glanced back. “Do we still test for that, too?” she asked. “No, no, not anymore,” the officer replied with a laugh. “After everyone found out that the firm was promoting people on alcapone, testing was pointless.” They both smiled. Because it induced the opposite effect of Benevolium, the number of people secretly taking alcapone was probably far greater. Few people knew, or cared, what the original generic name for that drug was. Plenty of people loved the idea of going “gangster” like the notorious Al Capone while they were taking alcapone. It really worked, though. The utter ruthlessness of investment advisors on alcapone stood out, and the extra fees extracted from clients produced big financial results. The VP took another look through the candidate’s resumé. “I see business management here. What about human resources?” The officer’s query into the system brought up an opening in employee benefits. Pleased, the VP said, “Tell him that the investment job has been filled, but the benefits position is open.” If the candidate takes that job, she thought, employees would appreciate a staunch advocate of fairness who looks out for them. So long as he kept taking Benevolium, of course.

This second scenario provokes more questions. Does the egalitarianism produced by Benevolium permit it to count as a genuine moral enhancer? Must people on “alcapone” get classified as morally degenerate? Going further, are these scientific questions as well as ethical questions? More questions ensue. Among the characters we meet in these stories, who is truly moral? Can they be ranked from more moral to less moral? And, most importantly, does figuring out answers to such questions require taking a broader civic context, or at least a smaller-scale institutional context, into account?

Finding it necessary to ask these higher-level questions is the larger point here, since speculation about enhancement, and moral enhancement, in any detached and abstract manner is clearly inadequate. Knowing that we should enhance morality, or even figuring out how to reliably enhance morality, will not reasonably follow simply from pointing out common notions of morality or psychological features of morality, or identifying some neurological processes making morality possible. Neurophilosophy should guide the ability of neuroethics to identify logical gaps and fallacies endemic to sketchy proposals for artificially adjusting morality.

The Does-Must Dichotomy

In many neuroethics discussions that I have encountered, I have noticed how the author proceeds for some time in the ordinary ways of reasoning, concerning facts about what the brain does, or observations about what public opinion says or how social affairs proceed; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, does, and does not, I meet with no proposition that is not connected with a must, or a must not. Recalling Hume’s famous Is-Ought dichotomy, I wonder how, what seems altogether impossible, this similar Does-Must relation may be established.Footnote 3

This Does-Must gap is boldly leaped by typical arguments for or against the use of some new brain-related treatment or technology to achieve a putative enhancement. Neuroethics, with the latest information about moral neurophysiology in hand, could attempt to authorize arguments determining what is the better (or even best) morality exemplified by well-functioning brains.Footnote 4 However, when brain science labels certain brain processes as ‘moral’, that identification does not establish that their improvement must be acknowledged as moral. The Does-Must gap will not be narrowed that way – the normative must also have its say. Philosophically, the issue could be posed like this: Does one’s brain make one moral, or does one’s morality make one’s brain moral?

Reasoning gaps proliferate as the distance grows between the things that people are heard to say about morality and the things that should be changed about brains. Realistically, there may not be enough neuroethics or neuroscience to bridge them. The systemic, and even organic, relations involved with morality, its factors, and its practitioners do not lend themselves to a conception of morality as an autonomous and self-sustaining matter that stays aloof from concrete manifestations in the lives of socialized and moralized individuals. Consider the following argument, where a society agrees that X is a needed factor for morality:

1. “For morality, X is needed.”

2. If A is needed for B, then more of A yields more of B.

3. Increasing or intensifying X improves morality. (from 1 and 2)

4. Morality should be improved.

Therefore, X should be used for enhancing morality. (from 3 and 4)

The Does-Must gap seems smaller thanks to premise 2, but an argument relying on 2 commits a fallacy of “guaranteed causation.” Even if the presence of factor A is needed for B, that doesn’t mean that A “causes” B, and it cannot guarantee that more A will positively influence B. And even if a little more A can (under certain conditions) positively influence B, that cannot guarantee that even more A will do so as well. (Factor-Cause Plurality is responsible, to be discussed next.) Premise 3 can make it seem like improving morality could be easily accomplished, but confirming 3 remains far out of reach.
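
A toy numerical illustration may help (this sketch and its inverted-U dose-response shape are my own assumptions, not drawn from any cited study): even when a factor is strictly needed for an outcome, adding more of it can stop helping and then start hurting, which is exactly what premise 2 rules out.

```python
import math

# Illustrative sketch only: a hypothetical inverted-U relation between a factor A
# that is genuinely "needed" for outcome B and the level of B actually produced.
# The functional form is assumed for illustration, not measured from anything.

def outcome_b(a: float, optimum: float = 1.0) -> float:
    """Return a toy measure of B given the level of factor A."""
    if a <= 0:
        return 0.0                      # without any A there is no B: A is "needed"
    return a * math.exp(-a / optimum)   # rises, peaks near the optimum, then declines

for dose in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"A = {dose:.1f} -> B = {outcome_b(dose):.3f}")

# The printed values climb until A reaches the optimum and fall afterwards,
# so "A is needed for B" does not license "more A yields more B".
```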

Neuroethics is vulnerable to this difficulty. It has taken an interest in arguments sharing in this schema:

5. Increasing or intensifying X improves a person’s morality.

6. Physiological intervention P causes a neurological alteration N, which results in X increasing/intensifying in some measurable way.

Therefore, P causes an improvement in a person’s morality.

A concrete example could be devised by briefly looking at the supposed effect that talcapone has on pro-social behavior. Imagine an egalitarian arguing as follows:

7. For morality, taking responsibility for equitable distributions of resources is needed.

8. Increasing responsibility for equitably distributing resources improves a person’s morality.

9. Ingesting talcapone causes a neurological alteration, which results in increasing responsibility for equitably distributing resources.

Therefore, ingesting talcapone causes an improvement in a person’s morality.

Doubts about the universal validity, or even real-world plausibility, of premise 6 are already familiar. As for the argument as a whole, logical gaps between the premises and the conclusion persist, due to Factor-Cause Plurality.

Factor-Cause Plurality

Logical gaps endemic in neuroethics are often due to overlooking Factor-Cause Plurality. A single factor can’t automatically count as a significant cause so long as many causal factors may be involved too, and a causal factor probably has a range of possible effects leading to various results depending on surrounding conditions. For example, undergoing a physiological intervention probably won’t reliably cause a neurological alteration to any guaranteed degree while local physiological or neurological conditions are fluctuating, as they usually are [30]. Ceteris paribus clauses proliferate in the course of regular scientific inquiry. However, the more that carefully controlled trials handle unwanted variables, the less those trials resemble real-world situations calling for the behaviors under study.

Furthermore, increasing something needed for morality may cease to improve morality, or even begin to reverse its positive effects, if that increase proceeds beyond some definite point. Nor will increasing something needed for morality reliably improve morality as expected for just any person, under varying social conditions. The organic systematicity to morality and its multi-level factors cannot be overlooked by moral psychology, neurophilosophy, or neuroethics.

It is a symptom of an immature moral neuroscience that popular moral psychology drives the “discovery” of brain functions necessary for moral cognition, and not the reverse. Examples are readily at hand. For many reasons, altruism has enjoyed that popular status recently – if only unselfishness dominated our motivations, we would do the right thing without second thoughts getting in the way. If we could invent a neurophysiological intervention ‘A’ for an altruistic emotion – empathy is often mentioned – in order to make this altruistic motivation more easily dominate psychological matters, would improved morality surely ensue?

10. For morality, acting on an altruistic emotion is needed.

11. Acting on an altruistic emotion improves a person’s morality.

12. Neurophysiological intervention A causes a neurological alteration, which results in increasing an altruistic emotion.

Therefore, intervention A causes an improvement in a person’s morality.

This argument suffers from logical gaps in the expected places, as many real-world psychological and environing matters have to be factored in. First, acting from altruism infrequently guarantees moral conduct, since bestowing our energies upon each and every person arousing an altruistic sentiment is naive and wasteful, and a failure to do the truly right thing becomes inevitable over the course of a day. Premise 11 cannot easily follow from 10. Thoughtfully discriminating who truly needs and deserves aid and comfort is still required, and all the more so if altruistic emotions are dominating one’s thoughts. We would need guarantees that only the right people enjoy our beneficence. But no such guarantee can be made, when an emotion is taken for a fine driver of morality. Emotions are strongest concerning whatever happens to come into view, and not necessarily what should be attended to, so a person’s altruism-driven morality would depend heavily on how that person already perceives the world. The ethical idea that “I should care about distant people I’ll never see” is not an idea that an emotional reaction will directly inspire. Not even “I should start thinking about distant people I’ll never see” is a thought for which an emotional reaction will be responsible. Furthermore, emotional reactions are not generalizable the way that reason-based judgments can be. No matter how strongly I happen to feel that “Paulina must get my aid now,” that is not a compelling reason for you to also aid Paulina, even if we agree on all the relevant facts about Paulina. The same goes for my feeling that Manuel needn’t get any consideration from me. No matter how many sound judgments I think that I can make about who receives the due measure of my attentive considerations, many altruistic people will still appear to be foolishly making the wrong decisions, by attending to nearby cases too intently, or by being too accommodating to unworthy cases.

To forestall such impractical and unethical outcomes, advocates of emotion-based moral enhancement have to resort to additional (clearly dubious) assumptions, such as: (a) only people at risk of serious moral deficiencies will be recipients of moral enhancement (yet it won’t be just our emotions identifying who they are); or (b) the “moral” emotions will somehow be directed only at truly moral cases (so reasons somehow get built into emotions); or (c) everyone will get their emotions adjusted to just the “right” degree (how will that be calculated?) at the same time; or (d) everyone’s capacity for respecting fairness and justice can be taken for granted (yet we can’t take that for granted now) so we don’t bestow undue clemency upon the unworthy; or (e) everyone’s capacity for fairness and justice will also get enhanced to just the right degree to keep moral emotions from violating those norms (but that measured re-balancing leaves our moral judgments as conflicted as before). The embedded assumptions are unrealistic, of course.Footnote 5 Even if some of those assumptions were satisfiable, cognitive reasoning is asked to play a large role, and emotions won’t end up being primarily responsible for any moral improvement.

Admirers of cognitive improvement for moral enhancement do not gain ground as emotion-based moral enhancement proves unrealistic. Parallel problems can be raised wherever any singular psychological/neurological factor is proposed as the right place to artificially improve morality. The greater neurophilosophical lesson has to eventually be learned. Respecting Factor-Cause Plurality requires relying upon more than just linear and mechanistic causation where complex physiological and psychological processes are concerned, at every level of embodiment from synaptic to social levels [37, 38]. Appreciation for dynamically cyclical and systemic processes, and complexities to interventions affecting them, is a matter calling for refined analysis.

There are six primary ways that achievable improvements to something’s dynamic capacities could fail to yield desired performance enhancements if those improvements are extended beyond a certain amount or degree. These general limitations apply to human capacities, and where the issue is improving morality, all six must be carefully considered.

I. Asymmetric Improvement. Further improvement is futile because any additional alteration causes a divergence away from the optimal level achievable. Examples: the professional skills of photography applied to taking a realistic picture of an ordinary object; or the skills of tuning Steinway pianos applied to an inexpensive piano of mediocre construction.

II. Asymptotic Improvement. Further improvement produces fewer results as the optimal level is approached. Examples: making finer and finer adjustments in the effort to precisely copy an original diagram; or playing a game such as bowling or poker with greater proficiency, reaching some sort of maximal performance level constrained only by uncontrollable chance.

III. Asymptomatic Improvement. Further improvement produces undetectable measurements of real enhancement. For example, improving one’s playing of chess may reach the point of winning every game over all opponents, but after winning so consistently, how can any further enhancement of chess playing be detected by the competition, or even by this supreme master?

IV. Asynoptic Improvement. Further improvement confusingly produces differing results from different perspectives, so it becomes difficult to objectively recognize enhancement. For example, attempting artistic improvements to surpass a genre’s standards will lead observers to disagree over what kind of art is the result; or engaging in competitive dancing in an effort to surpass expected dance forms would not be judged by dance experts in the same way.

V. Asynchronic Improvement. Further improvement only causes a more and more delayed result, obviating the point of the activity. For example, consider a person’s skill at making witty remarks at a gathering – the wittier the remark, the longer the time needed for comprehension by one’s audience, so producing extreme wit is not consistent with displaying clever wittiness.

VI. Asymphonic Improvement. Past a certain degree of improvement, discord is generated to the point of destroying the context in which improvement had made sense. For example, playing an instrument in an orchestra could get excessively enhanced to the point that the overall orchestral performance is disrupted; or playing soccer could be done so exceedingly well that other players become less relevant and the game is distorted or eroded.

These six limits to improvement place serious restrictions on potential enhancements. For example, talcapone could only moderately enhance moral performance in a narrow range of real-world situations. Controlled trials for interactions between the subject on talcapone and a participant find modest degrees of fairer behavior, but no ‘optimum’ fairness would be reached with more and more talcapone, since a subject would eventually reach some personal equilibrium (Limit II) and any further “improvement” would amount to unfairness against the subject (Limit I) instead of the other person. Even if the goal of talcapone treatment is to attain what society wants as true fairness, increasing a subject’s talcapone dosage while she negotiates a business contract or handles a customer complaint cannot ensure that observers can notice if ideal fairness is ever attained (Limit III). Going further, many real-world situations calling for fairness involve multiple participants. A judge taking a heavy dose of talcapone might control trials or issue decisions in ways that stray from standard judicial practice, provoking the prosecution or defense to make an appeal to a higher court (Limit IV). A local public official on talcapone, faced with a decision affecting thousands of people, might deliberate too intensely for so long that the public won’t understand either the delay or the eventual decision (Limit V). Finally, putting a large committee of state officials on plenty of talcapone and asking them to re-write tax laws would make it less likely that consensus is reached and more likely that community discord erupts (Limit VI).Footnote 6

Some readers may feel a rising frustration with so many convoluted considerations. Enthusiasts of moral bioenhancement may only be asking for a straightforward way to get people to see what is good and do the right thing. Could distractions away from pure morality be the real problem? Perhaps less is more: improvements might be made indirectly or passively by diverting or diminishing the role of certain factors, instead of enlarging them [44–46]. Do the means really matter that much, morally? The third scenario is about providing morality by whatever means are effective when people want morality to be evident.

Moral Scenario Three: Just Deliver Us Morality

The Rector looked at his watch. Hopefully this candidate would be on time. The previous ministerial candidate sauntered in 30 minutes late. There was a good excuse, the Rector recalled. That candidate was full of excuses, and shameless about giving them. “When I ended up at a gas station to ask directions, it couldn’t have been by accident,” she had explained. “How else could I have met that grandmother, so worried about her old car? We had to pray together, and it set her day right again.” Her demeanor during the entire interview was like that. Whatever happened, no matter how it happened, always seemed so good, and so right. The Deacon was impressed. “What we need is a minister who just knows when something is right!” The rest of the search committee didn’t disagree, having reached the consensus that their chosen minister had to truly know goodness and morality. Their previous minister had finally retired, to the relief of the congregation, because he couldn’t make tough moral judgments and take firm moral stands. Dissensions were growing over various issues, almost splintering the congregation. The next minister was going to be different, the search committee vowed.

Moral confidence exuded from all three finalists, as expected. No one on the committee was surprised to find out that all three were applying brain stimulations as moral enhancers. The theological seminaries mostly tolerated them now. A few seminaries held out for “authenticity” or “ethics,” but no one seemed to know exactly what those things really meant. Graduates weren’t all the same, though. The tardy candidate, so sure about good things when she saw them, was applying Pacifica. It was developed for people with anxieties about moral shame or moral tragedies. For ordinary people, it was apparently like feeling certain that whatever happens, happens for some good reason. As the Rector could tell from talking with people using Pacifica, each one could quickly intuit some good reason for whatever was going on, but there was never much consistency. Not that people on Pacifica ever cared about consistency. Whatever story needed to be told in the moment was always the best account, of course.

The first candidate had proudly mentioned how he was on Proregula. The Deacon had said, “If you want someone who always knows the rules, then he’s the one for us!” That’s the effect of Proregula, as the committee reminded the Deacon. That candidate couldn’t help coming across as judgmental about everything, because applying Proregula diverts the brain from comparing one’s judgments with what other people hold to be right and wrong. The behavior was unmistakable. That expression of sheer incredulity that anyone disagrees was the first clue. “How could you ever think that!” Then the lecturing, as if anyone who dared to disagree was in dire need of an elementary education about right and wrong. Fortunately, few people used Proregula. If two of them start an argument over some moral question, it will never end. The running joke at the seminaries went, “You’d argue with God on Proregula, except God always agrees with you!”

The Rector’s musings halted, as the next candidate was right on time. The Rector had never met someone applying Purimente. Opponents said that Purimente just made people imagine that their actions are always morally fine, by dampening the activity of brain regions estimating how one’s acts are evaluated by others. People using Purimente can tell when others have different opinions, but they never feel judged by another person, or even feel like judging themselves. The committee soon observed its effects, when the candidate informed its shocked members that she had already contacted the disgruntled group of congregants within the church. “If you don’t select me as your minister, I’m going to help them start a new church,” she announced. She seemed to barely notice the unhappy mutterings around the table as she admitted, “This all could seem improper.” Her smile widening with self-assurance, she added, “But I assure you that this comes from nothing but good intentions.”

The main lesson from the third scenario is that advocating moral enhancement by improving just a person’s ability to intuit what is right, stay true to what is right, or intend only what is right, does not seem sufficient for genuinely moral conduct. It is not enough to suppose that an enhanced ideational or motivational purity to one’s moral capacity can guarantee moral outcomes. Two additional issues stand out. First, from a first-person perspective, acting from mysterious promptings that masquerade as reasons to be moral will seem oddly unlike acting from authentic beliefs and genuine reasons. It will become apparent to subjects that they act from certainties that do not merit such trust, and they may come to doubt whether they should act on their moral beliefs [47]. Second, from a third-person perspective, people sincerely trying to be moral may be viewed as exerting undue control over others, or causing harm to others’ interests, as all three scenarios illustrate.

There is no way to abstractly define moral enhancement to forestall these issues. To give another example, defining moral enhancement as “having morally better future motives,” as Douglas [3] stipulates, appears to promise a fail-safe proviso: if a person does an action causing harm to another, either that person was not truly morally enhanced, or that person had purely moral motives to excuse the act. This is quite abstract and all too convenient. Selecting some key factor of our moral psychology as the best way to assuredly enhance anyone’s morality acquires intuitive plausibility at the expense of real-world practicability. What we would all like to know in advance is precisely what kinds of actions a “morally enhanced” person would do to others, or allow to happen to others. That level of detail is almost never provided; the moral enhancement enthusiast resorts to citing some familiar moral platitudes or a preferred ethical theory.

Proponents of ethical theories can recognize elements of their views in the hypothetical “enhancements” imagined in the three futuristic scenarios. However, no ethical theory has to approve those interventions as genuine moral enhancements. Theorizing about morality can, and should, take into account neurological, psychological, institutional, and social factors. These factors are tightly interrelated matters to be carefully examined for their relevance to morality, and for their contribution to (or detraction from) any improvements to a person’s morality, if improvements are even possible under actual circumstances.

Making My Brain Make Me Moral

It will never be a simple matter to start from common sense opinions about morality or fine ideals for moral progress and proceed to formulate a matching method for enhancing morality. The Does-Must Dichotomy and Factor-Cause Plurality, along with the ordinary divergence of views about what morality should be, raise many obstacles. If the systemically and socially organic nature of morality and moral performance is surveyed, reasonable bridges between narrowly specified goals and their delimited practical means can be realistically designed.

Supplying that content and context can restore a measure of objectivity. Where a scientific identification of moral behavior can be consistent with an ethical theorist’s description of some kind of moral conduct, they could together objectively discern alterations to a subject’s enactment of that morality. An intervention producing alterations that meet the ethical theorist’s expectations for moral improvement, past certain confirmable criteria, could then be (tentatively) classified as a moral enhancer. Like following a line from one starting point on through another point in a certain direction for a specified distance, moral enhancing figuratively appears to be a task for “moral vectoring.” Moral vectoring requires at minimum these factors: the identification of two ‘points’ taken to be on a common moral ‘plane’, so that transitioning from the first to the second is a moral transition in the same sense of “moral”; and the determination of that transition’s progress beyond the second point, to the degree of enhancement anticipated.

Moral vectoring permits some concrete and empirical meaning for “moral performance enhancement.” Moral performance enhancement, to be clearly identifiable, sets at minimum three criteria for a moral enhancer: (a) a statement of moral vectoring; (b) an effective procedure for realizing that vector in a subject’s conduct; and (c) observational confirmations of that planned vector’s fulfilment. Next, before judging that an identified moral enhancer is thereby realizable, checking its usage within specified social contexts and comparing it against the six Limits to enhancement is required. Finally, after these hurdles have been cleared, approval may still be withheld. A realizable moral performance enhancement may yet be quite disputable (due to differing standards for enhancing), highly objectionable (due to divergent understandings of morality), or even irredeemable (due to violations of human rights, for example), but at least it is concretely available for these sorts of ethical and civic evaluations.
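
As a rough illustration only (the names, fields, and numeric scale below are my own hypothetical rendering, not a specification given in this essay), criteria (a) through (c) can be pictured as a simple record that any proposed enhancer would have to fill in and satisfy before the further contextual and ethical checks even begin:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of criteria (a)-(c) for an identifiable moral performance
# enhancer. Every name and the numeric "degree" scale are illustrative
# assumptions; satisfying identifiable() settles nothing about the later checks
# against social contexts, the six Limits, or ethical and civic approval.

@dataclass
class MoralVector:                      # criterion (a): a statement of moral vectoring
    moral_plane: str                    # the shared sense of "moral" for both points
    first_point: str                    # baseline conduct, described in that morality's terms
    second_point: str                   # the conduct counted as improvement on the same plane
    anticipated_degree: float           # how far beyond the second point the change should reach

@dataclass
class EnhancerCandidate:
    vector: MoralVector
    procedure: Callable[[], None]       # criterion (b): a procedure meant to realize the vector
    observed_degrees: List[float] = field(default_factory=list)  # criterion (c): confirmations

    def identifiable(self) -> bool:
        """True only if observations confirm the planned vector's fulfilment."""
        return any(d >= self.vector.anticipated_degree for d in self.observed_degrees)
```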

Specifying moral vectors within a social group practicing a morality, setting the fulfilment of vectors into their proper environing contexts, and evaluating the many impacts of moral vectors throughout those contexts, should take priority in discussions of practical moral enhancement. Only identifiable, realizable, and ethical moral enhancements should be candidates to undergo rigorous approval processes before any kind of broader application is permitted. For all its complexities, ethics is still what we must use together in order to decide whether and how to make an individual’s brain more moral.