
This chapter details some of the many resources that the existential and phenomenological traditions offer for reformulating psychological science to recognize the social constitution of the person and the pursuit of constitutive ends that have inherent and cumulative meaning. This exploration is pursued through close examination of an intriguing neuropsychological study of honesty and dishonesty that reveals critical moves to abstract participants and investigators from ordinary human intercourse in an attempt to study mind-independent causal processes. The analysis shows that these abstraction moves are embedded in a broader and deeper network of interpretations that are necessary to make the researchers’ activities intelligible. This interpretive network is obscured in methods-focused scientific reports. Psychological science is discussed as constitutive activity undertaken for the sake of knowledge, which is treated as choiceworthy in itself. Psychological scientists engage extensively in instrumental activity, but this form of activity is subordinated to constitutive action in the service of knowledge. This analysis is applied more broadly to psychological science, and an interpretive approach to psychological research is outlined that fosters a richer and more meaningful account of human action. I begin with a description of this study of the neurophysiology of honesty.

Honesty and Its Neurophysiology

Greene and Paxton were interested in studying “what makes people behave honestly when confronted with opportunities for dishonest gain?” (2009, 12506). They studied two competing hypotheses to explain honest and dishonest behavior. The first was based on the idea that incentives for dishonesty must be resisted willfully, which involves the cognitive control processes that enable people to delay reward; they called this the “Will” hypothesis. The second was based on the concept of automaticity, in which individuals respond quickly and automatically to particular environmental stimuli without deliberation. If honesty is automatic, the individual is never tempted by dishonesty in the first place; they termed this the “Grace” hypothesis.

They tested these hypotheses with two experimental conditions that involved predicting the outcome of a computerized coin flip, with financial incentives for correct predictions. The cover story was that the study was an investigation of a paranormal capacity to “predict the future.” In the No Opportunity to lie condition, participants recorded their predictions in advance so that they could not lie. In the Opportunity to lie condition, they reported their predictions after the coin flip, which gave them the opportunity to lie for financial gain. The participants were divided into two groups based on their responses in the Opportunity condition. The dishonest group comprised those who reported improbably high levels of accurate “predictions” (69 % accuracy or higher, with a mean of 84 %). The honest group comprised the 14 participants who reported the lowest accuracy (a mean of 52 %).

During the study, participants underwent functional magnetic resonance imaging (fMRI) to assess when and to what degree the control network of the brain was activated as participants chose their responses. This network includes the dorsolateral prefrontal cortex, the ventrolateral prefrontal cortex, and the anterior cingulate cortex. In addition, Greene and Paxton assessed how long it took participants to report their prediction.

In the reaction time data, there were no differences in response times for the honest group across any of the comparisons, and all of these participants’ decisions were made quickly. When the dishonest group reported accurate predictions, there was no difference in their response times between the forced honesty of the No Opportunity condition and the “correct” predictions of the Opportunity condition (which included honest and dishonest correct predictions). The interesting contrast came when people in the dishonest group reported honest incorrect predictions in the Opportunity condition: their responses were slower. That is, when the dishonest subjects gave up opportunities for dishonest gain, they took longer than for any other response. Consistent with these results, honest and dishonest participants did not differ in the amount of time taken to report correct predictions in the Opportunity condition, but dishonest participants took more time than honest participants to report incorrect predictions.

The fMRI data were consistent with the response time data. Greene and Paxton found increased control network activity among dishonest participants when they refrained from lying (honest incorrect predictions) compared to when they reported correct (honest and dishonest) predictions. In contrast, there was no difference in control network activity for honest participants when they reported incorrect predictions in the No Opportunity and Opportunity conditions. The investigators summarize that “the honest subjects, unlike the dishonest subjects, showed no sign of engaging in additional control processes (or other processes) when choosing to forgo opportunities for dishonest gain. These findings support the Grace hypothesis” (Greene and Paxton 2009, 12508). It is important to note that all of the honest participants reported awareness of the opportunity to cheat, meaning that they did not act in ignorance. Greene and Paxton concluded that “the behavioral and fMRI data support the Grace hypothesis over the Will hypothesis, suggesting that honest moral decisions depend more on the absence of temptation than on the active resistance of temptation” (2009, 12509). They found it surprising that additional control processes appear to be activated in situations of limited honesty, for the most likely reason to report incorrect predictions while one is actually cheating is to cover up the cheating with limited honesty. In other words, the researchers found that willpower is not necessary for honesty among those who generally give honest responses, but it is necessary for honest responses among those who demonstrated a willingness to be dishonest in this context, where it seems to operate in the service of masking that dishonesty.

It is worth noting a number of positive features of this study before discussing its shortcomings. The topic of honesty is important and interesting, and the investigators used a clever research design to study it. The use of reaction time and brain imaging methods is very sophisticated and led to impressive results. The examination of the brain circuitry involved in honesty can provide important knowledge to support a psychologically realistic moral psychology (which is astonishingly rare). Finally, in a departure from most neuropsychological research, they indicate that they are studying the “choice to lie” (Greene and Paxton 2009, 12506; italics in original).

For all its interest and merit, Greene and Paxton’s study is an excellent example of a powerful and pervasive tendency in psychological research toward abstraction. This abstraction is practiced in four key ways. First, the researchers attempt to abstract themselves from their investigation by adopting the disengaged stance of investigating “will” and “grace” hypotheses about honesty, with the pretense of disinterest in the topic. They do not even touch upon why honesty is important or how it plays a role in ordinary human affairs. They introduce their study as focusing “on the respective roles of automatic and controlled processes in moral judgment” and the “cognitive processes that generate honest and dishonest behavior” (2009, 12506). All of this clearly suggests that they have no personal stake or interest in honesty; it is all just dispassionate science to them.

The second form of abstraction is no doubt obvious in the quotations above. Greene and Paxton are interested in cognitive processes that “generate” choices about honesty. They do not ascribe this “generation” to an agent, much less suggest that honesty or dishonesty might play a role in purposive human action or in relationships. These processes are either automatic and beyond conscious control, or they are controlled by the “will” through the action of the brain’s control network to resist temptation in the form of a proffered financial incentive to lie. The cognitive processes are abstracted from a real person and are implausibly portrayed as impersonal. Making neural activity the focal point of the study strongly encourages this impersonal perspective because it seems that the investigators are able to peer behind the curtain of personhood in examining blood flow patterns in the brain with the fMRI.

The third form of abstraction is that individuals are brought into a highly contrived laboratory situation. Even putting aside the elaborate equipment and esoteric expertise of the researchers, the level of contrivance in this research represents a powerful abstraction of individuals from everyday life in their induction into research participation. Participants were formally recruited, screened for handedness and psychiatric or neurological illness, asked to provide informed consent, asked to participate in a trivial task, given a contrived cover story to explain the task, offered financial incentives for excelling at this trivial task, and paid for their participation in the study (in addition to their “winnings”). To be fair, this study follows the classic experimental logic of attempting to strip away complications so that a very focal phenomenon can be investigated. Yet it is difficult to know what is being investigated by the time all this abstraction and contrivance has succeeded in isolating the presumed focal event of choosing whether to honestly report one’s prediction of the outcome of a computerized coin toss.

One facet of the abstraction of the research participants from ordinary activities is particularly striking, for it is reasonable to believe that honesty features very strongly in ordinary human life. There are many ways to construe the role of honesty in human relations, such as seeing it as a universal moral imperative, taking it as an inherent element necessary for any possibility of communication, viewing it as an essential aspect of meaningful human relationships, or taking the cynical view that it is the appearance of honesty, not honesty itself, that is at play. Greene and Paxton barely allude to the kind of human sociality that makes honesty worthy of our attention, much less to the kind of conceptual questions that are involved in seeking clarity about the nature of honesty in human relations. They just blithely assume that everyone operates in a space of moral conflict between honesty and maximizing gains through dishonesty.

The final form of abstraction is that Greene and Paxton discuss their study in isolation from a lifeworld that has compelling norms about honesty and cooperation. These researchers tacitly relied on some very strong assumptions about the nature of honesty to portray their work as an investigation of honesty. Unless one assumes that there is a universal obligation to be honest with strangers even in a very contrived situation, there is no reason to believe that honesty is in play in this study. Without a strong assumption of a cross-situational obligation to be honest with everyone, why would anyone think that honesty is at issue in such a bizarre situation? Why would we not conclude that the study is simply a matter of getting the most out of a strange situation or, alternatively, a matter of mere disinterest in the meager financial outcomes of a trivial task? I will provide an answer to this question below, but the point for now is that the researchers’ assumption that they are actually measuring honest behavior in such an enormously contrived situation is rather stunning, all the more so because the assumption is so tacitly and blithely made.

A reader sympathetic to the aims and methods used by Greene and Paxton could object that I am being overly critical and picky in my critique of abstraction in their study. After all, they are quite faithfully following the canons of psychological science. And is it not important to get our facts straight before we start talking about the morality of honesty? Moreover, it is only one study, and any one study will always have shortcomings that can be superseded by combining it with other studies that do not suffer from the same flaws.

Although there is some merit to these objections, it is important to recall that my focus on this particular study is meant to illustrate the deep and pervasive limitations of precisely the canons of psychological science that Greene and Paxton so faithfully follow. The point I am arguing is that although they claim to focus just on the neural facts related to decisions about honesty, they cannot even begin to see their study as focused on honesty without assuming either a universal obligation that dictates honesty or that the appearance of honest dealing is the proper response to strangers even in a bizarre setting. Without one of these assumptions, there is no point to the investigation whatsoever. There is no real separation of fact seeking and morality here, in spite of the extreme lengths to which the researchers went to abstract themselves, their participants, and their participants’ brains from anything like an ordinary situation.

It is fascinating to note the degree to which the two groups of participants in the study confirmed these assumptions about honest dealing. The honest participants quickly and automatically gave an honest answer when given the opportunity to cheat, which suggests conformity with an expectation of honesty. The only time that any of the participants showed effortful decision making (shown in reaction time and in control network activity) was when the dishonest participants decided to forgo an opportunity to cheat. The most straightforward interpretation of this effortful decision is as an attempt to appear honest by getting less than 100 % of the “predictions” correct. It is interesting that individuals who were manifestly willing to cheat found it worthwhile to forgo the financial incentive on some occasions in order to appear to be honest to a complete stranger whom they were unlikely to see again. This means that researchers and participants concur that honesty or its appearance has value, but they see that value differently, as I discuss in the next section.

Instrumental and Constitutive Activity

In his discussion of Heidegger’s concept of authenticity, Guignon sheds some light on the two forms of valuing that showed up in the study on honesty. Guignon highlights “two different ways to understand the relation of actions to the whole of life” (1993, 230). The first is a means-end or instrumental approach in which means and ends are separable and the means are only valuable to the extent that they produce the desired end. The means are therefore external to the desired outcome and discardable if they prove ineffective. Guignon cites as examples running to get healthy or helping a friend so that one can expect return favors later. In contrast, in a constituent-ends approach to activities, the actions are seen as inseparable from the end because the means constitute the end. Rather than seeking an outcome, constituent-ends actions are “undertaken for the sake of being such and such” (Guignon 1993, 230). In this approach, one runs as part of what it means to be healthy and helps a friend because that is what it means to be a friend. This distinction in the relationship between activities and one’s overall life makes it possible to understand the results of the honesty experiment in far richer ways than Greene and Paxton were capable of within the constraints of a value-neutral approach to science.

The means-end approach Guignon highlights was clearly adopted by the dishonest participants in their cheating to gain a better financial outcome. There are many circumstances in which one can obtain a better outcome in certain goods through cheating. What is more interesting is that they also refrained from cheating as a means of appearing honest, presumably because making oneself appear trustworthy opens the way to additional advantages later. It is also worth noting that the point of the cover story provided by the experimenters was to make it possible for the dishonest participants to believe that they could maintain the appearance of honesty in just this way.

The human capacity for deception and cheating is generally understood in this way by evolutionary psychologists. Human beings have evolved an extraordinary capacity for cooperation, which is based on trust. Evolutionary analyses suggest that cooperation evolved because it has enormous survival value when successful (Cosmides and Tooby 1992). Yet when one cooperates, one is vulnerable to exploitation by another person who is willing to cheat. Cheating is highly advantageous because the cheater obtains a benefit at no cost. When cooperators are cheated, however, they lose valuable resources and gain nothing in return, thereby decreasing their likelihood of survival and successful reproduction. Therefore, the only way that cooperation could get off the ground was for humans to become capable of detecting cheaters. Cosmides and Tooby present a good deal of evidence that humans have a cheater detection capacity that involves evaluating whether others follow the simple heuristic “if you take the benefit, you must pay the cost.” Evolutionary psychologists see a sort of arms race between cooperators and cheaters in which our ancestors developed the capacity for deception in order to appear honest even while cheating.

This species-characteristic capacity for deception to conceal cheating appears to be in operation in the honesty experiment. The cost in the honesty experiment is honestly reporting one’s predictions, which means that one can obtain only approximately 50 % of the financial incentives; if one cheats, the gain can be greater. Maintaining the appearance of honesty is a strategy of deceiving the other person into seeing one as trustworthy in order to perpetuate the opportunities to cheat. This clarifies two complementary ways (cheating and appearing honest) in which the dishonest participants were acting instrumentally in their interactions with the experimenter to increase financial gain.

It is possible to view the honest participants as acting instrumentally as well if one sees them as acting to demonstrate their honesty to the experimenter in order to maintain cooperation. But this interpretation is not parallel to the one for dishonest participants because the honest participants’ actions are for the sake of cooperation, not for increasing the financial outcome. It is important to recognize that a cooperative relationship is an end that can only be achieved through cooperative activity. That is, it is a constitutive end. The point of honest responses is to pay the cost of honesty in order to enact cooperation. The possibilities of cooperation were truncated in this experiment because it was an episodic event, but when people are not abstracted away from their ongoing human relationships, the prospect of an ongoing cooperative relationship is extremely salient.

Of course, a skeptical reader could say that cooperation is just a means to greater resource acquisition, and this is exactly the position that the vast majority of cooperation researchers take. It is extremely common for psychologists to automatically interpret all behavior instrumentally, an approach I have described elsewhere as instrumentalism (Fowers 2010). But instrumentalism is a strong and unjustified assumption, and one that is inconsistent with Greene and Paxton’s data.

Another way to maintain an instrumentalist perspective is to see honesty as non-volitional. The rapid and automatic way that honest participants reported their predictions accurately could suggest that they are phenotypic cooperators, with little or no choice about it. As Greene and Paxton suggest, this would mean that these respondents are simply following an automatic script of honesty. On this interpretation, the dishonest participants are phenotypic cheaters. The rapidity and the low level of control network activity in their dishonest responses are consistent with this interpretation because they could be suggestive of an automatic response. This is Greene and Paxton’s favored interpretation.

Tellingly, they do not give even passing consideration to a constitutive interpretation. Honesty can be understood as a constitutive activity in which individuals intentionally cultivate a character trait of honesty (or dishonesty). The goal of character development, from an Aristotelian point of view, is to cultivate the kind of ready, spontaneous response to situations that call for a character strength, and such responses would look exactly like the automatic responses Greene and Paxton found. From a Heideggerian perspective, this would amount to resolutely taking a stand on honesty, and to what Guignon has called acting honestly for the sake of being an honest person.

Explanations of the findings in terms of character or taking a stand are difficult from the abstractionist standpoint of the experimenters because such explanations require agency and particularity, whereas the abstractionist approach calls for general causal processes. The key point here is that this kind of experiment cannot differentiate between instrumental and constitutive interpretations of the results. Yet the constitutive view of honesty does not seem to occur to Greene and Paxton, in spite of how well it fits their data and the fact that it provides a more cogent explanation of their “Grace” hypothesis than some mysterious automatic cognitive process.

As in most psychological research, Greene and Paxton abstracted themselves as researchers from the scientific report. Unraveling this abstraction of the researcher from the human endeavor we call psychological science reveals a fascinating but obfuscated combination of instrumental and constitutive activities. To begin with, the researchers tell us only that they are interested in studying the “cognitive processes that generate honest and dishonest behavior” because “little is known about” them (Greene and Paxton 2009, 12506). This is a clear prima facie statement about the inherent value of scientific knowledge, and the absence of talk about practical application suggests that this knowledge may be good for its own sake. What is more revealing is that they say nothing at all about why knowledge regarding honest and dishonest behavior is worth having. Given the absence of an explicit rationale for this topic of study, it is reasonable to believe that Greene and Paxton thought its value was obvious. One can almost hear them saying that honesty is an important topic because it is central to human communication, intimacy, cooperation, and collaboration. But they do not take a stand on why honesty is an important topic. Yet they must believe that it is important, because conducting fMRI research is extremely resource intensive. Right away, these researchers’ ethical commitments to knowledge and to a better understanding of honesty emerge when the veil of abstraction through objectification is peeled back.

In contrast, Greene and Paxton very deliberately treated their participants instrumentally in three ways. First, individuals were invited to participate in the research, excluded if they did not fit the parameters of the study (18 participants were excluded, 35 were included), and dismissed when their usefulness was at an end. Participants were disposable means for obtaining data. Second, the participants were paid for their time. Third, without clarifying whether they appreciated the irony, the researchers deployed deception in their study of honest behavior. They rightly reasoned that in order to make the obvious opportunity to cheat plausible, they had to give the participants a cover story about the researchers’ interest in a paranormal ability to predict the future. Without this cover story, cheating would be less likely because the opportunity would be so obvious that the participants could not hope to give the appearance of honesty discussed above. The use of deception was clearly an instrumental action on the part of the researchers because they manipulated the participants into thinking that they had a non-obvious opportunity to cheat.

Did Greene and Paxton deceive their participants simply to attain an external end such as a publication, a research grant, or additional prestige? Such an interpretation would be very uncharitable. It is much more reasonable to believe that the purpose of the deception was to make it possible to gain scientific knowledge about honest and dishonest behavior, not just to pull a fast one on some research participants or gain a small increment of prestige through publication. They pursued their purpose of gaining knowledge through scientific procedures. Although there are many different procedures in psychological science, this science, as currently understood, is constituted by the practices of posing hypotheses, carefully designing study procedures, collecting data, conducting error-sensitive analyses, and drawing justified conclusions. In other words, the pursuit of scientific knowledge amounts to an intricate set of constitutive activities for the sake of an end that is good in itself. If this is accurate, then the instrumental deployment of deception was subordinated to the constitutive aim of scientific knowledge. Similarly, the inclusion, exclusion, and dismissal of participants, as well as paying them, were undertaken for the sake of knowledge. This hierarchy of ends has no place in Greene and Paxton’s thinking about the roles of honesty and deception, but it puts their deception into a framework that makes its purpose explicit and justifiable. To be sure, investigators have to justify deception in research to human subjects review boards in just this way, but the hierarchy of aims becomes particularly important in this study because it justified deception in a study of honesty.

Greene and Paxton want us to believe that they have produced genuine scientific knowledge. Interestingly, they presented evidence that their dishonest participants exerted themselves to appear honest, even though these participants were demonstrably dishonest. Are we to believe the same about Greene and Paxton? They demonstrate a willingness to deceive their participants, so if one had a suspicious bent, one might worry that they are deceiving their readers as well while taking care to make the deception plausible. There are, of course, three levels of assurance that they are not deceiving us about their research. First, they document their procedures meticulously, even the ones that cast some doubt on their findings. Second, this documentation is meant to make it possible for others to verify their findings through replication. Third, they are responsible for providing their data and analysis results if there is a reasonable doubt about the accuracy of their report. These safeguards are also constituent aspects of contemporary science, which places a very high premium on the verifiability of knowledge claims among knowledge purveyors who are quite capable of cheating.

Psychological Science as a Social Practice

This brings us full circle in recognizing that the practice of science is, in some central ways, not very different from ordinary human activity. Practicing psychological science is a massively collaborative endeavor involving researchers, assistants, participants, review boards, peer reviewers, editors, publishers, and readers. Science cannot be practiced in isolation, and it would have little value if the knowledge could not be shared. As in any cooperative endeavor, humans seek out others whom they believe to be honest cooperators, making the cooperation mutually beneficial. Given cooperative activities, some individuals will also attempt to cheat by appearing cooperative while actually freeriding. Freeriding through scientific fraud is painfully common. To make cooperation possible, human beings have evolved strong cheater detection capability and vigilance.

Scientists attempt to go beyond the honest signals of cooperative behavior to employ procedures designed to minimize the influences of personal bias and social influence. These procedures are generally described as supporting objectivity, but it is important to recognize that they are centrally founded on the recognition of the fallibility of human reason, on our vulnerability to bias, and on the possibility of cheating. As phenomenologists already understand philosophically, this argument makes it clear in researchers’ own terms that science does not give us a privileged view of truth that is guaranteed by method. Rather, our methods are specialized procedures that are useful in our pursuit of understanding within highly circumscribed parameters.

The accurate reporting of research findings is absolutely indispensable to science. From time to time, spectacular cases of scientific fraud remind us that we cannot take this honesty for granted. Given the enormously powerful incentives for fabricating results (e.g., academic tenure and promotion, grant funding, public notoriety, and political influence), it is no surprise that individuals would occasionally succumb to misrepresenting their findings or “massaging” their data. Yet it is clear that honest inquiry and reporting are constituents of science because without careful inquiry and honest reporting of procedures and results, there can be no science. Because knowledge is the goal of science, dishonesty is inimical to the practice of science. Dishonesty about one’s results is an act of bad faith because deceptive reporting undermines the very possibility of pursuing the most accurate account of a phenomenon. One could even say that scientists who willingly and consistently report procedures and findings accurately are enacting the virtue of honesty, and they may be seen as doing so out of a cultivated devotion to knowledge.

Psychological science and its methods can be very revealing as long as they are contextualized within a richer understanding of what it is to be human. Honesty and deception, instrumental and constitutive activity, and the pursuit of knowledge are not just aspects of scientific work but part of what it is to be human. Our humanity cannot be reduced to impersonal causal forces. For example, the fact that human beings are impelled to reproduce cannot obviate the joy and communion possible in a loving sexual relationship, the meaningful activity of parenting, or the principled choice not to have children. As Guignon has reminded us, what is decisive here is whether or not a person takes a stand on her relationships, how she projects herself into a future that grows out of her thrownness in a particular time and place. Even if one accepts that much is given biologically, such as a drive to reproduce, an inclination to cooperate or cheat, or the capacity to detect cheating, these givens are only the building blocks of a human life, not that life itself. As we have described it elsewhere, “our nature or being as humans is not just something we find (as in deterministic theories), nor is it something we just make (as in existentialist or constructionist views); instead it is what we make of what we find” (Richardson et al. 1999, 212; italics in original). Or, as Heidegger put it, “the ‘essence’ of Dasein lies in its existence” (BT, 67/SZ, 42; italics in original).

In taking a stand, the individual takes up the possibilities provided by biological nature and historical culture and makes them her own. If honesty and dishonesty are seen as possibilities available to all, then choosing one or the other repeatedly amounts to taking a stand on the question of honesty. Such repeated choices create a habit of honesty or dishonesty, which renders the action relatively effortless. This is especially true when a person makes the choices self-consciously, either by deciding to be an honest person or by deciding to be dishonest because it is a dog-eat-dog world and a clever person has to seek advantage through deceptiveness. And taking a stand on honesty is every bit as indispensable to being a good psychological scientist as it is to being a good friend. There is a strong parallel to Aristotle’s concept of habituation, through which one forms one’s character by repeatedly acting in a particular way. The aim of habituation is to form a settled character that makes it possible to spontaneously act in a virtuous way, without the kind of conflict between duty and desire that underlies Greene and Paxton’s Will hypothesis. Having taken a resolute stand on honesty or having the character strength of honesty would be perfectly compatible with Greene and Paxton’s Grace hypothesis. In their zeal to identify and document impersonal cognitive processes, these researchers apparently did not consider the rich explanatory power of stand taking or character.

A Broader Look at Psychological Science

Up to this point, my focus on a single study could give the impression that I have cherry-picked an instance that is particularly vulnerable to critique. It is now time to clarify that psychology as a discipline is in need of reformulation along phenomenological lines and to say something about what that might look like.

From its very beginnings, the discipline of psychology had strong leanings toward studying impersonal processes that could be described in terms of causal laws. In one of the first psychological laboratories, Hermann Ebbinghaus set out to study learning and memory through the use of nonsense syllables. He designed three-letter syllables to be meaningless, each beginning and ending with a consonant with a vowel between them, and he eliminated syllables that were similar to actual words. Ebbinghaus ([1885] 1913) published a monumental book describing his research, and several of the concepts he pioneered have remained important: the learning curve, the forgetting curve, and the spacing effect. Interestingly, he proceeded by being both experimenter and experimental subject. Yet all of his results were obtained by abstracting himself from the context of meaningful language and interaction because he wanted to study learning and memory purified of any personal or meaningful interference.

Others followed in Ebbinghaus’s footsteps, including Edward Thorndike, the American psychologist who placed cats and other animals in contrived puzzle boxes to “discover” that learning occurred through trial and error (1911). Thorndike promulgated a number of “laws of learning,” such as that learning is incremental, that it is the same process in all animals, and that it occurs as a result of rewarding outcomes. Many critics have pointed out that although there may be some value in these “laws,” their explanatory power diminishes rapidly outside the artificial and abstracted environment of the laboratory. Moreover, his findings are largely artifacts of his procedure of placing animals in an entirely foreign environment.

The propensity to abstraction is also readily apparent in social, developmental, educational, and personality psychology. Experimental procedures and control are the sine qua non of these sub-disciplines, meaning that psychologists continue to favor the kind of abstraction and impersonal explanatory approaches described in this chapter. To give one contemporary example, social exchange theory has been the most influential theory of romantic relationships in social psychology, and it suggests that people enter, stay in, and leave romantic relationships on the basis of whether they are getting the best exchange of rewards possible for them (Karney and Bradbury 1995). This theory has been updated as interdependence theory, which seems richer in including terms such as investment, commitment, trust, sacrifice, and satisfaction (Rusbult and Van Lange 1996). Yet the criterion for relationship quality is individual satisfaction, and this satisfaction is the primary causal source of everything else. Although satisfaction is subjectively experienced, these researchers focus on it as central to “the laws that govern the experience of interdependence irrespective of the specific outcome under consideration” (Rusbult and Van Lange 1996, 567). What they fail to notice is that concepts such as satisfaction, commitment, investment, and trust are far from being either culturally or historically universal. Therefore, the contemporary Western interpretation of romantic relationships is under investigation in their studies, not some set of universal laws of interdependence. Guignon and others have supported and guided a number of similar critiques of these unacknowledged assumptions in a wide range of psychological sub-disciplines, such as cognitive aggression theory, cognitive development, marital research and therapy, and positive psychology (Fowers 2008; Guignon 1998, 2002; Richardson et al. 1999; Richardson and Guignon 2008).

Re-envisioning Psychology

In this brief chapter, I can only outline a few of the ways that psychological science can be reformulated with insights from phenomenology. Some of this is drawn from a book-length treatment that promoted a re-envisioning of psychology (Richardson et al. 1999). The starting point for this reformulation is to overturn the excessive claims to objectivity and realism in psychology. The attempt to make psychology a science along the lines of the natural sciences is deeply ingrained, and the aim has been to develop a science of behavior that is focused on a mind-independent reality. The use of experimental research and an extreme emphasis on removing interpretation from measurement through the use of physiological and observational measures have been primary methods in this quest for objectivity.

The attempt to remove interpretation from a science of human behavior is bound to fail because humans are, by nature, self-interpreting beings. Human behavior is never interpretation-free, whether that is everyday activity, the behavior observed by researchers in laboratories, or the behavior in which researchers engage. The truth of this is both revealed and obscured in the elaborate methods psychologists use in their attempts to pin down the sources of behavior. Recall that Greene and Paxton set up an experimental contrast where participants either had or did not have an opportunity to cheat. This method is presented as interpretation-free, but psychologists are very careful to design their studies in ways that channel their participants’ interpretations of their experience to match the specific interests of the researcher. In other words, the experiment is an interpretation of a social situation in which cheating is relevant and either possible or not. Greene and Paxton constructed a cover story to make the cheating seem legitimate. This deception was necessary because the participants had to interpret the situation as one in which they could cheat without being detected. The researchers were well aware that cheating would be much less likely if it seemed obvious in the social situation. Greene and Paxton, like virtually all experimental psychologists, obscured these interpretive moves with a detached description of the experiment and by framing the object of study as impersonal cognitive processes. This suggests that psychological research in general is impossible without an extensive network of interpretations that constitute the experimental situation, the experimenter, and the experimental subject as specific elements of a highly elaborated, knowledge seeking endeavor.

The second important point is that the interpretations of the experimental situation are not simply those of an individual or small group. These interpretations are part of a shared historical context that constitutes the activities of knowledge seeking and the roles of researcher, subject, and audience. These activities and roles are outgrowths of a historical culture in which psychological science is a sensible pursuit. The idea that human beings can be studied in a way similar to the study of the natural world is a relatively new one. The practice of psychological research, in both its institutional form and the personal involvement of researchers and subjects, has been built up gradually from very humble beginnings to encompass tens of thousands of researchers, vast physical facilities, enormous funding, and rapt professional and public audiences. This enormous expenditure of time and money seems sensible to many people in the modern West, but it would not have been reasonable prior to the Enlightenment and the successes of the natural sciences, and it is still not comprehensible to people who have not been socialized to see it as legitimate.

Third, the very large expenditure of resources and the deep interest evident in professional and public audiences in contemporary Western societies demonstrate that knowledge about the sources of human behavior is extremely valuable. Knowledge is a very important human good, and its pursuit therefore has an ineliminable moral dimension. Individuals who are particularly drawn to this pursuit engage in a decade of training, undertake an arduous form of work, and receive relatively low salaries given their expertise and training. In other words, psychological science is a human endeavor like any other, and it is pursued for the sake of goods such as knowledge and the improvement of human life, even though status and income also figure in. No amount of abstraction, technical language, apparatus, or double-speak can alter the fact that psychological scientists pursue ends about which they care deeply. It is worth noting that the participants in Greene and Paxton’s study were pursuing worthwhile ends as well (as I would suggest participants in all studies are doing). The dishonest participants were attempting to maximize their financial outcomes from their participation, whereas the honest participants were enacting honesty either for the sake of trustworthiness and cooperative relationships or to follow conventional rules about honesty.

I began this chapter with the question of whether psychological science was in need of assistance from philosophical phenomenology. I have answered that question in the affirmative through an in-depth analysis of a single study and by extrapolating the results of that analysis to the science of psychology in general. I have argued that the self-understanding of psychological scientists as engaged in the fully objective pursuit of a mind-independent reality of human behavior is erroneous from the perspective of philosophical phenomenology. In my opinion, philosophical phenomenology clarifies that psychological research is a much more interesting and human endeavor than its self-understanding suggests.

My conclusion is not that psychological science is irredeemably flawed. There is much to be gained in studying human behavior, but the value and the accuracy of the understanding we seek will be limited and distorted by the misperception of science as wholly distinct from other human practices. By recognizing that psychologists study self-interpreting beings whose actions are predicated on widely shared interpretations and directed toward intelligible goods, we can enrich our understanding of scientific practices and results. This enriched understanding can still accommodate a robust form of objectivity in that we can still employ the best available shared norms of inquiry, do our utmost to get our facts straight, strive to understand and minimize personal biases and interests, and remain open to critique of our inquiries and the valuable insights of others. I hope to have clarified that psychological science cannot be understood as a disinterested, neutral account of human behavior. It is instead part of the ongoing human drama of expressing, questioning, and shaping a way of life that is devoted to understanding and pursuing what is worthy or admirable. The pursuit of knowledge is a deeply ethical endeavor that can help us to gain greater insight into and enactment of the good life for human beings, and a phenomenologically informed psychology can aspire to this most worthy of ends.