
In 1933, the same year the Nazi regime ascended to power, Stanley Milgram was born into a working-class Jewish family in the Bronx in New York City. During his formative years, Milgram was deeply troubled by the Holocaust. He later became a social psychologist and obtained a tenure-track position at Yale University. During the many Nazi war crime trials, “ordinary” Germans in the dock—like Adolf Eichmann in Israel—typically explained that in participating in the Holocaust they were just following higher orders. This led Milgram to wonder what would happen if he ran a social psychology experiment in which ordinary (American) people were ordered to inflict harm on another person. Would they also do as they were told? He designed a basic procedure that tested this question and soon afterwards had his students run the first pilot.

The results of the first pilot stunned Milgram—most subjects indeed obeyed orders to inflict what appeared to be intense shocks on an innocent person. Milgram immediately sensed he had captured essential elements of the Holocaust in the laboratory setting. He thereafter applied for funding to run an official research program so that he could better understand so-called obedience to authority. Milgram’s intentions were not entirely honorable—running such an innovative research program could greatly boost his then-precarious career prospects and financial security. Pre-tenure, Milgram told Jerome Bruner, a professor from Milgram’s graduate program at Harvard University, “My hope is that the obedience experiments will take their place along with...” contributions by the biggest names in social psychology: “Sherif, Lewin and Asch” (as cited in Perry, 2012, p. 57). Whatever drove Milgram on, he anticipated enormous benefits for both scientific knowledge and himself. So what exactly did he find? What follows is a basic overview of his two baseline procedures and the counterintuitive results they produced.

Milgram’s Baseline Experiments

The first official baseline experiment involved an actor posing as a potential subject. He entered a laboratory and encountered an apparent scientist (another actor, hereafter called the experimenter). The ostensible subject was then introduced to a waiting naïve, and actual, subject. The experimenter told both the actual and the supposed subject that the experiment they had volunteered to participate in was designed to investigate the effects of punishment on learning. One person was required to be the teacher and the other the learner. A rigged selection ensured that the actor/subject was always the learner, and the actual subject the teacher. The actual subject (now the teacher) watched as the experimenter secured the learner to a chair and attached an electrode to his arm. The learner was informed that the subject, using a microphone in another room, would ask him questions from a word-pair exercise. The learner could electronically transmit his answers to the subject’s questions.

The subject was then taken into an adjacent room and placed before the shock generator. This device had 30 switches arranged in 15-volt increments, ranging from 15 to 450 volts. The experimenter instructed the subject to give the learner a shock for each incorrect answer proffered, with each successive incorrect answer warranting a shock one level higher than the last. No shocks were actually administered.

Upon starting, the learner regularly provided incorrect answers and, as a result, acquiescent subjects quickly advanced up the switchboard. The experimenter responded to any signs of hesitancy by the subject with one or more of the following prods:

  • Prod 1: Please continue, or, Please go on.

  • Prod 2: The experiment requires that you continue.

  • Prod 3: It is absolutely essential that you continue.

  • Prod 4: You have no other choice, you must go on (Milgram, 1974, p. 21).

If the subject attempted to clarify the lines of responsibility, the experimenter asserted: “I’m responsible for anything that happens to him. Continue please” (p. 74). At the 300- and 315-volt shock switches, the learner banged on the wall and thereafter fell silent. This silence implied that the learner had, at the very least, been rendered unconscious. The experimenter then instructed the subject to treat all subsequent unanswered questions as incorrect and to inflict a shock at the next level. The experiment was deemed complete once the subject had administered three successive 450-volt shocks. Sixty-five percent of subjects (26 out of 40) inflicted every shock.

After running this experiment, Milgram and his research team ran 23 variations. For example, for the fifth experiment, Milgram decided to run a second, more radical “New” Baseline, in which, up until the 345-volt shock switch, the subject could clearly hear the learner’s increasingly distressed reactions (eventual panicked screams) to being “shocked.” The New Baseline condition also obtained a 65% completion rate, and thereafter became the model procedure on which all subsequent slight variations were based. During the final, 24th “Relationship condition,” subjects were encouraged to inflict increasingly intense shocks on an eventually screaming learner who was at least an acquaintance, often a friend, and occasionally a family member (see Russell, 2014b). To clarify, prior to the experiment’s start these learners were covertly informed of the study’s actual purpose (Will your friend follow orders to hurt you?) and then instructed on how vociferously to respond to their friend’s infliction of increasingly intense “shocks.” This particularly unethical variation saw the completion rate plummet to 15 percent.

Data collection took 10 months and involved a total of 780 subjects (Perry, 2012, p. 1). The amount of data collected was enormous. Despite this, to date, nobody has managed to develop a “conclusive” theory capable of accounting for Milgram’s findings (Miller, 2004, p. 233).

Why Did Most Subjects Complete the New Baseline?

Despite the theoretical drought, it seems many factors, some of which I will describe below, are (perhaps cumulatively) likely to have contributed to most subjects’ decision to complete the New Baseline experiment. The first such factor is termed “moral inversion” (Adams & Balfour, 1998, p. 20), whereby “something evil” (inflicting intense shocks on an innocent person) was converted by the experimenter into something “good” (advancing scientific knowledge on the effects of punishment on learning). The experimenter’s higher “scientific” goals meant that (apparently) the data had to be collected. As Milgram (1974, p. 187) put it, the infliction of harm comes “... to be seen as noble in the light of some high ideological goal” where, by inflicting shocks, “science is served.”

Another factor was the foot-in-the-door phenomenon, whereby persons are more likely to agree to a significant request if it is preceded by a comparatively insignificant one (Freedman & Fraser, 1966). For example, nearly every subject in the New Baseline inflicted the first six relatively light shocks (15–90 volts). In line with the foot-in-the-door phenomenon, doing so saw them comply with a small request which, unbeknownst to them, was about to be followed by some far more significant ones. The foot-in-the-door phenomenon is likely to have had two important consequences for subjects:

(a) it engages subjects in committing precedent-setting acts... before they realize the “momentum” which the situation is capable of creating, and the “ugly direction” in which that momentum is driving them; and (b) it erects and reinforces the impression that quitting at any particular level of shock is unjustified (since consecutive shock levels differ only slightly and quantitatively). (Gilbert, 1981, p. 692)

Across many small 15-volt steps, most subjects inflicted increasingly intense and eventually dangerous “shocks.”

Another factor likely to have influenced many subjects’ decisions to continue inflicting shocks was the undeniably coercive—even bullying—force of the experimenter’s prods. The force of these prods was probably amplified by the fact that the experimenter—a scientist—was closely associated with Yale University, a highly credible and authoritative institution of knowledge.

The final influencing factor I will discuss here was the experimenter’s offer to accept all responsibility for the subject’s infliction of further shocks. This offer enabled a subject to displace responsibility for their shock-inflicting actions onto the experimenter and provided the subject with an important self-interested benefit: if the subject was (apparently) not responsible for their actions, then they were under no obligation to stop the experiment. Consequently, the subject could, at the learner’s expense, avoid having to engage in the predictably awkward confrontation with the experimenter otherwise necessary to stop the experiment. That is, by accepting the experimenter’s offer, the subject could continue flicking the switches and—(apparently) absolved of all moral and legal culpability—simply blame the experimenter for their actions (see Russell & Gregory, 2011).

The most many obedient subjects were willing to do to help the learner avoid the intensifying “shocks” was to covertly sabotage the experiment by verbally emphasizing to the learner the correct answers to the questions. Thus, these subjects were willing to sacrifice the (apparently) all-important scientific pursuit of knowledge in favor of their self-interested desire to avoid a confrontation with the experimenter.

It seems the cumulative effect of these forces—moral inversion, the foot-in-the-door phenomenon, displacement of responsibility, and appeals to the subject’s self-interested desire to avoid a confrontation—probably caused most subjects to fall in line with the experimenter’s groupthink desires: inflicting further shocks was (apparently) essential. And once subjects had totally committed to doing as they were told, their passing of this moral Rubicon saw some engage in rather unusual behaviors. For example, on reaching the high end of the switchboard, some subjects started anticipating the learner’s screams and then attempted to talk over them, actively trying to avoid hearing (neutralize) the learner’s pained appeals. These subjects—more concerned about alleviating their own stress-related pain—did not want to know what they knew: that they had committed to hurting an innocent person. They preferred to remain, as Heffernan termed it in a previous chapter, willfully blind to this reality.

At the earliest opportunity, Milgram sought to publish, and eventually succeeded in publishing, the first official baseline experiment (1963). This publication, which mentioned the Holocaust in its first paragraph, garnered immediate media attention and with time became Milgram’s “best-known result” (Miller, 1986, p. 9). Because he thought he had captured key elements of the Holocaust in the controlled laboratory setting, Milgram likely expected the wider academic community to heap praise on his research. But the first scholarly response, by Diana Baumrind (1964) in the prestigious American Psychologist, was a scathing ethical critique that also questioned the external validity of the untenured Milgram’s experiment. Baumrind, for example, pointed out that, unlike German perpetrators during the Holocaust, Milgram’s typically concerned subjects clearly did not want to hurt their victim. Thus, she remained unconvinced by Milgram’s generalizations to the Holocaust. If Baumrind was right and no parallel to the Holocaust existed, then, as she also noted, Milgram had no justification for having exposed his subjects to, as stated in his 1963 article, the following torturous experience:

I observed a mature and initially poised businessman enter the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck, who was rapidly approaching a point of nervous collapse. (Milgram, 1963, p. 377, as cited in Baumrind, 1964, p. 422)

If Baumrind was correct and harm was inflicted on innocent people for no reason at all, then the running of the Obedience studies was perhaps an example of groupthink captured in the laboratory setting. That is, through conformity and/or a desire for group harmony, somehow Milgram’s many helpers—actors, research assistants, and technicians—all agreed to treat innocent people in an injurious and ultimately unethical manner. Other critics of Milgram, like Harré (1979, p. 106), have alluded to this potential groupthink connection:

Milgram’s assistants were quite prepared to subject the participants in the experiment to mental anguish, and in some cases considerable suffering, in obedience to Milgram. The most morally obnoxious feature of this outrageous experiment was, I believe, the failure of any of Milgram’s assistants to protest against the treatment that they were meting out to the subjects.

Milgram’s research notes dated March 1962 showed he was aware of the ironic parallel between the subjects’ seemingly harmful actions and his research team’s actually harmful actions:

Consider, for example, the fact --and it is a fact indeed, that while observing the experiment I ---and many others-- know that the naive subject is deeply distressed, and that…[it] is almost nerve shattering in some instances. Yet, we do not stop the experiment because of this […] If we fail to intervene, although we know a man is being made upset; why separate these actions of ours from those of the subject, who feels he is causing discomfort to another. And can we not use our own motives and reactions as a clue to what is behind the actions of the subject. The question to ask then is: why do we feel justified in carrying through the experiment, and why is this any different from the justifications that the obedient subject’s feel? (Stanley Milgram Papers, Box 46, Folder 163.) [Italics added]

With his unrelenting ambition to develop a psychological (individual) theory capable of explaining why most subjects behaved in “a shockingly immoral way” (Milgram, 1964, p. 849), Milgram never further pursued this more sociological (group) and no doubt disconcerting observation.

To unravel why Milgram’s research team agreed to inflict harm on innocent people, I would argue it is important to analyze the start-to-finish journey that led to Milgram’s destination: his perplexing Baseline/New Baseline completion rates. This “behind the scenes” view of the actual running of the experimental program is, I believe, capable of revealing some of the more important sociological forces that encouraged Milgram and his research team’s groupthink decision to collect a full set of ethically questionable data. It is no accident, as we shall see, that these group forces overlap with those that likely influenced the compliant subjects’ decisions to participate.

Group Forces Influencing the Research Team’s Agreement to Inflict Intense Stress on Innocent People

When, after the first pilot study, Milgram decided to pursue the official Obedience research program, an obstacle likely to inhibit the realization of such ambitions became increasingly apparent. That is, because subjects during the first pilot experienced, as stated in his research proposal, “extreme tension” (as cited in Russell, 2014a, p. 412), there was a risk some of the specialists whose help he needed to collect the official data might deem the research program unethical and refuse to fulfill their essential roles.

Milgram’s initial strategy to ensure that his research assistants, technicians, and actors all agreed to perform their roles was to encourage them to, as Fermaglich (2006, p. 89) put it, “view” the subjects’ obedience “as an analogue of Nazi evil.” Thus, much as he did with his subjects, Milgram morally converted “something evil” (imposing stress on the innocent subjects) into something “good” (generating scientific knowledge that might better explain perpetrator behavior during the Holocaust). The actor who most frequently played the role of the stress-inflicting experimenter, John Williams, for example, understood that despite his making “a man…upset,” data collection was of “tremendous value,” and thus the experiments “must be done” (as cited in Russell, 2014a, p. 416). Another example of moral inversion occurred when Milgram reassured his main research assistant, graduate student Alan Elms, that he did not need to worry about his “E[i]chman[n]-like” role of delivering a constant flow of subjects to the laboratory because they were all given “…a chance to resist the commands of a malevolent authority and assert their alliance with morality” (as cited in Blass, 2004, p. 99).

Although all helpers were encouraged to believe that they would be contributing to an important study, Milgram sensed that this in itself was not enough to secure everybody’s long-term services. Thus, when necessary, he bolstered his moral inversion of bad into good by anticipating and then appealing to his helpers’ sometimes differing self-interested desires. For example, Milgram offered actors Williams and James McDonough (the main “Learner”) a generous hourly rate (which Milgram increased three times within eight months), along with the offer of a cash bonus to be paid out once all the data had been collected (Russell, 2014a, p. 416). Milgram also paid Elms an hourly rate for his services, and he further strengthened the attractiveness of role fulfillment by supporting the graduate student’s emerging interest in the Obedience studies and publishing a journal article with him (see Elms & Milgram, 1966).

So, to promote involvement among all his helpers, Milgram basically applied what he suspected would prove to be the most successful individually tailored motivational formula: quid pro quo arrangements whereby benefits are provided in exchange for services rendered (Russell, 2014a, pp. 416–417). Armed with typically similar justifications, it appears Milgram’s helpers resolved the moral dilemma over whether or not to become involved in a potentially harmful study by becoming sufficiently convinced and/or opportunistically tempted into making their essential specialist contributions to data collection.

One might suspect that Milgram’s helpers would have felt anxious about potentially harming innocent people, especially after weighing this risk against the mere “scientific” and self-interested gains they hoped to obtain. This is especially so considering that, during the official collection of data, at least two subjects were placed under such intense stress that they later complained they thought they were going to have—or perhaps had—a heart attack (see Russell, 2009, pp. 104–105). Such concerns were alleviated, however, because as Milgram drew all his specialist helpers into role fulfillment, the issue of individual responsibility for the infliction of harm underwent a subtle yet powerful transformation. That is, after agreeing to perform their roles, all helpers unwittingly became links in an inherently stress-resolving and goal-directed, assembly line-like bureaucratic process.

To clarify, before the official research program could proceed, Milgram had to design and then construct an inherently bureaucratic organizational process that would enable his research team to systematically and efficiently extract data from 780 subjects. More specifically, “processing” involved training subjects, running the experiment, collecting data, and debriefing. For each subject, Milgram’s research team had to complete all of these tasks within a pre-determined one-hour block so that the stage, so to speak, could be reset before the next subject’s arrival at the top of the hour.

Intrinsic to all such bureaucratic processes is the division of labor (DOL)—where an organizational goal (in this case, collecting data) is subdivided into numerous tasks and then each of those tasks is allocated to a particular specialist functionary (Weber, 1976). For functionaries, however, this compartmentalization of tasks can cause a disjuncture between cause (for example, making partial contributions to Milgram’s goal of collecting a full data set) and any negative effects generated by goal achievement (the infliction of intense stress on subjects). Among all functionary helpers—so-called cogs in the organizational machine—this disjuncture between cause and effect can stimulate what Russell and Gregory term “responsibility ambiguity” (2015, p. 136). Responsibility ambiguity is a metaphorical haziness, which renders debatable which functionary helper is most responsible for any harm inflicted by the wider organizational process. Importantly, responsibility ambiguity makes it difficult for arbiters to later determine who should be held to account for such harmful outcomes. This haziness can render some functionary helpers genuinely unaware of their personal responsibility. However, this haziness can also enable others to opportunistically escape shouldering responsibility because they suspect that their harmful contributions will be rewarded in the short-term and, due to the availability of plausible deniability (“I didn’t know!”), never punished in the long-term. Therefore, it could be argued that the bureaucratic process structurally provided all of Milgram’s helpers with the “fog” of responsibility ambiguity (Russell & Gregory, 2015).

Perhaps the most common source of responsibility ambiguity among functionary helpers working across an organizational chain is the option to displace or “pass the buck” of responsibility for their harmful contributions elsewhere (Russell & Gregory, 2015). For example, had a subject been seriously injured during data collection, Williams, the stress-inflicting experimenter, could, if he so chose, have blamed Milgram for his actions: Williams was only following his employer’s instructions. Milgram, the principal investigator, was only undertaking the kind of ground-breaking research that prestigious universities like Yale pressured non-tenured faculty into pursuing: he too was only doing his job. Perhaps the funders of the research—the National Science Foundation (NSF)—or the chair of Yale’s Department of Psychology, Claude E. Buxton (Milgram’s boss), were most responsible: they ultimately allowed, desired, and legitimized Milgram’s research. The NSF and Buxton, however, did not directly hurt anyone, and they certainly never condoned Milgram’s pursuit of the particularly unethical Relationship condition. Perhaps, in the end, the reified ideological pursuit of “scientific knowledge” was mostly to blame. The point is, as soon as a bureaucratic process forms, it suddenly becomes possible for all functionary helpers to blame someone or something else for their contributions to a harmful outcome. And because “others” were involved, it seems all sensed they could make their individual contributions with probable impunity. Having realized this, every helper thereafter needed only to concern themselves with reaping the personal benefits on offer for making their specialist contributions. This may help explain why Milgram’s helpers risked partaking in such a potentially dangerous experiment.

Another subtle yet powerful effect the DOL can have on functionary helpers is termed bureaucratic momentum (Russell & Gregory, 2015). Bureaucratic momentum has usually taken hold when functionaries experience pressure from preceding, and sometimes succeeding, functionary links across an organizational chain to perform their specialist roles. This coercive force appears to be generated by the cumulative momentum of the many simultaneously moving functionary “cogs” bearing down and exerting pressure on one another. Functionary links often experience this coercive force in the form of peer pressure: “to get along” one must “go along.” For example, in fear of causing a bottleneck or delay in organizational goal achievement, employees on a factory assembly line typically feel pressure to quickly fulfill their specialist roles. A single uncooperative functionary can—say, because of moral reservations—resist such pressure, although doing so is rare because they must sacrifice whatever self-interested benefits they might otherwise have received for performing their specialist role. Such resistance also deprives other (potentially angry) functionaries of whatever benefits they anticipated receiving for organizational goal achievement. It is less stressful for everybody involved if all give in to the momentum of role fulfillment and just do their bit for goal achievement.

Bureaucratic momentum is likely to have had an influential effect during the Obedience studies. For example, to please his funders at the NSF, Milgram likely felt pressure to collect—despite any emerging ethical reservations—a full set of data. Doing so, however, required the long-term retention of the experimenter’s acting services. In return for being retained over a long period of time, the experimenter—despite any emerging ethical reservations—likely felt contractually obliged to continue placing subjects under enormous stress. And, of course, it could be argued that the experimenter’s seemingly unrelenting prods, such as it being “absolutely essential” that the subject “continue” inflicting more shocks, saw—despite any emerging ethical reservations—the transfer of bureaucratic momentum to the last functionary link in the Obedience study’s data-collecting organizational chain.

The final group force I will mention here that likely influenced Milgram’s research team was (again) the foot-in-the-door phenomenon. For example, it could be argued that after Milgram’s research team agreed to undertake the first official and, relatively speaking, benign Baseline condition (where the learner banged a few times on the wall), the team became more amenable (or perhaps desensitized) to undertaking the fifth, more radical New Baseline experiment (where an increasingly hysterical learner suddenly went silent). And with the entire research team having agreed to undertake the more radical New Baseline, they became more amenable still to undertaking the most radical, 24th and final Relationship condition where, as mentioned, subjects were pushed to inflict severe “shocks” on someone who was at least an acquaintance, often a friend, and sometimes a family member. The point being, it is unlikely Milgram’s helpers would have had the nerve to run the Relationship condition at the start of the data collection process. The slippery slope of the foot-in-the-door phenomenon—small and barely perceptible steps in an increasingly radicalized direction—likely had a powerful influence on those working within the Obedience study’s data-collecting bureaucracy.

In summary, much like with the obedient subjects, the forces of moral inversion, receiving self-interested benefits, displacement of responsibility, bureaucratic momentum, and the foot-in-the-door phenomenon all (perhaps cumulatively) likely exerted an influence on the research team’s groupthink decision to collect a full set of ethically questionable data.

Prioritization of Milgram’s Self-Interests over the Scientific Pursuit of Knowledge

It seems Milgram decided to run the experimental program because he believed the benefits—greater knowledge of humankind’s destructive tendency to obey—outweighed all the costs. As he said in the draft notes of his 1974 book:

Under what conditions does one ask about destructive obedience? Perhaps under the same conditions that a medical researcher asks about cancer or polio; because it is a threat to human welfare and has shown itself a scourage [sic] to humanity. (As cited in Russell, 2009, p. 104).

But when Milgram decided to pursue his research program, it seemed the only people faced with paying any “costs” would be his obedient subjects (who, as far as he was concerned, only got what they deserved for, as mentioned, failing to “assert their alliance with morality”). He, on the other hand, could only envision personally benefiting from running the official experiments. But after the publication of Baumrind’s (1964) critique, this all suddenly changed.

Baumrind’s critique threatened to see his research branded unethically abusive and perhaps even held the potential to destroy his fledgling academic career. With his personal self-interests suddenly on the line, Milgram realized he might have to pay a high price for his earlier decision to proceed with the study. With his back against the wall—and much like those subjects who attempted to sabotage his experiments—Milgram also started prioritizing his self-interests over and above the so-called importance of generating scientific knowledge. That is, post-Baumrind, Milgram set about protecting his personal interests by compromising the accuracy of the knowledge he had collected—what he did and found during data collection—by massaging the truth, omitting certain facts, and even telling complete lies. Thus, as in many examples of groupthink, the emergence of certain negative outcomes was followed by a carefully calculated cover-up.

For example, despite encountering subjects complaining about their hearts, in his response to Baumrind (and repeatedly thereafter) a perhaps willfully blind Milgram (1964, p. 849) described his subjects’ stress as mere “momentary excitement,” a sudden change in tone that Patten (1977, p. 356) observed to be “a most astonishing about-face.” In his book, Milgram noted that before each trial subjects had to sign “a general release form, which stated: ‘In participating in this experimental research of my own free will, I release Yale University and its employees from any legal claims arising from my participation’” (1974, p. 64). But what he failed to disclose was, as stated in his personal notes, “The release, of course, was not used for experimental purposes, but to protect us against legal claims” (as cited in Russell, 2014a, p. 418). If Milgram honestly believed his experiments only caused “momentary excitement,” why did he need legal protection?

Another omission was that although, before Baumrind’s critique, Milgram had promised to publish the Relationship condition’s results, after her critique he mysteriously never mentioned the variation again (Russell, 2014b). Of course, if Baumrind’s critique of the relatively benign first Baseline could, as Milgram clearly sensed, threaten the reputation of his research, one can only imagine the ethical firestorm she would have unleashed on him had he published a variation where some subjects were pushed into inflicting harmful “shocks” on a relative. In terms of outright lies, Milgram counter-critiqued Baumrind for confusing “the unanticipated outcome of an experiment with its basic procedure,” elaborating that “the extreme tension induced in some subjects was unexpected” (Milgram, 1964, p. 848). Milgram said this despite having earlier undertaken numerous pilot studies in which, as mentioned, some subjects experienced what he termed in his research proposal “extreme tension.”

It can therefore be argued that Milgram’s self-interests—protecting his name, career, and the ethical reputation of his world-famous experiments—ended up being prioritized over (and thus ultimately corrupted) his espoused purist beliefs surrounding the so-called scientific pursuit of knowledge. Cementing this chapter’s focus on the overlap between individual and group behavior during the Obedience experiments, at some level Milgram self-reflexively sensed a connection between his obedient subjects’ self-centered decisions to prioritize their personal interests over the well-being of the learner and his own prioritizing of his self-interests over the subjects’ well-being:

Moreover, considered as a personal motive of the author --the possible benefits that might redound to humanity --withered to insignificance along side [sic] the strident demands of intellectual curiosity. When an investigator keeps his eyes open throughout a [scientific] study, he learns things about himself as well as about his subjects, and the observations do not always flatter. (As cited in Russell, 2009, p. 186)

Conclusion

Milgram naturally viewed himself as a detached, objective, and scientific observer of destructive social behavior. That is, he set up an experiment but perceived himself to be independent of the results it produced. He failed, however, to sense his own highly involved, non-scientific role in the social engineering of those results. Two particular factors he remained oblivious to were, first, the subtle power inherent in the data-extracting bureaucratic process he constructed (and the necessary role it played in helping generate his surprising results—a key structural force that likely explains much of the ironic overlap in group and individual behavior). The largely invisible force of bureaucratic organization no doubt plays a key role in helping socially engineer many other “real life” examples of groupthink behavior—particularly because of its ability to promote, among all functionary links across the chain of command, feelings of responsibility ambiguity. Second, Milgram was largely unaware of the important role that his and his research team’s self-interests played in both helping generate the surprising results and corrupting their scientific pursuit of new knowledge. This last point may have implications that extend beyond Milgram’s laboratory walls. For example, what role did the pushes and pulls of bureaucratic organization and personal self-interest play in stifling dissent among some of the scientists working on the Manhattan Project? Finally, I am confident that Milgram’s dissectible research—somewhat uniquely captured in the (semi)controlled social science laboratory—is likely to provide scholars with great insights into the inner workings of other, more contemporary examples of highly destructive and seemingly unstoppable groupthink behavior, such as climate catastrophe.