1 Introduction: How Our Behaviour Determines How We React to Uncertainty

Throughout the earlier chapters in this book we have alluded, directly and indirectly, to how individuals and social groups see, formulate, and interpret information. In this chapter, I’ll be addressing in more detail those aspects of human behaviour which influence our ability to make objective decisions, such as cognitive biases, misapplied heuristics, cognitive dissonance, social anomie (and alienation), as well as the fears humans have in relation to uncertainty.

For example, in Chap. 4 we saw how Pedbury (2019) stated that surprises come from places people are not looking at and that many organisations are focused too much on predicting the expected future, those high probability, high impact developments that could disrupt their operations. It was also highlighted that cultures exist within organisations that militate against addressing new challenges to current policy, acting as barriers to foresight.

I’m now going to turn everything we’ve discussed about the uncertainty profile—if not on its head—then on its side!

Let us remind ourselves of the major components of the risk/uncertainty profile template initially developed in Chap. 2. This template reflects an attempt to establish an objective interpretation of the different profiles of uncertainty. As discussed, quadrant 3 is the main problem area: why is this so, how can we mitigate its impact, and in doing so how can we alter our perception of uncertainty so that a more rational response can be formulated?

Quadrant 3, or the “unknown-knowns”, is where the most problematical interpretation of uncertainty manifests itself. This is largely due to human perception of uncertainty, or rather its misconception. The challenge for management is to recognise how its own behaviour can influence how it interprets and faces up to situations containing high levels of uncertainty. More crucial is why people (including individuals, groups, management, and organisations) are all too ready to classify uncertainty as an “unknown-known” or an “unknown-unknown” when the vast majority of future events can be visualised so that they reside in quadrant 2—the “known-unknowns”. In this category, uncertain events can be visualised; they may happen, and thus contingency plans can be developed, reducing the impact of surprises. As has been highlighted earlier—if we can think it—it can happen!

In essence we need to be able to transform Q3 behaviour into Q2 behaviour—and in the process eliminate, or at least mitigate, cognitive biases and other behavioural characteristics in order to develop hierarchy-based options as “notional probabilities”. Quadrant 3 is where we humans “mess things up”, or rather put barriers in place to adopting a more rational approach to uncertainty, in spite of thinking of ourselves as rational beings.

2 The Fallacy of the Rational Man

Conventional wisdom in classical economics is that humans are rational actors who make decisions and behave in ways that maximise advantage and usefulness whilst minimising risk.

Much of modern thought concerning decision-making under uncertainty has strong behavioural roots. Psychology professors Daniel Kahneman and the late Amos Tversky (1982) developed the foundations of the area of research known as “cognitive bias”. Their work challenges traditional economic theory, stemming from the days of Adam Smith, concerning the “rational man”, whereby people make rational choices based on self-interest. The research carried out by Kahneman and Tversky indicated that people often fail to fully analyse situations when required to make complex judgements. They found that people and organisations fall back on rules of thumb (known formally as “heuristics”) as opposed to rational analysis. Moreover, such decisions are based on historical experience, fairness, or aversion to loss rather than on more formal economic considerations.

Kahneman and Tversky succinctly demonstrate the impact of cognitive biases in decision-making—especially when exacerbated by the state of uncertainty at the time of making that decision. In the quotation below they highlight the relationship between the availability of data and the degree of uncertainty ascribed to a situation.

The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. For example, the apparent distance of an object is determined in part by its clarity. The more sharply the object is seen, the closer it appears to be. This rule has some validity, because in any given scene the more distant objects are seen less sharply than nearer objects. However, the reliance on this rule leads to systematic errors in the estimation of distance. Specifically, distances are often overestimated when visibility is poor because the contours of objects are blurred. On the other hand, distances are often underestimated when visibility is good because the objects are seen sharply. Thus, the reliance on clarity as an indication of distance leads to common biases. Such biases are also found in the intuitive judgment of probability. (Tversky & Kahneman, 1974)

Other analysts have come to similar conclusions but from different disciplines.

3 The Conundrum of Bias

Any form of analysis, whether carried out by individuals or groups, can fall victim to cognitive biases and intuitive traps. Cognitive biases can be characterised as the tendency to make decisions and take action based on limited acquisition and/or processing of information, or on self-interest, overconfidence (hubris), or attachment to past experience.

Randolph Pherson (2019), a former CIA intelligence analyst, states that:

How a person perceives information is strongly influenced by factors such as experience, education, cultural background and what that person is expected to do with that data. Our brains are trained to process information quickly, which often leads to processing data incorrectly or not recognizing its significance if it doesn’t fit into established patterns. Such short-cuts in our thinking processes are called heuristics—experience-based techniques that quickly produce a solution that is often good enough to solve the immediate problem.

As identified earlier, another term used to explain a heuristic is “rule of thumb”.

Under conditions of uncertainty, where there is little or no data, we increasingly fall back on heuristics in an attempt to reduce this uncertainty—and therein lies the rub, as this allows our biases to insert themselves into our decision-making processes. Using such “rules of thumb” when addressing problems can “lead to cognitive biases and prevent analysts from accurately understanding reality even when they have all the data and evidence needed to form an accurate view” (Pherson, 2019). Pherson adds that such misapplied heuristics and intuitive traps are quick to form but hard to correct, and that:

After one’s mind has reached closure of an issue even a substantial accumulation of contradictory evidence is unlikely to force a reappraisal—tending to ignore or dismiss outlier data as ‘noise’.

It is therefore apparent that such biases manifest themselves amongst decision actors when faced with Q3 (unknown-known) situations, and it can be argued that much poor decision-making reflects such behaviour. If, however, biases can be identified, challenged, and overcome, decision-making to mitigate uncertainty can be brought back to Q2 status—the “known-unknowns”.

Our friends at Wikipedia identify some 188 different kinds of cognitive bias, broken down into three main groups:

  • Decision-making, belief, and behavioural biases—119

  • Social biases—27

  • Memory errors and biases—42

The area of concern in relation to decision-making under uncertainty is the first group. Again we see a great proliferation of approaches—in effect, topic overload. How can we manage to identify the more prominent and impactful ones?

US psychologist Dr. J Taylor (2013) points out that Kahneman and Tversky “argue that cognitive biases can result in perceptual blindness or distortion (seeing things that aren’t really there), illogical interpretation (being nonsensical), inaccurate judgements (being just plain wrong), irrationality (being out of touch with reality), and bad decisions (being stupid). The outcomes of decisions influenced by cognitive biases can range from the mundane to the lasting to the catastrophic”.

Taylor goes on to say that cognitive biases can be broadly placed in two categories: information biases and ego biases. Information biases include the use of heuristics, or information-processing shortcuts, that produce fast and efficient, though not necessarily accurate, decisions, and not paying attention to, or adequately thinking through, relevant information.

Ego biases include emotional motivations, such as fear, anger, or worry, and social influences such as peer pressure, the desire for acceptance, and doubt that other people can be wrong. He identifies (Taylor, 2013) 12 cognitive biases that appear to be most harmful to decision-making, notably in the business world; these reflect a number of the biases presented by Kahneman and Tversky.

Information biases include:

  • Knee-jerk bias: Make fast and intuitive decisions when slow and deliberate decisions are necessary.

  • Occam’s razor bias: Assume the most obvious decision is the best decision.

  • Silo effect: Use too narrow an approach in making a decision.

  • Confirmation bias: Focus on information that affirms your beliefs and assumptions.

  • Inertia bias: Think, feel, and act in ways that are familiar, comfortable, predictable, and controllable.

  • Myopia bias: See and interpret the world through the narrow lens of your own experiences, baggage, beliefs, and assumptions.

Ego biases include:

  • Shock-and-awe bias: Belief that our intellectual firepower alone is enough to make complex decisions.

  • Overconfidence effect: Excessive confidence in our beliefs, knowledge, and abilities.

  • Optimism bias: Overly optimistic, overestimating favourable outcomes and underestimating unfavourable outcomes.

  • Homecoming queen/king bias: Act in ways that will increase our acceptance, liking, and popularity.

  • Force field bias: Think, feel, and act in ways that reduce a perceived threat, anxiety, or fear.

  • Planning fallacy: Underestimate the time and costs needed to complete a task.

Pherson (2019) makes a good attempt at identifying some key biases, misapplied heuristics, and intuitive traps. I’ve taken the Pherson list of cognitive biases (and misapplied heuristics and traps) as a useful set to be aware of at the early stages of projects, and have attempted to reduce it to an even smaller set (15) of the most common. These are identified in bold italics (Fig. 9.1).

Fig. 9.1 Key biases (a three-column table listing biases, misapplied heuristics, and intuitive traps)

Let’s examine the highlighted (italicised) items in more detail.

Cognitive biases (selection of biases that can impede analytic thinking)

  • Evidence acceptance bias: Accepting data as true without assessing its credibility because it helps create a more coherent story.

  • Confirmation bias: Seeking only the information that is consistent with the lead hypothesis, judgement, or conclusion.

  • Hindsight bias: Claiming to see past events as being predictable at the time those events happened.

Misapplied heuristics (heuristics that, when misapplied, can impede analytic thinking)

  • Anchoring effect: Relying too heavily on one piece of information and accepting a given value of something unknown as the starting point for generating an assessment.

  • Desire for coherence and uncertainty reduction: Seeing patterns in random events as systematic and part of a coherent world (ref CLA).

  • Groupthink: Occurs when a group of individuals desires conformity or harmony, leading to members trying to reduce conflict and reach a consensus without critical evaluation of other viewpoints, often by suppressing dissenting opinions.

  • Premature closure: Stopping the search for a cause when a seemingly satisfactory answer is found before sufficient information is collected and proper analysis can be performed.

  • Satisficing: Selecting the first answer that appears “good enough” (a form of Occam’s razor).

Intuitive traps (examples of common mistakes made by practitioners)

  • Assuming a single solution: Thinking of only one likely (and predictable) outcome instead of acknowledging “the future is plural” and several outcomes should be considered.

  • Confusing causality and correlation: Inferring causality inappropriately and assuming that correlation implies causation.

  • Ignoring inconsistent evidence: Ignoring or discarding information that is inconsistent with what one expects to see.

  • Ignoring the absence of information: Not addressing the impact of the absence of information on analytic conclusions.

  • Over-interpreting small samples: Making conclusions based on sample sets that are too small.

  • Projecting past experiences: Assuming the same dynamic is in play when something appears to be in accord with an analyst’s past experiences.

  • Rejecting evidence: Continuing to hold a judgement when confronted with a mounting list of contradictory evidence (cognitive dissonance).

  • Hubris: Arrogance born of overweening pride—usually based on erroneous information.

3.1 Bias and Expert Opinion

One can also challenge the reliance on “expert opinion”. A Financial Times (2011) article cited how Raghuram Rajan warned of the looming financial crisis prior to 2008 but was ignored by leading central bankers. Post-crisis, Rajan argued that economists had all the models required to understand the credit crisis, but that the subject suffers from being segregated into increasingly narrow fields, which has led to a dearth of generalist experts capable of connecting all the various strands. This concern about so-called expert opinion has been most succinctly expressed by Tetlock (2005), who explores why experts are so often wrong when predicting future events and are often no better than the informed amateur. Again the issue of complexity is highlighted, and Tetlock asks whether we are living in a world which is too complex to understand, or whether we are simply not using the proper tools which would allow us to understand and predict social, economic, and political phenomena. Tetlock (re-cycling Isaiah Berlin’s 1950s classification) breaks down experts into two main behavioural groups: “hedgehogs”, who hold strong, definite views which are rarely changed, and “foxes”, who tend to view matters over a broader spectrum and whose opinions are often changed as new evidence becomes apparent. “Foxes” tend to work incrementally, whereas “hedgehogs” can be wrong for long periods but occasionally right for the big event. However, neither group seems significantly competent when forecasting disruptive events, because doing so is so inherently difficult. Yet there is still strong resistance to accepting the inaccuracy of forecasting, and the hegemony of the expert is rarely challenged. Under such circumstances we see that, whilst many experts claim to be objective in their judgement and scrupulous in applying scientific rigour, they can be as prone as anyone to under-acknowledge or even deny the biases which are part of their personal psyches. See Chap. 13 for some recent research on expert levels of accuracy.

3.2 Bias and the Determination of the Future

It was noted in the previous section how individual behavioural traits can lead even experts astray when attempting to determine the future, whether in the form of shorter-term forecasts or longer-term foresight predictions.

If quantitative analysis has been on the front foot in much recent (financial and economic) history, this is not due to a lack of research into the boundaries and limitations of qualitative methods. Rather, vested interests have driven much analytic research towards satisfying the demands of the business and political communities, which demand high levels of perceived precision and certainty that they see as only achievable through a highly “mathematical” or causal approach to problem-solving. Such communities tend to be attracted to short-term solutions because they work within short time horizons—e.g. the next quarter’s results. The short-term nature of the time horizon seduces these policy-makers into thinking that they can control events within such a short timescale, leading to silo mentalities and setting in motion a vicious myopic circle.

Makridakis and Taleb (2009) confirm the existence of these entrenched views in behavioural terms, identifying them as a resistance to accepting the inaccuracy of forecasting by inflexible decision-makers, aggravated by large numbers of academics who feed off such beliefs—a pretty damning indictment of hubris compounded by myopia (aka stupidity). Our old friend “the illusion of control”.

Two earlier papers by Makridakis (1982) set out to argue, on the one hand, the inability to “accurately predict economic and business events, and to accept such an inability, instead of illusorily relying on the predictions being correct when planning and formulating strategies” and, on the other, “the inability of economists to forecast forthcoming recessions that were often confirmed long after they had started.... However, the majority of academics and business people were not willing to accept these findings, and instead preferred the illusion of control, pretending that accurate forecasting was possible.”

What is striking here is that this research was conducted in the early 1980s—nearly three decades before the 2008 crisis which has so revitalised our thinking about accuracy and uncertainty.

Makridakis, Taleb, and others are not alone. The over-reliance on quantitative methods has been questioned by a growing number of science-trained academics (again, before 2008), who observed that:

The mathematical modelling community believed so strongly in models that it insisted on using them even when there was no scientific basis for their application (Pilkey & Pilkey-Jarvis, 2007).

John Kay (2008), writing in the London Financial Times in the wake of the autumn 2008 financial meltdown, commented on the earlier false belief that computing was the panacea for accurate forecasting, a belief that has since been belatedly challenged, noting that “we now understand that economies are complex, dynamic, non-linear systems in which small differences to initial conditions can make large differences to final outcomes....”. Kay refers to a comment attributed to John Maynard Keynes that it is usually better to be conventionally wrong than unconventionally right—a variant of what John Vanston (2007), an American forecaster, said: “...precision and the future are incompatible terms. In essence it is far better to be approximately right than precisely wrong”.

There is a growing body of informed comment that concurs with this stance. It thus comes as no surprise that it was the Crash of 2008 that forced analysts to re-assess the validity of, or rather the excessive reliance on, established (and largely quantitative) methods and processes. As we have noted, such clarity has still not pervaded all practitioner areas.

Apart from a significant amount of anecdotal observation, the earlier references (Makridakis & Taleb, 2009) suggest that the refusal of too many practitioners and academics to contemplate the inaccuracy of forecasting and quantitative methods resides not only at an individual level but also at an organisational level. This phenomenon has been termed having a “silo mentality”—where many organisations are populated by “hedgehogs” (who resolutely continue to attempt crossing the road until flattened by a vehicle appearing suddenly). Other phenomena of a similar ilk have been termed “following the herd” (Keynes’ comment about being conventionally wrong rather than unconventionally right) or expressed as “no-one ever got fired for buying IBM (or hiring McKinsey)”—in effect, being constrained by orthodoxy. Uncertainty scares people—we don’t like it—so in many cases we ignore it, often with disastrous results. These individual characteristics, when transposed to the organisation, create inflexibility and a real lack of foresight.

An interesting manifestation of the fragility of quantitative performance measures, when impacted by behavioural factors, has been identified by Princeton psychologist John Darley. In 1994, Darley postulated, in what has become known as “Darley’s Law”, that:

The more any quantitative performance measure is used to determine a group’s or an individual’s rewards and punishments, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the action patterns and thoughts of the group or individual it is intended to monitor. ...The critical control system unleashes enormous human ingenuity. People will maximize the criteria set. However, they may do so in ways that are not anticipated by the criterion setters, in ways that destroy the validity of the criteria. The people make their numbers but the numbers no longer mean what you thought they did.

In effect Darley says that ethical problems (which we cannot quantify) are almost always inherent in systems designed to measure performance. We all know what the examples are: body count manipulation in the Vietnam War (via Robert McNamara, formerly a Ford executive), the pilot in “Catch-22” who manipulates the performance measurement system by flying safe routes, sales force commission plans, and, you’ve guessed it, bankers’ bonuses—a beautiful set of unintended consequences.
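
Darley’s point can be caricatured with a toy calculation. The sketch below is a purely hypothetical model (the effort split, the coefficients, and the simulate function are my own invented assumptions, not drawn from Darley’s work): it shows how a reported score can keep rising while the underlying value it was meant to track falls away, once people divert effort towards making the number rather than doing the work the number was supposed to measure.

```python
# Illustrative sketch only: a toy model of metric gaming in the spirit of
# Darley's Law. All coefficients and the effort split are invented.
import random

random.seed(42)

def simulate(effort_on_metric: float, periods: int = 20):
    """Split effort between 'making the number' and doing genuinely useful work."""
    measured, true_value = 0.0, 0.0
    for _ in range(periods):
        noise = random.uniform(0.9, 1.1)
        # The reported score responds strongly to gaming effort...
        measured += noise * (1.0 + 3.0 * effort_on_metric)
        # ...but real value comes mostly from the work that is not measured.
        true_value += noise * (1.0 + 3.0 * (1.0 - effort_on_metric))
    return measured, true_value

for effort in (0.1, 0.5, 0.9):
    score, value = simulate(effort)
    print(f"effort spent gaming the metric: {effort:.0%}  "
          f"reported score: {score:6.1f}  underlying value: {value:6.1f}")
```

The numbers are meaningless in themselves; the point is the divergence: the people make their numbers, but the numbers no longer mean what you thought they did.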

Finally, Gowing and Langdon (2017) ask “why are ‘unthinkables’ not thought about?” Having questioned a number of corporate and government leaders and decision-makers as to why it is so difficult to think about the unthinkable, they identified 9 recurring words and phrases, these being:

  • Being overwhelmed by multiple, intense pressures

  • Institutional conformity

  • Wilful blindness

  • Groupthink

  • Risk aversion

  • Fear of career limiting moves

  • Reactionary mind-sets

  • Denial

  • Cognitive overload and dissonance

The earlier section of this chapter, which identifies a number of cognitive biases and heuristic traps, illustrates why the above items will not come as a surprise. The constraints to thinking about the unthinkable—but still possible—are largely behavioural and are barriers to the development of quality scenario narratives.

3.3 Bias and the Media, Bias Clusters, and “Le Defi Objectif”

Over the last few years there has been increasing attention paid to the role of the media in helping to develop, increase, and reinforce a number of cognitive behaviours such as bias and dissonance in both individuals and groups—generally with negative impacts. The rapid expansion of Internet-driven social media has been a notable feature of this new century.

This growing range of media options, not just via the Internet but also through the profusion of cable channels targeting niche interests, allows people to selectively expose themselves to particular media messages. In effect, they can control what they do and do not want to see at a much higher level of granularity, which all too often means consuming only media consistent with their own point of view. Such a trend leads to a stifling of any form of objective opinion-forming.

Social media has increasingly been seen as a corrosive influence on opinion-forming, where bias clusters have arisen to reinforce pre-established biases in individuals whilst acting as a barrier to alternative viewpoints which could help to “fact-check” the often highly entrenched positions of such clusters. The tendency to seek out similar ideas with which we are more comfortable is no longer restricted to physical geographic communities such as a town or city; the much broader reach across geographic boundaries that social media afford strengthens the bond between individuals and groups with similar mind-sets and perspectives—ranging from a love of a particular pop group to more insidious and darker interests such as various forms of fundamentalist political dogma and religious views.

What continues to strengthen the role and influence of bias clusters are the algorithms that social media use to reinforce similar behaviour patterns and attitudes, to the detriment and exclusion of alternative opinions from those who do not conform to our ideas and lifestyles. Algorithms can be thought of as the fuel tank of search engines, social media websites, recommendation engines, online retail, online advertising, etc. There is increasing concern amongst social scientists and commentators as to the so-called neutrality of these algorithms. The term “algorithmic bias” has now entered the lexicon. Authors such as Sara Wachter-Boettcher (2017), Steffan Mau (2019), and Cathy O’Neil (2016) identify the foibles and dodgy coefficients of a number of algorithms which incorporate the biases of their creators—such as in credit scoring across various social and indeed racial groups.
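
To make the feedback loop concrete, here is a deliberately naive, hypothetical feed-ranking sketch (the catalogue, the click model, and the rank_feed function are all invented for illustration; no real platform’s algorithm is implied). Ranking purely by what the user has clicked before means that content matching the user’s existing viewpoint keeps rising to the top, while the alternative viewpoint effectively disappears from the feed.

```python
# Illustrative sketch only: a naive "show more of what you clicked" ranker.
# Items, viewpoints, and the click behaviour are invented for demonstration.
from collections import Counter

# A catalogue of 20 items split evenly between two competing viewpoints.
catalogue = [{"id": i, "viewpoint": vp} for i, vp in enumerate(["A", "B"] * 10)]

def rank_feed(catalogue, click_history, size=5):
    """Rank items by how often the user has clicked that viewpoint before."""
    affinity = Counter(click_history)
    return sorted(catalogue,
                  key=lambda item: affinity[item["viewpoint"]],
                  reverse=True)[:size]

# A user who starts with only a mild preference for viewpoint "A"...
clicks = ["A", "A", "B"]
for round_no in range(1, 5):
    feed = rank_feed(catalogue, clicks)
    shown = [item["viewpoint"] for item in feed]
    # ...clicks what they already agree with, which the ranker then rewards,
    # narrowing the next feed even further.
    clicks.extend(vp for vp in shown if vp == "A")
    print(f"round {round_no}: feed shows {shown}, clicks so far {dict(Counter(clicks))}")
```

Even this crude rule reproduces the echo chamber that Pringle describes below: the more the user clicks, the less of the other side they are ever shown.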

In a recent CBC news blog, Ramona Pringle, an associate professor at Ryerson University, specifically addressed the issue of social media blinding us to other points of view (Pringle, 2016). In the blog she refers to research from Columbia University showing that users tend to click links that affirm their existing opinions. “Facebook is designed to prevent you from hearing others,” says media scholar Douglas Rushkoff. “It creates a false sense of agreement, a ‘confirmation bias’ when you are only seeing stuff that agrees with you or makes the other side look completely stupid.”

In essence she confirms that the problem with much social media is that, when an individual’s pre-existing opinions shape the news he or she wants to see, they are not getting an accurate picture of what is really happening. Pringle goes on to state that because people are only seeing news and media through a self-selecting group of similarly minded individuals, they are subjecting themselves to living in a filter bubble and not engaging with people with differing views. She quotes a Pew Research study in which “79% of social media users said they have never changed their views on a social or political issue because of something they saw on social media” (Pringle, 2016).

More insidious, she notes, is that it is not just a self-selected social network that creates this echo chamber: the social network itself (e.g. Facebook) “filters the news we see on the site, by suggesting media — like the ads we see — that is tailored to our preferences”. Such points of view have recently been expounded by the Facebook whistle-blower Frances Haugen.

4 Cognitive Dissonance

Whilst there has been a growing interest in cognitive bias based on the work of Kahneman and Tversky (given a boost in 2012 by the very successful publication of Daniel Kahneman’s “Thinking, Fast and Slow”), there is another behavioural trait, often neglected, which the author believes is as important as cognitive bias in explaining why humans persist in certain beliefs even when confronted by firm evidence to the contrary. This form of behaviour is called “cognitive dissonance”, and there are signs that it can occur at both the individual and the group level.

The theory of cognitive dissonance was developed by social psychologist Leon Festinger (1957) in the 1950s. In brief, cognitive dissonance is the mental discomfort (psychological stress) experienced by a person who holds two or more contradictory beliefs, ideas, or values. This discomfort is triggered by a situation in which a person’s belief clashes with new evidence perceived by that person. It can also be expressed as the psychological conflict resulting from incongruous beliefs and attitudes held simultaneously.

Festinger based the theory on the belief that humans want all of their actions and beliefs to be consistent. When this is not the case, a frequent occurrence, discomfort arises. Dissonance can manifest itself in a variety of ways, such as stress, anxiety, embarrassment, shame, regret, and negative self-worth.

To reduce this mental discomfort the individual seeks to create psychological consistency. This allows the person afflicted with cognitive dissonance to lessen mental stress through actions that reduce the magnitude of the dissonance. This is achieved by changing, justifying, or being indifferent to the contradiction that is inducing the mental stress. In practice, people reduce the magnitude of their cognitive dissonance in four ways:

  1. Change the behaviour (“I’ll stop smoking.”)

  2. Justify the behaviour by changing the conflicting cognition (“I only smoke once a day.”)

  3. Justify the behaviour by adding new cognitions (“I’ll only smoke low-tar filter cigarettes.”)

  4. Ignore or deny information that conflicts with existing beliefs (“Smoking helps me relieve stress.”)

One can add a fifth category, whereby there is an increasing trend towards outright denial of the evidence, and claims that such evidence is a plot “by the elite”—witness the denial associated with the COVID anti-vaxxers. Conspiracy theory is now an active ingredient in the cognitive dissonance mix.

And of course, we had (maybe still have) the highly polemicised position of the Brexit issue. Social media is causing increasing cognitive dissonance, leading many people to search only for data or evidence which reinforces their own cognitive positions (or prejudices). In essence the theory states that people do not like to have previously held beliefs challenged. People following a given perspective, when confronted with contrary evidence, spend a great amount of effort in justifying why they should retain the challenged perspective.

Cognitive bias does contribute to dissonance theory. Such biases include the conviction that one does not have any biases, the belief that one is “better, more moral and nicer than average”, and confirmation bias (see the list of biases above).

4.1 Examples of Cognitive Dissonance

Apart from the example of trying to give up smoking used above, other common areas where dissonance manifests itself can be found in the worlds of advertising and decision-making.

Advertisers regularly exploit the arousal of cognitive dissonance amongst their target audiences by hinting or suggesting that if you want to adopt a particular lifestyle image then why not wear this item of clothing or use this type of cosmetic or drive this model of car (years ago it used to be what type of cigarette you smoked). In other words, if you don’t buy the product your self-image will be weakened in some way.

Cognitive dissonance is part of decision-making between two or more alternatives. And of course, we can see that when the options themselves contain high levels of uncertainty, cognitive dissonance can really cause problems for the decision-maker, especially when the options have positive rather than negative attributes. Which one does one select?

5 Anomie (and Alienation)

Back in 1970 Alvin Toffler popularised the term “Future Shock”, highlighting the negative impacts of too much change in too short a time. The reality is that most inevitable surprises arrive without much warning, if any at all—they arrive as “shocks” in spite of their inevitability and presumed contingency plans. As Mike Tyson is famously quoted as saying: “Everyone has a plan ‘till they get punched in the mouth.”

Apart from our personal biases, behaviour can also be affected by how we react to what is happening around us, at both the group and individual level. And when what is happening is subject to rapid change and associated levels of uncertainty, we can feel a sense of alienation and purposelessness.

Emile Durkheim, a nineteenth-century French sociologist, adopted the word “anomie” to express a condition where belief systems are challenged or in conflict with what has previously been experienced, to the extent that they create a breakdown in the social bonds between an individual and the community. Durkheim developed his ideas on anomie through the study of suicide (Durkheim, 1897). He believed that one type of suicide, anomic, occurred following the breakdown of the social standards necessary for regulating behaviour. He posited that when a social system is in a state of anomie, where common values are no longer understood or accepted, a social vacuum exists where new values have not been established. Such a society produces a variety of negative psychological states typified by a sense of futility, lack of purpose, emotional emptiness, and despair.

In the 1930s, American sociologist Robert K Merton (1938) also studied the causes of anomie. As an example, he said that if a society heavily encouraged its members to acquire wealth but provided them with few means to do so, the stress would cause many people to violate norms. Under such circumstances the only form of regulation would be the desire for personal advantage and fear of punishment causing social behaviour to become unpredictable.

Readers may be more familiar with the term “alienation” than with “anomie”. Both anomie and alienation isolate the individual, but in different ways. Anomie can lead to a breakdown of the social bonds between an individual and the society in which he or she lives, because the individual does not accept the values or norms of that society. Alienation, on the other hand, occurs where the situation the individual finds himself or herself in is the result of external forces driving that individual to feel isolated or estranged. Karl Marx’s interpretation of alienation was that it was the capitalist system which drove its workers into a state of alienation, as they had no control over their fate.

6 Our Behaviour in Relation to Others: Considerations

In early 2020, and once the COVID-19 pandemic had taken hold, I wrote a blog note posted on LinkedIn posing the following question:

Will the post pandemic “new normal” make us more aware of ourselves in relation to others? Such changes in behaviour will only be meaningful if we understand how others experience Us and We, Them—and that is no easy transformation.

With much chatter about what might be the “new normal” (or indeed “new normals”) one major theme is how individuals within society may become more understanding of our own and others’ foibles.

Back in the 1960s, a controversial Scottish psychiatrist, R D Laing, developed views on how to treat mental illness, especially schizophrenia, that were radically different from the contemporary orthodoxy. His book The Divided Self, first published in 1960, was a landmark publication in the interpretation of schizophrenia—and was an early indication of a more existentialist approach to how the human mind can work. Later in the decade he published a two-part volume called The Politics of Experience and The Bird of Paradise (1967). It was the first of these works that heavily influenced my thinking at the time (well, I was studying sociology), and has done so at frequent intervals since. Within The Politics of Experience is a detailed linguistic description of how we see others and others see us.

Laing’s script is not an easy read. The original core text, described as “Experience as Evidence”, is only 581 words long. However, such is the intensity of the logic contained therein that I urge the reader to make the effort to discover it. You will need to re-read it several times in order to understand its impact and how we might better relate to one another in an uncertain world. Over 50 years after it was first written, the text is still key to our understanding of what is a fact. New social media-driven behaviours are driving society into increasingly fragmented opinions and behaviours.

Key extracts from the original text, along with an interpretation, highlight the complexity of fact-based evidence and how individuals experience such phenomena and their behavioural responses.

  • “Even facts become fictions without adequate ways of seeing ‘the facts’”: This challenges our real understanding of what is “a fact” and highlights the influence of bias when seeking to determine such facts.

  • “We can see other people’s behaviour, but not their experience” and “I cannot experience your experience. You cannot experience my experience.”: Experience is in effect a form of internalisation which determines our behaviour, yet humans only externalise their experience of the other person; they cannot internalise it.

  • “I see you, and you see me. I experience you, and you experience me. I see your behaviour. You see my behaviour. But I do not and never have and never will see your experience of me. Just as you cannot “see” my experience of you.” This position is an expanded version of the previous statement but re-enforces the difficulty of eradicating cognitive biases—the latter reflecting highly personal earlier experiences—unique to an individual. The best we can hope for is perhaps empathy.

  • “If, however, experience is evidence, how can one ever study the experience of the other? For the experience of the other is not evident to me, as it is not and never can be an experience of mine.”: This statement has profound implications for bias mitigation—we can only go so far in reducing biases. However, this acknowledgement alone may be sufficient to reduce the worst of bias denial, which is the best that we can hope for, as I cannot directly experience you experiencing your experience.

  • “I do not experience your experience. But I experience you as experiencing. I experience myself as experienced by you. And I experience you as experiencing yourself as experienced by me. And so on. The study of the experience of others, is based on inferences I make, from my experience of you experiencing me, about how you are experiencing me experiencing you experiencing me … .”: In summary, and as indicated in the previous reflection, this may be the best we can hope for. Just being aware of such limitations is a powerful defence against accepting all evidence as fact—and as Funtowicz and Ravetz (1994) postulated earlier in a post-normal world “facts are uncertain, values in dispute, stakes high and decisions urgent”.

Extracts from pages 15–17, Chapter 1, “Persons and Experience”, in The Politics of Experience and The Bird of Paradise, R. D. Laing, Penguin Books, 1967.