1 Introduction

Efforts to improve the quality, safety, and efficiency of complex work often call for increasing standardisation of tools, supplies, and procedures as a fundamental strategy (Berwick 1991; Berwick et al. 1990; Smith 2009). In these calls, the benefits of this standardisation are presumed to be commonsensical and intuitively obvious; but the theoretical, philosophical, and sociocultural aspects of standardisation are generally unexplored. This paper attempts to bring some of those issues to the surface, for four reasons:

  • To remove the veil that obscures subtle interactions between the popular, binary distinctions associated with standardisation (e.g., standardisation vs flexibility, centralisation vs decentralisation, exploitation vs exploration, and feed-forward vs feedback control) (Macrae 2013; March 1991; Perrow 1967; Reason 1997);

  • To better understand “resistance” to standardisation efforts;

  • To better manage the unintended consequences of poorly thought out standardisation programmes;

  • To clarify the sorts of problems for which standardisation is both suited and useful so it can be more thoughtfully employed.

1.1 Perspective

The value of any argument is inextricably entangled with the perspective from which it is made (Dekker et al. 2011). Therefore, it is important to note that the author’s experience has been almost entirely within the field of health care, an area somewhat notoriously resistant to calls for standardisation for a variety of reasons (McDonald et al. 2006), and further that most of that experience has been within emergency medicine, a specialty that particularly values the ability to deal with contingency and the unexpected and to react adaptively and opportunistically to events and environmental changes (Wears 2010; Zink 2006). In addition, Reason has noted that it seems curious that health care—traditionally a bastion of discretionary control by professionals—is moving steadily towards standardisation and similar means of control, at a time when many other domains are moving in the opposite direction (Reason 1997). But by discussing standardisation unabashedly and avowedly from this perspective, I hope to increase understanding by adding to the diversity of viewpoints in these discussions (March et al. 1991).

1.2 Benefits

To some extent, arguing against standardisation is a bit like arguing against motherhood, because any such discussion must first admit that standardisation has many benefits. A world in which every light bulb had to be custom-fit to its socket would be a very dark world indeed; this journal has standardised on the English language as its mode of communication; the print layout is standardised (left to right, top to bottom, front to back); this paper itself was composed using a standardised (QWERTY) keyboard, while looking at a clock whose hands rotate “clockwise”, and on and on.

Similarly, standardisation contributes to efficiency in communication; when it creates common ground among the parties, it allows a dense, compact encoding of complex ideas and supports communication by omission (e.g. that which is not mentioned can safely be assumed to be absent or irrelevant). The success of the highly standardised communication forms developed in aviation crew resource management is broadly accepted (Weiner et al. 1993), and similar forms are often advocated for other fields (Catchpole et al. 2007). In addition, standardisation is highly valuable in supporting coordination of action across disparate groups whose mutual communication may be undependable (Berg 1997a; Timmermans and Berg 2003).

The many benefits of standardisation, especially in reasonably arbitrary matters (such as the conventions of highway driving), have served to support a common view of standardisation as a sort of universal good—the Philosopher’s Stone that turns the base substance of ordinary life and work into gold. Standardisation, in this view, is seen as the natural outcome of the Enlightenment, producing order, reason, and reproducibility in care; a technical solution to the problem of complexity that could only be opposed by the irrational, perverse, or deluded. Standardisation fits nicely with other elements of the “programme of technical rationality”, such as practice guidelines and evidence-based medicine, and so is synergistic with many other current influences on health care (Timmermans and Berg 2003).

Standardisation promotes routinisation, which enables organisations to exploit their accumulated knowledge, thus increasing process efficiency (and to some extent personal efficiency, since actors following standardised procedures may not have to acquire the knowledge that underpins those procedures). This can free up attentional resources, redirecting them from the mundane to the truly complex or pressing (Macrae 2013). Yet at the same time, this routinisation presents a risk: when organisations are guided by old knowledge, they do not create new knowledge, unless special (and by definition, inefficient) efforts are made to understand gaps between standardised processes and the context in which they are deployed (Hunte 2010).

2 Problems

Despite its obvious benefits, unthinking use of standardisation is associated with a set of problems. This section will explore five problematic aspects of standardisation as an improvement strategy.

2.1 Lack of specificity

Many calls for standardisation in health care lack specificity and have an almost magical, “wishful thinking” quality (see Sect. 2.3), as if standardisation were some universal good in itself. Thus, an important first step in these discussions is to clarify a set of issues: what bits of work, exactly, should be standardised; at what level; along what dimensions; by whom; and for what purpose? Discussions of standardisation could be improved by increasing their specificity in all these areas.

Even after the main target area has been defined, it is still necessary to specify which of the different dimensions of the work is to be targeted: its organisation and structure; the terminology used by workers; its outcomes without regard to process; its procedures; or its data or content. Within a selected dimension, the level at which standardisation should be applied still needs to be defined. For example, building materials are almost entirely standardised, but the buildings they are used in are less so, and the neighbourhoods containing those buildings still less. We have standardised roads, but not standardised travel paths; standardised grammars, but not standardised stories; standardised instruments, notes, and scales, but not standardised music.

In addition to being non-specific, calls for increased standardisation ironically often miss the degree to which the activities in question are already standardised. For example, there have been many recommendations in health care to standardise shift-change handoffs (Joint Commission on Accreditation of Healthcare Organizations 2008). These calls generally construe handoffs as haphazard episodes (Arora et al. 2005; Gandhi 2005), and because they tend to focus only on the data dimension, they miss other, already standardised areas. In fact, observational studies of handoffs (Behara et al. 2005; Brandwijk et al. 2003; Kowalsky et al. 2004; Wears et al. 2011) have shown they consistently follow a four-phase pattern, use a consistent order among patients, vary the amount of investment in the handoff according to the degree of uncertainty about the clinical problem space (Nemeth et al. 2007), and use a consistent ordering of the discussion within patients. Thus, by limiting one’s vision only to the dimension of data, the standardisation already present is missed. This is exacerbated by the problem that this standardisation tends to have arisen “bottom-up”, organically and emergently from the work context, rather than being engineered “top-down” by managers.

2.2 Philosophical basis

Standardisation is inextricably associated with the industrial revolution, Taylorism and ultimately the rationalism of the Enlightenment (Berg 1997b). Its philosophical underpinnings in a Newtonian–Cartesian understanding of the world as a complicated, but ultimately decomposable, understandable and linearly predictable domain are seldom examined by its proponents, who generally show little awareness of even the possibility of other philosophical stances (Dekker 2010; Dekker and Nyce 2012; Dekker et al. 2010, 2011; Kneebone 2002; Wears and Kneebone 2012; Xiao and Vicente 2000). Although there are areas of clinical work where this view might be accurate, for the majority of clinical work it is not. Clinical work systems have many of the characteristics of complex, self-organising systems: they are composed of a large number of mutually interacting elements, with multiple enhancing and inhibiting feedback loops; they are open to the environment, and their boundaries are hard to define; they operate far from equilibrium; they are path dependent (i.e. their past is partly responsible for their present behaviour); and their structure does not come from a priori designs but changes dynamically to adapt to changes in their environments (Cilliers 1998). In these complex (as opposed to complicated) systems, it is not possible to predict the trajectory of the system from fundamental principles and its current condition; thus, overly ambitious efforts to standardise are likely to create disorder, either in the target area or elsewhere in the system (Greenhalgh et al. 2009; Snowden 2012).

These problems are often euphemistically labelled “side effects”, or “unintended consequences”; while they are no doubt indeed unintended, it is important to note that “side effects are not a feature of reality, but a sign that our understanding of the system is narrow and flawed” (Sterman 2000). A simple example of this problem in health care has been the standardisation on the Luer lock connector. The Luer lock was intended to provide a standard way of easily connecting and disconnecting syringes and intravenous tubing, but because so many devices use this standard, it has led to numerous fatal adverse events by allowing easy connection of items that should never be connected (Berwick 2001; ECRI 2010; ISMP 2003, 2004). At its worst, this sort of standardisation becomes the “arrogance of design”, a privileging of the ex ante judgment of remote designers over that of the worker situated in a specific context (Bisantz and Wears 2008).

Similarly, at the front-line worker level, clinical work tends to be much more about making sense of an uncertain and ambiguous jumble of unfolding phenomena, and in so doing developing contextual judgments, explanations and situated actions that support and help revise shifting goals, than it is about rule-based decisions. It is about phronēsis rather than techne (Greenhalgh and Wong 2011; Hunte 2010); practice rather than prescription. Thus, at least some of the resistance of front-line workers to standardisation is explicable, because the models of work inscribed in standardised routines clash too strongly with their actual work. Even such an orthodox spokesman as Donald Berwick has recognised this mismatch, remarking that the prevailing strategies for improvement in health care rely largely on outmoded Taylorist theories of control and standardisation of work: “if we want to understand how the workplace needs to be changed, we must understand and call into question many of the principles of Taylorism” (Berwick 2003).

2.3 Psychological and organisational comfort

The rationalism underlying standardisation comes partly from its dominance in modern thinking, but also partly from the psychological and organisational benefits it provides to its proponents. Rather than having to deal with the uncomfortable reality of a world full of risk, ambiguity, chance, and disorder, the rationalist model underlying standardisation offers clear, explicable, and understandable explanations. Although its proponents may recognise some of the properties of complex systems outlined above, they see those properties not as certainties about the world, but rather as defects that can and should be managed away through standardisation and other rationalising modalities; the linearising orderliness of standardisation provides a bulwark against the unpleasant realities and holds forth the reassuring prospect of control (Dekker et al. 2012).

It is interesting to note that some standardisation efforts have provided only those sorts of psychological benefits. Berg has noted that IT-enforced standardisation often produces “… no clear-cut ‘benefit’ emerging anywhere from the alignment of staff members with the reading and writing artefacts; the only ‘benefit’, often only perceived as such by management, lies in the alignment itself. The artefacts are not occasioned to afford the emergence of new tasks, but to ‘standardise’ already existing ones. They are not allowed to potentialise anything: in a misplaced equation of ‘standardisation’ with ‘quality’—whether of the care delivered or the staff members’ work—framing is introduced for framing’s sake” (Berg 1999). Thus, in some instances, the benefits of standardisation are entirely aesthetic—things look better on paper, whether they actually work better or not.

2.4 Non-neutrality

Given its roots in Taylorism and the rationalism of the Enlightenment, it is not surprising that standardisation is often depicted as a technical, politically neutral exercise: one best performed by experts, involving no negotiations or sociopolitical considerations, and certainly no winners or losers. But standardisation efforts are not neutral activities; they privilege one view of the world over another, and so often one group over another. For example, an information system may standardise data relevant for some purposes but not others; this forces the unprivileged group to engage in a continual translation process, or in the worst case makes data relevant to them invisible (Garfinkel 1967; Johnson 2009). Although attempts at standardisation invariably invoke the common good, different groups tend to have differing ideas about what, exactly, the common good is, and about what means are legitimate in its pursuit.

In addition, standardisation often restructures the work environment, changing relations among users, and thus potentially creating additional negotiation and occasional conflict. For example, standardisation tends to elevate the role of the managers and technocrats, who organise and plan the work, over that of front-line workers, who merely execute their instructions (Kanigel 1997). It makes invisible the articulation work of those who fill the gaps between prescriptive standards and the messy uncertainties of real work (Nemeth et al. 2008).

2.5 Heterogeneity

Finally, standardisation assumes that heterogeneity and variation are inherently undesirable properties that should be eliminated, or at the least, nuisances to be minimised. But to the extent that the clinical problem space is heterogeneous, this assumption clashes with three real-world properties of complex systems: the law of requisite variety (Ashby 1957, 1958) (that every controller of a system must exhibit at least as much variety in behaviour as the system under its control); the principle of equifinality (that there may be many, equally good paths to a goal); and the principle of multifinality (that similar initial conditions may result in dramatically different final states).
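
The first of these principles can be made concrete with a small toy model. The sketch below is a hypothetical illustration rather than anything drawn from the cited sources: a regulator can hold a system at its goal state, whatever the environment does, only if it has at least one distinct response for each distinct disturbance.

```python
# Toy illustration of Ashby's law of requisite variety (hypothetical model).
# The environment can disturb the system in six distinct ways; holding the
# outcome at the goal requires the regulator's repertoire of responses to be
# at least as varied as the set of disturbances (a necessary condition).

DISTURBANCES = range(6)  # six distinct ways the environment can vary


def outcome(disturbance: int, response: int) -> int:
    """Toy dynamics: the goal state (0) is reached only when the chosen
    response exactly cancels the disturbance."""
    return (disturbance - response) % 6


def can_hold_goal(n_responses: int) -> bool:
    """Can a regulator limited to n_responses options always reach the goal?"""
    responses = range(n_responses)
    return all(any(outcome(d, r) == 0 for r in responses) for d in DISTURBANCES)


if __name__ == "__main__":
    for n in (1, 3, 6):
        print(f"{n} responses vs 6 disturbances -> goal always reachable: {can_hold_goal(n)}")
    # Prints False, False, True: only a regulator with as much variety as its
    # environment can keep the outcome fixed under every disturbance.
```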

In healthcare settings, standardisation presumes that average results will be equally obtainable by everyone despite individual differences, but this is hardly ever the case. Most “standard treatments” provide a large benefit for a small number of patients who cannot be specifically identified in advance, at a small cost to a large number. For example, routine treatment of hypertension prevents heart attacks and strokes in a small number of cases, while exposing large numbers of patients who would never have suffered those conditions to the expense and side effects of lifelong medication. While this trade-off might in fact be considered justifiable, it still involves an asymmetry of benefits and burdens, and the “average benefit” calculated over the entire group will be realised by virtually no one. This is well known in epidemiology as the ecological fallacy (the attribution of group averages to individuals in the group).
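
A small numerical sketch may make this asymmetry concrete. The figures below are invented purely for illustration; no clinical data are implied.

```python
# Hypothetical illustration of the asymmetry behind an "average benefit"
# (all numbers are assumptions chosen to show the shape of the argument).

n_patients = 1000
n_prevented = 10            # suppose treatment averts a serious event in 10 of 1000
benefit_if_prevented = 1.0  # benefit of an averted event, in arbitrary units
burden_per_patient = 0.002  # cost, side effects, and inconvenience borne by everyone

average_net_benefit = (n_prevented * benefit_if_prevented) / n_patients - burden_per_patient
print(f"Average net benefit per patient: {average_net_benefit:+.3f}")  # +0.008

# But no individual experiences that average: 10 patients gain roughly +0.998,
# while the other 990 simply bear the burden (-0.002). Attributing the group
# average to any particular individual is the ecological fallacy.
```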

Finally, there is another form of heterogeneity—change over time—that poses a peculiar challenge to standardisation efforts. Such efforts are inevitably aimed at moving targets: they were developed for static manufacturing systems, and when applied to complex, open sociotechnical systems composed of multiple mutually influential elements that constantly change and evolve, they will always and necessarily be behind the times, late in adapting to new or local circumstances.

3 Caveats

Just as unthinking application of standardisation as an improvement strategy results in the sort of problems outlined above, fairness demands that we admit that unthinking opposition to standardisation raises issues of its own. Claims of special knowledge and corresponding immunity to standardisation can be self-interested. Thus, in health care, clinicians’ frequent resistance to standardisation might sometimes be based more on enacting professional identities and reinforcing occupational boundaries than on a careful consideration of its advantages and disadvantages (Dixon-Woods 2010; McDonald et al. 2006). Furthermore, the view that “rules do not apply to us” is clearly dysfunctional when applied indiscriminately in areas where variations in judgment are irrelevant or even harmful, or when it is used to justify poor practices (Dixon-Woods 2010).

Similarly, much of this discussion has presumed that standardisation is imposed on a group from the outside, in a classically Taylorist manner. But there is no reason in principle why it could not be negotiated, or developed emergently from within a community of practice.

4 Application and guidance

Standardisation is a complex issue, a tangle of problems and solutions in which certain activities would seem to benefit from being standardised while others would not. This section therefore attempts to provide broad guidance about where standardisation might be helpful and where it might be harmful. Perrow and Reason suggest examining two dimensions in making this determination (Perrow 1967; Reason 1997):

  • The number of “exceptional cases”, i.e. the degree to which surprises, novel or unexpected events are likely to arise; and

  • The difficulty of the search process, i.e. the degree to which solutions are well understood and easily found by analytic reasoning, as opposed to being poorly understood and requiring extensive knowledge-based processing.

Two extreme combinations of values along these two dimensions mark cases where standardisation is either very well, or very poorly, suited. For example, situations in which exceptional cases are commonplace and in which solutions are poorly understood and not readily identifiable by analysis are poor candidates for standardisation and are best left to discretionary control. Examples of such situations might be combat operations or crisis management. Conversely, situations where exceptional cases are truly the exception, and in which solutions can be readily identified by simple means, are good candidates for standardisation. Such situations might include assembly line operations, or traditional construction. Intermediate situations are, of course, intermediate and would require a judicious mixture of strategies.
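
This guidance can be summarised as a simple decision aid. The sketch below is only a schematic rendering of the Perrow/Reason dimensions; the 0–1 scales and thresholds are illustrative assumptions, not part of the cited frameworks.

```python
# Schematic rendering of the two-dimensional guidance (after Perrow 1967; Reason 1997).
# The numeric scales and cut-offs are illustrative assumptions, not from the sources.

def suggested_control_mode(exception_rate: float, search_difficulty: float) -> str:
    """Both inputs are judged on a 0-1 scale: how often exceptional cases arise,
    and how hard it is to find solutions by analytic reasoning."""
    many_exceptions = exception_rate > 0.5
    hard_search = search_difficulty > 0.5
    if many_exceptions and hard_search:
        return "discretionary control (e.g. crisis management, combat operations)"
    if not many_exceptions and not hard_search:
        return "standardisation (e.g. assembly lines, traditional construction)"
    return "mixed strategy: standardise the routine core, keep discretion at the edges"


print(suggested_control_mode(0.9, 0.8))  # exceptional, hard-to-analyse work
print(suggested_control_mode(0.1, 0.2))  # routine, well-understood work
```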

In addition to this guidance, it is important to note that the usefulness of standardisation, and so choices about where to apply it, differs according to the skills of the actors involved in a field of practice. To a novice, many if not most situations will appear exceptional, and the search for their solutions difficult; although prescriptive rules in such a setting would not be recommended for experts, for novices, falling back on “standard procedures” might be better than trying in vain to work out a solution to a problem beyond their training or experience.

5 Summary and conclusion

It should be clear from the foregoing that standardisation is far from a simple, technical solution that is a “natural fit” for quality or safety problems. It has important social and political aspects that are often ignored, and some of its benefits may be primarily psychological. Yet there are benefits to be gained from exploring standardisation thoroughly in all its aspects.

In Civilisation and Its Discontents, Freud wrote of an irreducible tension between the individual (seeking freedom for autonomous action) and civilisation (demanding a necessary conformity) (Freud 1930). Similarly, Greenhalgh has argued that the tension between standardisation and contingency can never be resolved, but rather must be actively managed, a task that gets harder as the domain of application gets larger (Greenhalgh et al. 2009). Thus, standardisation cannot be a universal approach to quality and safety, but will always require continual grounding and judgment if it is to be used safely and effectively.