Introduction

The Disputed Validity of Action Research

Action research is a key methodology in the User Centred Research Programme (UCRP)—a well-established programme of R&D [15] with core funding provided by the UK National Health Service (NHS). This programme emphasises research prompted by health service user interests and experience, with a mix of collaborative projects (in which professional, academic and user researchers work as partners in the research process) and user-controlled projects (where users design, undertake, and disseminate the results of research projects, sometimes facilitated by an academic or professional researcher).

The UCRP is associated with external research grant income of more than £3/4 million (additional to and larger than NHS core funding), involves seven NHS Trusts (health care providers engaged principally in mental health, learning disability and ageing research) and is led academically by Sheffield Hallam University—although many UCRP members are affiliated with or employed by other universities. For reasons we discuss later, the UCRP is endangered.

All UCRP projects emphasise participatory principles, but user-controlled projects emphasise user expertise and power, often linked to an engagement with political action [20]. A range of qualitative and quantitative methodologies is in use in UCRP projects, but it is the user-controlled projects engaging politically with NHS institutions that have made most use of action research—especially the IMPACT and Trailblazers projects discussed later, with which one of the authors has been personally and directly involved for approximately 10 years.

The appeal of action research in the programme arises, at least in part, from the perception of project participants that it is a practical, understandable, confidence-building process they feel they own and control; a flexible way of finding answers to questions of immediate interest to them; and an approach that informs, and is strongly coupled with, their attempts to negotiate actions with a variety of stakeholders.

However, there is a major problem faced by all action researchers in health care as summarised in the systematic review by Waterman et al. [67]:

In certain academic circles, action research has been criticised as being unscientific and not research ... that action research is anecdotal and subjective, and that it is inherently biased due to a lack of researcher independence or separation from the research process. (p. 2)

This echoes a similar observation by Denzin and Lincoln [14, p. 4] about the attitude of the traditional science lobby to qualitative research in general (although many common forms of qualitative research are not action research, for example ethnography). What is more, Waterman et al. (op cit) go on to say that

it is difficult to predict or evaluate outcomes of action research projects ... The complexity of the action research process has meant that researchers, managers and funders have experienced difficulties in assessing the value and outcomes of action research protocols and project reports. (p.3)

The corollary is that managers may not be sure what they will get in return for their support, and the main funding sources will tend to prioritise lower-risk, more traditional research and development.

Somewhat ironically, given their purpose, Waterman et al.'s observations, commissioned by a key UK scientific health R&D body (the Health Technology Assessment programme), constitute potent criticisms of action research. From this one might conclude that action research is eccentric and risky at best, or simply invalid and useless, and as such inefficient or unethical.

It follows that getting mainstream support for action research projects is likely to be more difficult. It is noticeable that very few calls for research from the UK Department of Health or NHS make any direct reference to action research. The reasons for this are not clear. It may be that research is construed primarily in terms of knowledge generation rather than the generation and utilisation of knowledge to achieve desired change.

Since the stakes seem high, a good response to the critics of action research is indicated. Instead, Waterman et al. simply conclude that “action research needs to be judged according to its own terms; that is, whether the work is participatory; whether it is aimed at change; and whether it involves movement between reflection, action and evaluation” (p. 3). Again, ironically, this message will appeal both to action research enthusiasts, who will agree strongly, and to scientific and managerial sceptics, who will simply find that it confirms their negative views. Unfortunately for action researchers, it is the latter who control the majority of resources in health services R&D.

Waterman et al.’s decision to exclude a justification of action research from the objectives of the systematic review (op cit p. iii) may have been shrewd because, as Reason [51] observes, to attempt discussion of what he terms “participative inquiry” is to enter “several lions’ dens simultaneously” (p. 325)—including those of scientific orthodoxy as well as the advocates of various forms of “participative inquiry”.

Yet this has left Waterman et al.'s review somewhat defensive and disappointing, and may be why the Welsh Assembly Government later commissioned its own review [70]. Waterman et al. asked neither whether traditional science has anything to learn from action research (and vice versa), nor whether action research can in any way complement traditional health science. To answer these questions we have to try to understand action research.

What is Action Research?

Varying accounts of the origins and nature of action research can be found in [23, 24, 30, 39, 47, 51, 64, 73], among others. They tend to identify the philosophical, disciplinary and political motives of particular leaders sensitive to human needs in a particular context in which they are seeking a beneficial change, either in a situation or in a system.

Lewin will be familiar to many social scientists as the post-war “father” of action research (e.g. see [30, p. 28]), but readers should note that many others before and after Lewin appear to have had relatively independent but broadly convergent ideas, including, for example, Moreno in 1913 ([37], citing unpublished research by Altrichter and Gstettner), Dewey in the early 20th century [39, 17] and Shumsky [56].

More recently, in UK health care, Waterman et al. [67] found that action research mainly involved nurses and was aimed at improvement in clinical and technological skills, education, the service provided, perceptions and attitudes, management processes, the quality of life of patients, and service delivery in community, primary care or hospital services.

Research topics included health promotion to reduce coronary heart disease (CHD) or reducing the spread of HIV; setting up services for children with special needs or a nurse practitioner service for patients with dementia and their carers; improving splint aftercare, mental healthcare in Accident & Emergency, pain management, self-medication for older patients, improving clinical leadership, getting research into practice and multi-agency procedures for referral, care management, training, audit and records, organisation and management of midwifery and so on.

This array of topics seems impressive, yet many “soft” or “community” operational research health projects overlooked by Waterman et al. can be considered forms of action research. For example, Soft Systems Methodology (SSM) [9] is interpreted by Dick [18] as an action research method, and SSM has numerous health applications (see [29, 54, 68] for examples). Ulrich's [61] “critical heuristics” (discussed below) are often used in health service projects [22, 59], as are Ackoff's [2] interactive planning (see [5, 32]) and Friend and Hickling's [19] Strategic Choice (see White [69]). Waterman et al. make no reference to any of this material, which would have broadened their scope considerably. It is beyond our scope to address this gap here. Instead, we must consider what characterises any project as action research.

Unfortunately, unambiguous definitions of action research have proven elusive. Part of Waterman et al.'s purpose was to clarify the nature of action research, but their definition is 139 words long and consists of a list of activities and practical and moral conditions. Paraphrased, they say action research

describes, interprets and explains [local] social situations while executing a change intervention aimed at improvement and involvement ... with ... partnership between action researchers and participants ... [producing different] types of knowledge [and] Theory [with] application explored through the cycles of the action research process. [67, pp. iii–iv, our words in square brackets]

This echoes but is not the same as Reason's definition of “cooperative inquiry”:

[a] way of doing research in which all those involved contribute both to the creative thinking that goes into the enterprise—deciding on what is to be looked at, the methods of inquiry, and making sense of what is to be found out—and also contribute to the action that is the subject of the research. [51, p.1.]

Similarly, but not identically, Wadsworth explains that “participatory action research” (PAR)

involves all relevant parties in actively examining together current action (which they experience as problematic) in order to change and improve it. They do this by critically reflecting on the historical, political, cultural, economic, geographic and other contexts which make sense of it. [64]

Indeed Wadsworth argues that PAR is a more general, more reflective, more critical approach to research distinguished from traditional or “positivist” research—ironically but nevertheless substantially—only in the degree of reflection on each of the elements of what Wadsworth says is a universal research process:

the cycle of action, reflection, raising of questions, planning of ‘fieldwork’ to review current (and past) actions—its conduct, analysis of experiences encountered, the drawing of conclusions, and the planning of new and transformed actions. [64]

Each of these views of action research is more detailed than Lewin's “spiral of steps composed of a circle of planning, action, and fact finding about the result of the action” [36, pp. 205–206], or Payne et al.'s view of “a special branch of policy research” [48, p. 163]. Yet in neither Waterman et al.'s nor Reason's definition are the methods of inquiry, the nature of participation, or the nature of knowledge and theory unambiguous. For instance, Waterman et al. (op cit) emphasise description and explanation of social situations, but Reason (op cit) emphasises a more general process of making sense of what is to be found out. In contrast, Wadsworth (op cit) emphasises change and improvement of current action through critical reflection.

Wadsworth does go further in emphasising a condition for PAR that it is characterised by “a genuinely democratic or non-coercive process whereby those to be helped, determine the purposes and outcomes of their own inquiry” (Wadsworth, op cit).

This vision of PAR at least superficially resembles the way action research has been attempted in the User Centred Research Programme, although Wadsworth cautions that “it is very hard to achieve the ideal conditions for putting it fully into practice” (Wadsworth, op cit).

Instead of struggling with idealistic definitions, many authors list key principles or characteristics (for examples see [24, p. 37; 26, p. 299; 34]). These lists tend to overlap but are not entirely consistent with each other. For instance, Holter and Schwartz-Barcott (four principles) emphasise collaboration between researchers and practitioners, unlike Hart and Bond (seven principles), who say action research is educative. Both are implied by Lathlean (three principles), who instead emphasises solution of practical problems and change in practice. Similarly, Altrichter et al. [3] list ten factors they deem necessarily present if action research is occurring and emphasise participative learning by doing and making mistakes, in a process characterised by power sharing and reflection linked with action in a publicly open “critical community” addressing participants' own questions (p. 130). There are similarities between this and McTaggart's “16 tenets of PAR” (published in [63]).

Clearly it cannot be concluded that participatory action research, action research, cooperative inquiry or any other form of participative inquiry are identical in theory or practice. Indeed, there are so many alternative views and nuances, each with their own relatively separate communities of practice, that valid representation of the similarities and differences between them is both difficult and likely to be very contentious.

Indeed, action research is necessarily variable in practice, or else it cannot be adapted to the difficult contexts in which it is most needed. This necessary adaptability of course challenges, yet again, the consistency of any definition of action research. The greater the number of strict principles, guarded by some elite, allegedly defining what is or is not action research, the less accessible and, ironically, the more elitist the practice risks becoming.

Undoubtedly the philosophical origins, methods and purpose of action research remain contentious, and with the best intentions we find it impossible to discuss adequately here the consistency, necessity and sufficiency of its principles and definitions. Indeed, Waterman et al. simply point out that there are at least some exceptions to some of the principles identified in various sources and select only two (partnership and cyclical process) as fundamental (op cit, p. 12). However, it seems both from our review of the literature and from our experience in the UCRP that a well-formed action research project requires more than two fundamentals. We propose:

  • A cooperative process of inquiry by a group of participants using any methods, quantitative or qualitative or even artistic.

  • Ownership of the project by the group.

  • Focus on issues of interest to the group.

  • Reflection on and re-negotiation of aims, methods and membership of the group.

  • Negotiation of coordinated action by the group—which may draw in other stakeholders—aimed at an agreed improvement.

Realistically, any of these UCRP principles may not be achieved, but such a failure could fatally weaken the credibility and practicality of an action research project in the same way a low response to a survey might fatally weaken a scientific project. On the other hand, we feel these principles are less exclusive and more realistic than those of Altrichter et al. (op cit), who imply, for example, that public openness is always either feasible or desirable. Indeed, openness requires great confidence and security, neither of which can be assumed of the vulnerable groups of which the UCRP largely consists.

Nor do these groups necessarily or consistently behave in an idealistically democratic or non-coercive way as Wadsworth implies is necessary for PAR. Instead we emphasise “cooperation” (as understood by Reason, op cit) in a “group of participants” because we see the action researchers as a group who in the end own the project and have negotiated their roles as contributors and beneficiaries. In the course of a project they will try to negotiate actions between each other and with other stakeholders with and between whom relationships may be highly political or full of conflict—as typified by the IMPACT and Trailblazer projects described by Walsh and Hostick [66].

This also contrasts with Waterman et al.’s differentiation between action researchers and participants (implying unequal roles) “all of whom are involved in the change process” on the basis of an idealistic partnership (op cit p. 11).

None of the principles mentioned is sufficiently elaborated to specify an exact epistemological, ontological or ethical position, except that “change” in something, be it in a physical or virtual world (to use Popper's [50] terminology), is inevitable. This is not least because action research goes beyond the narrow, allegedly value-neutral paradigm of traditional health science.

Burrell and Morgan’s [7] broad paradigm distinctions are useful here: functionalism (aimed at improved regulation of or incremental change in a social situation by mobilising objective knowledge—for example through systematic review) is the paradigm of traditional health science. In addition Burrell and Morgan identify interpretivism (aimed at improved regulation of or incremental change in a social situation by mobilising subjective knowledge), radical structuralism (aimed at radical social change by mobilising objective knowledge) and radical humanism (aimed at radical social change by mobilising subjective knowledge).

It is conceivable (if highly contentious) for action research to be entirely functionalist, but in our experience it is possible for one project to contain more than one paradigmatic element. This is contrary to Burrell and Morgan [7], who insisted on the mutual incompatibility of paradigms (though Morgan [43] later argued for a paradigm complementarism strongly rejected by Burrell [6]). There are also those who take a postmodern view of research (see [58]), which, of course, challenges any attempt at definition of action research.

This diversity may have led to the confusion identified by Waterman et al. as to what action research is, but it is also why action research enables value-laden questions in health care to be addressed. Action research is value-full, not value-free, and values can be made an explicit part of the inquiry.

It is also why, as Waterman et al. point out, action research has to be judged according to its own terms. It is inconsistent to evaluate the interpretivist or radical aspects of projects in functionalist terms—as inconsistent as treating the paintings of Picasso, Constable and Da Vinci as if they were satellite images of military targets, or vice versa.

Indeed, to appreciate the case for action research in modern health services, it is necessary to reflect on the limitations of what might be termed traditional “health science”.

Health Science and its Limitations

The biggest contributors to the good health of all people alive in the world today are probably clean water, sanitation, vaccination and adequate food. Of course, before scientifically informed health care, people often discovered relatively effective ways to live healthily, but understanding that microbes cause diarrhoeal disease and how to rehydrate a child is, without doubt, an example of useful scientific knowledge with health benefits.

The perceived need for, and political attractiveness of, more and better science, better disseminated, underpin contemporary visions of R&D in health care. For instance, databases like Cochrane exemplify the perception that more value can be extracted from scientifically collected data. The role of science in the management of the UK NHS is exemplified by the major 1996 UK conference “The Scientific Basis of Health Services” [49], and this emphasis continues in current NHS R&D policy [16].

A key British scientific institution, the Medical Research Council [41], claims that the scientists it has supported have won 17 Nobel prizes for world-influencing contributions. In contrast, no Nobel prizes in health care have been awarded for action research [46].

Few will doubt the amazing and beneficial impacts of advanced health science for its beneficiaries. There are seemingly endless possibilities for scientific improvement of health and well-being through, for example, gene therapy [72]. This great promise and dominance of health science perhaps rest on two foundations: (1) health scientists have created an organised body of credible, high-value knowledge gained through a quality-assured process of inquiry, namely scientific method, and (2) there is a strong and continuing international political consensus about its value, typified by the World Health Organisation, national health institutions and professions with international standing and influence (for example the American Medical Association and the British Medical Association). This consensus guarantees the flow of resources to support the international scientific evidence base for health care. In the UK the eight British research councils distribute annual R&D grant allocations of more than £2.5bn, of which the MRC contribution is approximately £500m [42], and the recently combined MRC/NHS R&D budgets are worth more than £1bn for health R&D each year [53].

Compliance by researchers with the rules of scientific method guarantees the validity of scientific theory, whilst compliance with the rules of accountability to scientific, professional and public authorities guarantees the public (i.e. moral, legal and social) acceptability of the science.

However, whilst this account is plausible, critics have highlighted many problems.

Sophisticated Western-style health care is irrelevant to the majority of the world's population and will remain so for their lifetimes because of poverty, culture or politics. In both poor and rich nations, health inequalities linked to poverty remain a persistent concern [33, 45, 60].

There is a perpetual “quality gap” between what is demanded publicly and what is possible scientifically and economically [65]. Since cost containment is an international concern [4, 71], and more and better science coupled with rising public expectations is probably inflating the costs of health care [1], the quality gap may, paradoxically, get bigger.

Nevertheless the political demand to close the quality gap through higher quality science has led to recent reform of policy on health research in the UK. The aims of Best Research for Best Health [16] are to establish the NHS as an international centre of research excellence, attracting the best research professionals, commissioning research focused on improving health and social care, managing knowledge effectively and making better use of public money. This is to be achieved by concentrating a greater proportion of increasing resources on fewer centres where scientific competence is perceived to reside.

Interestingly, this policy explicitly denies elitism (or egalitarianism) [16, p. 33] whilst actually appearing “unashamedly” [13, p.7] to augment elite R&D through the establishment of an academy of top and career health researchers. Where “user” researchers might fit within this is not clear.

The elitism of Best Research tends to confirm the perception that traditional science has been disempowering because control resides precisely with “professional organisation, its theories and rules” [40, p. 39]—what Illich et al. [27] provocatively call “disabling professions”—and it is these elites that decide what counts as evidence. The elite practice of scientific medicine has sometimes been criticised in the UK Parliament for being both offensive and uninformed [10], and this tends to undermine the frequently made claim of value neutrality in science.

Even if scientific value neutrality were achieved, clinical decisions would remain contentious. For example, an organ transplant candidate with Down's syndrome will often be judged, clinically, a lower priority than someone else, and therefore people with Down's syndrome tend not to get transplants [35]. Yet critics find this completely unacceptable (see McDonnell [38]). Why indeed should rationing of scarce resources be based on an elite judgement of capacity to benefit when ethics, according to Schulze and Kneeze [55], varies culturally both historically and geographically?

In fact, a cultural perspective finds scientific health care to be simply one of many cultural health systems around the world, including, for example, Chinese herbal and Ayurvedic medicine, which evince other ways of understanding health, illness and medicine. Even the practice and language of scientific medicine vary culturally around the world [25].

Certainly culture has consequences. The traditional understanding of the symptoms of HIV in Southern Africa has proven, arguably, disastrous for the more than five million people now infected with HIV [62], when health science shows how this infection can be avoided. Yet in what way can useful health science jump the culture gap in KwaZulu when it apparently cannot do so among the allegedly scientifically informed public of the UK, who are at ever-increasing risk from obesity, leading to scare stories that younger people may die before their parents [12]?

Influenced by the limitations of health science, and perhaps especially in the UK by scandals of incompetent [31], unethical [52], or criminal [57] clinical practice, the politics of health care have shifted toward a greater role for patients and the public in all decision making.

A lead on patient and public involvement in NHS R&D was taken by the group now called INVOLVE [28]. There is now a major policy commitment to place “patients at the centre” of NHS R&D, with involvement of patients and the public in priority setting, defining research outcomes, selecting research methodology, patient recruitment, interpretation of findings and dissemination of results [16, p. 34]. It is ironic that this commitment to involvement does not diminish the elitism of Best Research—even if involvement makes for better science, as INVOLVE [28] claims, rather than simply making science projects administratively more tedious.

It is also ironic that greater patient and public involvement is seen as an antidote to low-quality, incompetent or wayward science whilst action research actually depends upon involvement. This convergence, though, reveals a necessary complementarity between action research and science. The process of action research can explicitly involve patients and the public in priority setting, defining research outcomes, selecting research methodology, patient recruitment, interpretation of findings and dissemination of results—precisely the agenda of the Department of Health. Moreover, it necessarily attempts to go beyond mere dissemination of results to the negotiation of change, without which Best Research can never achieve Best Health.

Unfortunately a conservative establishment dominated by traditional science, like that of the NHS, is highly unlikely to value action research projects. So to illustrate the case for action research we want to describe two of nineteen projects from within the UCRP—IMPACT and Trailblazers.

The IMPACT Project

In the late 1990s four mental health “service users” (or patients) formed the IMPACT research team. In return for expenses, and provided with a facilitator, the team was asked by a local community NHS Trust to produce a process and tool to enable the evaluation of the impact of service users and carers on NHS decision-making. The team was told they were free to pursue this aim in whatever way they pleased. In other words, the Trust accepted the risk that the resources could be wasted.

The team produced a process at the heart of which were Ulrich's [61] questions (Table 1). These questions are used in facilitation to probe hidden or suppressed norms about motivation, control, legitimacy and expertise embedded in any plan (see [66] for details).

Table 1 IMPACT version of Ulrich’s critical heuristics

The team envisaged using selected critical questions to gather information from groups and individuals in a problem situation and then moving onto the negotiation of feasible and desirable improvements with participants.

The team's first opportunity to apply their process arose when, between January and June 2001, the team became involved with what was then North Lakeland NHS Trust at the request of an emergency board. This Trust had been severely criticised in November 2000 in a report to the Government by the Commission for Health Improvement (CHI) for a “culture” that “allowed unprofessional, counter-therapeutic and degrading—even cruel—practices to take place” and for “failures of management consultation and communication” [11]. The Chair and Director of Personnel had been dismissed, the Chief Executive suspended and other senior managers given “disciplinary warnings”. An acting Chief Executive from another Trust was in temporary charge.

The team provided a training programme on user involvement for approximately 190 staff (including consultants and the new Trust Board) and approximately 157 service users and carers at 42 venues around North Cumbria. The project ended with a “consensus conference” involving members of the Trust Board and approximately 100 other staff, service users and carers. From this a user and carer strategy was produced, identified by the Chief Executive as a valued contribution by the IMPACT team [66].

From a functionalist viewpoint the project produced a process that can be transferred by training other teams to apply it. It also produced a tool, identified by Walsh and Hostick [66] as the team itself. However, from an interpretivist viewpoint the project also helped with the expression of views by different stakeholders and created an opportunity for them to negotiate a way forward at a conference. Therefore, both the functionalist and the interpretivist elements of the project contributed to a high-profile NHS Trust gaining control over what, hitherto, had been seen as a wayward and perverse culture of care.

The Trailblazers Project

In 2002 Trailblazers was formed as a group of service users offered expenses and the help of a facilitator by the local community NHS Trust. Unlike IMPACT, the team were told they were free to identify their own research question, problem, topic or issue and then design and undertake a project entirely of their choice. Trailblazers then met on a regular basis, initially to work through issues they had with current service provision and to develop a way of consulting with professionals to improve mental health services. Their first project was called “Spearhead GPs”.

The team adapted and modified an idealising approach into an iterative, cumulative, postal process of consultation, similar to a Delphi method, in which east Hull General Practitioners (GPs) took part in five rounds of consultation. Why the team selected GPs is unclear, but they did identify many stakeholders (see Fig. 1) and discussed how to draw each one into the process.

Fig. 1 An ideal model for mental health services?

The team wrote to the GPs on Trust notepaper asking a few simple open questions prompting them to express their views as to what would constitute an ideal mental health service. Replies were read and discussed by the team, who decided on which points to seek further clarification. Often this simply meant asking practical questions (who? what? where? how? and why?) about both what was and was not said by respondents. The replies were transcribed, anonymised, and sent back with requests for clarification on topics Trailblazers felt were important, so that all GPs could see what others had said. GPs were also told that they too could comment or ask for clarification on anything in the documents. In this way anyone in the process could raise or respond to any issue and everyone could see how the content was developing.

During iterations Trailblazers discussed many themes that appeared to emerge from responses. They eventually selected one theme, which they labelled “Aspect 1: A ‘close’ Primary Care team with excellent relationships and efficient and effective working practices” (Fig. 1), and created a picture summarising the responses they associated with this theme of an “ideal future for mental health services”, which GPs were asked to comment on (Fig. 1). The process ended at this point.

The process is novel in that service users were in control and used it to produce an idealistic picture of local mental health services based on the opinions of professionals, rather than the other way around. GPs were prompted to articulate ideas the team felt would be taken more seriously coming from a professional than from a service user. For example, the team had already discussed their personal experience of fragmentation and communication problems in mental health care and were delighted when one GP proposed the “single point of referral” (Fig. 1).

However, it is important to understand that the team's aim was not the usual functionalist one of producing a better model of an “ideal” mental health system that would later have to be disseminated, implemented and evaluated. Instead the team's aim was the interpretivist, or potentially radical, one of using the model as a device or tool or “trick” to draw both GPs and other stakeholders into negotiation of structural and process changes in local NHS services. Unfortunately this was not achieved because of team and finance problems [66]. What is more, radical change would perhaps have been unlikely because of the inertia of local culture and politics. But the project did, at the very least, formulate a novel “reverse consultation” process that can be reproduced by others and would benefit from further development. Anyone interested in the many other diverse features and outcomes of Trailblazers should refer to Walsh and Hostick [66].

The Case for Action Research

An immediate question is whether either the IMPACT or the Trailblazers project can be called “action research”. A full justification of these projects as action research in terms of the UCRP—or any other—action research principles would require a substantial elaboration in which the meaning of each term (e.g. “cooperative”, “inquiry”, etc.) is discussed. Those who wish to do this can refer to Walsh and Hostick [66] as a starting point.

It is interesting to see that neither project produced causal theories or objective views of local “social situations”, as Waterman et al. (op cit) imply is part of the action research process. Instead, both appeared closer to Reason's (op cit) view of cooperative inquiry—those involved contributed to the creative thinking, decided what was to be looked at, chose the methods of inquiry, made sense of what was found out and contributed to action. Both projects featured phases of planning, taking action and fact finding about the result of the action—in other words, they did conform to Lewin's (op cit) classical spiral. It cannot be said they conform entirely to Wadsworth's views of PAR. However, we believe these projects can be justified as action research according to the UCRP principles listed earlier, and there remains a documentary record in support of these claims—as well as the direct witness of those involved or affected [66].

Interestingly, both projects mobilised rather than suppressed the scientific knowledge of health professionals. By drawing scientifically informed professionals into negotiation with other stakeholders, both projects enabled questions to be asked about local health services and elicited important responses. IMPACT was able to facilitate the negotiation of actions on the basis of these responses. Moreover, in an objective sense both the IMPACT and Trailblazers projects generated knowledge about processes that can be evaluated through repetition elsewhere.

Therefore it follows that traditional health science is not irrelevant to the action research approach—it can be both an input to and an output of the process of action research. This should be expected if Wadsworth's [64] view is accepted that action research differs from traditional scientific research only in degree and not in kind.

However, whilst the process of science emphasises the production of generalisable knowledge, the process of action research emphasises the contextual generation and application of knowledge (amongst other kinds of knowledge) in negotiating change acceptable to a local public. So action research clearly is research, because it involves getting answers to questions, but these questions are usually of local concern, although they can be of much broader interest.

Action research in the style of IMPACT and Trailblazers can be commissioned as long as the risk of “no result” or even a negative result is accepted. However, this is not simply a problem of action research: the failure to report negative or null results in traditional health science, the well-known publication bias, is described by Chalmers [8] as scientific misconduct.

Both the IMPACT and Trailblazers projects were highly creative at modest cost. It is beyond our scope to discuss cost-effectiveness, but we want to point out that we estimate the maximum marginal costs in both projects were one third of the facilitator's annual salary plus total participant expenses of less than £1,000 (at 2006 prices). The Trailblazers project also involved, we estimate, no more than 25 hours in total of GP respondents' time. IMPACT subsequently generated income from consultancy projects.

Of course it is possible to conceive of very high-cost action research projects, because participation costs can be high, but traditional scientific projects also vary in marginal cost from low to extremely high, and the majority of funds are allocated to science.

Yet who indeed should get organ transplants? Should people who smoke not get bypass surgery or pay more tax? Should nurses prescribe medicines? Should wards or hospitals be closed? Should a new service be offered? Should a new drug be offered? Traditional health science can inform such debates but the questions themselves are not solely or even primarily scientific (in traditional terms) because each one implies not only measurement but also valuation and then, eventually, negotiation of change.

This poses the health economic problem, as Mooney [44] points out, of whose values count. Ackoff [2] proposes an ethical answer: anyone affected by a proposal should be involved in planning it. Ackoff also makes this a practical imperative of “plan or be planned” (op cit), because failure to take the initiative leads to the imposition of other people's values. The Down's syndrome transplant debate exemplifies just such a group of people who may well be the victims of the imposition of someone else's values. A Trailblazers-style action research process, properly facilitated and funded, may be one way in which vulnerable people can begin to draw powerful stakeholders into more open debate and then into the negotiation of feasible and desirable improvements. Therefore action research is a necessary complement to traditional health science.

The Next Step

What is lacking, as Waterman et al. (op cit, pp. 59–60) conclude, is support for high-quality action research, for example in the form of review by peers who appreciate the meaning and value of the processes and outcomes of this style of R&D. But this can only happen if this capacity is developed within the R&D establishment.

Unfortunately, one effect of Best Research is perhaps to have reduced support for action research in the NHS, possibly substantially, although this will only become clear over the next few years.

One casualty may be the UCRP itself, which is threatened because, from 2007, the only access to NHS funds is through competition, in place of the previous system of allocating NHS funds to programmes with annual reports on progress. Almost inevitably, UCRP action research projects will be in competition with traditional scientific projects and will tend to be ranked much lower than seemingly lower-risk, more specific, more traditionally measurable scientific projects. This is not least because of the difficulty of accounting for UCRP-style research projects in the narrow terms required by the establishment, which we have criticised elsewhere [21]. In these circumstances, entering the competition with an action research proposal is likely to be regarded as pointless.

In the UK, Waterman et al. did not address the need for support for action research more substantively, perhaps precisely because it is very difficult to see any change to the status quo. Indeed the options are limited: leave the current arrangements unchanged (the status quo); develop significantly the critically reflective capacity of those currently dominating health R&D, so that scientific research is eventually transformed into the reflective cycle described by Wadsworth [64]; increase the representation of action research enthusiasts within all the facets and functions of the health R&D establishment; modify current peer review and public accountability arrangements; or pursue some combination of these.

Certainly, without support, action researchers will necessarily stay mainly at the margins of the R&D establishment (for some of the more radical action researchers this may be where they want to be) and wait for the next wave of reform. Yet after such waves have passed, as surely they will, action researchers will always emerge, at least in small numbers, in response to the pressing local needs of the day, as the history of action research demonstrates.