
1 Introduction

Last week I sat in a conference listening to several presentations back to back about implementing qualitative research in the clinical setting. Each of the presentations reported a failure. I moved to the microphone. “When are we going to listen to our evidence?” I asked. “All of the evidence points to our research as not being implemented.”

Something is wrong. For many years I have been concerned about the low funding rate for qualitative inquiry, as well as the meager dollar amount granted for the awards. It often seems that qualitative research is only funded if it is “tacked onto” a quantitative project, or couched within a mixed method study. This speaks volumes about the way the scientific community values qualitative inquiry. Perhaps that is the reason our research is not noticed or valued.

Several years ago, Sandelowski (1997) noted that qualitative inquiry must “be of use” if it is to be worth doing, and this is particularly true in applied fields in health, mainly medicine and nursing, which are often physiologically or epidemiologically oriented, and based in quantitative inquiry. Of greater concern—if the development of knowledge and theory building is considered the primary use of qualitative inquiry, and if we privilege the criterion of usefulness as application—is that qualitative inquiry may be devalued and overridden by other methods. Clearly, application to practice is something we are not doing very well. Why?

In this chapter, I do not argue that the basic contribution of qualitative inquiry to knowledge is unimportant, but rather that this basic contribution does not preclude application of that knowledge to practice. I suggest that we qualitative researchers are ourselves in part to blame for this lack of appreciation for qualitative inquiry, for as researchers we undervalue and underestimate the strength of our own inquiry. We fail to generalize, to cite each other, and sometimes even to use our own research incrementally, purposefully building research programs that target application.

I suggest here that with individual projects we do not extend our findings far enough for practitioners to use, or for quantitative researchers to take seriously and use as a foundation for their inquiry. By far enough, I mean that we cease our investigation before we have made concrete recommendations with practical instructions for clinicians, and before we have provided enough data for applied researchers to set up a program to implement, test, and make a standardized part of practice. This leaves the clinicians who read our articles thinking, “Oh, how interesting, but so what?” It leaves our research in limbo, stuck in journals unread and waiting for the inevitable 5- or 10-year expiry date, only for us to find some time later that our studies have been replicated and our findings rediscovered (Morse 2003).

But I go further. I suggest a solution, an advocate: a new clinical role for a nurse responsible for overseeing the implementation of changes arising from our research.

2 Types of Contributions for Qualitative Research

When we examine the research programs of qualitative researchers who have made important contributions, we find they may be categorized into three groups: (1) individual projects; (2) lateral research programs (multiple projects broad in scope); and (3) incremental research programs (multiple projects addressing a particular problem).

The first type of qualitative research that may be applied is the most common unit of qualitative research—the single publication. Although our “use” criterion demands that each article or book have some level of relevance and application, these single studies are often unique and do not interface with the work of others, so they frequently pass unnoticed in our growing number of journals.

The second group has conducted significant projects within a general area, developing knowledge piece by piece, laterally, and even developing a subfield. An example of such a contribution is that of Arthur Kleinman (Collins 2011), who developed a field combining medical anthropology, psychiatry, culture, and health. His interests have ranged across the illness experience (Kleinman 1988), social suffering (Kleinman et al. 1997), global health (Kleinman 2010a; Kleinman et al. 2008; Farmer et al. 2013), and caregiving (Kleinman 2010b). These are all important works complete in themselves, but they occur in different (but complementary) domains around a central area of inquiry.

The third type is an incremental research program, and this is of equal importance. It consists of a targeted (and funded) research program, comprising a number of related projects in a particular substantive area. These projects focus on a particular problem and work towards an intervention. Such programs of research are therefore less broad in scope than the second type. For instance, Beck systematically studied postpartum depression for a number of years, beginning with phenomenological studies on postpartum depression (Beck 1992, 1993) and a meta-analysis of predictors of postpartum depression (Beck 1996). Next, with a collaborator, Gable, she developed a screening scale for postpartum depression (Beck and Gable 2000), compared its performance with other instruments (Beck and Gable 2001a), conducted further validation (Beck and Gable 2001b), and published a Spanish version of the scale (Beck and Gable 2003). From this the value of qualitative inquiry is clear—indeed, the validity of the subsequent quantitative studies rests on the foundation of the qualitative theory developed in the first projects. Both the second and third types of qualitative research programs consist of numerous articles and sometimes even books, all complete in themselves. They are most effective and have the most impact as a group created by an individual researcher. Yet we do not always respect the work of a single researcher in qualitative inquiry, denigrating the researcher as one who “only cites themselves” (even if there is no other work to cite). Further, as Beck (2013) notes, such incremental work by a single investigator is quite uncommon.

3 The Single Qualitative Study

I began this chapter by suggesting that our primary task in qualitative inquiry is to develop knowledge that will contribute to practice. Yet, because a single qualitative study is limited in scope, application of findings may not be achieved in one study; it may take several sequential studies to achieve this aim. However, our studies are often very descriptive, and therefore close to practice. There may be other problems that impede application. Some have blamed the practitioners for not using the research literature. However, I suggest an alternative view: that qualitative researchers are hesitant to trust their own findings and therefore hesitant to recommend that others apply them or use them to change practice. In the final part of this chapter, I will suggest some strategies to facilitate the integration of qualitative inquiry into practice.

4 How Bad Is the Problem?

As qualitative researchers, we have been carefully taught several criteria that have impeded our own perception of the worth of our studies, criteria that are, in fact, incorrect. The worst is “Qualitative research is not generalizable” (Morse 1999; Hall 2013). At no time do we expect qualitative research to be generalizable in the quantitative sense, but the findings and the theory produced from qualitative inquiry are generalizable by abstraction, at the level of concepts and theory, and by linking these findings with the concepts and theories of others. In other words,

the knowledge gained is not limited to demographic variables; it is the fit of the topic or the comparability of the problem that is of concern […] it is the knowledge that is generalized.

(Morse 1999, p. 6)

This is an exceedingly powerful form of generalizability, for once we develop, for instance, a concept that others can recognize, it becomes evident everywhere; once we develop an applied theory, it becomes essential. We have seen this with much qualitative inquiry—consider, for instance, Goffman’s work on stigma (1963/2009) and the subsequent impact it has had on health care in the past two decades.

But I must agree that there is a difference between the years of work Goffman put into his work on stigma (a type three research program) and the monographs he published, and the scope of some of the single 15-plus-page articles we see published as qualitative research. Some of these published studies have little cognitive/theoretical development, small samples, and descriptive findings that are obvious, and therefore contribute little. I agree that it is hard to produce enough qualitative articles to make tenure when there is little appreciation for the amount of unseen cognitive work that goes into micro-analytical description or good interpretative qualitative inquiry. The system, in part, does work against us.

But even if we have a solid theory with innovative findings, those findings often remain at the theoretical level. We tend to consider theory as the outcome; we do not attempt to move the findings back into the practice arena and provide the detail necessary for practitioners to actually use our work. We are researchers, after all. So we describe our theories in an interesting form and as generalizable knowledge, but not in a usable form that may actually be applied in a program or policy.

Thus, it seems that qualitative researchers truncate inquiry before they have actually finished or pushed the conceptual results as far as possible, back to application.

I asked: How bad is this problem? I surveyed practice-oriented journals looking for qualitative articles published in a 1-month period, and reviewed these articles for detailed recommendations for implementation. Because of this method of sampling, the articles addressed a variety of topics and many issues. Some of the journals spotlighted the implications in a bordered callout (for instance, the Journal of Advanced Nursing inserts them as a bulleted list in a callout box titled “Implications for practice and/or policy”). But mostly the implications for caregivers are hidden in the Discussion or Conclusions section of the article. For example, in an article describing diabetic immigrants’ self-care and cardiac rehabilitation (Nielsen et al. 2012), the callout for “Implications for Practice or Policy” reads:

  • Understanding the interplay of multiple identities held by immigrant participants will allow clinicians to better understand how to support adoption of cardiac rehabilitation recommendations to mitigate cardiovascular risk.

  • Nurses should probe beyond standardized assessment forms to identify and reinforce immigrants’ creative and effective approaches to diabetes self-care and cardiac rehabilitation.

  • Nurses should advocate for health services that address immigrants’ needs better, such as offering cardiac rehabilitation programmes and materials in other languages and targeting cardiac rehabilitation referrals to immigrants.

(Nielsen et al. 2012, p. 2726)

These implications are, at best, principles for the provision of care, and are too far removed from everyday practice to actually provide guidance for practitioners. Recommendations must be pragmatic, precise, and clear.

4.1 The “Shoulds,” the “Oughts,” and Our “Need to Realize” the “Suggestions”

When we examine qualitative articles for recommendations for practice, the most striking characteristic is that the findings are written in a soft and non-directive way. Authors suggest in generalities what nurses ought to do as general principles, and the recommendations for meeting patients’ needs are not specifically identified. Many suggestions are in the realm of common sense, are not new, and do not seem to be linked directly from the data presented in the project to the clinical setting or patient care. Some of the findings are so obvious that it is unclear why a study was conducted at all, or else the findings fail to inform the reader. For example, the primary recommendation of Marshall et al.’s (2012) study on patients’ views about patient-centered care was the obvious one that “Patients must be included in the discussion around patient-centered care” (p. 2671). The next two recommendations were that “Common attributes around staff behaviours require further exploration” and “Additional exploration of the impact of health system elements on patients’ view of patient-centered care is recommended” (Marshall et al. 2012, p. 2671). These are vague recommendations that did not necessarily require a research study to develop. Thus qualitative findings may appear weak, obvious, and lacking specificity. Without innovation, direction, and just plain “oomph,” it is not surprising that qualitative findings are ignored. The ideal research program would be carried on to the point where an intervention program is developed, implemented, and evaluated.

Some qualitative articles have made a fine contribution to developing theory, but the findings have been left at the theoretical level without informing the reader what the theory means or, most of all, what to do. It is simply expected that the reader will recognize “what is going on” and therefore experience an enlightened understanding of the patient’s situation. But again, actual guidelines for care are omitted. The question remains: why?

4.2 Why Do Qualitative Researchers Not Trust Qualitative Research?

Qualitative researchers, on the whole, are cautious in the presentation of their findings. It is as if they adhere to quantitative principles when planning their research, designing their study, and submitting it for publication. Qualitative researchers are reluctant to predict, or to claim causality (Hall 2013). Not understanding that qualitative research is generalizable, through the development of internal rigor (Meadows and Morse 2001) as well as through its theoretical outcome, authors may inappropriately delimit their work. They apologize for its small size, and for the lack of randomization and of representation of the general population. Similar problems occur with the overzealous administration of interrater reliability in interpretative research (Morse 1997), with prematurely truncating data collection, working with meager data sets, and cherry-picking data for presentation in the results (Morse 2010). Researchers confuse analytic procedures for categorizing and theming (Morse 2008) and have an overt fear of theorizing. Although we have moved a long way in the past two decades, there is still a considerable distance to go in communicating the principles of qualitative inquiry to both quantitative and qualitative researchers.

The result is a crisis of confidence in qualitative researchers. If they themselves do not trust their findings, then their recommendations for practice are made lightly, cautiously, and tentatively.

5 Making a Contribution: Requirements of Qualitative Inquiry

How are the results of qualitative inquiry used in practice? Presently, this is not very clear: the clinician reads or hears about a theory with relevance to practice and somehow keeps it in mind. But the theory usually contains very few specific recommendations for practice. Yet developing a theory is the most frequent outcome of qualitative inquiry. Because of its loose application to clinical action, the theory is virtually ignored by others, even by the researchers themselves. Generally, theory is supposed to assist the nurse to “make sense” of whatever is going on with a particular client, but it lacks specific directions.

5.1 The Theory Fails on Its Lack of Pragmatic Application

Some theories are supposed to provide a foundation for practice. But how such theories fit into ongoing practice and what they offer the clinician is not stated. One could ask: Does learning a caring theory make a caring nurse? That is, does the theory teach how to care by providing directions for caring, or how to recognize a caring nurse? Or, perhaps, does a caring theory explain what a caring nurse does, compared with a non-caring nurse? And the most tricky: Does caring make a difference to patient outcomes?

This raises the question: if theory is actually used clinically, how? And is it important in solving clinical problems? Or is it simply window dressing for a hospital: adopted, distributed in brochures and philosophy statements, but not consulted again, nor demonstrated at the bedside? These questions are of utmost importance given the theory requirement of the Magnet program for improving standards of care in hospitals (ANCC, http://nursecredentialing.org/Magnet/ProgramOverview).

Theory is supposed to inform practice. We rarely read a qualitative study that reveals to the reader “what is going on” and combines the general with the particular, connecting the theory to a particular client, in a convincing and compelling way. Theory that informs must be logical, have grab and fit (Glaser 1978), and provide a clear direction for practice, yet we often fail to meet this dictum. This is an obvious statement and a criterion for the evaluation of grounded theory (Glaser 1978) that may be applied to all qualitative research with the goal of theory development. Minimally, the theory must provide a detailed description of the participants’ behavior as it moves through the theoretical description; it must link the behavior to the context (when, where, and why the behaviors occur) and specify what the clinician must do to modify these behaviors. These interventions must be clear enough to be followed and implemented, and the outcomes observed and preferably measured.

Of course, if such a qualitatively derived theory could be readily tested, the final project in the research program could be such a study (see, for example, Wuest et al. 2013). However, this is easier written than done. Recall that the reason a researcher used qualitative inquiry in the first place was that there were no identified concepts within the topic to measure, as would be required to conduct a quantitative study. At the end of the qualitative study, even though new concepts have now been identified, the problem for quantitative measurement remains—the only pertinent concepts are those just developed—and, of course, it would be a tautology to use the just-developed concepts to measure themselves. Again, qualitative researchers must trust their own procedures for establishing internal rigor (Meadows and Morse 2001), so that qualitative findings are rigorous, solid, and tested within the process of qualitative inquiry. Remember that quantitative testing is not necessary and often not possible.

6 Essential Project Requirements Necessary for Application

If a qualitative project (or series of projects) is to make a pragmatic contribution, certain characteristics must be in place:

  1. Rigor: the project must be sound;

  2. Relevance: the project must address a significant clinical problem;

  3. Comprehensiveness: the findings must be comprehensive in scope, depth and time;

  4. Role: the project must be explicit in its mode of utilization, thereby fitting into practice; and

  5. Forms of recommendations: recommendations must be consistent with and used in practice.

While this list may appear simplistic, as I discuss each I will also discuss the inhibitors to achieving these criteria.

6.1 Rigor

The project must be conducted rigorously and transparently, so that the reader/user is convinced that the study is sound. The criteria are twofold: methodological and substantive rigor. The researcher must provide clear evidence that the criteria for rigor have been met: sampling adequacy and appropriateness, methodological coherence, description of theoretical construction, and so forth, as well as adequate substantive evidence of the nature of the data and examples of the cognitive development of the theory as it is abstracted and linked with the literature.

Inhibitors: Qualitative researchers frequently do not understand the principles of qualitative inquiry and use quantitative criteria instead. As previously mentioned, researchers themselves do not appreciate the generalizability of their own work. They write that, as the study was conducted in one ____ (nursing home, community, fill in the blank), it therefore cannot be generalized to other ____ (elderly, communities, fill in the blank). Researchers do not understand how qualitative research is generalized (Hall 2013; Morse 1999). Thus it is the author’s own erroneous recommendation, that his or her findings not be generalized, which blocks the application of those findings.

6.2 Relevance

The project must address a significant clinical problem. This is the most difficult criterion to meet, for it is often not possible to conduct basic research that has direct application. Alternatively, as the study progresses, a problem that initially appeared to have clinical application may become more complex and lose its direct link to application. The most unfortunate scenario is that, in the conduct of an apparently basic study, relevance for application emerges, but the researcher is not aware of this aspect of the study.

Inhibitors: The most common inhibitor to relevance is revealed in the case above. We are so ingrained “not to go beyond our data” that we forget that in qualitative inquiry it is the concepts and theory that are transferred, and that our findings in one study have relevance for similar problems in other areas if similar characteristics are present. For example, a study on privacy in a nursing home has relevance for other groups with problems of privacy maintenance when residing in another total institution (see Applegate and Morse 1994).

6.3 Comprehensiveness

The findings must be comprehensive in three dimensions: in scope, in depth, and over time. The trick in qualitative inquiry is to keep sampling, collecting data, and analyzing until you are finished, i.e., until your theory is complete, rich, convincing, and significant, so that you “know it all.”

Inhibitors: Too many qualitative researchers terminate data collection before they have adequate data. Others may have delineated their problems so tightly that the findings become obvious and the variation and complexity of the context are not appreciated. Or they use an inappropriate method: for instance, semi-structured interviews delimit data, forcing participants to focus unnecessarily, and, when administered to a small sample, result in data inadequacy (Morse 2012), a common problem with some methods, such as interpretative phenomenological analysis (IPA) (Smith et al. 2009). And, as previously mentioned, the problem may lie in the analysis, with researchers not trusting (nor defending) their own interpretive analysis, and conducting interrater reliability checks that further restrict the development of interpretative theory (Morse 1997).

6.4 Role

The project must be explicit in its mode of utilization. Any recommendations for practice that emerge from the study should be more than simple admonishments or principles. If you tell practitioners what to do, you must also tell them specifically why and, most importantly of all, how.

Inhibitors: We cannot tell practitioners what they should do without telling them how and why. For example, I once identified “talking through”—that is, the way nurses talk to conscious and extremely distressed trauma patients. Talking through assists these patients to maintain control and not to fight caregivers (Proctor et al. 1996; Morse and Proctor 1998). Talking through needs no instruction—nurses are already doing this in trauma rooms, without documentation, learning from each other. They realize it is important, but it is not a part of nursing texts or instruction, nor is it incorporated into their work role. Our research identified and documented the practice. We disseminated, presented, and published. But talking through remains an informal strategy for care.

Importantly, we have demonstrated the qualitative adequacy of the procedure, but we have not demonstrated concrete efficacy, such as how talking through makes a difference to morbidity and mortality in trauma care. Doing so would require a randomized controlled trial, and we could not determine how such a study could be designed, given the complexity of trauma patients. Our only evidence of efficacy was that the patients did not fight off the caregiver and remained in control until the anesthetist took over. Those who were partially aware told us that they “just heard the nurse’s voice and held on.” Secondly, nurses themselves do not have control over their workload, so talking through was done only if there was a nurse available who “had time.” Administrators were not prepared to provide an extra staff member in the trauma room to be there for the patient. So the practice was not formalized, and it is now being lost in the literature as the 10-year anniversary of its publication nears.

6.5 Form of Recommendations

The research must be developed and disseminated in a form that is accessible to the clinicians, or to the nurse educators and students.

Inhibitors: The main problem is that researchers publish in research journals, while clinicians read clinical journals. Attempts to remedy this disconnect have been made, but with limited success. For instance, Evidence-Based Nursing summarizes research articles down to the essential details needed for application, for speed-reading clinicians who are supposedly not concerned with the technical details.

Occasionally, qualitative researchers will take their research to the level of practice, developing it into assessment tools and so forth. But this is a several-step process: we must decide what a “contribution” actually is, and then a program must be developed and subsequently tested.

6.6 What Is “a Contribution to Practice”?

When we examine the work of qualitative researchers who have made a difference, we find definite patterns of research programs: those who have worked in different substantive areas, and those who have worked on a single problem or phenomenon intensively throughout their career, in a cohesive research program. But can a single qualitative project have an impact on a discipline?

Obviously, whether or not a single article can have important implications for practice depends on the topic and the researcher’s agenda. Some projects may be implemented as basic research, research to find out what is, but the majority of research in health has practical application. Next, the level of abstraction and the form of the findings must be considered. If the researcher was addressing a practice issue, and moved the inquiry into areas that enabled recommendations to be made, then the recommendations should be in a form that the practitioner can use. For whom is the intervention suited? The recipients of the program should be identified. Researchers need to present clear goals, the steps for achieving those goals, an outline of the implementation program, the specified outcomes, and plans for evaluation. Such detail may be presented in a second article, and even in a different journal. If the research is conducted by a team, then it is possible that one of the team members is an expert in translational research.

7 Strategies for Moving Qualitative Research into Practice: Forms of Recommendations

7.1 From Descriptive Studies

The most straightforward interventions are from descriptive studies—developed from observations in the clinical area. Often these studies are conducted by videotaping care, and explicating the interventions used—of which the caregivers may or may not be aware.

An example of such an intervention was previously mentioned as “talking through,” or the comfort talk register (Proctor et al. 1996), which enabled patients in extreme distress to maintain control. An adaptation of this was then developed for patients in second-stage labor (Bergstrom et al. 2009).

Mary Beth Happ (2013) conducted microanalytic studies of patients on ventilators in the ICU, examining the ways that nurses could best communicate with these patients. She developed a very basic training program, iSPEAK, to facilitate nurse-patient communication, consisting of the necessary equipment and teaching nurses how to read patient cues (Happ et al. 2010).

7.2 For Assessment Using Theory

Moving theory from grounded theory back into clinical practice is remarkably straightforward. The format of grounded theory in stages, phases, and behavioral descriptions lends itself to recognizing the behaviors that led to the theory, and each behavior, with a description of how it appears in each stage or phase of the process, is clearly delineated. These descriptions may then be converted into an assessment guide (Morse et al. 1998). Each behavioral description becomes an item, converted into a question asking the clinician whether the behavior is present or absent. If the behavior is absent, then strategies are identified for the clinician to assist the person to work towards attaining those goals.

7.3 Assessment Guidelines

The Hope Assessment Guide (Penrod and Morse 1997) used a stepwise theory of the attainment of hope (Morse and Doberneck 1995) to develop criteria for assessing a patient’s stage of hope and the ways staff may help them attain hope. For instance, the first step, before one commences the process of hoping, is to identify what one is hoping for and hoping against, recognizing the threat. Next, one identifies the steps necessary to achieve the hoped-for goal: making a plan, envisioning alternatives, setting goals, and bracing for negative outcomes. As the person moves through the process, they must periodically take stock, realistically assessing personal and external resources and conditions. They reach out, soliciting mutually supportive relationships; continuously evaluate signs of reinforcement; and hold on, determined to persevere (p. 1062). In this way the caregiver may identify the person’s level and needs in attaining their hoped-for goal, and support them in this process.

7.4 Use by Clinicians: Developing Assessment Guides

Clinicians who observe clinical problems do not necessarily have to start as researchers. They may find a reasonable amount of qualitative literature on their topic of interest—even grounded theories, and even a model or theory that already addresses their concern. Such was the case for Vennie Ying, a DNP student, who wanted to identify a way to intervene with mothers who had experienced traumatic birth. She was aware of Beck’s (2004a, b) model of causation and interventions for traumatic birth, and recognized the need to develop an assessment guide. The method of converting a theory to assessment questions (above) provided a straightforward and valid way to develop such a guide (Ying in review). Such a guide enables the collection of complete and systematic information and allows the application of appropriate interventions.

7.5 Research Programs

Research programs beginning qualitatively may extend in several directions. Using the initial qualitative study (or studies) as a theoretical base, they may move into quantification—to the development of a questionnaire. They may extend even further, developing into intervention programs or becoming a foundation for policy.

7.5.1 Developing Quantitative Questionnaires

Quantitative surveys and questionnaires are frequently constructed from qualitative data. Most often they are developed from focus group data, which determine the theoretical structure of the questionnaire and provide the wording for the items, derived directly from the group. Using focus group data to develop the questionnaire is considered more valid than developing the questions from the researcher’s own perspective. One warning about this approach: the validity of the questionnaire (and ultimately the entire study) rests heavily on the focus group data; it is therefore essential that these data (usually themes) be adequate and appropriate for their purpose.

A more satisfactory way to develop the theoretical basis of a programmatic study is to conduct a qualitative study first, then use the resulting theoretical development as the foundation of the questionnaire. An example of such a research program was an inquiry into adolescents’ responses to menstruation. This research program consisted of the following projects:

  1. Developing a qualitative base and theoretical structure: This was acquired by administering a semi-structured (written) questionnaire to girls in grades 7–9. Content analysis provided the theoretical structure (Morse and Doan 1987).

  2. From this, a trade book was written for girls, using their words and their questions, explaining menarche (Doan and Morse 1985).

  3. Using the theoretical structure as hypothesized factors, we developed a Likert scale (Morse et al. 1993) and administered it to all girls in grades 7–9 in 48 randomly selected schools. From this we validated the scale and obtained normative scores and symptoms for girls (Morse and Kieren 1993).

  4. Two other quantitative studies followed: one studying preparation factors for menarche (Kieren and Morse 1992), and one studying developmental and post-menarcheal factors (Kieren and Morse 1995).

Most important is the way that all of the quantitative studies, including the questionnaire, “sit” on the foundation of the qualitative study.

7.6 Developing Intervention Programs

Wuest and her collaborators (Wuest et al. 2013) conducted a series of studies on the health of women after leaving an abusive partner. These studies used grounded theory, and the core variable was “Strengthening Capacity to Limit Intrusion.” This process consisted of six components: Managing basics; managing symptoms; regenerating family; renewing self; cautious connecting; and safeguarding.

The investigators then translated the theory by developing a program, iHEAL, for women across Canada. As they launch the program, all instructors are taught the theory, and all activities and practices are cohesively linked. The next step, which they are now embarking on, is the evaluation of these programs across communities.

Note that these researchers are extraordinary in that they: (a) converted their research into a useable program that may be clinically applied, and (b) moved across Canada training instructors in the use of their research.

7.7 Using an Advocate

Researchers are the scarcest resource in this equation. We cannot all take time to travel across the country to “sell” our interventions. It seems that our present model of publishing and waiting for our findings to be noticed and used is simply not working. Simply placing our findings out there is not realistic and does not give our studies enough “dose,” or “hard sell,” to be adopted into the clinical arena. Perhaps it is time to introduce a new clinical partner, the advocate. This person will work between the emerging research findings and the clinicians with the problem. This person will evaluate the suitability of the evidence for the clinical area, set policy, develop educational programs, and oversee change.

7.8 Developing Policy Recommendations

Some have used policy recommendations to make change. Kayser-Jones’s research program has remained focused on the care of the elderly in nursing homes for almost three decades. Within the context of nursing homes, she has systematically studied critical aspects of care for the elderly, primarily using ethnographic methods. Her ethnographic dissertation compared the care of the institutionalized elderly in the USA and Scotland. In Old, Alone, and Neglected (Kayser-Jones 1989/91) she focuses on the use of restraints in the USA, comparing it with fieldwork on restraint-free care in Scotland. Since then, Kayser-Jones has systematically explored acute illness in nursing homes (Kayser-Jones 1995), dehydration (Kayser-Jones et al. 1999), malnutrition (Kayser-Jones 2002), pain management (Kayser-Jones et al. 2006), and pressure ulcers (Kayser-Jones et al. 2008, 2009), drawing national attention to conditions in nursing homes. Finally, she raised the important question: do nursing homes promote health or dependency in the elderly? (Kayser-Jones et al. 2009)

Using ethnography, it is relatively easy to translate findings into a language that policy makers, governmental agencies, politicians, and the lay public can relate to and understand. Dr. Kayser-Jones wrote lay publications, appeared on radio and television, and in 1997 appeared before the US Senate Special Committee on Aging, thereby moving her research into policy at the highest level.

8 Conclusion

In this chapter, I suggest that the reason for the lack of appreciation of qualitative findings is that even qualitative researchers themselves are not confident in their own results. Evidence of this hesitancy may be found in the form of the recommendations themselves—they are tentative suggestions rather than specific strategies for active practice. There is an urgent need for qualitative researchers to gain confidence in their work and to present it in a way that is meaningful for clinicians.

I recommend five criteria necessary for implementation (rigor, relevance, comprehensiveness, the role of the recommendations, and the form of the recommendations) that will greatly facilitate the fit of the findings with practice and, hence, their adoption by clinicians. The final step, examining strategies for moving qualitative research into practice, using assessment guides, and developing intervention programs, will provide clinicians with concrete tools to assist them with patient care.

But placing our results in a form in which they may be used is still not enough. I am recommending a new clinical role, the advocate: someone assigned to evaluate both the research and the practice. That person will be responsible for implementation: for the education of staff and the changing of policy. We can no longer put the onus on busy staff to scan the literature, identify solutions, and, one by one, put them into practice. It is a professional and institutional responsibility, as well as an individual quality-of-care issue.