1 Introduction

Criteria for ensuring rigor in qualitative research are well documented in the existing literature (e.g., Cohen and Crabtree 2008; Morse et al. 2001; Morse et al. 2002). Given the important role that recognition of the determinants of health (Public Health Agency of Canada 2013) plays in advancing health sciences research, and the contributions that qualitative approaches are making to this body of evidence, there is an ongoing need for clarity in the processes that contribute to the rigor of research and its outcomes (Cohen and Crabtree 2008; Meadows and Morse 2001; Morse et al. 2002; Tracy 2010). Morse et al. (2002) note that strategies that are part of the overall research design, and that are fully integrated into the research process, demonstrate and provide evidence for the rigor of qualitative research. They further note that debates regarding rigor in qualitative research have ranged from arguments over terminology (Hammersley 1992; Kuzel and Engel 2001; Yin 1994) to debates over the universality of strategies of rigor across theoretical paradigms (Bochner 2000; Guba and Lincoln 1989; Guba and Lincoln 2005; Tracy 2010). We agree with Morse et al. (2002) that these debates led to a focus on the evaluation of research outcomes, leaving attention to rigor during the research process (from conceptualization to publication) lacking.

Identification of the techniques that support rigor is ongoing. Cohen and Crabtree (2008) use terms including the importance of the research, doing ethical research, coherence, validity, and verification of the research. Meadows and Morse (2001) describe strategies of verification and validation within the research process, including techniques of study design, bracketing, member checks, and auditing, among others. Tracy (2010) presents a model for quality in qualitative research through eight “Big-Tent” (Denzin 2008) criteria for excellence in qualitative research; these include “worthy topic, credibility, sincerity, meaningful coherence and ethical research” (p. 839). Morse et al. (2002) argue for a return to the terms and strategies of validity and reliability, including techniques of verification as a process internal to the research and the responsibility of the researchers. Very simply, Morse et al. (2002) write, “Verification is the process of checking, confirming, making sure, and being certain.” They note the need for methodological coherence, sufficient sampling, and a dynamic, iterative process throughout the research that addresses sampling, data collection, data analysis, and other techniques of verification. Readers who want more information on how to implement the various techniques of verification and validation are referred to the cited articles.

The focus of this chapter now turns to further discussion of verification and validation. Strategies of verification are internal to the research process and are the responsibility of the researchers and their team. The techniques used in the verification process incrementally contribute to the validity of the research (Meadows and Morse 2001). Qualitative research is iterative: that characteristic both provides the opportunity for, and demands, attention to the work and fit of the research as a whole to ensure rigor. The process of validation is likewise internal to the research project and likewise the responsibility of the research team.

The strategies that, taken together, are inherent to the verification process include the following: the preparatory literature review that situates the research questions; the study design that identifies the strategies and techniques that will coherently and cohesively guide the research; a budget that reflects and fully supports the research project; and careful choice of the research team (members internal to the funded team, such as investigators, partners, research assistants, transcriptionists, and managers, and those external to it, such as auditors or evaluators).

Similarly, traditional and evolving strategies have been established for the validation process. These include establishing an audit trail to make clear the timing and rationale of methodological decisions during the research process (Lincoln and Guba 1985); inter-rater reliability; the use of multiple methods; and the often misunderstood technique of member checks. Morse et al. (2002) have noted that member checks constitute a technique of rigor early in the research process but can invalidate the researcher’s analysis and interpretation if used inappropriately; their use must be appropriate to the task.

Good researchers attend to the quality of qualitative research while conducting it and also evaluate the results once the research is complete. Quality qualitative research is important for many reasons: to educate those new to qualitative paradigms; to ensure funders and evaluators are exposed to excellence in qualitative research; to debunk the myth that qualitative research is simple and easy; and to provide evidence for policy and practice decisions, among others. Terms that are common in qualitative research must be identified and put into practice during the research process with adequate understanding of their use and utility. While many terms are well established, the dogmatic use of terms without understanding undermines, rather than supports, the processes of ensuring rigor. The field of qualitative research continues to make strong contributions to a variety of scientific disciplines, and additional tools for ensuring rigor are being developed and tested. Sometimes funding agencies require assessments of rigor by the research team as part of the overall research design; in other instances the research team members recognize the value of engaging an expert in qualitative rigor from outside the project team to work with them and provide ongoing feedback throughout the tenure of the project. No matter the source of the decision to include a person whose specific role is to assess rigor internal to the research project, these experts play an important role and make an essential contribution to the nature of qualitative evidence.

The remaining sections of this chapter provide illustrations of rigor in qualitative research as assessed from within a research project by its team members and assessment of quality from outside the research team by an independent scientist. These two approaches have been identified by Reynolds et al. (2011) in the context of health research policy. Reynolds and her team used a meta-narrative approach in a search of journals, databases, and grey literature to investigate the nature of quality in qualitative research. They identified two main narratives in the literature: “The first focuses on demonstrating quality within research outputs; the second focuses on principles for quality practice throughout the research process” (p. 43). They reiterate the importance of steadfast attention to quality throughout the research process.

Two strategies for assessing the reliability, validity, and trustworthiness of qualitative health research are addressed in the following pages. In the first section we discuss evaluation and illustrate a rigorous approach to evaluating qualitative health research from the inside. While the word evaluation may carry connotations of external assessment in qualitative research, evaluation as an internal approach recognizes evaluation as an a priori part of the research design. In the second section of the chapter we discuss auditing qualitative health research and present an example of a research project being audited, while in process, by an expert external to the research team.

Evaluation is concerned with assessing how well a project’s processes operate. Evaluation requires careful design, collection, analysis, and interpretation of data. Evaluation also has an important learning purpose. It should provide clear feedback to everyone involved in the research project: researchers, staff, funders, and the wider community (Hart et al. n.d., p. 9). Audit is a quality assessment process where performance is measured against predetermined standards within defined parameters or criteria, which are chosen as important indicators of overall performance. Changes can then be implemented to improve performance (Hart et al. n.d., p. 7).

2 Evaluation of Qualitative Health Research

To assess the quality of qualitative health research, we support the notion of using guidelines on the condition that they keep the key features of qualitative research in mind and are not rigidly prescriptive. As an alternative approach, we suggest the use of guiding principles and questions because, in this way, we can retain flexibility and creativity and promote rigor and transparency.

There are many labels for evaluation approaches, and the labeling is inconsistent. Evaluations may be classified by purpose (e.g., formative or summative (Scriven 1986)), end user (e.g., utilization-focused (Patton 2002)), evaluator role (e.g., internal or external), stakeholder role (e.g., participatory), methodology (e.g., qualitative), or ideology (e.g., feminist). With this in mind, we take an approach that is inclusive of evaluation types and approaches and does not privilege one over another. Rather, the approach used to assess the rigor of the qualitative research process must be consistent philosophically and methodologically throughout the evaluation process.

What is quality in qualitative evaluation? As with the criteria for qualitative research, the evaluation of the research process should be valid and reliable (however these terms are assessed in qualitative research), methodologically sound, ethical, and logical, and should show congruence between evidence and judgements. Regardless of whether the person responsible for assessing the quality of the research process (the evaluator) is internal or external to the research team, attention ought to be paid to his/her credentials and expertise. For example, the American Evaluation Association (2004) proposed five principles (encompassing 25 standards) that evaluators should uphold:

  • Systematic inquiry: Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.

  • Competence: Evaluators provide competent performance to stakeholders.

  • Integrity/honesty: Evaluators ensure the honesty and integrity of the entire evaluation process.

  • Respect for people: Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.

  • Responsibilities for general and public welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.

The research team should ensure that whoever is responsible for assessing the quality of the research enterprise is qualified for the role, that he/she understands the research process from conceptualization through design and analysis, and is able to assess the integrity of the analytic process and the interpretations derived from it. The evaluator ought to have full access to the research team and documentation, while still enjoying a degree of autonomy that allows independence and lends credibility to the findings of the assessment. The evaluator’s findings must be forthright and honest while still respecting the dignity of the research team and the inevitable challenges they faced while carrying out the research. Above all, the evaluator must be cognizant of the ethics of qualitative research and ensure that no harm has come to any participants.

Further, the evaluator should uphold the following four key principles:

  • Utility: The assessment of the qualitative research process should serve the information needs of the research team and funders.

  • Feasibility: The assessment should be realistic, prudent, diplomatic, and frugal.

  • Propriety: The assessment should be conducted legally, ethically and with respect for the welfare of those affected by the results.

  • Accuracy: The assessment should be conducted rigorously and be well documented so that conclusions are defensible, valid, and reliable.

(Adapted from The Joint Committee on Standards for Educational Evaluation (1980; revised 1994))

Whether you are an internal or an external evaluator, these guiding principles should inform how you do your work. Further, consideration should also be given to timeliness, clarity about the context of the program and the evaluation, and (as far as possible) the perspectives of all stakeholder groups.

3 A Case Example of Evaluation from Inside a Project

A call for proposals for cardiovascular risk reduction projects was released and, with the collaboration of key community leaders, the principal investigator (PI) put together a proposal for funding that addressed the community’s concern about the health status and health knowledge of a particular ethnic community in a large urban Canadian city (Jones et al. 2013). The project was designed as a mixed methods cardiovascular risk reduction screening program that took place in faith institutions in a number of neighborhoods over a period of several months. A condition of the grant was that an external evaluation be conducted. The PI contacted the evaluator for a consultation, and it was decided that the evaluator would become an integral part of the project team.

3.1 Understanding the Project

The first step in the process was to use the proposal and, with team members, create a logic model of the project. This important step allows the epistemology, that is, the knowledge and assumptions underpinning the project, to be articulated and used as a framework for judging the research process. Epistemology is critical to the development of research (and evaluation) questions, methods, and interpretation of the data collected. Using Dwyer and Makin’s (1997) framework, a logic model was developed that described the project goals, target groups, component activities, long- and short-term process and outcome objectives, and the resources available to the project. After some discussion explicitly linked to quality assurance, indicators were determined for each objective, and the evaluator then designed the evaluation, determined data collection and analysis methods, created the instruments for data collection, and submitted the evaluation proposal to a Research Ethics Board (REB) for approval. The evaluation question was straightforward: What went well, what did not, and what do we need to change for next time? The PI had a set of research questions that guided the research itself and a parallel application was submitted to the REB for the research project.
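To make the structure of such a logic model concrete, the following is a minimal sketch of how its categories could be recorded as a simple data structure. The category names follow Dwyer and Makin’s (1997) framework as described above; the example goal, activities, and indicators are hypothetical illustrations, not the project’s actual content.

```python
from dataclasses import dataclass, field


@dataclass
class Objective:
    """A process or outcome objective, with the indicators chosen for it."""
    description: str
    kind: str                       # "process" or "outcome"
    term: str                       # "short" or "long"
    indicators: list = field(default_factory=list)


@dataclass
class LogicModel:
    """Categories from Dwyer and Makin's (1997) framework, as listed above."""
    goals: list
    target_groups: list
    component_activities: list
    objectives: list
    resources: list


# Hypothetical content for one corner of the screening project's model.
model = LogicModel(
    goals=["Reduce cardiovascular risk in the target community"],
    target_groups=["Adults attending faith-institution clinics"],
    component_activities=["Volunteer training", "Screening clinics"],
    objectives=[
        Objective(
            description="Train lay volunteers to run screening stations",
            kind="process",
            term="short",
            indicators=["Volunteer confidence survey scores",
                        "Observed station errors per clinic"],
        ),
    ],
    resources=["Screening equipment", "Hypertension Canada materials"],
)

# The evaluator's design work starts from the indicators attached to
# each objective.
for obj in model.objectives:
    print(obj.description, "->", obj.indicators)
```

Writing the model down in this explicit form makes the link between each objective and its indicators auditable later, which is exactly the purpose the logic model served in the evaluation design.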

To ensure transparency that the principles of qualitative research were being followed, team members kept journals describing the processes of research readiness. Thus began the assessment of decisions, assumptions, interpretations, and adherence to best practices in community-based research. During the pre-intervention period, the journals detailed access and entry processes and challenges, recorded research team communications with key stakeholders (e.g., community physicians, faith leaders, volunteer coordinators), and tracked the various activities undertaken to put the screening initiative in place.

While waiting for REB approval for the research and the evaluation, equipment and supplies were ordered and delivered, faith leaders were approached to gain access and entry to the community, documents were translated into the several languages prominent in the community, and lay volunteers were sought and trained by the project team. The PI met with physicians in the target community to explain the project and prepare them to receive letters if screenings indicated a need for referral for hypertension, hyperlipidemia, or hyperglycemia. All activities were documented for process evaluation purposes.

Upon implementation, the faith leaders helped to inform their members by announcing the clinics during services, and posters were placed in prominent places to encourage people to attend. Clinics were set up to take place once services were over; lay volunteers staffed the stations (welcome and registration, consent, weight and waist circumference, blood pressure, random glucose, cholesterol, and consultation).

3.2 Ensuring Adherence to the Principles and Protocols of Quality Research

Once the actual intervention began, demographic data were collected to describe the attendees and baseline health data were recorded; unique identifiers were assigned so that attendees’ names would not appear in the data set. Attendees received educational materials in their language, letters to their family physicians if their screening results fell outside the normal range, and consultation about what their results meant. While the actual screening results were part of the outcomes of the research itself, how the clinic processes worked and participant satisfaction were part of the quality assessment of the project, in accordance with the epistemological position of the research.
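As an illustration of the de-identification step described above, the following is a minimal sketch of mapping attendee names to unique identifiers so that only the identifier travels with the analytic data set. The field names and identifier format are hypothetical, not drawn from the project.

```python
import uuid

# Assigning unique identifiers so that names never appear in the
# analytic data set. The key map linking names to identifiers is kept
# separately (and securely) from the screening data.

def deidentify(attendees):
    """Return de-identified records and the separately stored key map."""
    key_map = {}        # name -> attendee_id; stored apart from the data
    records = []
    for person in attendees:
        if person["name"] not in key_map:
            key_map[person["name"]] = uuid.uuid4().hex[:8]
        record = {k: v for k, v in person.items() if k != "name"}
        record["attendee_id"] = key_map[person["name"]]
        records.append(record)
    return records, key_map

attendees = [{"name": "A. Example", "systolic_bp": 142, "glucose": 6.1}]
records, key_map = deidentify(attendees)
print(records)   # names no longer travel with the screening data
```

Keeping the key map apart from the records means re-identification (e.g., to send a physician letter) remains possible while the analytic data set itself carries no names.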

Attendees were then randomly assigned to buddy support or no ongoing support. At the second screening clinics, which took place several months later, attendees were again screened, and a questionnaire collected information about any lifestyle changes they had made since the first clinic and whether they had consulted their physicians. During this period the community coordinator (CC) maintained a journal to capture her interactions with attendees, volunteers, and the research team. This journal allowed the evaluator to assess whether or not the principles of the research had been upheld, since the CC worked remotely from the research team. The journal was used in triangulation with the records of the project coordinator (PC) and the PI (who were also maintaining records) as a method to examine research rigor during the time between screening and re-screening. Team members did not share their journals with the evaluator; rather, they used them to assist with recall when they were interviewed.

3.3 Ensuring the Clarity of the Role of Evaluator Within the Team

Although the evaluator was a part of the team, her role was at arm’s length as far as project implementation was concerned. She was available throughout to answer questions and provide advice when evaluation issues arose that needed resolution. Since the intent was to conduct a combined formative-summative evaluation, it was expected that the research process would be adjusted during implementation. Key team members used reflective journaling to capture their experiences, and regular team meetings allowed emerging issues to be discussed with the PI. Forms of evaluation data collection throughout the project included document collection and analysis, observation, surveys, and key informant interviews. It is worth reiterating that, because human subjects were involved, the evaluation plan had been approved by an REB.

3.4 Ensuring the Quality of the Resources Used

Since the project was in part sponsored by Hypertension Canada, documents from that source were used to inform hypertension education. A manual was prepared by the PI to inform volunteers about hyperlipidemia and hyperglycemia. Educational materials were prepared and compared to the literature and best practice guidelines for the various topics. Pamphlets and posters were translated and back-translated to ensure accuracy. The evaluator assessed the quality of the materials created for the project against the documents from Hypertension Canada to ensure their consistency.

3.5 Ensuring Quality Data Collection

Volunteers were trained by the CC and the PI with respect to the various aspects of the screening process and the machines used. Surveys were used to capture the confidence, competence, and comfort of lay volunteers with the various machines and the new information they had been taught. A survey was also used to assess the degree of collaboration intended, and the knowledge gained, by physicians who attended the informational seminar led by the PI.

The evaluator attended several screening clinics to capture the essence and culture of the project as it was being conducted. She acted as a pure observer and did not interact in any way with the attendees or the volunteers; any questions were directed to the PC, who was in attendance at every clinic. The PI and the PC collected and analyzed the health data and made comparisons between those who were in the buddy group and those who were not. The PI reported on the data collected for the project.

For evaluation purposes, particularly to ensure transparency and accountability in decision making, good ethical practice, and adherence to a systematic approach, key informant interviews were conducted after the first set of clinics had been completed and again after the project had concluded. After the first phase, the evaluation report was used to make changes to the process and to inform the second phase.

Not everything went smoothly. Focus groups had been planned for the volunteers and some attendees to share their experiences but this was not implemented for a variety of reasons: volunteer fatigue; lack of willingness by attendees to participate; and extremely inclement winter weather.

3.6 Understanding the Context of the Research

Before and during the project the evaluator kept notes and reflections on the context within which the project was taking place. She also captured information about the various inputs to the project, including several in-kind contributions. This information was important because Stufflebeam’s (2003) Context-Input-Process-Product (CIPP) approach was being used as the theoretical foundation for the evaluation. This approach allowed for the creation of an audit or decision trail and for ongoing discussion with the team about quality issues, assumptions, and biases.

3.7 Assuring the Accuracy of the Evaluation Results

Once the summative evaluation report was drafted, the evaluator presented it to the PI and key team members for feedback, and as a member-checking process to ensure validity. At a closure meeting, the evaluator sought feedback and insights on whether or not the report “told the story” accurately and whether the conclusions drawn were supported by the results of the evaluation. When challenged to soften some of the comments about “things that went wrong,” the evaluator tried to determine whether there was sensitivity or embarrassment, or whether she had read the situation incorrectly. When team members agreed that the report was accurate, she helped them to understand that the things that went right outweighed those that did not, and that knowing about those issues would be helpful the next time a similar project was implemented.

In addition to the evaluation of the research process, the PI and the evaluator also assessed whether the research itself met its objectives. The overall evaluation question for this activity was: Have the objectives of the research project been delivered in the specified time frame? Component questions included the following:

  • What was the environmental context in which the research operated?

  • What resources were needed to complete the research?

  • What research activities occurred and how?

  • What were the outputs of the research?

  • What dissemination activities occurred?

  • What impact will this research have on the field?

A short report was written to inform future replication of this project with different ethnic groups at multiple sites across the country. The benefit of this approach was that the evaluator, charged with assessing the quality of the research process, was an integral part of the research team, had full access to all aspects of the process, and was able to use a wide variety of strategies to ensure the quality of the research. In this instance, the evaluator was conducting evaluation research on a community-based cardiovascular risk assessment research project from the inside.

4 Auditing of Qualitative Health Research

The foregoing section has focused on evaluation of quality from within the research project; we now turn to a discussion of qualitative audits. The two processes are not mutually exclusive, although we make a distinction here for heuristic purposes. In the cases used in this chapter we also differentiate between inside (evaluating qualitative health research by a member of the research team) and outside (auditing health research using an expert external to the research team). The differences between evaluation and audit are posed as guides for this chapter; ultimately they should be weighed against the various classifications of audits and evaluations. Whichever approach is identified, it must be consistent with the views and theories of the guidelines being used and applied methodologically throughout.

The concept of the audit in social science research was first discussed by Halpern (1983) and addressed the concern of trustworthiness in the growing area of naturalistic inquiry (Akkerman et al. 2008; Lincoln and Guba 1985; Schwandt and Halpern 1988). Initially the concept was built on the metaphor of a fiscal audit (Akkerman et al. 2008; Guba 1981; Lincoln and Guba 1985). Ideally, the audit procedure is negotiated between the auditee and the auditor before implementation of the research to be audited. As with an evaluation, an audit is best supported by an a priori objective for the audit and the accompanying goals, from which a logic model, including a timeline, can be developed. The auditee and auditor must also agree upon the extent and nature of the materials that will be provided for the audit, that is, the audit trail.

Large-scale qualitative research studies can be extraordinarily complex (e.g., multiple case studies with diverse cases) and face the challenges of working in an everyday world subject to contingencies that arise during the course of the study. An iterative application of research strategies often characterizes studies of even modest complexity. Making the rigor of any research clear is challenging; for iterative research, documenting decision points as they arise is an important part of supporting evidence of quality. Building on the earlier work discussed above, Akkerman et al. (2008) identify three generic criteria that act as underlying standards to support decisions during the research process: visibility, comprehensibility, and acceptability (p. 258). Visibility is conceptualized as the transparency of decisions made through the research process, noted as appropriate for each stage of the study. Comprehensibility is conceptualized as having documentation to support the progress of the project to date, for example the funding proposal, logic model, and implementation process. Acceptability is conceptualized as the substantiation of decisions made by the researchers according to the standards, norms, and values of qualitative research methods and their disciplinary and accrediting bodies.
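One way an auditor might operationalize these three criteria is as a per-stage checklist of supporting evidence. The following is a minimal sketch of that idea; the stage name and evidence items are hypothetical illustrations, not drawn from Akkerman et al. (2008) or the case below.

```python
from dataclasses import dataclass

# Akkerman et al.'s (2008) three generic criteria, treated as a
# per-stage checklist that the auditor fills in with supporting evidence.
CRITERIA = ("visibility", "comprehensibility", "acceptability")


@dataclass
class StageAssessment:
    stage: str
    evidence: dict    # criterion -> list of supporting documents

    def unmet(self):
        """Criteria with no supporting evidence recorded for this stage."""
        return [c for c in CRITERIA if not self.evidence.get(c)]


design_stage = StageAssessment(
    stage="research design",
    evidence={
        "visibility": ["logic model", "minutes recording design decisions"],
        "comprehensibility": ["funding proposal", "implementation plan"],
        "acceptability": [],   # e.g., methods justification still outstanding
    },
)
print(design_stage.unmet())   # -> ['acceptability']
```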

5 A Case Example of Auditing Qualitative Health Research from the Outside

A call for proposals was issued to assess the endpoint knowledge differentials across a number of subspecialties in an allied health care profession. The call was in part aimed at better understanding graduates’ preparedness for their scope of practice; the information was also expected to be useful in understanding pedagogical standards across the several institutions providing education for the profession. One proposal was duly deemed fundable, and the multidisciplinary research team, including a project coordinator, academics, research assistants, and research associates, as well as administrators from the profession’s education institutions, agreed to work together on the project (CARNA 2009). A Steering Committee composed of representatives from each subspecialty and supporting government agencies was also formed to oversee the overall project.

In the research described below, the auditor was engaged on behalf of the investigative team approximately 1 year after funding and ethical approval had been received. In order to best serve the project and its verification and validation needs, applicants for the position of auditor external to the research team were solicited, and the successful candidate, a recognized international expert on qualitative methods, was selected. Subsequent to ethical approval of the auditor’s proposed audit logic model, the auditor was given access to the existing documentation for the project and its progress.

During the audit the auditor created and maintained a spreadsheet based on the project proposal. This spreadsheet captured all of the elements of the audit process: it recorded audit timelines and project timelines; tracked meetings and audit trail materials requested and received; cross-indexed the location and type of materials used in the audit; and essentially created an audit trail of the audit. Given the complexity, the iterative nature, and the need for transparency, this type of tracking checklist is essential for an auditor to establish and maintain.
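As a concrete illustration, the following is a minimal sketch of one row-oriented form such a tracking spreadsheet could take, written out as a CSV file. The column names, dates, and example entries are hypothetical, not taken from the project.

```python
import csv

# One row per audit event: materials requested/received, meetings held,
# and where each item lives, cross-indexed for later retrieval.
COLUMNS = ["date", "item", "category", "requested",
           "received", "location", "notes"]

entries = [
    {"date": "2012-03-01", "item": "Ethics approval letter",
     "category": "audit trail material", "requested": "yes",
     "received": "yes", "location": "binder 1 / scanned PDF",
     "notes": "cross-indexed to proposal section 3"},
    {"date": "2012-03-08", "item": "Meeting with Research Committee",
     "category": "meeting", "requested": "n/a", "received": "n/a",
     "location": "teleconference", "notes": "clarified coding scheme"},
]

with open("audit_trail_of_the_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(entries)
```

However it is implemented, the essential property is the same: every request, receipt, and meeting is dated and locatable, so the audit itself can be audited.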

5.1 First Stage of the Audit

As a first step, the auditor reviewed the rationale and planning for the project, investigated the credentials of those engaged to work on the project across roles, and reviewed the proposal as approved by ethics. Sampling and recruitment for the project and data collection were found to be interdependent and iterative processes, as the data included: a literature review of current knowledge and theories relevant to the project; curriculum, legislation, and professional statements relevant to the profession; site visits to the educational institutes; and key informant and interview data from personnel at the educational institutes for the early project stages. One challenge of the initial audit was to understand the breadth of the project and its various stakeholders, as well as the educational institutions, administrators, instructors, and students forming the body of the project. This challenge arose in part because the auditor was not familiar with the pedagogy of the profession being studied. Another challenge, one that remained throughout the project, was that the research team chose a paper-based data collection and analysis approach for the project.

The initial stage of the project was duly found to meet the criteria of visibility, comprehensibility, and acceptability. A Letter of Attestation (LoA) (see sample in Appendix) was provided to the project PI approximately 3 months after the auditor was engaged. The LoA included recommendations for the next stages of the project, including a strong recommendation that a computer-based qualitative analysis program be used to support data management, organization, and analysis.

5.2 The Second Stage of the Audit

The second stage of the audit examined the analysis of all data collected during the initial audit stage plus further data collection based on the first round of analysis. For the purposes of the audit, data analysis was defined to include transcription of the recorded data; checking the transcribed data for accuracy; checking coding of the transcribed data; examination of the analysis of the transcribed data; review and inclusion of data from individual interviews; and document review of materials provided by the various educational programs.

As in the first stage of the audit, the work done by the auditor was challenged by the research team’s commitment to a paper-based data management, organization, and analysis process. It was necessary to hold several meetings, either by phone or face to face, with the data collection/analysis staff to clarify visual coding, color coding, and the decision points in these strategies, as well as the actual decisions themselves. Usual and accepted techniques in qualitative analysis, including journaling, memoing, and field notes, had been well and appropriately used by the research team and duly recorded. These documents again added to the volume of data to be audited, as they were handwritten in some cases and typed in others. These paper-based data in their entirety, audited through charting and checklists, resulted in a confirmation of both visibility (literally and figuratively) and comprehensibility. And finally, the process and its complex components were adjudged acceptable. Again, recommendations were made regarding data form, management, and final analysis and report writing as part of the second LoA.

In order to evaluate students from each of the three types of professional programs, it had been decided that case scenarios would be used in focus group interviews with the graduating students to assess their knowledge of the issues presented and the care plans that they, as a group, developed. The resultant data were to be used to identify the competencies of each of the three groups and to make comparisons among and between them to ascertain systematic and patterned similarities and differences. The researchers developed a template to identify themes and patterns in the students’ focus group interview work. The coding templates were based on the results of the analyses audited in stage 2. A codebook was developed by the researchers with the key concepts that had been identified as competencies. Each analyst then coded the focus group interview transcripts using her or his own color coding scheme. The two analysts held a series of meetings that resulted in themes on which they both agreed. These themes were then presented to the full research team and the Steering Committee. The analysts were provided with feedback from this meeting, which was to be incorporated into their final report for the project.
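Where two analysts code the same transcripts independently, their agreement can also be quantified before the consensus meetings. The chapter does not report whether agreement was calculated in this project, so the following is only a minimal sketch of one common statistic, Cohen’s kappa, applied to hypothetical codebook labels.

```python
from collections import Counter

# Cohen's kappa for two analysts who have coded the same transcript
# segments: agreement on the codebook labels, corrected for chance.

def cohens_kappa(codes_a, codes_b):
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(codes_a) | set(codes_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for five segments coded by each analyst.
analyst_1 = ["assessment", "care_plan", "care_plan", "communication", "assessment"]
analyst_2 = ["assessment", "care_plan", "communication", "communication", "assessment"]
print(f"kappa = {cohens_kappa(analyst_1, analyst_2):.2f}")   # kappa = 0.71
```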

5.3 Auditing the Final Product

The auditor’s role during this final stage was complex, as an understanding had to be reached of how and why each decision had been made in the analysis, whether all data had been used by the research analysts, and whether their conclusions were acceptable given their work and the feedback received from the full research team and Steering Committee. By this stage of the audit the original deadline had slipped owing to the late availability of the audit materials; furthermore, the deadline for reporting the project results to the funders was imminent. In due course the final LoA was delivered to the research PI.

Just as the researchers are supported and their work enhanced by due diligence, an auditor demonstrates her or his credibility through the self-audit process. The results of this process during the project discussed above suggest the following points for future audits; they reflect lessons learned in applying a theoretical process to a qualitative project, while remaining mindful of how evidence in qualitative research is assessed and accredited.

  • There are significant advantages to engaging a project auditor prior to implementation stages of a project; the conceptualization stage is ideal.

  • Mutual agreement on the project timeline, the nature of the audit trail, and the timing and discussion of feedback from the auditor benefits the project.

  • Given the mutual agreement stated in the last point, an iterative process between auditor and auditee throughout the project is ideal, and should be considered when developing a timeline.

6 Discussion

The field of qualitative research has a long history; however, the process of recognizing and acknowledging the powerful contributions this paradigm can make to scientific knowledge across disciplines has been slow. Dating from early work by Lincoln and Guba (1985) and others (Crabtree and Miller 1992; Guba 1981), strategies for rigor and their associated techniques have gained recognition and implementation. An important part of ensuring continuing rigor in qualitative research is the periodic review and assessment of various strategies, to ensure that changes in knowledge, including scientific, technological, and disciplinary advances, are taken into account in the field. Good qualitative research and important contributions to science may require years of careful work; sometimes a program of qualitative research may require a career to address the complexity and contexts of the subjects studied. Acknowledging the expertise of others, and inviting experts to contribute a priori, either as members of the research team or as external experts, will continue to support excellent and mature research.

7 Conclusion

In this chapter, we discussed the importance of addressing quality in qualitative research throughout the research process, from conceptualization to the final outcomes of the research. The introduction provided a brief review of the dynamic nature of the science and the debates that characterize the quest for rigor in qualitative research. We noted debates focused on the choice of language as well as on the techniques used to facilitate verification and validation strategies. In our case studies we illustrated practical examples of the importance of addressing rigor from inside the research project by the research team, as recommended by Morse et al. (2002) among others, and from outside the research project and team (Akkerman et al. 2008; Reynolds et al. 2011). The first strategy, evaluation, usually considered to address reliability and validity, was illustrated by an evaluation within a community-based mixed methods health research project. In that section the importance of having a qualified evaluator was stressed, and the principles of utility, feasibility, propriety, and accuracy were presented, along with an explanation of each term. Auditing, usually considered to address trustworthiness, was illustrated through a case audited from outside the research project. Akkerman et al.’s (2008) three generic criteria of visibility, comprehensibility, and acceptability, the underlying principles that support decisions during the research process, were presented.

Audits and evaluations need to be part of the conceptualization and planning stages of qualitative health research, be included in proposals with sufficient budgets and time, and involve mutual negotiation among the research team members and the evaluator or auditor. These strategies are invaluable to building a solid body of rigorous qualitative health research, across all research paradigms.

While many named concepts are used in the processes of verification and validation in qualitative research, the goal is shared: reliability and validity of the process undertaken by the research team to ensure high standards and solid scientific outcomes. The acknowledgement that qualitative methods are dictated by the research questions being asked in health research requires that researchers truly understand and implement strategies for rigor as appropriate for their projects, drawing on expertise as appropriate, whether inside or outside the research team.

8 Appendix: Sample Letter of Attestation for Audit

DATE

ADDRESSEE

RE: Audit Point One

This Letter of Attestation refers to work done in the conceptualization, staffing, proposal development, ethical approval, sampling, recruitment, and data collection phases of the PROJECT NAME.

The Audit of the first phase of the PROJECT was done in the following way. First, the Auditor was required to sign a contract stipulating conditions, including confidentiality of the study proposal and resultant data, including the identities of programs that participated in the Project. After a meeting with the Research Committee, the Auditor was provided with electronic copies of the following documents:

  • Meeting notes for the Project Steering Committee dated XXXX

  • Meeting notes for the Project Research Committee dated XXXX

  • Ethics application and supporting documents

  • Ethical Approval

  • Ethical Approval extension

  • Documents used to inform Topic A of the Project

  • Documents from those agreeing to participate in the Project (organizations or individuals)

  • Data collection instruments including interview guides and scenarios

  • Identification of programs from which data collection was done plus schedule and confirmation of that data collection

  • Receipts from participants of honoraria/participant costs as appropriate

  • Opportunities to ask questions (and receive answers) from the Project Coordinator and Project Associate Coordinator

  • Teleconference with Project Research Committee including Research Assistants

Once in possession of the documents and information provided by the Research Committee and Steering Committee, and using an iterative process, the Auditor initially examined a random sample of documents supporting the rigor of the PROJECT. As the audit progressed, specific documents and information were requested so that audit trails could be identified and examined. During the examination of information and documentation, evidence of linkages among research design, implementation (including drawing upon expert consultation as appropriate and the hiring of qualified staff), and data gathering was sought to verify that they were appropriate and met accepted and rigorous standards in current use. Specifically, the Auditor looked for compelling evidence of the visibility, comprehensibility, and acceptability of the initial phase of the PROJECT as it progressed from research design to data collection. The audit criteria are based in large part on earlier work by Halpern (1983) that was further developed by Akkerman et al. (2008).

The findings of the Auditor are as follows.

With regard to Visibility:

Visibility is conceptualized as the transparency of decisions made through the research process, in this document up to and including data collection. Upon examination of [list documents and other materials], the Logic Model provided by the Project and guiding the study is complete and compelling. The discussions and consultation that moved the PROJECT from a proposed idea, to conceptualization of the nature of the Project, to the decision to hire staff to support the development of a technical proposal that would be vetted for funding and carried forward into the research project, are well documented.

With regard to Comprehensibility:

Comprehensibility is here conceptualized as having documentation to support the progress of the project to date; once the audit proceeds to examination of the data collected and the ensuing analysis and interpretation, comprehensibility will include that aspect of the project. Upon examination of the information provided to the Auditor, […].

With regard to Acceptability:

Acceptability is here conceptualized as the substantiation of decisions made by the researchers according to the standards, norms, and values of qualitative research methods, educational enquiry, and disciplinary practice and accrediting bodies. The decision to hire an Auditor stands as one decision made by the Steering Committee, and supported by the Research Committee, in their commitment to the accepted norms, standards, and values that support the PROJECT. As mentioned in the visibility and comprehensibility sections above, the documentation of the initiation and implementation of the Project has been exacting and complete. It is therefore clear to the Auditor that standards of rigor have been followed in this study to date. For example: […].

[…] I have agreed to disagree with the Research Team on this point, given that it is a topic of much debate in the literature.

The Research Team, supported through the excellent work of the Project Coordinator and Associate Coordinator, has maintained an impressive audit trail while following the standards and norms of qualitative research and upholding the values and intent of the PROJECT. Personnel who have been hired to support the PROJECT from the early stages […]. The Team has been transparent both in its successes and at points where practical issues in fieldwork needed to be addressed for the study to progress.

I hereby attest to the visibility, comprehensibility and acceptability of the PROJECT based upon the documentation, conversations and other information provided to me.

Respectfully submitted,

[NAME]

Auditor