Human performance technology (HPT) is defined as “a systemic and systematic set of processes for assessing and analyzing performance gaps and opportunities; planning improvements in performance; designing and developing efficient, effective, and ethically justifiable interventions to close performance gaps or capitalize on opportunities; implementing the interventions; and evaluating all levels of results” (Guerra-Lopez 2016, p. 3). The International Society for Performance Improvement (ISPI) human performance technology (HPT) model is a dominant model in the HPT field and reflects this definition well. Although the ISPI HPT model is widely known and depicts foundational ideas about HPT and its component parts, rigorous and legitimate research validating the model has yet to be conducted. Only a few studies about how the model works have been reported, and those studies are mostly anecdotal, storytelling-style case descriptions rather than formal research (Gilmore 2004). For example, Andrews et al. (2004) offered anecdotal support for the validity of the ISPI HPT model when two groups of people applied the model in order to define and find solutions for specific performance issues; the article, however, did not describe a research methodology for testing the validity of the model. Although HPT is a research-based field (Pershing 2006; Stolovitch and Keeps 1999), the field sometimes looks “craft-like” (Sugrue 2004, p. 8) in its accomplishments. Sugrue (2004) argued that HPT practitioners often follow the major skeleton of the ISPI HPT model and its components without a research basis, and that this superficial base has often led performance improvement consultations to fail. In other words, rigorous research validating the major models and processes in the HPT field has not been conducted (Gilmore 2008), and this absence of model and process validation research is a significant void in the field.

Given these circumstances, it is valuable to validate the ISPI HPT model and its components with respect to their utility in providing researchers and practitioners with a framework for their studies and consultations. This research validates the performance analysis process in the ISPI HPT model.

ISPI HPT Model

Among the numerous HPT models, the ISPI HPT model is one of the most frequently used and referenced models in the field (Pershing et al. 2008). Many HPT practitioners in areas such as human resources, organizational development, and training have adopted the ISPI HPT model as their main framework (Gerson 2006). This is important given that the model is endorsed by a leading professional organization and is widely used by novices as an introduction to the field of HPT (D. Van Tiem, personal communication, April 10, 2010).

The ISPI HPT model is based on the ADDIE process, the acronym for analysis, design, development, implementation, and evaluation. ADDIE is a generic process used in the fields of instructional systems design (ISD) and HPT (Molenda 2004; Bichelmeyer et al. 2006) and has been adapted as a skeletal framework for many HPT models. One of the key deviations from ADDIE in the ISPI HPT model is the expanded and elaborated analysis stage. In the ISPI HPT model, performance analysis is elaborated into a systematic process comprising organizational analysis, environmental analysis, definition of desired and actual performance (based on the organizational and environmental analyses, respectively), gap analysis, and cause analysis.

Performance Analysis

Performance analysis identifies the organization’s performance requirements by comparing current performance to desired performance (Rothwell et al. 2006). Without identifying performance issues, it is impossible to find appropriate solutions, and beginning the implementation of performance consulting without appropriate analysis ends in failure (Kaufman 2014; Kaufman and Guerra-Lopez 2013). In several major HPT models, therefore, performance analysis is the first step. The notion of performance analysis has been widely used, and similar notions and processes, such as performance discrepancy (Mager and Pipe 1997), the performance diagnosis process (Swanson 2007), performance diagnosis (Ruona and Lyford-Nojima 1997), and the performance audit (Rothwell 1989), have been named by various HPT professionals. Joe Harless may have been the first to use the term performance analysis. Harless (1970) proposed a Front-End Analysis including, but not limited to, goal analysis, needs analysis, environmental analysis, and learner analysis in order to define performance problems and opportunities, and he claimed that this set of analyses should be performed at the beginning of the performance technology process. Since Harless’ Front-End Analysis, a series of analyses to detect performance gaps has been widely accepted in the field, and a number of HPT practitioners and researchers have reflected these ideas in their work.

In the ISPI HPT model, performance analysis is the very first phase of the process (Van Tiem et al. 2012). According to the model, the goal of performance analysis is to identify and gauge the gap between desired workforce performance and current workforce performance. Desired and current performance are derived from analyses of organizational factors and environmental factors, respectively. Therefore, in performance analysis, both organizational and environmental factors should be thoroughly analyzed, and then, based on those analyses, the gap between desired and current performance can be identified. Figure 1 shows that performance analysis is composed of four parts: organizational analysis, environmental analysis, gap analysis containing desired and actual performance analyses, and cause analysis (Langdon 2006; Van Tiem et al. 2012).

Fig. 1 Performance analysis according to the ISPI HPT model (Source: Van Tiem et al. 2012)

In HPT, because training is not the only performance solution, and HPT is “open to all means” (Stolovitch and Keeps 1999, p. 9) and to all possible performance issues and causes, it is a natural consequence that the field requires a more extensive, strengthened, and elaborated analysis process. This detailed performance analysis in HPT is also reflected in the certified performance technologist (CPT) standards, which are key guidelines for HPT practice. Among the ten CPT standards, two (needs and opportunity analysis and cause analysis) are dedicated to analysis, while design, development, implementation, and evaluation each have only one related standard. While the two standards on needs and cause analyses are well adopted and widely used by field professionals, some HPT professionals believe that the two standards should be combined into a single construct because the two analyses occur simultaneously in many HPT cases (Hoard and Stefaniak 2016).

In summary, it is valuable to examine and validate the performance analysis process in the ISPI HPT model in light of the factors discussed above: the need for validation research, the importance of the ISPI model, and the way the model’s performance analysis process reflects the uniqueness and evolution of the HPT field.

Research Questions

The overall research question in this research is whether or not the performance analysis in the ISPI HPT model adequately describes the status quo of what HPT practitioners actually do. In order to answer this question, the following specific questions were examined.

  1. Was organizational analysis conducted in each case?

  2. Was environmental analysis conducted in each case?

  3. Was desired performance identified?

  4. Was desired performance derived from organizational analysis?

  5. Was actual performance identified?

  6. Was actual performance derived from environmental analysis?

  7. Was the gap identified by differences between the desired performance and the actual performance?

  8. Was cause analysis reported?

  9. Were the identified causes prioritized?

Research Methods

For this research the researcher conducted a content analysis of descriptive HPT cases that have been reported by researchers and practitioners in the field. The researcher collected a set of HPT business cases detailing HPT projects implemented in real organizations by HPT professionals and examined the efficacy of the model through a content analysis of those cases. With a variety of cases, this method allows the researcher to perform rigorous and scientific analyses. In addition, many descriptive cases are published to share exemplary practices with the field, to help other HPT practitioners understand the field, and to contribute to the refinement of HPT skills and knowledge. Therefore, the analysis of published exemplary HPT cases is a worthy way to investigate the validity of the performance analysis in the ISPI HPT model.

Content Analysis

A blended content analysis was adopted for this study (Sarantakos 2005). In other words, this research used a quantitative approach to content analysis and also used qualitative methods in order to extract richer data from the cases and to find patterns and insights among and between the HPT business cases.

Procedure

The procedures used in content analysis make a clear distinction between a scientific research methodology and a mere compilation and review of data; rigorous content analysis procedures are therefore critical. In this study, the researcher followed the content analysis steps proposed by major content analysis methodologists (e.g., Neuendorf 2002; Pershing 2002; Sarantakos 2005; Fraenkel and Wallen 2005): (1) developing a codebook and a coding form to serve as research instruments, (2) conducting a pilot study, (3) sampling, (4) inter-coder training, and (5) coding, analyzing, and reporting.

Codebook and Coding Form

Coding instruments, including a coding form and a codebook (the manual corresponding to the coding form), were developed. Concepts and key components of performance analysis as defined by Van Tiem et al. (2000, 2012) were transposed into the coding instruments. Figure 2 shows an example of the coding form and codebook.

Fig. 2 Example of coding form and codebook
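To make the structure of the coding instruments concrete, the sketch below shows how one case’s coding record might be represented if the coding form were transcribed into a simple data structure. The field names and example values are hypothetical illustrations keyed to the research questions, not the actual instrument shown in Fig. 2.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical transcription of the coding form: one record per HPT business case.
# Field names mirror the research questions; they are illustrative, not the actual instrument.
@dataclass
class CaseCoding:
    case_id: int
    organizational_analysis: bool                                     # RQ1
    environmental_elements: List[str] = field(default_factory=list)   # RQ2: "world", "workplace", "work", "worker"
    desired_performance_identified: bool = False                      # RQ3
    desired_from_org_analysis: bool = False                           # RQ4
    actual_performance_identified: bool = False                       # RQ5
    actual_from_env_analysis: bool = False                            # RQ6
    gap_identified: bool = False                                      # RQ7
    cause_elements: List[str] = field(default_factory=list)           # RQ8: e.g., "skills_knowledge"
    causes_prioritized: bool = False                                  # RQ9

# Example record (all values invented for illustration only).
example = CaseCoding(
    case_id=1,
    organizational_analysis=False,
    environmental_elements=["workplace", "work", "worker"],
    desired_performance_identified=True,
    actual_performance_identified=True,
    actual_from_env_analysis=True,
    gap_identified=True,
    cause_elements=["skills_knowledge", "data_information_feedback"],
)
```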

Pilot Study

Prior to this research, a pilot study was conducted. A team of four graduate students reviewed the literature on methodology and decided to employ a content analysis of HPT business cases. Subsequently, the team reviewed the ISPI HPT model and various other sources explaining it, such as Van Tiem, Moseley, and Dessinger’s book (2000), the ISPI website, and other accessible resources. On the basis of a series of team discussions, a coding form was designed and developed.

Samples and Sampling Criteria

Purposive sampling was employed in this study; the researcher searched for and selected cases suitable for addressing the study’s research questions (Krippendorff 2004, p. 119). In order to produce sound research results with purposive sampling, the six general criteria for sampling proposed by Miles and Huberman (1994, p. 34) were employed.

  1. The sampling strategy should be relevant to the conceptual framework and the research questions addressed by the research.

  2. The sample should be likely to generate rich information on the type of phenomena to be studied.

  3. The sample should enhance the generalizability of the findings.

  4. The sample should produce believable descriptions/explanations.

  5. The sampling process should be ethical.

  6. The sampling plan should be feasible.

In addition to the general criteria, six criteria specific to this research were employed.

  1. The cases have been published in a book, journal, or monograph.

  2. The cases must describe a real organization.

  3. The cases must be published within the past 20 years.

  4. The cases must describe an HPT opportunity within an organization.

  5. Each case must indicate the performance gap(s).

  6. The cases must contain information about the intervention(s) associated with the HPT opportunity.

Using the 12 criteria, 30 HPT business cases were selected (see Appendix), drawn from nine books and two major journals in the HPT literature. The cases represent diverse types of organizations, such as manufacturing, service, transportation, and wholesale. The authors of the cases were either internal or external consultants: an external consultant is an HPT professional contracted to provide HPT consultation, whereas an internal consultant is a regular in-house employee involved in the HPT consultation process. Twelve cases were explicitly carried out by internal consultants, and in two additional cases written under a professional organization’s name, the tone and content of the writing indicated internal authorship, for a total of 14 internal-consultant cases. Twelve cases were reported by external consultants, and four more were reported by internal and external consultants working together; because one or more external consultants were engaged, these were categorized as external-consultant cases, for a total of 16 cases in which external HPT consultants were involved.

Inter-coder Training

In this research, two coders, including the researcher, refined the research instruments and carried out the data coding. The second coder was an advanced Ph.D. student in the Instructional Systems Technology (IST) field with an interest in HPT. At the first training session, three hours were spent explaining and discussing the purposes of the research, the content analysis procedure, the major HPT concepts used in the codebook, and how to mark the coding form; the session also included time for questions and discussion. The first training session provided formative information about the instrument, and the researcher revised it accordingly. The revised codebook and coding form were then reviewed by a full professor with expertise in HPT at a major university in the United States.

At the second training session, the revised instrument was presented to the second coder, with explanations and questions taking about an hour. Each coder was then asked to read and code two randomly selected cases. At the third training session, the coding results for the two cases were compared, contrasted, and discussed in order to assure the utility of the finalized coding instrument. Finally, five cases were coded independently by each coder and the findings were compared in order to calculate inter-coder reliability.

Coding, Analyzing, and Reporting

As part of the final step of content analysis, all cases were coded, analyzed, and reported. The results and implications are reported later in this article.

Reliability and Validity

In this study, several steps were taken to strengthen reliability: (1) a qualified person with an appropriate level of HPT knowledge was recruited and trained as a coder, (2) instruments and categories were carefully established and revised through the pilot study, and (3) data were carefully selected based on suitable criteria (Holsti 1969).

In addition to these efforts to ensure reliability, inter-coder reliability was calculated using Krippendorff’s alpha. Five randomly selected cases were coded by the two coders, and the resulting Krippendorff’s alpha was .84, which is considered reliable (Krippendorff 2004).
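For readers who wish to reproduce this reliability statistic, the sketch below computes Krippendorff’s alpha for nominal data from two coders’ judgments. It is a minimal implementation of the standard coincidence-matrix formulation; the variable names and the toy data are illustrative assumptions, not the actual coding data from this study.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: list of per-unit code lists, one value per coder (None = missing).
    Values of roughly .80 or higher are conventionally treated as reliable
    (Krippendorff 2004).
    """
    coincidences = Counter()
    for unit in units:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue  # units coded by fewer than two coders are not pairable
        for a, b in permutations(values, 2):        # ordered pairs from distinct coders
            coincidences[(a, b)] += 1.0 / (m - 1)
    totals = Counter()
    for (a, _b), w in coincidences.items():
        totals[a] += w
    n = sum(totals.values())
    if n <= 1:
        return float("nan")
    d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n
    d_e = sum(totals[a] * totals[b] for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Toy example: two coders' yes/no judgments on ten coding items (invented data).
coder_a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "no"]
coder_b = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "no"]
print(round(krippendorff_alpha_nominal(list(zip(coder_a, coder_b))), 2))
```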

In this research, an external reviewer (an HPT professor at a research university) and the two coders assessed content validity by carefully reviewing the research purposes and instruments. In addition, the instrument was examined by the dissertation committee at the proposal review, and the committee approved the research proposal, including the research instrument.

Findings

Was organizational analysis conducted in each case? (Research Question (RQ) 1)

According to the model developers, organizational analysis is the very first step in the model and involves an examination of the heart of an organization (Van Tiem et al. 2004, p. 26). In organizational analysis, HPT consultants analyze the vision, mission, values, goals, and strategies of the organization.

Of the 30 cases, 12 (40.0 %) reported an organizational analysis and 18 (60.0 %) did not. Further investigation was conducted to identify which organizational analysis elements were examined in the 12 cases reporting an organizational analysis: seven cases analyzed business goals or strategies only, two cases analyzed vision, mission, and values without analyzing goals and strategies, and three cases analyzed both the superordinate concepts (vision, mission, and values) and the subordinate concepts (goals and strategies).

Was environmental analysis conducted in each case? (RQ 2)

Environmental analysis in the ISPI HPT model has four elements: analyses of (1) the world, (2) the workplace, (3) the work, and (4) the workers. All 30 cases contained analysis of at least one element. The HPT consultants analyzed the world in 12 cases, the workplace in 24 cases, the work in 23 cases, and the workers in 18 cases. The workplace was thus analyzed most often (80.0 %), while the world was analyzed least often (40.0 %) (see Fig. 3).

Fig. 3 Frequency of elements of environmental analysis conducted

In terms of the number of environmental analysis elements described in each case, five cases described using all four elements, twelve cases used three of the four, eight cases used two, and five cases used only one (see Fig. 4). For example, in Case #6 all four environmental analysis elements were used, as the excerpts below show.

Fig. 4 Frequency of number of elements used in environmental analysis

  • World: Providing high-quality water supplies in remote and rural areas is still a big challenge in Brazil as well as in other countries the world over.

  • Workplace: The company employs 10,008 workers, of whom 3593 are located in the metropolitan area….

  • Work: Serving the needs of rural areas has always been a problem…. The device for testing water is easily managed…

  • Worker: The local contact for the company in this rural area is a man with perhaps a fourth or fifth-grade education. (Case #6, pp. 108–110)

Was desired performance identified (RQ 3), and was the desired performance derived from an organizational analysis? (RQ 4)

According to the model, desired performance should be derived from organizational analysis. In this research, all 30 cases identified their desired performance either manifestly or latently. For example, Case #18 identified and displayed desired performance outcomes.

Desired Performance Outcomes:

Achieving Cost-Effectiveness

  • Establish business acumen in managing projects

  • Provide alternatives to achieve cost-effectiveness

  • Monitor budget and analyze cost components. (Case #18, p. 102)

However, in only six cases was the desired performance derived from an organizational analysis. In the other 24 cases, although the desired performance was defined, it did not follow from an organizational analysis but from other sources. The researcher therefore investigated these sources of desired performance.

Source of Desired Performance

If the desired performance was not derived from an organizational analysis, what was its source? The researcher noted the sources of desired performance in the 24 cases. The sources of the desired performance state and their frequencies are displayed in Fig. 5.

Fig. 5 Sources of desired performance in the cases in which desired performance was not from organizational analysis (Total: 24 cases)

Among the 24 cases, 15 described desired performance as a performance state free of the current problems. For instance, in Case #9, a nationwide real estate company was suffering from a high dropout rate and low productivity among new sales associates. The goals of the HPT process were to resolve the performance problems the company had, that is, to reduce dropout rates and increase the productivity of new sales associates.

Another source of desired performance was an ideal future performance. Three of the 24 cases described an ideal future state as the desired performance. These cases did not identify existing performance problems; rather, the purpose of HPT was to bring about change in the organization for proactive reasons, and the desired outcome was the future state the company expected. Case #20 is an example.

Although the company’s voluntary turnover rate of less than six percent was well below the industry average, HR management still believed that in order to maintain a talented workforce and remain competitive in the industry it was important to research and address potential retention problems. (Case #20, p. 18)

The third source of desired performance was performers’ best possible performance (three cases). The best possible performances in these cases were determined based on interviews with workers or on reasonable suppositions. Here is an example.

Furniture Management Coalition (FMC) leaders identified exemplary field operation managers (FOMs) …. The consultant and analysts interviewed FMC leadership and all FOMs paying particular attention to given areas in which each FOM was regarded as exemplary. (Case #8, pp. 140–141).

The remaining sources of desired performance were top management expectations and industry standards. In two cases, expectations and goals set by CEOs or executive directors were deemed the desired performance for the HPT consultation, and in one case ISO 9000 standards became the desired performance.

Was actual performance identified (RQ5), and was the actual performance derived from an environmental analysis? (RQ6)

Within the model, actual performance should be derived from environmental analysis. This research therefore examined whether the actual performance was derived from the environmental analysis. All 30 cases defined their actual performance either manifestly or latently, and in 23 cases (76.7 %) actual performance was defined based on the results of an environmental analysis, as the ISPI HPT model suggests.

In seven cases (23.3 %), actual performance was not derived from an environmental analysis. In those seven cases, performance problems were identified by the client companies and handed to the HPT consultants when the HPT process started. For instance, in Case #5, one section of the case was dedicated to describing the “perceived need for intervention”: “… software development managers became aware of unusually high turnover. In the first 3 months of 1996 turnover of software developers increased from 20 to 31 %” (Case #5, pp. 128–129). In this case, the actual performance, an unusually high turnover rate, was identified by the software development managers rather than by HPT practitioners or through an HPT process.

Was the gap identified by differences between desired performance and actual performance? (RQ7)

All of the cases presented a performance gap; indeed, a criterion for case selection was that a performance gap be described in each case. In all 30 cases, performance gaps were identified by comparing desired and actual workforce performance.

The ISPI HPT model does not specify categories for performance gaps, noting only that performance gaps should address performance issues. In explaining types of gaps, the discussion in the model commentary book by Van Tiem et al. (2000) is esoteric, and the book offers no classification or specification of performance gaps in its examples. The researcher therefore clustered and labeled the gaps found in the cases. For the gap clustering, the second coder, the same individual who had coded previously to help establish reliability for the overall case coding, also categorized the performance gaps. Inter-coder reliability was again measured using Krippendorff’s alpha, which was .83 for the performance gap categorization.

Performance gaps can address more than one issue; however, for this research the primary performance gap in each case was analyzed and coded. The gaps clustered into five groups: cost effectiveness, inappropriate systems or processes, lack of skills and knowledge, poor products and services, and turnover (see Table 1).

Table 1 Categorization of performance gaps

As noted in Table 1, the categories of performance gaps that emerged in this research were disparate. Some performance gaps were more like the symptoms the organizations were facing, while others were more closely associated with the causes of the performance problem. For example, skills and knowledge issues are typically considered performance causes; many HPT models identify skills and knowledge as causes of performance deficiencies (e.g., the Gilbert model, 1978; the Wile model, 1996; the Marker model, 2007), and the ISPI HPT model likewise describes skills and knowledge as a potential cause of performance gaps. Nevertheless, some case authors in this study wrote that skills and knowledge were the performance gap to be addressed.

On the other hand, five cases described a high turnover rate as the performance gap. Is turnover itself a cause of performance gaps? Probably not. Among the numerous HPT models, the researcher could not find any model in which turnover was considered a cause of performance, which suggests that academic research and actual practice in HPT do not regard turnover as a cause of performance gaps. Turnover is not a rare issue in the HPT field, and in most situations it is considered either a performance problem or a symptom.

The three other categories were similar to either skills and knowledge or turnover. Inappropriate systems and processes were sometimes considered a cause of performance issues; in Wile’s model (1996) and the strategic impact model (Molenda and Pershing 2004), they are categorized as causes. In contrast, cost effectiveness and poor services and products were used more as symptoms or problems of performance.

In summary, based on the analysis of the cases regarding performance gaps, in actual HPT business cases performance gaps may not be described in clearly distinguishable terms. In the 30 cases, descriptions of performance gaps were conflated with notions and terms of cause analysis (e.g., skills and knowledge) or with performance symptoms (e.g., turnover rate).

Was cause analysis reported? (RQ8)

Adopting Gilbert’s framework, the cause analysis in the ISPI HPT model consists of six elements: (1) data, information, and feedback; (2) environmental support, resources, and tools; (3) consequences, incentives, or rewards; (4) skills and knowledge; (5) individual capacity; and (6) motivation and expectations. Cause analysis was found in 29 cases (96.7 %), while in only one case (3.3 %) was no cause analysis identified. In detail, 15 cases (50.0 %) reported one of the six cause analysis elements, nine cases (30.0 %) reported two elements, and five cases (16.7 %) described three elements. No case reported four, five, or six elements of cause analysis (see Fig. 6).

Fig. 6 Frequency of number of cause analysis elements used

For example, in Case #16 the performance gap was that “engineering teams did not achieve lasting solutions to issues identified by customer returns resulting in higher costs and increased customer returns” (p. 194), and two causes were identified by the author: a heavy “volume of work” and “no incentive” or “consequences” for their work (Case #16, pp. 196, 197).

Another consideration regarding cause analysis in this research was the frequency with which each cause element was reported. As seen in Fig. 7, data, information, and feedback was reported most often (20 cases). Skills and knowledge was reported in 13 cases, and consequences, incentives, or rewards in eight cases. Environmental support, resources, and tools was found in five cases, and motivation and expectations in two cases. Individual capacity was not commented on in any of the cases.

Fig. 7 Frequency of cause analysis element reported
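As an illustration of how these two tallies (the number of elements per case in Fig. 6 and the frequency of each element in Fig. 7) could be derived from coded records like the sketch shown earlier, the snippet below counts both from a list of hypothetical case codings. The data are invented placeholders; only the counting logic is of interest.

```python
from collections import Counter
from typing import Dict, List

# Hypothetical coded data: case id -> cause analysis elements reported (invented values).
cause_elements_by_case: Dict[int, List[str]] = {
    1: ["data_information_feedback"],
    2: ["skills_knowledge", "consequences_incentives_rewards"],
    3: ["data_information_feedback", "skills_knowledge", "environmental_support_resources_tools"],
    4: [],  # a case without any cause analysis
}

# Fig. 6 analogue: how many cases reported 0, 1, 2, or 3 elements.
elements_per_case = Counter(len(elems) for elems in cause_elements_by_case.values())

# Fig. 7 analogue: how often each element was reported across cases.
element_frequency = Counter(
    elem for elems in cause_elements_by_case.values() for elem in elems
)

print(dict(elements_per_case))           # e.g., {1: 1, 2: 1, 3: 1, 0: 1}
print(element_frequency.most_common())   # elements sorted by frequency
```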

The top three cause analysis elements identified in this research were (1) data, information, and feedback (20 cases), (2) skills and knowledge (13 cases), and (3) consequences, incentives, and rewards (8 cases). All 29 cases that contained a cause analysis included at least one of these top three causes. The remaining elements never appeared on their own: environmental support, resources, and tools and motivation and expectations were always presented in conjunction with one or more of the top three causes, and individual capacity was not presented at all. Based on these results, it can be inferred that the top three were the dominant causes that HPT practitioners detected and that drew the most attention.

In summary, cause analysis was found in 96.7 % of the cases (29 cases), and based on the findings it is reasonable to infer that cause analysis is widely adopted and applied in HPT consultation. In addition, among the cause analysis elements reported, data, information, and feedback and skills and knowledge appear to be the dominant causes of performance gaps.

Were the identified causes prioritized? (RQ9)

Van Tiem, Moseley, and Dessinger claimed that HPT consultants should “prioritize the causes according to high or low impact on the performance environment” (p. 49) after identifying them. In this research, the researcher examined whether causes were prioritized. Causes were coded as primary and secondary causes. The researcher adapted definitions of primary and secondary causes from Van Tiem, Moseley, and Dessinger and operationalized them. The primary causes in this research are the causes which were (1) treated with interventions directly or (2) identified by authors as ones having high impact on the performance environment. In contrast, the secondary causes in the research are operationally defined as the causes which were (1) not treated with interventions directly or (2) identified by authors as ones having low impact on the performance environment.
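The operational definition above amounts to a simple decision rule. A hypothetical sketch of that rule follows; the function and parameter names are invented for illustration and are not part of the study’s instruments.

```python
def classify_cause(treated_by_intervention: bool, rated_high_impact: bool) -> str:
    """Sketch of the study's operational decision rule (invented names).

    A cause counts as primary if it was treated directly with an intervention
    or identified by the case author as having high impact on the performance
    environment; otherwise it counts as secondary.
    """
    return "primary" if treated_by_intervention or rated_high_impact else "secondary"
```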

As reported earlier, 29 cases included a cause analysis. In none of these 29 cases was there any prioritization: all causes were treated equally as primary causes, and no case identified high or low impacts of the causes. For instance, in Case #5 the performance gap was “unusually high turnover” (p. 128), and two causes were identified by the author: “job descriptions with technical requirements were far above the level actually needed to perform the task,” and “high-level software developers were greatly disappointed in the type of work that they actually ended up performing” (pp. 131–132). In this case, both causes were treated by interventions, and there was no indication of prioritization of the causes.

Discussion

Model Epitomizing

As reported above, organizational analysis was conducted in only 12 cases, and in only six of those 12 cases were desired outcomes derived from the organizational analysis, although desired performance was reported in all 30 cases. Environmental analysis was reported in all 30 cases, and in 23 cases the environmental analysis results became the source of the actual performance. Performance gaps were identified by comparing desired performance and actual performance.

Organizational analysis was not reported in many cases, and desired performance was generally not based on an organizational analysis. In the cases, organizational analysis was entwined with environmental analysis; no evidence was found of a differentiation between the two. The case authors may not have perceived organizational analysis and environmental analysis as two different analyses. As consultants, they may have considered that they had to analyze all possible factors related to the performance issues. Because consultants may not differentiate between organizational analysis and environmental analysis, the desired and actual performances were derived from both. Based on the research findings, the notion that organizational analysis and environmental analysis are, separately and respectively, the sources of desired performance and actual performance is a misleading dichotomy.

If the ISPI HPT model is to be refined based on these findings, organizational analysis and environmental analysis may be combined. The combination of the two analyses is quite common in other HPT models; in short, other major HPT models and processes do not differentiate between organizational analysis and environmental analysis, nor do they specify a connection between organizational analysis and desired performance or between environmental analysis and actual performance (e.g., Swanson’s (2007) process of diagnosing performance).

Need for a Framework for Performance Gaps

Many researchers and models, including the ISPI HPT model, indicate that defining a gap is the goal of performance analysis. Despite this emphasis on the importance of gap analysis, no framework for performance gaps was found in the HPT field. There is a common consensus that performance gaps are derived from a comparison of desired and actual performances (e.g., Stolovitch and Keeps 1999; Chevalier 2008; Nickols 2011). Beyond this consensus, however, it is difficult to find theoretical or practical research on performance gaps, such as classifications, characteristics, or guidelines for ascertaining them.

In the current research, the performance gaps clustered from the cases were quite varied and inconsistent. Some performance gaps were more like performance symptoms, such as high turnover rates, whereas others were closer to causes of performance problems, such as skills and knowledge deficits. Due to the absence of a framework for performance gaps, the identification of performance gaps was divergent and erratic. Currently, identifying performance gaps rests on an HPT consultant’s personal education and experience rather than on validated frameworks, which may prevent HPT consultants from seeing appropriate interventions (Wittkuhn 2004). Therefore, differing understandings of performance gaps, resulting from the absence of a performance gap framework, may hinder the success of HPT processes and the development of the HPT field. The results regarding divergent performance gaps indicate the need for academic and practical efforts to develop performance gap guidelines and a framework.

Cause Analysis

Some form of cause analysis was found in 29 of the 30 cases. The research findings thus support the notion that cause analysis as presented in the ISPI HPT model is epitomized in the HPT business cases. However, an important point has to be noted regarding the systematic relationship between performance analysis and cause analysis: the relationship between the two analyses is a logical sequence rather than a practical procedure.

Performance Analysis and Cause Analysis: Mental Sequence Rather Than Practical Procedure

The ISPI HPT model implies a systematic relationship among its stages, meaning that the output of one stage becomes the input to the next. The output of performance analysis, a performance gap, should thus be an input to the next stage, cause analysis. The research findings support a systematic relationship between the performance analysis stage and the cause analysis stage: logically, consultants examined the causes of performance gaps in the cause analysis stage.

This is certainly a logical procedure, but is it a practical procedure as well? In other words, when consultants carry out the HPT process, do they interview top management, managers, and employees, review documents, and observe people’s work in order to identify only performance gaps, and then, after finding a performance gap, conduct interviews, document reviews, and observation again for cause analysis? It seems not. When experienced HPT consultants conduct analysis, they search for performance gaps and their causes simultaneously.

The sequence of performance analysis and cause analysis as presented in the ISPI HPT model is thus not a practical, linear procedure but rather a logical and mental iterative process. This idea is supported throughout the cases: cause analysis was often blended with performance analysis, and descriptions of performance gaps and causes were not always presented or identified sequentially. This supports Hoard and Stefaniak’s (2016) argument that performance and cause analyses should be merged because they occur simultaneously.

When the model developers generated the model, they intended it to serve as a job aid for actual HPT processes (D. Van Tiem, personal communication, April 10, 2010). If the ISPI HPT model is accepted as a job aid, however, this creates considerable confusion about what is depicted as the beginning of the HPT process. This is not only an issue for the ISPI HPT model; other process models and analysis frameworks also fail to clearly articulate the differentiation between actual processes and mental processes.

Delimitation and Limitation of the Study

This research does not aim to examine the effectiveness of the performance analysis process in the ISPI HPT model directly. Direct scrutiny of the effectiveness of performance analysis would require the assumption that the HPT consultations in the sample cases were conducted using only the ISPI HPT model, and it is not assured that the cases in the research sample were carried out using that model. Consequently, the primary purpose of the research could not be a direct examination of the model’s effectiveness. Rather, the purpose was to examine what the consultants had done and to ascertain whether it is epitomized by the ISPI HPT model, regardless of the model or framework they used.

A major limitation is that the results of the analysis come from self-reported cases. The researcher analyzed self-reported cases written by HPT consultants and published in books, journals, or monographs. Case authors, intentionally or not, may have exaggerated or omitted HPT processes in order to highlight what was done well or to conceal shortcomings, and these biases may be reflected in the written cases. It is also possible that, as editors reviewed case content including HPT processes, edits made for editorial reasons resulted in adaptations to the cases.

In addition, it was difficult to triangulate the findings, which are reported only from the content analysis. The researcher made efforts to select 30 diverse cases from different sources; however, because results are reported only through content analysis, without other methods such as interviews or direct observation, methodological triangulation was unattainable.

Conclusion: Refined Performance Analysis Process

The research findings suggest that the performance analysis process in the ISPI HPT model be revised in order to better explain the actual HPT process. As discussed above, the performance analysis process as presented in the ISPI HPT model is not well epitomized in the HPT business cases. In detail, organizational analysis was not reported in a majority of the cases, and the connections between organizational analysis and desired performance and between environmental analysis and actual performance were not specified. As the original model states, performance gaps are identified by comparing actual and desired performances. In addition, performance analysis and cause analysis are not an actual sequence of work but a mental and logical sequence.

Based on the findings and discussion, the researcher refined the performance analysis process (see Fig. 8). In order to distinguish the actual and logical processes, solid lines indicate the actual process and dotted lines show the logical flow. HPT professionals can therefore see both the actual and mental flows in the refined performance analysis process.

Fig. 8 Refined performance analysis process

This research also identified needs for further research. First, as mentioned above, frameworks and guidelines for performance gaps are not well developed in the HPT field. In light of the importance of performance gaps, developing such a framework would be a valuable research topic; a framework for performance gap analysis would clarify the distinctions between performance gaps and causes and would elaborate the analysis processes. Another interesting future research topic would be to examine how HPT experts and novices conduct the performance analysis process and for whom the model serves better as a job aid. Such research would help elaborate the model and provide better guidance for HPT professionals at various levels in their work.