1 Introduction

This chapter presents a methodological approach for measuring an organization's technology transfer capabilities. The approach combines action research in the first phase with hierarchical decision modeling (HDM) in the second phase. Rather than assessing a single technology, project, or program, it assesses the organization as a whole; that is, the model brings insight into how ready the organization is to successfully transfer technologies from the research stage into an operational stage. The following sections explain action research as a research approach and HDM as a decision-making method, and then present the assessment framework with the steps needed to build and apply the model.

2 Technology Transfer Assessment Method

This section introduces the two main pillars of the technology transfer assessment method: action research and hierarchical decision modeling.

The proposed method makes use of action research in the preliminary stages (helping to build the initial model) and hierarchical decision modeling (HDM) for the remaining stages. Figure 18.1 summarizes the methodological approach.

Fig. 18.1 Methodological approach—AR and HDM integration (action research and literature review as the two inputs to the HDM model)

As Fig. 18.1 shows, action research and an analysis of the body of knowledge (literature review) are jointly used to create inputs for an HDM model. The action research component brings the experience and points of view of practitioners who deal with technology transfer on a daily basis, while the literature review component complements it with theoretical insights extracted from decades of relevant research in the field. Both components are then merged into a single hierarchical decision model, which is validated and quantified before its application. The following sections explain in further detail both action research as an approach and HDM as a decision-making method.

2.1 Action Research

Action research is seen as a method for practitioners to get their hands dirty and actively change something in the real world, and it is also seen as an effective method to create knowledge [1]. It is a very diverse and dynamic methodology, and authors argue that there is no single definition of, or manner of conducting, action research [1,2,3]. O’Brien states that action research can be thought of as “learning by doing” and lists several alternative names by which the methodology is known: participatory research, collaborative inquiry, emancipatory research, action learning, and contextual action research [4]. Tripp defines it as any kind or variation of action inquiry in which the researcher aims to improve practice by acting upon it and later inquiring into the action’s results, in a cyclical fashion [5].

Action research bridges the traditional divide between research and application, since the methodology entails researching with practitioners rather than researching on practitioners. It is a methodology that lies at the boundary between academia and practice [1]. In the words of Reason and Bradbury, “It seeks to bring together action and reflection, theory and practice, in participation with others, in the pursuit of practical solutions to issues of pressing concern to people, and more generally the flourishing of individual persons and their communities” [1].

According to Reason and Bradbury [1], the four steps of action research are as follows:

  • Step 1: Creating communities of inquiry within communities of practice: Shortens the distance and makes no distinction between scientists and practitioners

  • Step 2: Building theories in practice: Going to the practical sphere to build theory

  • Step 3: Combining interpretation with “rigorous” testing: Tests the theory with practical applications

  • Step 4: Changing the status quo: Causes actual change to the practitioner’s systems

Similarly, Susman and Evered [6] describe the action research approach in five phases that repeat themselves in a cyclical manner:

  • Phase 1: Diagnosing: Identifying the problem

  • Phase 2: Action Planning: Devising a plan on how to act on the problem

  • Phase 3: Action Taking: Executing the plan

  • Phase 4: Evaluating: Understanding the results of the actions taken

  • Phase 5: Learning: Learning from the experience and starting the cycle over

Action research (AR) is often referred to as a methodological approach, rather than a method, i.e. several methods can fit inside this methodological approach. AR transforms reality, and it is a methodological approach that has a performative perspective as one of its most important components [7]. Strengthening this view of action research as an approach rather than a method itself, Tripp argues that AR cannot be used as a single method in a dissertation, and that it will always require a second method to complement it [5]. Similarly, Dick emphasizes that AR as an approach takes advantage of several different methods and tools to achieve the desired changes [8].

Action research is a way through which researchers influence a system, and this influence (action) also creates important knowledge about the system [6]. It is an approach that creates the conditions for better decision-making about practice, since the process unfolds in a systematic way and inside the practice [9]. To conduct AR, the researcher should, at the same time, actively engage in the action and reflect on the actions taken, generating positive changes to the practical system and generating useful knowledge for the theory [10]. Action research is not only beneficial in practical aspects but also generates knowledge [8]. According to Chandler and Torbert, action research aims not only to understand a system but also to present the future conditions of the system [2]. The approach is focused on resolving real issues and is applied in real conditions and environments rather than in enclosed, controlled, and experimental ones [4]. According to Ferrance, the action research approach brings benefits by focusing on the issue to be solved and by allowing professional development of those involved [3].

Scholars also praise the iterative nature of AR. Tripp says that AR is an approach that makes use of different techniques to provoke changes in reality, and its iterative nature is possibly its most distinguishing feature where the end of a cycle is always the starting point of another and serves as an improvement opportunity [5]. AR operates in iterative cycles of action and reflection on the action, bringing desirable changes that are not easily achieved otherwise [11].

The active participation of the researcher is a very important feature of AR. As Dick summarizes it, even when the word “participatory” is not used, the active participation of the researcher brings several benefits to the approach, such as the commitment to the actions that were agreed upon, the commitment to information sharing, and the commissioning of the people involved in the effort [8]. The participation of practitioners in the process is also very important. In the words of Village et al., “in the AR approach, it is the responsibility of researchers and practitioners together to define the plan, carry out the initiatives, and monitor what is helping or not helping achieve the goal in the organization” [10, p. 1576]. Furthermore, as Dick and Greenwood put it:

Action research rejects this pattern of behavior and organization. For action researchers a key concept is a dual commitment to both participants and action. Action research is done with rather than on, the participants – as is often stated. Ideally, the participants become equal partners and co-researchers. The research is done to provide learning and understanding (and theory) that can be used by participants to improve their situation for the benefit of all. For the most action researchers, as far as feasible these are imperatives [11, p. 195].

Notwithstanding its popularity and use in social sciences fields such as psychology, sociology, and anthropology, action research is also indicated as a good way to tackle business and management issues. Although academics may use action research in a way that is excessively theory-oriented for business purposes, there is a balance to be reached so that consultants can make use of the approach to solve management problems [12]. According to Perona et al., action research would be very useful for operations management research, particularly for modeling organizational processes [13]. The action research approach enables a much deeper and more detailed understanding of the organizations being studied than other traditional management research approaches, e.g., surveys and interviews, and is thus more advantageous [14]. Action research has also been recommended specifically for technology management. In his research on knowledge management using participatory action research, Ottosson argues that had he chosen other, more traditional methods of research, the results he achieved would not have been possible [15]. The same author goes on to affirm that the approach helped especially with gathering information and generating insights about new product development, innovation management, and change and project management [15].

As previously explained, action research does not belong to any particular realm or field. On the contrary, the fields of study being explored using the action research approach are numerous. Table 18.1 presents a small sample of the application areas, subjects, and issues addressed through action research in the literature.

Table 18.1 Action research application areas

As previously stated, AR is more of a research approach than a research method. It frames the way the researcher regards the problem, the way the researcher interacts with the people involved in the problem, and, most importantly, the way the researcher tries to solve the problem (research objective). In order to perform AR, the researcher has to actively engage with practitioners, participating in discussions and activities as a member of the team. Furthermore, after this active participation, changes have to be proposed and implemented, aiming to change the status quo and improve the practitioners’ systems. If the researcher only describes or facilitates discussions, and does not produce any actual change to the system, he/she has not used action research. Producing changes is a crucial aspect of action research, and Table 18.2 summarizes the changes sought by researchers in some of the studies available in the literature.

Table 18.2 Action research producing changes

As a research approach, AR can be combined with many different methods for data collection and data analysis. For instance, one could use AR as an approach, use focus groups or interviews as data collection methods, and use statistical analysis or grounded theory as data analysis methods. In truth, action research must be combined with data collection and data analysis methods, because it is not a method in itself but a methodological approach (as explained above). As explained earlier, action research will always require a second method to complement it [5]. This assertion is supported by the fact that the majority of AR studies employ one or more “auxiliary methods”, with the exception of conceptual papers that discuss action research as an approach rather than applying it. A myriad of different methods of data collection and data analysis can be effectively and successfully blended into action research, as shown in Table 18.3.

Table 18.3 Action research and auxiliary methods

It is clear that AR is a good fit for qualitative and quantitative methods that use experts’ opinions. Moreover, it has been argued that action research works well with methods that use ranking and pairwise comparisons, and that it should be used more often coupled with quantitative methods [7].

2.2 Hierarchical Decision Modeling (HDM)

Hierarchical decision modeling (HDM) is a multicriteria decision-making (MCDM) method developed in the 1980s by Kocaoglu [35]. The basic idea of HDM is to represent the problem in a hierarchical structure, so that the decision-makers can visualize which items (criteria and sub-criteria) affect the objective/mission. According to Munkongsujarit et al., HDM helps the decision-maker by presenting the decision problem as a cascade of problems that are simpler to handle [36]. The model breaks down the various elements of the problem into smaller sub-problems such that the decision problem is represented as a hierarchy [37]. HDM is a tool used in decision-making to rank and evaluate the available alternatives and to determine the best among them [36]. It is a tool that helps decision-makers quantify and incorporate quantitative and qualitative judgments into a complex problem [38].

HDM has been used in a variety of cases and for several purposes, especially in technology management, trying to evaluate and tell which technology alternative is the best option in a particular setting, given the criteria established to evaluate the alternatives. According to Munkongsujarit et al., hierarchical decision models assist the decision-makers by providing a systematic way to evaluate all available alternative solutions to the problem according to the relative importance of the criteria and finally in identifying the best possible solution [36].

The basic structure of HDM can vary depending on each application’s needs. The most traditional structure is MOGSA, a five-level structure containing Mission, Objectives, Goals, Strategies, and Actions. However, simpler structures can be used, such as a three-level model containing Mission, Criteria, and Alternatives or a four-level model containing Mission, Criteria, Sub-Criteria, and Alternatives. According to Sheikh et al., with HDM, multiple perspectives can be prioritized and their associated criteria ranked [39], so as to understand which criteria and/or perspectives are more important and to what degree.
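As a minimal illustration only, the sketch below shows how such a three-level structure (Mission, Criteria, Alternatives) could be represented in Python; the mission, criterion names, and weights are hypothetical placeholders, not elements of the model developed in this chapter.

```python
# Hypothetical three-level HDM structure (Mission -> Criteria -> Alternatives).
# All names and weights below are illustrative placeholders.
model = {
    "mission": "Select the best technology alternative",
    "criteria": {                      # local weights relative to the mission
        "Technical maturity": 0.40,
        "Cost": 0.35,
        "Strategic fit": 0.25,
    },
    "alternatives": ["Technology A", "Technology B", "Technology C"],
}

# Criterion weights must sum to 1 so that alternative scores roll up
# consistently to a single value relative to the mission.
assert abs(sum(model["criteria"].values()) - 1.0) < 1e-9
```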

In order to apply HDM, it is necessary to select experts (in the specific field under study) who help create the model and evaluate the relationships between the objective, criteria, sub-criteria, and alternatives. The experts make pairwise comparisons among the items in the model (criteria, sub-criteria, and alternatives) to determine their weights and relationships using the constant-sum method (dividing a total of 100 points between the two items being compared). The results of the comparisons are then extracted into matrices, whose values are normalized and processed in order to rank the alternatives. In the end, it is possible to determine which alternative is the best, considering the criteria and the evaluations made by the experts involved. As Turan et al. state, in the HDM model, pairwise comparisons are made to express the importance of one element of the decision problem with respect to another (criteria and alternatives) [40].
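A simplified sketch of this weighting step is shown below, assuming hypothetical constant-sum judgments for three criteria; the ratio matrix is column-normalized and its rows averaged, which approximates the weight-extraction step described above (Kocaoglu's full orientation-based procedure involves additional steps not shown here).

```python
import numpy as np

# Hypothetical constant-sum judgments for three criteria A, B, and C.
# pts[i, j] is the share of 100 points given to the row item when it is
# compared against the column item, so pts[i, j] + pts[j, i] == 100.
pts = np.array([
    [50.0, 60.0, 70.0],   # A vs A, A vs B, A vs C
    [40.0, 50.0, 55.0],   # B vs A, B vs B, B vs C
    [30.0, 45.0, 50.0],   # C vs A, C vs B, C vs C
])

ratios = pts / pts.T                        # preference ratio of row item over column item
normalized = ratios / ratios.sum(axis=0)    # normalize each column so it sums to 1
weights = normalized.mean(axis=1)           # average the rows -> local weights

# Weights sum to 1; with these judgments roughly A ~ 0.48, B ~ 0.30, C ~ 0.22.
print(dict(zip("ABC", weights.round(2))))
```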

As stated earlier, HDM has been applied in several different settings and fields, proving that it is, indeed, an effective method. The fields and areas that were explored using HDM are (but not limited to) computer selection [37]; agriculture [41]; university housing [36]; selection of graduate school [40]; transportation options [42]; solar photovoltaic technologies [39]; health technology assessment [43]; semiconductors industry [38]; energy [44]; and technology transfer [45]; among others.

Engineering and research managers are frequently faced with multilevel decisions under conflicting objectives and criteria. They develop technical strategies to fulfill multiple goals; allocate resources to implement multiple strategies; and evaluate their projects and programs in terms of time, cost, and performance characteristics [35]. As the world has become more complex, decision problems have followed suit and must contend with increasingly complex relationships and interactions among the decision elements. To assist decision-makers and analysts, different methods have been developed to decompose problems into hierarchical levels and formulate hierarchical decision models (HDM) [46]. As Taha et al. state, the decision process is as important as the decision itself. Thus, choosing the right method to aid in the decision process can be the difference between success and failure [37]. Still, according to the same authors, the best decision model to use when subjective judgment is needed to evaluate and select a solution with many criteria is the hierarchical decision model (HDM) [37].

The concept of desirability functions is used to calculate the technology transfer score. For each of the factors in the model, levels (or metrics) are set, and experts are prompted to assign a desirability value between 0 and 100 to each of those levels (with 0 representing the least desirable situation and 100 the most desirable). The desirability values are used to plot the desirability curves, whose shapes vary depending on the nature of each factor. The great advantage of using desirability functions is the flexibility they provide to the model. After capturing the experts’ judgments on each factor through the desirability values, one can replicate the model and apply it repeatedly to different alternatives without having to go back to the experts, provided the weights of perspectives and factors remain unchanged. Conversely, without desirability functions, one would have to return to the experts and restart the quantification process with every change in the alternatives.
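A minimal sketch of how a desirability curve can be encoded and reused is shown below; the factor, its metric levels, and the desirability values are hypothetical and would in practice come from the expert panel.

```python
import numpy as np

# Hypothetical desirability curve for a single factor, e.g. "technology
# transfer agreements signed per year". Levels and the 0-100 desirability
# values would be elicited from the experts.
levels = np.array([0, 2, 5, 10, 20])           # metric levels of the factor
desirability = np.array([0, 20, 55, 85, 100])  # expert-assigned desirability values

def desirability_of(metric_value: float) -> float:
    """Read the desirability of an observed metric value off the curve."""
    return float(np.interp(metric_value, levels, desirability))

# Re-applying the model to a new alternative only requires reading its metric
# off the curve; the experts do not need to be consulted again.
print(desirability_of(7))   # a value between 55 and 85 (here 67.0)
```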

According to the HDM methodology developed by Kocaoglu [35], pairwise comparisons are made between each item in every layer of the model. After conducting the pairwise comparisons, normalized matrices are generated with the expert judgments. The importance of every component of a given layer relative to the layer right above it is extracted by averaging the rows of the normalized matrices. The importance of every model component relative to the first layer (or the global importance) is calculated by multiplying its local importance (relative only to the layer above it) by the importance of its “parents” relative to the first layer. By bringing this rationale to the model, and using three layers, the calculation of the factors’ importance relative to the mission (organizational TT score) will be given by the following equation:

$$ {S}_{n, jn}^{\mathrm{TT}}=\left({P}_n^{\mathrm{TT}}\right)\left({F}_{n, jn}^P\right) $$

where \( {S}_{n, jn}^{\mathrm{TT}} \) = relative value of the jnth factor under the nth perspective with respect to the TT score; \( {P}_n^{\mathrm{TT}} \) = relative priority of the nth perspective with respect to the TT score, n = 1, 2, 3 … N; and \( {F}_{n, jn}^P \) = relative contribution of the jnth factor under the nth perspective, jn = 1, 2, 3 … Jn.

Once the importance of each factor relative to the mission is known, the organizational TT score is determined by multiplying the global importance of each factor by its desirability value and summing the results, as shown in the following equation:

$$ \mathrm{Org}\ \mathrm{TT}\ \mathrm{Score}=\sum \limits_{n=1}^N\sum \limits_{jn=1}^{Jn}\left({S}_{n, jn}^{\mathrm{TT}}\right)\left({D}_{n, jn}\right) $$

where \( {S}_{n, jn}^{\mathrm{TT}} \) = relative value of the jnth factor under the nth perspective with respect to the TT score; and \( {D}_{n, jn} \) = desirability value of the performance measure corresponding to the jnth factor under the nth perspective.
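The following sketch works through these two equations on hypothetical numbers: perspective priorities and local factor contributions are multiplied to obtain the global factor values, which are then combined with desirability values to produce the organizational TT score. All names and values are invented for illustration.

```python
# Hypothetical perspective priorities (P_n^TT), local factor contributions
# (F_{n,jn}^P), and desirability values (D_{n,jn}); all values are illustrative.
perspectives = {
    "Organizational": {"priority": 0.5,
                       "factors": {"Leadership support": (0.6, 80),
                                   "TT office maturity": (0.4, 50)}},
    "Technological":  {"priority": 0.5,
                       "factors": {"Readiness management": (0.7, 60),
                                   "IP processes":         (0.3, 90)}},
}

org_tt_score = 0.0
for perspective in perspectives.values():
    for contribution, desirability in perspective["factors"].values():
        s_njn = perspective["priority"] * contribution   # S_{n,jn}^TT (global value)
        org_tt_score += s_njn * desirability             # accumulate the (S)(D) term

# 0.30*80 + 0.20*50 + 0.35*60 + 0.15*90 = 68.5
print(org_tt_score)
```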

During the quantification phase, the levels of individual logical inconsistency and group disagreement are also calculated. The inconsistency level measures how logical each expert is when performing the pairwise comparisons. For instance, given three factors A, B, and C, if A is better than B and B is better than C, then A must be better than C if one is to be logically consistent. The disagreement level measures how much disagreement exists between the various experts in their judgments. The formulas used to compute both measures are presented in detail in Sects. 3.6.1 and 3.6.2.


3 Proposed Framework

Figure 18.2 is a chart illustrating the approach used.

Fig. 18.2 Research approach (steps: AR and literature review, initial model, expert panel formation, model validation, model quantification, model application and analysis)

3.1 First Step: AR and Literature Review

As the first step, a literature review and an action research project are conducted. As previously explained, the literature review contributes the concepts and factors that have been investigated and analyzed through decades of academic research in the field of technology transfer. The action research project, conversely, contributes the points of view, expertise, and daily challenges of technology managers in a practical setting. It is imperative that the researcher go through all four stages of the action research project in order to better integrate the concepts and factors that arise from practice with those found in the body of knowledge.

3.2 Second Step: Initial Model

In the second step, the concepts and factors identified in the previous step are merged and used by the researcher to build an initial HDM model, following the methodology described in the previous sections.

3.3 Third Step: Expert Panel Formation

According to the Cambridge dictionary, an expert is “a person having a high level of knowledge or skill in a particular subject” [49]. An expert panel is a group of experts who are brought together to discuss a subject or provide a service, such as feedback or recommendations [50]. In the third step, experts are identified to provide input to the model, first by validating the initial model and then by quantifying it.

The most important challenges in working with expert opinions are the potential biases and the experts’ overconfidence in judging subjects and situations they know well [51]. In the words of Morgan, “because experts are human, there is simply no way to eliminate cognitive bias and overconfidence” [52, p. 7183]. Identifying and recruiting the best experts for the situation, while making sure the results of the panel are reliable, is also a significant challenge [48, 53]. Balancing the expert panel is also a concern, and researchers need to make sure each panel represents a robust and significant sample of the existing knowledge in the field [48].

The size of the panels is a major concern as well [48]. Commenting on the Delphi method, Phan states that the most recommended size of an expert panel is 10 to 15 experts [47]. Nonetheless, successful studies have been conducted using sub-groups of experts as small as five members [54] or three members [53]. Since dealing with a large number of experts increases the process complexity exponentially, it has been argued that the maximum number of experts per panel should be 12 [55]. Leveraging the work done in past dissertations [45, 48, 53, 54, 56, 57], it is safe to say that expert panels composed of 6–12 experts each are reliable and at the same time manageable.

Experts should be selected taking into account several aspects: how much of an expert the person is, judged by the significance of their contributions to the field of study; minimizing bias as much as possible (i.e., checking whether the selected experts have any special reason or personal interest that would increase the potential for bias); and how available and willing the experts are (i.e., not only should the person be an expert, but he/she should also be willing to fully participate in the study, spending enough time and attention on their tasks as an expert) [48, 54, 56].

Due to the challenges mentioned above, selecting experts is not a trivial task. According to Tran, the most appropriate methods for choosing experts are the use of personal connections, for instance when the researcher has easy access to knowledgeable people in the field; snowball sampling, in which the researcher starts with a small group of experts who in turn recommend further experts, and so on; and social network analysis, in which the researcher builds a network based on collaborations, coauthorship, or citations in order to discover the most relevant and influential actors in the field of study [53].

3.4 Fourth Step: Model Validation

The model validation step is illustrated by Fig. 18.3.

Fig. 18.3 Model validation framework (perspective validation, factor validation, desirability curve metrics validation)

Survey instruments are created and sent to the experts, along with documents explaining the objectives of the research and the model. All details and steps of the model building, as well as the objectives of the model, should be clearly and thoroughly explained, so that biases and misunderstandings are minimized. This process is repeated to validate the model’s perspectives, factors, and desirability curve metrics.

3.5 Fifth Step: Model Quantification

The model quantification step is illustrated by Fig. 18.4.

Fig. 18.4 Model quantification framework (perspective quantification, factor quantification, desirability curve values quantification)

Research instruments are created and sent to the experts, along with documents explaining the objectives of the research and the model. Although software is available to collect and analyze the experts’ input, the researcher can also opt to collect the data via online survey systems and process it afterward.

3.6 Sixth Step: Model Application and Analysis

As the last step, the model is applied in the organization whose technology transfer capabilities the researcher wishes to assess. Internal subject matter experts and managers should be contacted in order to determine where on each desirability curve the organization is situated. After the model is applied, sensitivity analysis should be performed in order to check and understand the impact on the total TT score of changes in the priorities (importance) of the model perspectives. The following subsections explain possible inconsistency, disagreement, and sensitivity analyses.

3.6.1 Inconsistency Analysis

The inconsistency analysis is one of the key data analysis items in applying the HDM methodology [53]. According to Estep, “generally, inconsistency can be defined as disagreement within an individual’s evaluation” [45, p. 75]. In the words of Abotah, “inconsistency is a measure that explains how reliable and homogeneous in his or her answers each expert was through the whole questionnaire” [48, p. 64]. In other words, the inconsistency of an expert can be thought of as the logical incoherence of his/her judgments. For instance, given three factors A, B and C, if A is better than B and B is better than C, A must be better than C if one is to be logically consistent (ordinal consistency). Moreover, if A is two times better than B and B is three times better than C, then A must be six times better than C, if one is to be logically consistent (cardinal consistency). Chan argues that inconsistencies in experts’ judgments are common in AHP-based studies [54]. Following the same reasoning, Gibson states that one should expect inconsistency to occur when experts face multiple decisions and have to judge items [57].

In more technical terms, in the words of Phan from his PhD dissertation in 2013:

For n elements, the constant sum calculation results in a vector of relative values r1, r2, …, rn for each of the n! orientations of the elements. For example, if three elements are evaluated, n is 3, and n! is 6. The 6 orientations would be ABC, ACB, BAC, BCA, CAB, and CBA. If an expert is consistent in providing pairwise comparisons, the relative values are consistent for each orientation. However, if an expert is inconsistent in providing pairwise comparisons, the relative values are inconsistent for each orientation. The inconsistency in this methodology is measured by the variance among the relative values of the elements calculated in the n! orientations. [45, p. 47]

The formulas to calculate the inconsistency level are the following, adapted from [35, 47, 48]:

Let

  • rij = relative value of the ith element in the jth orientation for an expert;

  • \( {\overline{r}}_i \) = mean relative value of the ith element for that expert, given by

$$ {\overline{r}}_i=\frac{1}{n!}\sum \limits_{j=1}^{n!}{r}_{ij} $$

Inconsistency in the relative value of the ith element is

$$ \sqrt{\frac{1}{n!}\sum \limits_{j=1}^{n!}{\left({\overline{r}}_i-{r}_{ij}\right)}^2}\ \mathrm{for}\ i=1,2,3\dots n $$

Variance of the expert in providing relative values for the n elements is

$$ \mathrm{Inconsistency}=\frac{1}{n}\sum \limits_{i=1}^n\sqrt{\frac{1}{n!}\sum \limits_{j=1}^{n!}{\left({\overline{r}}_i-{r}_{ij}\right)}^2} $$
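A small numerical sketch of this calculation is shown below; the relative-value vectors for the n! orientations are hypothetical inputs (deriving them from the raw constant-sum judgments follows Kocaoglu's procedure and is not reproduced here).

```python
import numpy as np

# r[j, i] = relative value of element i in the j-th of the n! orientations
# for a single expert (hypothetical values for n = 3 elements, 3! = 6 orientations).
r = np.array([
    [0.48, 0.31, 0.21],
    [0.50, 0.29, 0.21],
    [0.47, 0.32, 0.21],
    [0.49, 0.30, 0.21],
    [0.46, 0.33, 0.21],
    [0.50, 0.30, 0.20],
])

r_mean = r.mean(axis=0)                                   # mean relative value of each element
per_element = np.sqrt(((r - r_mean) ** 2).mean(axis=0))   # inconsistency of each element
inconsistency = per_element.mean()                        # expert-level inconsistency

print(round(float(inconsistency), 4))   # should stay below the 0.10 threshold
```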

As noted by Kocaoglu [35] and per the precedent established by other studies [45, 48, 53, 54, 57], the inconsistency level should not be higher than 10% in order to be considered acceptable. Should the inconsistency level exceed the 10% mark, more careful consideration is needed (e.g., the most inconsistent experts should be asked to repeat the judgments, and in extreme cases the most inconsistent judgments could be excluded from the analysis) [57]. Additionally, in case of large inconsistencies, another method of calculating the inconsistency could be used to further analyze the matter, such as the root-sum of variances (RSV) method created by Abbas [58], which takes into account the number of pairwise comparisons the experts are making. The following formulas depict the calculations used and are adapted from [58].

$$ \mathrm{RSV}=\sqrt{\sum \limits_{i=1}^n{\sigma}_i^2} $$

where HDM inconsistency = root-sum of variances (RSV) and \( {\sigma}_i^2 \) = variance of the relative values of the ith decision element.

$$ {\sigma}_i=\sqrt{\frac{1}{n!}\sum \limits_{j=1}^{n!}{\left({x}_{ij}-{\overline{x}}_{ij}\right)}^2} $$

where xij = normalized relative value of the variable i for the jth orientation in n factorial orientations and \( {\overline{x}}_{ij} \) = mean of the normalized relative value of the variable i for the jth orientation.

$$ {\overline{x}}_{ij}=\frac{1}{n!}\sum \limits_{j=1}^{n!}{x}_{ij} $$

where \( {\overline{x}}_{ij} \) = mean of the normalized relative value of the variable i for the jth orientation and xij = normalized relative value of the variable i for the jth orientation in n factorial orientations.
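Under the same assumptions as the previous sketch (hypothetical relative values for the n! orientations), the RSV variant can be computed as follows.

```python
import numpy as np

# x[j, i] = normalized relative value of variable i in the j-th of the n!
# orientations (hypothetical data, as in the previous sketch).
x = np.array([
    [0.48, 0.31, 0.21],
    [0.50, 0.29, 0.21],
    [0.47, 0.32, 0.21],
    [0.49, 0.30, 0.21],
    [0.46, 0.33, 0.21],
    [0.50, 0.30, 0.20],
])

variances = x.var(axis=0)        # sigma_i^2 for each decision element
rsv = np.sqrt(variances.sum())   # HDM inconsistency as the root-sum of variances

print(round(float(rsv), 4))
```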

3.6.2 Disagreement Analysis

The disagreement analysis is also noted as one of the key data analysis items in applying the HDM methodology [53]. In the words of Tran, “the agreement among the experts’ judgment is represented by a disagreement value of the expert group in a pairwise comparison procedure” [53, p. 65]. Quoting from Abotah’s dissertation, “the disagreement of experts can be understood as the deviation of their judgments from each other” [48, p. 59]. Measuring and treating the disagreement levels is especially important in order to guarantee the significance of the results of the experts’ judgments [53]. It would be problematic if researchers did not check the agreement level between the raters before conducting any data analysis [59].

Although disagreement is natural among experts, it should be treated. If the disagreement level is greater than what is acceptable, another round of judgments can be conducted with the aim of reaching a consensus or quasi-consensus (following the Delphi methodology). However, in cases where the vast majority of experts agree but one or a few outliers push the disagreement level up, a follow-up with the outliers should be conducted in order to check whether they have correctly interpreted the components and concepts involved in the study [45]; removing those outliers from the pool of experts can also be contemplated as a viable option in extreme cases.

A common method of measuring the disagreement between experts is to use the PCM group disagreement index, according to the following formula, which is adapted from [47, 48, 56].

$$ d=\sqrt{\frac{1}{m}\sum \limits_{j=1}^m\frac{1}{n}\sum \limits_{i=1}^n{\left({R}_i-{r}_{ij}\right)}^2} $$

where Ri = group relative value of the ith element; m = number of experts; n = number of decision variables; rij = mean relative value of the ith element for the jth expert.
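A minimal sketch of this group disagreement calculation is shown below, assuming hypothetical mean relative values from four experts over three decision variables.

```python
import numpy as np

# r[j, i] = mean relative value of element i for expert j
# (hypothetical judgments from m = 4 experts over n = 3 decision variables).
r = np.array([
    [0.45, 0.35, 0.20],
    [0.50, 0.30, 0.20],
    [0.40, 0.35, 0.25],
    [0.55, 0.25, 0.20],
])

group_values = r.mean(axis=0)   # R_i: group relative value of each element

# d = sqrt( (1/m) * sum_j [ (1/n) * sum_i (R_i - r_ij)^2 ] )
d = np.sqrt((((group_values - r) ** 2).mean(axis=1)).mean())

print(round(float(d), 4))   # values above roughly 0.10 would prompt a follow-up round
```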

In order to use this method, precedent has it that an acceptable disagreement level would be 10% or less [45, 48, 56, 57]. Hierarchical agglomerative clustering (HAC) has also been used in previous dissertations to complement the disagreement measurement and interpretation [56, 57].

Two other very common methods of measuring the disagreement among experts are the intraclass correlation coefficient (ICC) and the F-test. Several authors have used ICC and F-test as the measurements of disagreement among experts [47, 53, 54].

The ICC is at the same time a measure of intrarater reliability and interrater reliability, and it is extensively used across different disciplines [59, 60]. Complete agreement between experts results in an ICC of 1.00, while complete disagreement results in an ICC of 0. It has been argued that an ICC of 0.7 or higher indicates an acceptable level of agreement [61]. However, some authors reason that the minimum acceptable ICC varies on a case-by-case basis, depending heavily on the research questions, objectives, and data used [59]. There are several different ways of applying the ICC: there are three different models, each with different forms and types. The researcher should be aware of his/her needs in order to choose the most appropriate ICC model and its features [59].

The ICC is estimated according to the following formula, adapted from [60]:

$$ \mathrm{ICC}=\frac{{\mathrm{MS}}_R-{\mathrm{MS}}_E}{{\mathrm{MS}}_R+\left(K-1\right){\mathrm{MS}}_E+\frac{K}{N}\left({\mathrm{MS}}_C-{\mathrm{MS}}_E\right)} $$

where MSR = mean square for rows (i.e., targets); MSC = mean square for columns (i.e., judges); MSE = mean square error, all obtained from a two-way ANOVA; K = number of observations (e.g., ratings or judges) for each of the N targets; and N = number of targets.
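The sketch below computes this ICC from a hypothetical ratings matrix by first deriving the two-way ANOVA mean squares; the targets, judges, and ratings are invented for illustration.

```python
import numpy as np

# ratings[t, j] = rating given by judge j to target t
# (hypothetical data: 5 targets rated by 3 judges).
ratings = np.array([
    [70.0, 65.0, 72.0],
    [40.0, 45.0, 42.0],
    [55.0, 50.0, 58.0],
    [80.0, 78.0, 85.0],
    [30.0, 35.0, 28.0],
])

n_targets, k_judges = ratings.shape
grand_mean = ratings.mean()

# Two-way ANOVA sums of squares and mean squares.
ss_rows = k_judges * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
ss_cols = n_targets * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
ss_error = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols

ms_r = ss_rows / (n_targets - 1)
ms_c = ss_cols / (k_judges - 1)
ms_e = ss_error / ((n_targets - 1) * (k_judges - 1))

icc = (ms_r - ms_e) / (ms_r + (k_judges - 1) * ms_e
                       + (k_judges / n_targets) * (ms_c - ms_e))
print(round(float(icc), 3))   # values close to 1.0 indicate strong agreement
```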

The F-test is a statistical test used to compare the ratio of two variances. Some of the assumptions of the test are that the population variances are equal (therefore, the null hypothesis is that the variances are equal), that the population is approximately normally distributed, and that the samples are independent [62]. The work done by Shrout and Fleiss in 1979 used the ICC as a basis and the F-test to check the disagreement levels between raters [63]. The test evaluates the null hypothesis H0 : ICC = 0, meaning that there is no correlation between the values and thus absolute disagreement between the experts. If the null hypothesis is rejected, the alternative hypothesis H1 : not H0 is accepted, meaning there is no statistically significant disagreement between the experts. The F ratio is calculated by the following formula:

$$ F=\frac{{\mathrm{MS}}_R}{{\mathrm{MS}}_E} $$

The resulting ratio is then compared with the F-critical value with the degrees of freedom df1 = dfR and df2 = dfE at a specific level of confidence (usually 95% and above). If the calculated ratio is greater than the F-critical value, the null hypothesis can be rejected (at that specific level of confidence) and no significant disagreement between experts would be present.
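A short sketch of this test is given below, assuming the mean squares and degrees of freedom come from the same two-way ANOVA used for the ICC; the numbers are hypothetical and SciPy is assumed to be available for the F distribution.

```python
from scipy.stats import f

# Hypothetical mean squares and degrees of freedom from a two-way ANOVA.
ms_rows, ms_error = 520.0, 12.0
df_rows, df_error = 4, 8          # (n_targets - 1) and (n_targets - 1) * (k_judges - 1)

f_ratio = ms_rows / ms_error
f_critical = f.ppf(0.95, df_rows, df_error)   # critical value at 95% confidence

if f_ratio > f_critical:
    print("Reject H0 (ICC = 0): no significant disagreement between the experts.")
else:
    print("Cannot reject H0: expert agreement is not established.")
```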

3.6.3 Sensitivity Analysis

The impact of potential changes in the values at the top level of the model (or at any other level, for that matter) is examined by conducting a sensitivity analysis. This test is important because the preset priorities (or weights) of a model’s components might change over time [56, 57], which is especially true in the realm of technology, where changes occur extremely rapidly and constantly. Also, changes in the expert panels might bring new priorities (weights), and given that these changes might occur, a sensitivity analysis is appropriate [45]. The sensitivity analysis shows how robust the decisions and conclusions derived from the model are [48]. It is performed to test and assure the robustness of both the model and the results [56]. The test is also helpful in enhancing the comprehension of how the levels of the model and their components relate to each other [54].

In cases where the model’s output is the ranking of different alternatives, the sensitivity analysis is especially useful to tell if and how much that original ranking would change due to changes in the priorities of the model’s components [47, 56]. For instance, the final ranking of the alternatives might be altered if the criteria relevance is altered, and the sensitivity analysis would measure how strong or disruptive these changes would be.
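As an illustration of this idea, the sketch below perturbs the weight of a single criterion (renormalizing the others) and reports the point at which the ranking of two hypothetical alternatives flips; it is a simple one-way scenario sweep, not the Chen and Kocaoglu algorithm discussed next.

```python
import numpy as np

# Hypothetical criterion weights and alternative scores under each criterion.
weights = np.array([0.5, 0.3, 0.2])
scores = np.array([            # rows = alternatives, columns = criteria
    [0.7, 0.2, 0.4],
    [0.5, 0.6, 0.5],
])

def ranking(w):
    return np.argsort(-(scores @ w))     # indices of alternatives, best first

base_ranking = ranking(weights)

# One-way sweep of criterion 0's weight, renormalizing the other weights
# proportionally, to find where the original ranking is no longer preserved.
for w0 in np.linspace(0.0, 1.0, 101):
    w = np.empty_like(weights)
    w[0] = w0
    w[1:] = weights[1:] / weights[1:].sum() * (1.0 - w0)
    if not np.array_equal(ranking(w), base_ranking):
        print(f"Ranking changes once criterion 0's weight reaches {w0:.2f}")
        break
else:
    print("Ranking is insensitive to criterion 0 over the whole range")
```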

Scenarios can be used to test how much the ranking would be altered in a particular setting (e.g., if one of the top-level priorities is overwhelmingly more important than the rest), as was done in previous studies [45, 48]. Notwithstanding the usefulness of testing the sensitivity of a model through different scenarios, calculating how much perturbation in its priorities a model can endure before yielding different results requires a more complex method. Such a method was created by Chen and Kocaoglu; it calculates the tolerance of a model to changes (i.e., the allowed range of values within which a contribution can change without altering the final ranking produced by the model) [46]. The method was detailed in Chen’s dissertation [64] and has been used extensively since then [45, 47, 53, 54, 56].

The method states that the original output of the model (original ranking) will not be changed if

$$ \lambda \ge {P}_{l\ast}^C\cdot {\lambda}^C $$

for the perturbation \( {P}_{l\ast}^C \) where

$$ -{C}_{l\ast}^C\le {P}_{l\ast}^C\le 1-{C}_{l\ast}^C $$

and

$$ \lambda ={C}_r^A-{C}_{r+n}^A $$

and

$$ {\lambda}^C={C}_{r+n,l\ast}^{A-C}-{C}_{r,l\ast}^{A-C}-\sum \limits_{l=1,l\ne l\ast}^L{C}_{r+n,l}^{A-C}\cdot \frac{C_l^C}{\sum \limits_{l=1,l\ne l\ast}^L{C}_l^C}+\sum \limits_{l=1,l\ne l\ast}^L{C}_{r,l}^{A-C}\cdot \frac{C_l^C}{\sum \limits_{l=1,l\ne l\ast}^L{C}_l^C} $$

The allowance range of perturbations of \( {C}_i^C \) to maintain the original ranking is given by

$$ \left[{\delta}_{i-}^C,{\delta}_{i+}^C\right] $$

and the sensitivity coefficient is given by

$$ 1/\left|{\delta}_{i+}^C-{\delta}_{i-}^C\right| $$

4 Summary

This chapter has presented a methodological approach for building a hierarchical decision model to assess an organization’s technology transfer capabilities. The approach utilizes both an action research component and a literature review component to provide input for building the model. The application of the model, also explained in this chapter, can lead to a greater understanding and awareness of an organization’s technology transfer capabilities, and can be used as a starting point to promote beneficial changes in the organization’s processes, ultimately leading to better results and more benefits from research and development through a more robust technology transfer process.