1 Introduction

As described in Chap. 7, the integrating chapter for Part II of this book, successful coordination processes rely on team knowledge, which is defined as commonly shared knowledge that team members have about a task and about each other (e.g. Cannon-Bowers et al. 1993; Kraiger and Wenzel 1997). In this way, team knowledge is thought to help team members anticipate the needs and actions of others in order to "implicitly" coordinate group behaviour and improve team effectiveness. In current coordination research, team knowledge is mostly studied in the context of work teams and, to a somewhat lesser degree, in sports teams. Team knowledge is not a coordination process per se, as is tacit behaviour or feedback. However, in the literature it is often labelled as implicit coordination because it represents a team-level knowledge structure that facilitates implicit coordination behaviours such as monitoring, anticipated backup, or dynamic adjustment (Rico et al. 2008). Thus, the concept is of interest in many different domains of group interaction, such as those occurring in families, organizations, and communities. But how can researchers capture the shared knowledge of a group? What aspects can be identified and measured, and what methods are appropriate?

In this chapter we will discuss the concepts and measurement of team knowledge as follows: In the first section we will highlight the challenges of measuring team knowledge in organizational settings compared to more controlled laboratory settings. In the second section we will give an overview of different theoretical concepts of team knowledge and thus explain which concepts of team knowledge can be measured. In the third section we will introduce specific methods to assess team knowledge in more detail. These common methodological approaches to team knowledge will be explained and evaluated in terms of their usefulness in field settings. Finally, in the general discussion we will outline directions for a valid assessment of team knowledge in organizational settings, which can complement laboratory studies and enrich our understanding of implicit team coordination.

1.1 Team Knowledge and Its Current Research Status in the Literature

Although team knowledge is seen as an important prerequisite to a comprehensive understanding of coordination processes in teams, its reflection in psychological research lags behind the importance of the concept. Several empirical studies have shown that team knowledge and indicators of "explicit" team coordination and performance are clearly related (e.g. Edwards et al. 2006; Chap. 11; Lim and Klein 2006; Marks et al. 2000, 2002; Mathieu et al. 2000, 2005; Smith-Jentsch et al. 2005; Stout et al. 1999; for a review, see DeChurch and Mesmer-Magnus 2010), but our understanding of team knowledge still rests more on theoretical work than on broad empirical evidence. Moreover, at present, there is only limited evidence that team knowledge directly influences implicit team coordination such as anticipation (e.g. Ellwart and Konradt 2007a; see also Chap. 5). There are at least two reasons for this lack of empirical research: First, various competing methods and tools have been developed to capture team knowledge (e.g. Cooke et al. 2000; Langan-Fox et al. 2000; Mohammed et al. 2000); these can tap different facets of team knowledge and thus hinder an integrative picture of the construct (Mohammed et al. 2000). Second, small group and team research is mostly limited to controlled laboratory situations with small, distinct groups working on identical, specific tasks (Lewis 2003). This makes it difficult for applied psychological research to transfer the concepts of team knowledge to organizational teams and enrich the empirical foundation. Hence, the purpose of this chapter is to summarize common measurement techniques for capturing the team knowledge of organizational teams, with a special focus on their practicability in field settings.

1.2 Challenges of Measuring Team Knowledge in Field Settings

There are different measurement approaches to capturing team knowledge. Most of them have been successfully applied in highly standardized experimental settings (Cooke et al. 2000; Langan-Fox et al. 2000; Mohammed et al. 2000). However, the majority of these approaches have important limitations for assessing team knowledge in field settings because their methods and tools are difficult to transfer out of the laboratory. The first problem is that experimental methods depend on tasks being identical across teams so that content-specific tools can be applied for group comparison (for detailed information, see Sect. 3). Second, the researcher needs to label (and therefore identify) the shared knowledge of interest precisely prior to the task in order to measure its specific content (Lewis 2003). But organizational teams hardly ever work on tasks with such straightforward characteristics, as tasks vary across projects and teams to a large degree. Applied psychological research investigates heterogeneous teams fulfilling heterogeneous tasks in order to draw valid and functional conclusions about coordination processes and team knowledge. Thus, as with any other coordination construct, measurement techniques for team knowledge need to take into account different requirements in field applications: (1) methods to identify and quantify team knowledge need to be less task- and team-specific in order to allow a comparison between groups; and (2) field research tools need to be efficient, with low material and effort costs to stakeholders as well as participants, in order for researchers to be granted access.

To illustrate the specific needs of field measures, consider a scenario in which the aim is to evaluate the functional relationship between planning processes and team knowledge. In a laboratory setting, the experimenter can define a task and a group that align with the constructs of interest. For example, Stout et al. (1999) designed a surveillance/defence mission task that lasted approximately 1.5 h, with a team knowledge measure that involved 190 paired-comparison judgments. In these judgments, participants were asked to rate to what extent specific concepts were related (e.g. "Second in Command tells Mission Command what target looks like and how many miles away it is" and "Mission Command tells Second in Command what weapon to use"). Quantitative analyses of these judgments yielded a team knowledge indicator of a shared mental model. Stout and colleagues were able to show a relationship between explicit planning and implicit team knowledge.

If researchers wanted to replicate this study in a field setting, the above-described technique for operationalizing team knowledge would not be applicable in organizational teams. First of all, the content of team knowledge needs to be known before it can be integrated into the paired-comparison measure, an impractical constraint in field research (see Chap. 6). Second, the content of team knowledge needs to be similar across different teams and their tasks in order to apply a comparable measurement approach to all teams. However, in many settings there are not enough comparable teams to meet the statistical requirements for comparing task and team characteristics. Third, many companies (as well as their employees) refuse to participate in investigations where team members work on questionnaires that take longer than 30 min.

An alternative measurement approach to team knowledge in field settings is represented by the team coordination study of Ellwart and Konradt (2007b). Thirty-seven project teams were investigated in a field setting using Likert scales to assess planning, team knowledge, and coordination success, with measurements taken twice during the project. The measurement of team knowledge was neither task- nor team-specific and consisted of a five-item scale that was transformed into a shared mental model index (cf. Sect. 3.3.1; e.g. "I have a good 'idea' of the responsibilities of individual team members"). Both studies addressed a similar question and showed that shared mental models (i.e. team knowledge) mediate between planning and coordination success (Ellwart and Konradt 2007b; Stout et al. 1999).

In sum, the multifaceted nature of team knowledge dictates that different measures will yield different information about team knowledge. Moreover, different methods are more or less applicable depending on the sample and the task. As shown above, laboratory-based methods may be difficult to transfer into field settings because of the constraints of a common task and common team characteristics, as well as the effort and costs of such procedures. However, for the further development of research and theory, it would be of great importance to compare the results found in the lab (mostly experimental) with the results found in field settings (mostly correlational). Such an integrative approach would allow a combination of methods that benefits from the strengths and compensates for the weaknesses of each method.

Before answering the question “How can we measure team knowledge in the field?” the following section will address the question “Which concepts of team knowledge can be measured?”

2 Concepts of Team Knowledge

In the literature there exist several definitions of team knowledge. It has frequently been referred to as shared knowledge and – in similar contexts – as shared mental models, shared cognition, and shared understanding (Blickensderfer et al. 1997; Cannon-Bowers et al. 1993; Cooke et al. 2000). Building on these distinctions from the literature, this section will introduce three types of team knowledge: (1) team mental models, which represent the shared team- and task-relevant knowledge of the group; (2) team situation models, which develop dynamically when the group is actually engaged in the task (dynamic understanding) (Cannon-Bowers et al. 1993); and (3) transactive memory systems, which represent the team's knowledge of the individual expertise within the team (Wegner 1987).

In research, most conceptualizations of team knowledge refer to the first concept, team mental models (e.g. Edwards et al. 2006; Mathieu et al. 2005; Langan-Fox et al. 2000). Team mental models are the organized and shared understanding and mental representation of knowledge about central elements of the team, its tasks, and its environment (Klimoski and Mohammed 1994). Cannon-Bowers et al. (1993) defined four content domains underlying team mental models: (1) knowledge of the equipment and tools the group uses in the task (equipment model); (2) understanding of the task, such as strategies or goals (task model); (3) awareness of the team members themselves, such as their roles, skills, and knowledge (team member model); and (4) understanding of effective team processes or interactions (team interaction model). This classification represents one approach to ordering the various content domains of team mental models and may differ from other classifications.

Team situation models emerge whenever a team is actually engaged in a specific task (Cooke et al. 2000). A team situation model is the team's collective understanding of the specific situation and should change in alignment with modifications of the situation (dynamic understanding). Whereas the function of team mental models is embedded in a collective knowledge base that leads to common expectations, the function of team situation models is to interpret specific situations in a compatible way (Cooke et al. 2000). A shared team situation model helps to coordinate team actions according to a specific situation and to determine strategies, supporting the anticipation of other members' needs and actions when selecting the appropriate action (e.g. backup behaviour, information exchange, actions). Team situation models are based on knowledge from existing team mental models but also include characteristics of the specific situation; the latter aspect marks the qualitative difference between the two concepts (Cooke et al. 2000; Rico et al. 2008).

The third type of team knowledge, transactive memory, is conceptualized as a set of distributed, individual memory systems that combines the knowledge possessed by particular members with a shared awareness of who knows what (Wegner 1995). When each team member learns in a general sense what the other team members know, the team can draw on the detailed knowledge distributed across members. Each member keeps track of the other members' expertise, directs new information to the matching member, and uses that tracking to access needed information (Mohammed and Dumville 2001; Wegner 1987, 1995). By distributing specialized memories across team members, transactive memory systems reduce each individual's cognitive load and are thereby more efficient regarding cognitive labour (Brauner and Becker 2004; Hollingshead 1998). From a theoretical perspective, the team knowledge component of transactive memory can be seen as a type of team mental model (Mohammed and Dumville 2001). Because transactive memory systems capture a shared understanding of who knows what within a team, they refer to the awareness of the team members regarding roles, skills, and knowledge – what Cannon-Bowers et al. (1993) termed the "team member model" (Mohammed and Dumville 2001). However, transactive memory also underlines team processes of specialization within a team (Lewis 2003). It therefore represents a separate category of team knowledge with a strong link to team mental models.

Overall, team knowledge can be classified into team mental models, team situation models, and transactive memory systems. All three conceptualizations describe different facets of team knowledge, and their measurement approaches vary in terms of how team knowledge is defined, elicited, and analysed. Whereas team mental models describe rather long-lasting aspects of team knowledge that exist prior to the task, team situation models refer to the specific situation and change accordingly. Transactive memory, as the shared awareness of who knows what in the team, describes a specific aspect of team mental models.

3 Common Measures of Team Knowledge

Methods for measuring team knowledge reported in the literature vary in terms of how team knowledge is elicited (e.g. observation, interviews) and analysed (scaling techniques, quantification of indicators) (for an overview, see Cooke et al. 2000; Langan-Fox et al. 2000; Mohammed et al. 2000). This section follows the terminology of previous reviews and focuses on the applicability of the measures in field settings, offering some updates on new developments (cf. DeChurch and Mesmer-Magnus 2010). A central distinction between different measurement techniques of team knowledge is whether they capture the content of knowledge (elicitation methods) and/or its structure (representation methods). On this issue, the terms "elicitation methods" and "representation methods" are used inconsistently (cf. Cooke et al. 2000; Langan-Fox et al. 2000). For the purpose of this chapter, we draw on the work of Langan-Fox et al. (2000) and Cooke et al. (2000) to distinguish between qualitative content elicitation methods and quantitative concept analysis methods. Table 9.1 gives an overview of the methodological approaches to measuring team knowledge.

Table 9.1 Methodological approaches to measuring team knowledge (TK)

3.1 Content Elicitation of Team Knowledge

Content elicitation methods explicate a team's domain-related knowledge in a qualitative way. The aim of these methods is to map out the content of team knowledge at a qualitative level, for example, to reveal exactly what team knowledge is needed for a specific task. Methods for the content elicitation of team knowledge are manifold (cf. Cooke et al. 2000; Langan-Fox et al. 2000; Mohammed et al. 2000). In this chapter we briefly introduce observation, interviews and surveys, and process tracing as methods for eliciting team knowledge. Card sorting represents an approach that captures the content of team knowledge but also refers to aspects of the structure and representation of the team knowledge domain.

Observation of team knowledge can be applied in the field context and can be based on written, audio, and/or video records. It provides a large amount of information on both the form and content of communication, coordination, and performance, and it allows inferences about concept domains and the relationships between them. For example, Badke-Schaub et al. (see Chap. 10) used observation of communication patterns as an indicator of team mental model development. The authors concluded that the less communication took place regarding specific content domains (planning, roles), the better the team mental model developed. For application in the field, observation is a very resource-intensive method that is excellent for gaining a general understanding of the situation, as well as for generating and verifying hypotheses. However, as with other approaches, it relies on the skills of the researcher to identify important concepts of team knowledge at a qualitative level. Moreover, validity may suffer when researchers infer specific team knowledge from observed performance (e.g. communication), because the theoretical link between task performance and team knowledge structure is questionable (Langan-Fox et al. 2000).

Standardized interviews (and also written surveys) are systematic ways to elicit complete representations of individual and team knowledge. Respondents are asked to explain key elements or causal relations of specific and relevant knowledge domains. In field applications, surveys and questionnaires are easier to administer than interviews because participants can decide for themselves when and where to fill out the forms. However, questionnaires require more preparation time than (unstructured) interviews and depend on sufficient context knowledge to formulate the items adequately. Generally, interviews and surveys are a valuable starting point for clarifying the content of team knowledge in the field because they offer a first explication of team knowledge, such as its extent, distribution, and tracking tendencies among team members. In contrast to laboratory experiments, the researcher cannot define the content domains of team knowledge a priori; in most cases, it is necessary to build a complete and comprehensive map of team knowledge and its associations. Thus, interviews are a valuable way to outline team knowledge, but they are only a starting point and are inadequate for capturing detailed, complex knowledge or other important information that cannot be expressed explicitly.

Process tracing techniques are field methods for collecting data on team knowledge concurrently with data on task performance and can be based on verbal or non-verbal data. In verbal protocol analyses, respondents "think aloud" to explain their own and the team's behaviour during task performance (van Someren et al. 1994). These verbal reports are useful for gathering data on intellectual tasks that naturally involve verbalization (Langan-Fox et al. 2000) and do not involve physical task performance (e.g. decision making, general reasoning processes). However, in complex field applications, individuals differ in their awareness of the cognitive structures that underlie their behaviour, which makes it difficult to compare team member protocols systematically. Process tracing based on non-verbal data draws, for example, on actions, facial expressions, gestures, and general behavioural events to trace cognitive processes (Cooke et al. 2000).

Visual card sorting represents a tool that is helpful when eliciting team knowledge and developing a structure or relative representation of the team’s operative concepts. Moreover, this approach can be applied in a group context where group members develop the team knowledge structure together. Participants name all the concepts that they consider relevant to the domain of interest, and then write them on cards. When concepts have been pre-explicated by an alternative technique, the researcher can provide cards containing concepts to the participants beforehand. The participants then sort the concepts individually or as a group and arrange related aspects closer together, and less related concepts farther apart. This tool can be easily applied in a field context and provides good face validity for the team (Langan-Fox et al. 2000). No statistical procedures are needed to elicit or structure the team knowledge concepts, and this card-sorting method can also be used to measure a team mental model through group sessions. However, the application is limited to concepts that can be compared on the basis of feature matching or spatial distance. For example, to visualize a transactive memory system of a team, cards with expertise domains can be assigned to cards of team members. The expertise of team members is then indicated by a low distance between the expertise domain and the member’s name. This approach becomes difficult, for example, when concepts of interest represent complex processes or strategies that cannot be plotted visually.

3.2 Concept Analysis of Team Knowledge

Whereas content elicitation methods reveal team knowledge at a qualitative content level, concept analysis approaches probe the quantitative structural relationships of team knowledge within a team. Thus, structure elicitation methods aim at revealing how different knowledge aspects are related to each other. There are two approaches in concept analysis: The first models the structure and relationships of team knowledge concepts and reveals whether individual mental representations are similar (shared) between the group members. The second ignores the relationships and structure of team knowledge concepts at the individual as well as the team level and focuses on group agreement regarding more specific characteristics of team knowledge, which is interpreted as shared understanding. We will explain both approaches in the following sections.

3.2.1 Modelling Structure and “Sharedness” of Team Knowledge

The following methods are valuable for quantifying the representations of concepts and their relationships. The researcher collects similarity ratings on each possible pair of team knowledge concepts from each team member. These ratings indicate whether, and to what extent, the concepts are positively or negatively related. In the next step, these relationships between the concepts are compared at the team level. The procedures are based on proximity matrices designed to capture the components and organizational structure of cognitive models by applying techniques such as Pathfinder networks (Stout et al. 1999), the quadratic assignment procedure (QAP; Mathieu et al. 2005), and multidimensional scaling (see Mohammed et al. 2000; Cooke et al. 2000). For example, in a study by Lim and Klein (2006), participants were asked to rate the relatedness of various statements describing their team's taskwork. The resulting proximity matrix of each team member was then compared to those of the other team members to assess team mental model similarity, employing Pathfinder and QAP correlations.

Multidimensional scaling (MDS) gives a pictorial representation of how items cluster. The inputs are pairwise similarity ratings of all concepts. MDS then searches for the placement of the concepts in a space that best reflects their rated similarity or dissimilarity, resulting in a geometric model: the idea is that geometric distance represents psychological distance. In a team knowledge context, MDS can be used to make relative comparisons between the mental models of different team members. However, there are some methodological limitations and restrictions (Langan-Fox et al. 2000).
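
To make this concrete, the following minimal sketch (in Python with NumPy and scikit-learn; the concepts and dissimilarity values are hypothetical, not taken from any of the cited studies) embeds a small precomputed dissimilarity matrix in two dimensions:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities among four team-knowledge
# concepts (0 = identical, higher = less related); symmetric matrix
# with a zero diagonal, as would result from paired-comparison ratings.
d = np.array([
    [0.0, 0.2, 0.8, 0.7],
    [0.2, 0.0, 0.7, 0.9],
    [0.8, 0.7, 0.0, 0.3],
    [0.7, 0.9, 0.3, 0.0],
])

# Two-dimensional embedding in which geometric distance approximates
# the rated psychological distance between concepts.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)
print(coords)  # one (x, y) coordinate per concept
```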

Pathfinder is a computerized network-scaling technique that displays team knowledge as an associative network based on the relationships between specific concepts of team knowledge. It results in a network structure of nodes and links, the nodes representing the concepts and the links representing the pairwise relationships between the concepts. The method offers a graphic representation of the team's knowledge structure, along with quantitative indices (e.g. spatial coordinates, dimension weights, pairwise distances between concepts). An important advantage of Pathfinder is that the complexity of the data is reduced via simplified, illustrative techniques, making the data more comprehensible than many of the other techniques do (e.g. multidimensional scaling). This simplification is achieved because a link between two concepts in the network is eliminated if it does not represent the shortest pathway between them (Cooke et al. 2000; Schvaneveldt et al. 1989). Thus, the focus is on the closest and strongest relationships between concepts.
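
The pruning rule can be sketched as follows. This is a simplified illustration of one standard parameterization (q = n − 1, r = ∞), under which a direct link survives only if no indirect path offers a smaller maximum link weight; full Pathfinder implementations support further parameter settings:

```python
import numpy as np

def pathfinder_network(weights):
    """Prune a link-weight matrix to a Pathfinder network (q = n-1, r = inf).

    weights: symmetric matrix of link weights (smaller = more related);
    np.inf marks absent links. A direct link is kept only if no indirect
    path has a smaller maximum link weight, i.e. only "shortest" pathways
    in the minimax sense remain.
    """
    n = weights.shape[0]
    d = weights.copy().astype(float)
    np.fill_diagonal(d, 0.0)
    # Floyd-Warshall under the minimax ("weakest link") path metric
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i, j] = min(d[i, j], max(d[i, k], d[k, j]))
    keep = np.isfinite(weights) & (weights <= d)
    np.fill_diagonal(keep, False)
    return keep  # boolean adjacency matrix of the pruned network
```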

An alternative approach to measuring the perceived importance and similarity of team knowledge structures is the quadratic assignment procedure (QAP), a matrix-comparison technique implemented in the UCINET software (Borgatti et al. 2002). QAP calculates the matching between the corresponding cells of the data matrices of two team members at a time (the comparison is thus limited to dyads). Several quantitative indicators give information about the individual and the team mental model. For example, the centrality index for each concept is a measure of the importance of a concept to the overall network of concepts. Similar to Pathfinder or MDS, this method analyses each individual's pattern of ratings throughout the matrix (Mathieu et al. 2005) and indicates to what extent team members' models show similar patterns of relationships.
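
A minimal sketch of a QAP-style matrix comparison follows (one common variant, as used with continuous relatedness ratings: the off-diagonal cells of two members' matrices are correlated, and significance is tested by jointly permuting rows and columns; UCINET provides this and many further indices):

```python
import numpy as np

def qap_correlation(a, b, n_perm=1000, seed=0):
    """Correlate two members' rating matrices with a QAP permutation test.

    a, b: square matrices of concept-relatedness ratings from two team
    members. The observed correlation is computed over off-diagonal
    cells; the null distribution is built by permuting rows and columns
    of one matrix together, preserving its internal structure.
    """
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    off = ~np.eye(n, dtype=bool)                  # off-diagonal cells only
    observed = np.corrcoef(a[off], b[off])[0, 1]
    extreme = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                    # joint row/column shuffle
        permuted = np.corrcoef(a[off], b[p][:, p][off])[0, 1]
        if abs(permuted) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm             # correlation and p-value
```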

Despite the value of these quantitative methods based on proximity matrices, there are some disadvantages regarding their application in the field. First, every concept must be rated relative to all other concepts of team knowledge in order to complete the matrix for calculation. For example, Mathieu et al. (2005) operationalized the task mental models of teams in a flight simulator task with eight attributes: (1) diving/climbing, (2) banking/turning, (3) airspeed, (4) selecting/shooting weapons, (5) reading/interpreting radar, (6) intercepting enemy, (7) escaping enemy, and (8) dispensing chaff and flares. Team members then rated each relationship between all attributes on a nine-point scale from −4 (negatively related, a high degree of one requires a low degree of the other) through 0 (unrelated) to +4 (positively related, a high degree of one requires a high degree of the other). Shared team knowledge was indicated when all team members gave similar ratings, for example, when all agreed that airspeed is positively related to escaping from enemies. This approach is limited in a practical sense: even though the reduction to eight single attributes such as climbing or airspeed works in that specific laboratory task, in complex field environments it is often difficult to extract a definitive set of concepts that represents the key elements of team knowledge and can be applied across different teams and tasks.

But many organizations are interested in a more elaborate picture of team knowledge, one that analyses the sharedness of knowledge concepts rather than their relationships to each other or to the task. For example, do team members have both a shared and an accurate understanding of how to behave in certain situations, and if so, to what degree? For such questions, proximity matrices do not seem to be adequate, because it is of less interest how several elements are related to each other or conceptually mapped in the mental representations of team members. The approach via proximity ratings strictly focuses on the structural relationships between single task or team concepts. In many applied cases, teams are less interested in a deep-level analysis of the structural relationships of team knowledge concepts than in knowing whether team members share the same ideas about the team or the task, such as agreement on a specific goal. In this case, instruments are needed that can extract the overall agreement of the team regarding, for example, the team goal. Here the focus shifts from the structure of the representation to the sharedness of the content of specific concepts, regardless of their structural representation. Group agreement indices based on Likert scales may be a more suitable approach for obtaining such measurements and will be described in the next section.

3.2.2 Group Agreement as an Indicator of Team Knowledge

This methodological approach measures the "sharedness" of team knowledge using agreement indices derived from Likert-type questionnaires. (Note: In this section the terms "shared mental model" and "sharedness" are used as synonyms for "agreement".) Although the term "shared" is not always defined precisely and distinctively (Mohammed et al. 2000), agreement in team knowledge reflects the degree to which team members share a similar view, and it can be evaluated using different team- or task-related statements in a questionnaire. For example, team members are asked to rate statements regarding the contents of a mental representation (e.g. "How useful is strategy XY for reaching the goal?"). The ratings of all group members are compared, and an agreement index is computed that indicates the degree of similarity between team member ratings. In most cases, the indices are based on the concept of within-group agreement (e.g. r_WG by James et al. 1984) and are used to quantify team mental model agreement or similarity (e.g. Eby et al. 1999; Levesque et al. 2001; Webber et al. 2000). Eby et al. (1999), for example, developed a questionnaire to measure shared expectations regarding teamwork. Each individual team member rated 28 items on teamwork (e.g. "The team develops a task strategy") on a five-point Likert-type scale. Webber et al. (2000) used a similar approach to measure consensus on the strategic team mental models of basketball players and asked team members about the effectiveness of actions in specified situations. In a more recent approach, Johnson et al. (2007) developed a rating scale instrument of 42 items linked to five emergent factors of shared mental models: general task and team knowledge ("My team knows the relationship between various task components"), general task and communication skills ("My team communicates with other teammates while performing team tasks"), attitude toward teammates and task ("My teammates take pride in their work"), team dynamics and interactions ("My team undertakes interdependent tasks"), and team resources and working environment ("There is an atmosphere of trust in my team"). Additionally, Eby et al. (1999) and Webber et al. (2000) applied within-group agreement indices to determine the similarity of the team members' mental models, using the r*_WG(j) index (Lindell et al. 1999) for each team based on member responses. This index, like the widely used r_WG index (James et al. 1984), compares the obtained variance in a team to the variance expected under a specified distribution of random responses: the smaller the obtained variance relative to the expected variance, the higher the agreement within the team. Beyond the focus on agreement alone, Johnson et al. (2007) discussed (1) the calculation of average ratings of team knowledge (mean scores) as well as (2) indices of agreement. To calculate shared team knowledge, the average evaluation of each item was computed for each team, indicating the degree of knowledge among the team members (absolute knowledge); the standard deviation around this average score represents how closely aligned the team members are on any particular item (team agreement). However, Johnson et al. do not discuss how average ratings of absolute knowledge and agreement indices could be combined into one single index, which would represent interesting information about team knowledge for both laboratory and field research.
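
To make the logic concrete, here is a minimal sketch of the single-item r_WG computation under a uniform null distribution (James et al. 1984); the example ratings are hypothetical:

```python
import numpy as np

def r_wg(ratings, scale_points):
    """Within-group agreement r_WG for a single Likert item.

    Compares the observed within-team variance with the variance
    expected under a uniform (random-response) null distribution,
    which is (A**2 - 1) / 12 for a scale with A points. A value of 1
    means perfect agreement; 0 means agreement no better than random
    responding (negative values are usually truncated to 0).
    """
    observed_var = np.var(ratings, ddof=1)
    expected_var = (scale_points ** 2 - 1) / 12.0
    return 1.0 - observed_var / expected_var

team = [4, 5, 4, 5]                 # hypothetical ratings of one item
print(r_wg(team, scale_points=5))   # high value = high agreement
```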

Thus, in the following section, we will discuss how absolute evaluations of team knowledge concepts and the corresponding agreement scores can be integrated into a valid similarity coefficient for field application (Ellwart and Konradt 2007a). Moreover, we discuss a statistical procedure to improve the team-specific validity of team knowledge measurement (Biemann et al. 2009).

3.3 Further Perspectives on Field Applications

The following sections will introduce two new perspectives on the use of agreement measures in team knowledge research. First, we present the shared mental model index, which combines absolute knowledge in a team with the agreement between team members; second, we discuss the distinction between general and team-specific agreement.

3.3.1 Combining Absolute Team Knowledge and Agreement: The Shared Mental Model Index (SMM Index) of Expertise Location

As discussed in the previous section, agreement indices from Likert scales are one valuable research approach to measuring the similarity of team knowledge representations in applied field settings. However, it is important to recognize that a singular focus on agreement indices gives an insufficient picture of the team knowledge model: agreement only tells us that the team members' ratings are similar, not how much the team actually knows (quantitative aspect) or whether the members simply agree on having no shared knowledge at all (qualitative aspect). Thus, we will discuss two specific conditions of team knowledge that are indicators of team or situational mental models, as well as of transactive memory systems: (1) the absolute knowledge in the team (do they or don't they know it?) and (2) agreement (to what extent do they agree in knowing or not knowing it?). In this section we give an empirical example of a Likert-based measurement approach that combines absolute knowledge and agreement. The example is from research on a team mental representation regarding the level of expertise within a team (Faraj and Sproull 2000; Hollingshead 1998; Lewis 2003, 2004). This type of team knowledge, regarding the expertise status and specific know-how of the team, relates to the transactive memory systems (who knows what in the team) discussed earlier in this chapter.

Regarding team knowledge and expertise location in groups, absolute knowledge (the extent of team members' knowledge about the experts in different domains, whom they can ask for help, etc.) and agreement (team members hold similar views on who is the expert on what) are two important indicators. At the individual level, team members need an accurate mental representation of the expertise domains of the other team members (Hollingshead 1998) in order to coordinate expertise efficiently. At the group level, there needs to be agreement (i.e. team agreement) with regard to the individual expert representations in order for the group to be successful (Mohammed and Dumville 2001). With classical Likert scale approaches, such as the ones applied by Eby et al. (1999) or Webber et al. (2000), questions would be posed regarding team members' knowledge about the expertise domains of their teammates (e.g. "We know the specific knowledge team members possess"), and then only the agreement (r_WG) of these ratings between the team members would be analysed. This score, however, only indicates whether there is variance between the team members' ratings – not whether they actually know about the expertise domains within the team, or who holds these domains. To illustrate this important difference, Fig. 9.1 displays "agreement" and "knowledge" for four exemplary cases. Think of a hypothetical group with three members who are asked to rate the item "I know the expertise domains of my colleagues" between 1 and 5 (1 = I do not know; 5 = I do know). When researchers compare only agreement scores, it remains unclear whether the "agreement" lay in knowing the experts (case 4, all members give high ratings = high mean score) or in not knowing the experts (case 2, all members give low ratings = low mean score). The same holds true when comparing cases 1 and 3. Both cases would yield the same low agreement score, yet they differ in knowledge: in case 3 there are two members with high knowledge, whereas in case 1 there is only one member with knowledge. This shows that both "agreement" (variance scores) and "knowledge" (mean scores) are necessary indicators of team mental representations for a comprehensive evaluation of team agreement.

Fig. 9.1 Combinations of agreement and knowledge within a team
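
A small numeric sketch of this distinction (the ratings are hypothetical and chosen in the spirit of Fig. 9.1; the exact values in the figure may differ):

```python
import numpy as np

# Three members rate "I know the expertise domains of my colleagues"
# on a 1-5 scale. Mean = absolute knowledge; SD = (dis)agreement.
cases = {
    "case 1: one knower":           [5, 1, 1],
    "case 2: agree in not knowing": [1, 1, 1],
    "case 3: two knowers":          [5, 5, 1],
    "case 4: agree in knowing":     [5, 5, 5],
}
for label, ratings in cases.items():
    r = np.array(ratings, dtype=float)
    print(f"{label}: mean={r.mean():.2f}, sd={r.std(ddof=1):.2f}")

# Cases 2 and 4 show identical (perfect) agreement at very different
# knowledge levels; cases 1 and 3 show identical disagreement despite
# different knowledge levels - neither index alone tells the story.
```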

In sum, to our knowledge there are no field measures in team mental model research and related areas that assess agreement on knowledge about expertise location and integrate it with absolute team knowledge in one single index. Therefore, Ellwart and Konradt (2007a) developed the shared mental model index (SMM index), which integrates these two indicators of team agreement, adopting a scale from Faraj and Sproull (2000). The original scale asks, from a team-level perspective, whether the team has knowledge about the experts within the group (e.g. "The team has a good map of each other's talents and skills"). Because the group-level perspective of the items does not reflect the individual representation of each member's knowledge (cf. Klein et al. 2001), the SMM index changes the original group focus to an individual perspective (e.g. "I have a good map of other team members' talents and skills"). The scale (four items) captures the individual meta-knowledge of expertise location: who knows what in the team. Using the four content domain labels of Cannon-Bowers et al. (1993) outlined in Sect. 2 of this chapter, the SMM index addresses the concept of team mental models, specifically the awareness of the team members themselves regarding roles, skills, and knowledge (team member model). It is therefore related to the transactive memory concept (Wegner 1987, 1995), reflecting meta-knowledge within a team.

To calculate agreement between the individual scores at the team level, Ellwart and Konradt's (2007a) SMM index uses the average deviation score (AD; Burke et al. 1999; Burke and Dunlap 2002). In comparison to other indices for estimating interrater agreement (for an overview, see Brown and Hauenstein 2005), the average absolute deviation has two major advantages: First, AD indices do not require the determination of a null random response distribution, as r_WG does. Second, AD is computed relative to the mean of an item and is therefore expressed in the same metric as the original measurement scale, allowing a more direct interpretation (Burke and Dunlap 2002). This common metric allows team agreement scores on expertise location (average deviation) to be related to the group members' absolute knowledge (meta-knowledge of expertise location) in one single coefficient. The aim of the SMM index of expertise location is to integrate knowledge and agreement in a single score. Therefore, the average deviation score is subtracted from the mean score (low average deviation scores = high agreement; high average deviation scores = low agreement). This means that a team's SMM index of expertise location reflects its absolute knowledge about its expertise minus the degree of disagreement within the team. Teams that reveal high absolute knowledge but also high disagreement show a lower shared mental model score than teams with high agreement. To establish the validity of this approach, the SMM index of expertise location should be sensitive to both (1) group differences regarding different levels of meta-knowledge and (2) high and low team consensus.
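
A minimal sketch of this computation follows. The per-item calculation implements the mean-minus-AD logic described above; averaging the resulting values across items is our illustrative choice, not necessarily the exact aggregation used by Ellwart and Konradt (2007a):

```python
import numpy as np

def smm_index(ratings):
    """SMM index: absolute knowledge minus within-team disagreement.

    ratings: members x items matrix of Likert ratings on the
    expertise-location scale. AD is the average absolute deviation
    from the item mean (Burke et al. 1999), expressed in the metric
    of the original rating scale.
    """
    ratings = np.asarray(ratings, dtype=float)
    means = ratings.mean(axis=0)               # absolute knowledge per item
    ad = np.abs(ratings - means).mean(axis=0)  # disagreement (AD) per item
    return (means - ad).mean()                 # averaged over items

# Hypothetical team of three members rating two items (1-5 scale):
print(smm_index([[5, 4], [5, 5], [4, 4]]))
```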

Research results showed that teams differ regarding the relationship between agreement and knowledge, underlining the validity of the SMM index (Ellwart and Konradt 2007a). Moreover, experimental and field tests of the SMM index demonstrated construct validity and showed that it predicts team coordination success in both experimental (N = 120 students in 40 teams) and field settings (N = 130 participants in 37 project teams) (Ellwart and Konradt 2007a, b). The SMM index relates (1) to accuracy and consensus scores from objective expertise ratings, (2) to subjective coordination success (self-perceptions that processes were executed in a coordinated way) and team performance at the group level, and (3) to knowledge credibility and task-related self-efficacy at the individual level. These results provide evidence from experimental and field data that the SMM index is a conceptually and statistically valid measure of knowledge and agreement regarding the location of expertise within teams. Its main advantages are its independence from task performance and, therefore, its appropriateness for field settings and for comparisons between teams and tasks. Convergent and criterion-related validity was shown through its relationship to the established, objective measures of transactive memory accuracy and consensus introduced by Austin (2003). Moreover, the experimental and field tests demonstrate that the SMM index is significantly related to constructs of team coordination such as coordination success and knowledge credibility (Lewis 2003).

In sum, the SMM index of expertise location offers an economical yet valid measurement approach that can be used across various types of teams in field settings. Although the SMM index cannot capture the underlying organizational structure of specific knowledge domains, it can be used as a screening tool prior to more extensive investigations in specific teams. Moreover, this approach is applicable to other team knowledge concepts, for example, task-related knowledge (Ellwart and Konradt 2007b). In that study, team members rated statements concerning their knowledge about the task, for example, its goals (e.g. "I know how much progress has been made towards achieving team goals"), responsibilities (e.g. "I have a good 'idea' of the responsibilities of individual team members"), and interdependencies (e.g. "I know how the tasks of my team members are related to each other"). As with the SMM index of expertise location, absolute knowledge and agreement were combined in a single score. Ellwart and Konradt (2007b) were able to show that the task-related mental model predicts task and team conflicts as well as coordination success over time. Moreover, task-related shared mental models mediated the relationship between explicit planning and coordination success. Applied in the field, the SMM index can help to explain, diagnose, and circumvent problems in teams, particularly in organizational teams whose performance depends on optimizing knowledge assets.

3.3.2 Identifying Team-Specific Agreement: Improving the Validity of Team Knowledge Quantifications

In this section we point out an important statistical limitation of most of the team knowledge measurements introduced so far. Earlier in this chapter, two sets of methodological approaches to analysing the structure and sharedness of team knowledge were introduced: The first set focuses on the analysis of the relationships between various team knowledge elements using proximity matrices (Stout et al. 1999; Mathieu et al. 2005). The second set uses agreement indices derived from Likert-type questionnaires (variance-based approaches). However, both approaches suffer from a conceptual problem: Neither differentiates between team-specific and general agreement, a shortcoming that biases estimations of the existence and significance of shared team knowledge (Biemann et al. 2009). We argue that these are two different sources of agreement that are erroneously treated as equivalent in most of the applied methods and should instead be separated when the sharedness of team-specific knowledge is the focus of the analysis.

On the one hand, agreement can derive from team processes (e.g. planning, reflection, trust) that are specific to the team. On the other hand, there can be statistical agreement that is not team-specific but nevertheless results in high within-group agreement. Consider an example from a team knowledge measurement that focuses on strategies in a team situation model. One item may be "Insulting the opponents to make them nervous", representing a strategy in a competitive team-based computer game. The average r_WG as well as the r*_WG would show relatively high values (0.69 and 0.86, respectively; cf. Biemann et al. 2009), indicating high agreement within the teams. However, Biemann and colleagues showed statistically that, regardless of team membership, all participants agreed that insulting the opponents is not a good strategy for avoiding losing the game. It is not surprising that the statistical agreement within each team is high, since the agreement among all participants across all teams is also high. The argument against interpreting this as team-specific agreement is that the agreement scores express a common understanding that does not depend on team boundaries. Only if there is consensus within some teams that an action is useful, while other teams reject the same action as useful, is there team-specific agreement. Unfortunately, this discrepancy between team-specific and general consensus is not reflected in the existing measures of group agreement.

Thus, Biemann et al. (2009) introduced random group resampling (RGR) as an easy-to-apply method to differentiate between team-specific agreement and common agreement. In prior research, RGR was used as a post hoc significance test to estimate whether indices of sharedness were the result of group-specific variance or of a general phenomenon (Bliese and Halverson 2002; Ellwart and Konradt 2007a, b). Basically, RGR compares within-group agreement data from the actually observed groups with within-group agreement data from randomly composed pseudo groups (Bliese et al. 1994; Bliese and Halverson 1996, 2002; Castro 2002). As an addition to other post hoc tests, the procedure introduced by Biemann et al. (2009) offers a direct indicator of group-specific shared variance that can be applied to variance-based approaches (e.g. Likert scales) as well as to proximity matrices (e.g. Pathfinder, MDS). The idea behind RGR is simple: The sharedness of team knowledge is considered team-specific only when it differs from the shared variance of unspecific, randomly composed teams drawn from the entire population. This random-based variance provides an unbiased statistical estimator of the population variance (Biemann et al. 2009). Thus, the actual population variance of participants can be taken into account before calculating team-specific within-group agreement indices or proximity matrices.
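
A minimal sketch of the resampling logic follows (our simplified reading of the procedure; Biemann et al. 2009 describe the full method and its statistical properties):

```python
import numpy as np

def rgr_test(scores, team_ids, n_resamples=1000, seed=0):
    """Random group resampling: is within-team agreement team-specific?

    Compares the mean within-team variance of the observed teams with
    the mean within-team variance of randomly composed pseudo teams of
    the same sizes. A positive return value (pseudo minus observed)
    suggests agreement beyond what random grouping would produce,
    i.e. team-specific agreement; values near zero suggest that the
    apparent agreement is a population-wide phenomenon.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    team_ids = np.asarray(team_ids)
    teams = np.unique(team_ids)
    sizes = [int(np.sum(team_ids == t)) for t in teams]
    observed = np.mean([np.var(scores[team_ids == t], ddof=1)
                        for t in teams])
    pseudo_means = []
    for _ in range(n_resamples):
        shuffled = rng.permutation(scores)        # break team boundaries
        start, variances = 0, []
        for s in sizes:                           # rebuild same-sized groups
            variances.append(np.var(shuffled[start:start + s], ddof=1))
            start += s
        pseudo_means.append(np.mean(variances))
    return np.mean(pseudo_means) - observed
```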

Moreover, the indices based on RGR have an interpretable value that is useful for measuring influences related to team knowledge. RGR values above zero (zero = no team-specific agreement) indicate the existence of team-specific agreement and therefore a corresponding potential for positive influence on team coordination and performance (Salas and Fiore 2004).

4 General Discussion: Measures of Team Knowledge in Field Research

Team knowledge represents the shared or common knowledge that team members have about a task and about each other (Cannon-Bowers et al. 1993; Kraiger and Wenzel 1997). We have introduced three types of team knowledge found in psychological research: team mental models, team situation models, and transactive memory systems. For these three types of team knowledge, we have discussed two methodological approaches that focus on either the qualitative elicitation of team knowledge or on the quantitative analysis of structure and/or agreement (cf. Table 9.1).

Methods such as observation, interviews, surveys/questionnaires, process tracing, or card sorting aim at determining the content of team knowledge at a qualitative level. They help create a starting point for understanding the knowledge that a team possesses and shares in order to perform its task, enabling researchers to begin identifying and comparing knowledge domains between or within teams. For field application, these methods of content elicitation are a valuable starting point for becoming aware of the specific knowledge within the team. However, most of these methods are time-consuming and costly because each member must be interviewed, surveyed, observed, or otherwise analysed to elicit the necessary data. Nonetheless, once assembled, the resulting qualitative data yield essential information for further quantitative analyses such as concept analyses.

Concept analysis methods such as multidimensional scaling, Pathfinder, or UCINET reveal the structure (relationships) and sharedness of team knowledge at a quantitative level (see Sect. 3.2.1 for a description of how they operate). Because these methods require objective and valid descriptions of team knowledge that fit all participants in the study, they are very team- and task-specific, which makes them difficult to apply in field settings. In comparative investigations of organizational teams, it is especially unreasonable to expect that the relevant knowledge can be reduced to 10 or 15 dimensions that are meaningful to all teams and their members. Moreover, the often high number of pairwise ratings makes these approaches very time-intensive and costly in the field. This might be the reason why methods such as Pathfinder or UCINET are mostly applied in laboratory studies of highly structured, well-defined team tasks involving small numbers of team members.

In field research, the quantitative analysis of team knowledge can be done reasonably effectively using Likert scale ratings. One benefit of these scale-rating approaches is that ratings of single items are less time-intensive and costly. Moreover, many content domains of team knowledge can be addressed by items at a task- and team-independent level (Lewis 2003). The drawback is that these indicators focus solely on the sharedness of the ratings within a team, failing to capture the underlying structure and relationships of the shared concepts. Nevertheless, the advantages for field application are significant, as Likert-based ratings of team knowledge offer valuable indicators of commonly shared mental representations of different types of team knowledge at relatively low costs in terms of time and complexity. From our perspective, these indicators are especially useful when large numbers of teams are assessed in terms of the same knowledge concept. For example, if multi-team companies want to decide which of their teams should participate in training to improve knowledge exchange, a short indicator scale allows them to screen many heterogeneous teams with the same method and then pick out conspicuous groups for more specific diagnostics and treatments. More elaborate and specific techniques at this early stage of a change process would surely cause organizational and implementation-related difficulties compared to this rather economical Likert-based approach. Close attention is needed to validate indicators of sharedness against other approaches in order to apply them as valuable screening tools in organizational field research.

From a methodological perspective, classical indicators of sharedness from Likert items (r_WG) are limited to giving information about the extent to which team members agree or disagree with regard to specific team knowledge domains, rather than about the absolute level and accuracy of team knowledge. The shared mental model index (SMM index) introduced by Ellwart and Konradt (2007a) offers a practicable way to combine both types of information in a single index. But there are also limitations to this approach. First of all, the ratings given by team members represent their subjective perception of their knowledge of the task or the team; the instrument provides no way to verify whether they really know it or just think they know it. Another potential problem is social desirability: especially in field surveys, it is problematic and therefore highly unlikely (whether true or not) for team members to disagree with statements such as "I know how the tasks of my team members are related to each other". Consequently, applications in field and laboratory settings should treat the SMM index as a screening tool and combine it with other approaches to team knowledge analysis.

In sum, there are a variety of methods for researchers to assess different types of team knowledge in laboratory and field settings. These methods differ with regard to their specific focus (knowledge elicitation, concept analysis) and their practicability, both of which depend on the specific team and task setting. This plurality of methods allows the researcher to cross-validate the instrument of choice, applying efficient but valid approaches in a field context. However, using numerous approaches means capturing different facets of team knowledge with very heterogeneous statistical strategies. Thus, an overarching research strategy may be needed to avoid difficulties in comparing results across different studies and teams. A meta-analysis by DeChurch and Mesmer-Magnus (2010) showed that the operationalization of team knowledge affects the observed relationship between mental models and team processes. Perhaps their most important finding was that methods that model the structure or organization of knowledge are the most predictive. Even though the magnitude of the relationship differed across measurement methods, indicators of team knowledge were positively related to team performance regardless of how they were operationalized (DeChurch and Mesmer-Magnus 2010).

As discussed in the integrating chapter by Ellwart (Chap. 7), team knowledge is an important condition for implicit coordination in groups. Team knowledge allows teams to anticipate the actions of their members and to provide help and guidance without explicit communication, and it represents the group's common understanding of its task and team. In human team research, team knowledge is mostly captured by language-based approaches, which limits their application in non-human investigations. However, it is conceivable that non-human groups also possess mental representations that coordinate the behaviour of the group (cf. Chap. 14). Although these non-human representations are not coded in language and lie outside the range of self-reflection, they are comparable to the team knowledge concepts in human small group research. Both represent the mental map that helps the group to behave in a coordinated fashion. If this map is not similar between the members of the group, there will be a lack of synchronicity in group behaviour, regardless of whether the group is human or non-human.