
6.1 Objectives of this Chapter

After reading this chapter, you should be able to

  • Explain several techniques for supporting experience management tasks.

  • Describe in detail how the light-weight identification and documentation of experiences (LIDs) technique works.

  • Relate a given experience or knowledge management technique to the classifications and experience management tasks described in the previous chapter.

  • Evaluate a given experience and knowledge management (EKM) technique with reference to the framework described in this book.

  • Contribute to an EKM initiative by pointing out weak spots and making suggestions for improvement – with respect to the principles described above and by comparing them to the cases presented in this chapter.

  • Conceive a knowledge and experience repository that avoids known mistakes.

  • Describe to management what a repository or IT solution can do for an EKM initiative and what it cannot achieve.

This chapter provides case stories on three different levels of granularity.

In the first section, specific techniques are discussed. A technique is a concrete sequence of steps supported by tools, checklists, or the like. Techniques may work better in certain environments, but they are rather generic in nature and can be transferred to and used in a different environment. The second section turns to repositories: a core component of most experience and knowledge management (EKM) initiatives will be a knowledge repository, an experience base, or both. From a computer science perspective, building such a repository may be an interesting intellectual and technical challenge, and a number of issues need to be taken into account when you plan to build an effective one. In the third section, we will take a look at some large-scale initiatives and discuss their design and particularities with respect to the principles laid out above.

6.1.1 Recommended Reading for Chap. 6

  • Johannson, C., P. Hall, and M. Coquard. Talk to Paula and Peter – They are experienced. In International Conference on Software Engineering and Knowledge Engineering (SEKE'99). Workshop on Learning Software Organizations. 1999. Kaiserslautern, Germany: Springer

  • Kerth, N.L., Project Retrospectives: A Handbook for Team Reviews. 2001, New York: Dorset House Publishing Company

  • Schneider, K. LIDs: A light-weight approach to experience elicitation and reuse. In Product Focused Software Process Improvement (PROFES 2000). 2000. Oulu, Finland: Springer

  • Wenger, E., Communities of Practice – Learning, Meaning, and Identity. 1998, Cambridge, England: Cambridge University Press

6.2 Specific Experience Management Techniques

There are a number of techniques devoted to capturing knowledge or requirements. Within the discipline of knowledge management, knowledge acquisition is the term referring to that task. Similar techniques can be used for experience elicitation.

Experience tends to reside in the heads of people, where it was created by observing an interesting and emotionally moving situation or process. The challenge is to make tacit or vague experience explicit and to capture it in a format that can be reused by others. These challenges were described above; in this section, we will see different attempts to deal with them.

6.2.1 Interviews, Workshops, and Post-Mortems

Glossaries are important for better understanding, but they are not a specific knowledge management tool. Along the same lines, interviews, workshops, and project post-mortems are not used only in the context of EKM, but EKM needs to use them, too. When we use them in this context, the considerations of the previous section must be taken into account: A special variant of interviews, workshops, and post-mortems may better serve the needs of elicitation or knowledge acquisition.

Classical interview: Single individuals are interviewed. One or two interviewers prepare for the interview by preparing questions and conceiving the course of the dialogue: They introduce themselves, ask an open question to start the conversation, and then try to get a confirmation in the end. An interview is a more or less structured conversation. Several elements, like open or closed questions or cross-checking questions, are scheduled to elicit a certain kind of information. The initiative in a structured interview lies in the hands of the interviewer. By following a list of questions, the process is predefined, too.

Closed questions have a limited, enumerated set of possible answers: A yes/no question is the extreme form of a closed question. Asking for “the most successful project” is also a closed question when there is only a limited set of projects to choose from. Open questions, by contrast, allow interviewees to speak freely and usually cannot be answered with a single word or sentence. Asking for opinions, definitions, or suggestions leads to open questions. Open questions with an unanticipated answer transform an interview into a dialogue. However, interviewers will consult their prepared list of questions even in a semistructured interview and check whether they get enough answers while the interviewee talks freely (Fig. 6.1).

Fig. 6.1
figure 1

Basic and generic elicitation techniques compared with LIDs as a specific experience elicitation technique (as described later)

Good interviewers plan an interview like a movie: There is a story line guiding the interview. This preparation makes sure the interview is making efficient use of the resources and time devoted to it, in particular interviewee time. Deciding about closed and open questions is a means of minimizing the participants’ cognitive effort. When planning the interview or workshop, software engineers need empathy with their counterparts. If we want to capture experience in an interview or a workshop, we must enable participants to talk about their experiences.

Although it is not easy to define what makes an interview or workshop successful, it is rather easy to spot a number of mistakes that might ruin the attempt. A number of recommendations for interviews can be derived:

  • There should be two interviewers: One asking questions and maintaining the communication while the other tries to capture, document, and support the exchange. If there is a single interviewer, writing must be minimized, as it is idle time for the interviewee. On the other hand, the interviewer must then remember exactly what the interviewee said; in that situation, a voice recorder is often used, although it is sometimes difficult – and always time-consuming – to search or transcribe a recording.

  • Interviewers must be prepared: They need to know what this interviewee has said about their topic before, what has been documented or captured before, and what others think who have been interviewed earlier.

  • Usually, there needs to be an opening and a closing phase. Interviewees need to “warm up” and must be brought up to speed in a topic. When a person is supposed to talk about his or her experiences in a field, an interviewer needs to spend some minutes introducing the respective field. Through the opening statements, interviewers will be able to set the tone and expectations for the interview. Both opening and closing statements are good opportunities to tell the interviewee about the mission of the interview and what future steps will happen to the experience he or she raises (Fig. 6.2).

Fig. 6.2
figure 2

Dramaturgy and tasks around an interview (example)

Interviews are very popular as a knowledge acquisition or experience elicitation technique. Tacit knowledge and experience need to be made explicit. This can be achieved by questions and a story line that help the interviewee to access unconscious knowledge. In the case of experience, it is helpful to address all three aspects (i.e., observation, emotion, hypothesis) whenever one of them is mentioned in an answer. A good interviewer may extract those three aspects from an answer to an open question. In standard interviews, however, usually only explicit, conscious knowledge or experience is communicated. This is a severe limitation.

Workshops are used for a wide range of purposes during a project. They can be used to capture knowledge or experience from more than one or two individuals at a time. Depending on their application area, workshops will vary even more than interviews. In general, a workshop should not just be seen as a set of parallel interviews: Less attention can be paid to each participant, and speaking time must be shared by more partners. In return, statements by one person may remind others of insights they had forgotten. Tacit knowledge or experience can be activated better in a well-moderated workshop. Like an interview, a good workshop should be planned with a story line; it underlies the agenda, but is far more fine-grained. It is an internal plan the interviewers want to follow which takes into account the expected reactions of the participants. There must be a person assigned to take notes; in a surprising number of workshops, many insights are mentioned but never get documented. A special acquisition or elicitation variant of a workshop needs to emphasize activating those insights, making them explicit – and documenting them.

Definition 6.1 (Post-Mortem)

Post-Mortem workshops are a technique for capturing project experiences for the benefit of future projects. The purpose is to elicit those experiences after the end of a project (“post-mortem”) and to document them in a form appropriate for later reuse.

In our terminology of experience exploitation, a post-mortem tries to cover activation, collection, and a part of experience engineering. Dissemination needs to be arranged separately.

6.2.1.1 Techniques for Post-Mortems

There are several specific techniques for post-mortems, as described in [19] and [63].

  • In simple variants, a post-mortem is little more than a meeting in which all participants can talk about their observations and evaluate and criticize them. This kind of unstructured post-mortem is not expected to deliver much value. However, participants often feel better when they have had a chance to raise their concerns and talk about problems they have encountered. Such a meeting has a cathartic impact, which frees the minds of people for coming challenges. It provides little support for EKM.

  • Highly sophisticated post-mortems follow longer processes that include one or more workshops but also require participants to prepare or document for the purpose of reuse. Kerth [63] describes variants that require each participant to spend several days on a post-mortem. Such an amount of effort is only justified if it creates substantial benefit.

In many organizations, new project teams are assembled directly after or even while a previous project still runs. Many participants are eager to push the new project forward and perceive a lengthy post-mortem as a burden. Under these conditions, it is difficult to activate as many insights as needed for high benefit.

Obviously, post-mortems are a natural source for experiences. However, when the usual constraints of experience management are considered, the requirements for an adequate post-mortem workshop are as follows:

  • short and easy to perform (light-weight);

  • held at the end of the project or immediately after it;

  • conducted by a group of co-workers in order to cross-activate experiences;

  • has a clear perspective for reuse, maybe after a little experience engineering.

There are not many techniques that match all those requirements. For that purpose, the light-weight identification and documentation of experiences (LIDs) technique was developed during the DaimlerChrysler Software Experience Center (SEC) project. Its basics are sketched here as an example of a light-weight technique that has been carefully optimized for experience elicitation and documentation.

6.2.2 Light-Weight Identification and Documentation of Experiences

LIDs is a light-weight experience elicitation and documentation technique. After a shared activity of 3–5 months, a group of co-workers can use LIDs to produce an experience package within just half a day.

Definition 6.2 (Light-weight and heavy-weight techniques)

There are two ways to optimize the perceived value of a technique: Either the benefit is increased or the effort is reduced. Heavy-weight elicitation techniques invest more and expect more benefit. Light-weight techniques limit time and effort invested. In exchange, they accept limited benefits.

Software engineers can be expected to spend some additional effort as part of their job responsibilities. However, experience management is often perceived as an add-on. Keeping extra effort low is crucial. Therefore, light-weight techniques appear as an interesting alternative: They are optimized for minimal cognitive load and effort and still produce reasonable results.

LIDs stands for light-weight identification and documentation of experiences. The English word lid is also a metaphor for the desired result: A group of people who have carried out an activity together “collect their experiences in a pot and put a lid on it.” This act provides closure and releases participants from the duty of remembering what happened. Many software engineers appreciate the opportunity to talk about their observations, experiences, and recommendations. However, they do not like to write them down, review, compare, and rework them.

The LIDs technique is designed to support a small to mid-sized group of co-workers. They may have played different roles in their common activity. One facilitator is needed who knows how to carry out LIDs. The facilitator can be an EKM supporter or a group member who knows LIDs. A computer projector is required. No sophisticated tool is needed.

The results of a LIDs session are useful even without any further processing or engineering. At the same time, they are ideal material for experience engineering. Again, nothing needs to be done (low threshold), but much can be done (high ceiling) with the outcome of a LIDs session. Low threshold and high flexibility make LIDs a good example of an optimized experience management technique.

There are some assumptions associated with LIDs:

  • During an extended activity, several documents are written, revised, and used. There are plans, checklists, deliverables, and sometimes even essential e-mails that determined the fate of a project or an activity. They are a part of the project the participants “observed” during the project. We want to save and reuse a subset of those documents when we capture experiences.

  • Some time after the activity, participants will forget what version of a document was actually used; why they made a decision at this point; who received that important e-mail, and so forth. Without this information, all follow-up comments and recommendations are decontextualized.

  • Talking in a group helps participants remember the activity better than a separate interview would. As discussed above, a remark by one person often provokes replies by others. This is fast experience activation built into human nature.

  • People talking and the above-mentioned documents need to be related. Not all versions of all documents must be stored, however. It is better to save only those few documents and versions that made a difference and were part of the “observation” that led to some “emotion” and “conclusion.” Documents no one remembers immediately after the event or activity should not be collected.

  • Many people like talking, but not writing. A single 2- to 3-hour meeting seems acceptable to most if (and only if!) this is all they need to do for experience documentation.

  • People need help to remember events and dependencies. An elicitation session needs structure and documentation support, like templates.

There is no magic. The written experience package must be created somehow. In this case, the facilitator carries the burden of typing what people say while they are saying it. To make this task easier, the facilitator starts with a given template (Fig. 6.3).

Fig. 6.3
figure 3

LIDs template, used as storyboard and table of contents [90]

The core of the LIDs template is a table of contents that guides the group through their common activity. This table of contents acts as an agenda for the LIDs elicitation session. It starts with a few questions about the reported activity. When the LIDs report is completed, answers to those early questions will help reusers (i.e., readers) to contextualize the information. Experience engineering may use it to index, classify, or attribute LIDs results. At the same time, those questions discreetly lead participants into the mood of remembering how everything started and what had initially been expected. At the end of the session, those expectations are checked again. This comparison serves as a reference for recommendations.

At the core of each LIDs report is a chronological story of the activity. “What happened next?” is the question the facilitator will ask when the group gets stuck. The facilitator types as fast as possible. The typed “transcript” is projected, and participants may correct misunderstandings immediately. This short feedback cycle saves a correction cycle after the session, which would have taken days. Typos are not corrected during the session.

Marking and linking documents: When a document is mentioned and seems to play an important role, its name is underlined. This helps to find those document names afterward. After the LIDs session, all mentioned and underlined documents must be sent to the facilitator (only the relevant version), who will copy them all into a new directory. This is “the pot.” Then, the facilitator creates hyperlinks from underlined names to the files in the pot. In the next pass, the facilitator corrects obvious typos. The LIDs report with its hyperlinks turns into the lid that can be put on the pot. Both together are a self-sufficient experience package that can easily be transferred, e-mailed, read, and further engineered.
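The facilitator's post-session pass (copying the relevant document versions into the pot and turning underlined names into hyperlinks) could be automated roughly as follows. This is a minimal sketch: the `_name_` marker convention, the `build_pot` function, and the HTML-style links are illustrative assumptions, not part of the original LIDs tooling.

```python
import re
import shutil
from pathlib import Path

# Assumed convention: names the facilitator underlined during the session
# appear in the raw report text as _name.ext_ (a plain-text stand-in).
UNDERLINED = re.compile(r"_([\w.\-]+)_")

def build_pot(report_text: str, sources: dict, pot_dir: str) -> str:
    """Copy each mentioned document into the 'pot' directory and replace
    its marker with a hyperlink; names without a sent-in file stay as-is.
    `sources` maps document names to the files participants sent in
    (only the relevant versions)."""
    pot = Path(pot_dir)
    pot.mkdir(parents=True, exist_ok=True)

    def to_link(match: re.Match) -> str:
        name = match.group(1)
        if name in sources:
            shutil.copy(sources[name], pot / name)   # fill the pot
            return f'<a href="{pot_dir}/{name}">{name}</a>'
        return match.group(0)                        # no file sent: keep marker

    return UNDERLINED.sub(to_link, report_text)
```

Report and pot directory together then form the self-sufficient experience package described above.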

Results and deliverables of LIDs: A typical LIDs report is about 6–12 pages long, has 5–15 files of different kinds attached to it, and is usable for many purposes. Mainly, software engineers starting a similar activity in the future should read through related LIDs reports. Often, they are inspired by an experience or by an attached (linked) document that they can reuse and adapt. In particular, checklists, hints, or plans are valuable, because they are related to a context. A LIDs report may be rather sizeable; it is nevertheless “light-weight” in the sense of Definition 6.2 because it requires little effort, producing a lot of useful results within a limited amount of time (Fig. 6.4).

Fig. 6.4
figure 4

Snapshot of using two existing LIDs reports (#1 and #2) for preparing Activity #3 (left) and for deriving additional material (right)

Some people have used a LIDs report to present the project status. Existing slides and material were reused, and experiences and insights were extracted. Others have used the information for a management briefing, and so on. Experience engineering can compare LIDs reports on similar topics and derive best practices. Original reports should be kept as “rationale” for shaping the best practice.

6.2.2.1 Inherent Dilemma: Impact Versus Confidentiality

One delicate issue related to many experience activities should be mentioned: access rights and confidentiality. Obviously, not all details of a failed project should be put publicly on the World Wide Web. Participants in such a failed project, on the other hand, should have an opportunity to learn from their own experiences. To remember better and draw more profound conclusions, participants should be encouraged to reflect and to externalize their insights. In accordance with Schön’s terminology [98], they need an artificial breakdown.

There are two related aspects that need to be considered:

  • Participants should provide names and details without worrying about privacy when they talk. However, the facilitator may take out names in the final pass or not even write them down.

  • There must be a clear definition of the recipients that will be allowed to access the LIDs report and the pot. It is not unusual to keep the circle very small, such as the participants and the experience engineering group. Experience engineers must carefully remove confidential elements. In extreme cases, even experience engineering will be excluded. When in doubt, recipients should be more restricted. Even in the smallest circle of participants, the value of preserving your own experiences for your own later use will exceed the effort of a 3-hour meeting.

There is an inherent dilemma between the high impact of spreading experience widely and the threat to confidentiality. Post-mortems, for example, face that same dilemma. LIDs builds trust: (1) Participants see the projection and may veto anything written; it will be deleted without discussion. (2) Facilitators should not even note names, so they cannot forget to remove them later.

After a time-consuming elicitation event, management will insist on exploiting results widely. If only very little effort has been devoted (e.g., in a LIDs session), management may find it more acceptable to leave the results with the team of participants.

6.2.2.2 Fundamental Concepts in LIDs: Useful for Other Elicitation Techniques, Too

One could invent other elicitation techniques. There are a number of considerations that shaped LIDs. They should be considered in other techniques, too:

  • It is essential to capture experiences when they are still fresh. People usually like to talk about their recent adventures, so they will not consider talking too much effort. A checklist is important to avoid getting lost in war stories.

  • It is advantageous to have a facilitator writing the online report. Participants just tell their story, guided by the facilitator. When no trained facilitator is available, a participant may take over. LIDs is a simple technique and does not require long training.

  • The chronological story should be good to read and not too long. Any technical details must be deferred to attached documents. The story must stay comprehensible and should avoid inside slang.

  • Templates are among the most reusable documents. Therefore, they deserve special attention. Even documents that are not templates, but generic enough to be easily reused, should be marked as “low-hanging fruits” for reuse.

  • Putting everything together in one storage location (directory or “pot”) makes it easier to compress, copy, and transfer the material in one piece. Readers will find all related material in one place. They will not have to worry about versions and outdated garbage.

  • A LIDs report also protects the “pot contents” from being modified or deleted by others. Therefore, LIDs must implement a restrictive access mechanism. According to Fig. 6.3, it provides searching and output operations but not modification or direct input. This makes access to a LIDs package effectively read-only.

Summary: If you want to try LIDs in an appropriate situation, use the table of contents in Fig. 6.3. Guide participants through the chapters and let a facilitator take notes. Copy the documents mentioned into the “pot” directory, and hyperlink them to the report (after the session). Distribute lid and pot to the defined list of recipients.

6.2.3 Case-Based Techniques for Dissemination

Experiences and knowledge need to be prepared for matching with the needs of new projects. This step is very important for turning the potential of a knowledge and experience base into a concrete benefit.

The approach of case-based reasoning can be applied to dissemination. Tautz et al. [109] describe applications of this approach. The principle of case-based reasoning resembles pattern matching. In general, a “case” is described by a number of attributes. In experience management, a case may be an experience or an experience package. Larger experiences with a defined structure are often called “experience packages” [13, 18]. An experience package contains several interrelated experiences. Together, they represent an insight, based on an observation, with associated material.

When a project searches for related experience, case-based techniques will require that project to specify its search query in terms of the attributes describing the cases. A soft matching algorithm on the vector of attributes is supposed to deliver the “closest” matches, or best-fitting cases. A soft matching algorithm not only considers exact matches but also takes into account fuzzy or partial matches. Different closeness measures can be defined as matching criteria.
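A minimal sketch of such a soft matching function follows. The attribute names, weights, and the 0.5 partial-credit rule for unknown values are illustrative assumptions, one simple closeness measure among many possible ones, not a specific tool's algorithm.

```python
def closeness(query: dict, case: dict, weights: dict) -> float:
    """Weighted share of matching attributes. An attribute missing from a
    case earns partial credit (0.5) instead of failing outright -- one
    simple way to make the match 'soft' rather than exact."""
    score, total = 0.0, 0.0
    for attr, weight in weights.items():
        total += weight
        if attr not in case:
            score += 0.5 * weight          # unknown value: partial match
        elif case[attr] == query.get(attr):
            score += weight                # exact match
    return score / total if total else 0.0

def best_matches(query: dict, cases: list, weights: dict, k: int = 3) -> list:
    """Deliver the k 'closest' (best-fitting) cases, best first."""
    return sorted(cases, key=lambda c: closeness(query, c, weights), reverse=True)[:k]

# Illustrative experience packages described by two attributes.
cases = [
    {"id": "EP-1", "domain": "embedded", "team_size": "small"},
    {"id": "EP-2", "domain": "web", "team_size": "small"},
    {"id": "EP-3", "domain": "embedded"},   # team size was never recorded
]
query = {"domain": "embedded", "team_size": "small"}
weights = {"domain": 2.0, "team_size": 1.0}
# best_matches(query, cases, weights, k=2) ranks EP-1 first (exact match),
# then EP-3 (partial credit for the missing attribute).
```

Swapping in a different closeness function changes the ranking without touching the rest, which is where the "different closeness measures" mentioned above come in.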

As Fig. 6.5 indicates, a case may consist of attributes and other parts like a textual description or full experience package. What matters is to select or define the subset of attributes that are considered relevant for matching. Even free text may be used for matching, using a full-text search in closeness measurement.

Fig. 6.5
figure 5

Different case descriptions with formal profiles

On a large set of attributed cases, case-based reasoning is a powerful mechanism for searching – given a good closeness definition. However, if one of those preconditions (large set, attributed cases, good closeness criteria) does not hold, case-based reasoning faces a challenge. In practice, the build-up phase in which cases need to be attributed will be more demanding, as it is not sufficient to document the experience. Someone will need to assign attribute values. This causes additional effort. It is hardly possible to impose that effort on experience owners. Case-based reasoning needs dedicated support: A person must be assigned. In addition, matching criteria need to be defined and controlled, which is again a tedious task. Like sophisticated post-mortems, case-based reasoning is a high-effort (“heavy-weight”) approach. It calls for substantial start-up investment and continued sustaining effort from an experience engineer. Under certain conditions, this investment may pay off in a software engineering environment.

A good matching algorithm will have high precision (i.e., delivering only relevant cases) and recall (i.e., delivering all relevant cases). Those two quality aspects depend on the matching function. In analogy to Fig. 5.7, cases can be small-grained or large-grained.

  • Small-grained cases will lead to more matches, but each matching case will make a smaller contribution to the reusing project. When there are many small cases with several attributes, the relative effort devoted to indexing and sorting increases. This is often perceived as bureaucracy.

  • In large-grained cases, only a few matching cases will be delivered. A large case tends to contain more relevant information but also more irrelevant information (lower precision). Fewer cases will need to be analyzed and combined, but within each case, not all aspects will be relevant.
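Both quality measures can be computed directly at the level of whole cases; the result sets below are invented purely to illustrate the granularity trade-off (note that for large-grained cases the dilution mostly happens inside each case, which these set-level measures do not capture).

```python
def precision_recall(delivered: set, relevant: set) -> tuple:
    """Precision: share of delivered cases that are relevant.
    Recall: share of relevant cases that were delivered."""
    hits = len(delivered & relevant)
    precision = hits / len(delivered) if delivered else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Small-grained cases: many matches, all relevant ones found, but diluted.
p_small = precision_recall({"c1", "c2", "c3", "c4"}, relevant={"c1", "c2"})
# -> (0.5, 1.0)

# Large-grained cases: few matches, part of the need left uncovered.
p_large = precision_recall({"EP-1"}, relevant={"EP-1", "EP-2"})
# -> (1.0, 0.5)
```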

During the SEC project, we observed a phenomenon: Initially, there were only a few dozen larger experience packages. At that size and number, a complex matching algorithm may be too much of a good thing. With large-grained experience packages, bureaucracy can be reduced at the cost of precision. A simple list of package summaries may be sufficient. Instead of developing sophisticated matching algorithms, projects may manually browse the short list of experience abstracts.

6.2.4 Expert Networks

Because most knowledge and experience still resides in the brains of people, expert networks try to connect those people. This connection can complement and support other techniques that try to manage pieces of knowledge directly.

6.2.4.1 Experience Life-Cycle Based on Communication

All above-mentioned approaches are intended to make experiences explicit and document them. However, the experience life-cycle also applies to oral communication and even to passing on tacit knowledge. As long as information, knowledge, and experience are activated, collected, engineered, and disseminated, the cycle does not insist on written documentation.

An important class of techniques in this area are expert networks. An expert network is a defined group of knowledge workers who can reach each other and exchange their knowledge and experience. In a specific domain, the members of that group have expert status; they are trusted and appreciated for their abilities. An expert network is supported by an infrastructure that facilitates searching, contacting, and exchanging information among members.

Depending on the time invested and the organization of the expert network, the expert service might be charged to the requesting team. There must be a balance between excessive prices that prevent projects from using the network and a temptation to shift project effort onto experts that are almost free. Finding a good balance will be a learning experience.

Definition 6.3 (Principle of expert networks)

An expert network is defined by the following characteristics:

  • A team has a problem or question but no expert to answer it quickly.

  • A member of the team (who is an expert network member) searches the expert network directory to identify an expert who might help with the question at hand. This search procedure may include direct attribute matching, full-text search, or more sophisticated soft matching algorithms. In smaller expert networks, there may simply be a number of expert profiles to read through.

  • The directory includes all available contact information: e-mail address, phone and fax numbers, room number. The team contacts the expert and schedules a meeting. Short questions will be solved on the phone.

  • The expert usually does not write or prepare anything. The team asks and receives feedback and answers. The team is responsible for taking notes and documenting according to their needs.
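The directory search in the second step could be sketched as follows. The `Expert` fields and the combination of attribute and full-text lookup are illustrative assumptions, not a real system's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Expert:
    name: str
    contact: str                      # e-mail, phone, room, ...
    domains: set = field(default_factory=set)
    profile: str = ""                 # free text for full-text search

def find_experts(directory: list, domain: str = None, text: str = None) -> list:
    """Direct attribute matching on `domain` plus a naive full-text search
    over the profile -- the two simplest lookups mentioned above."""
    hits = []
    for expert in directory:
        if domain is not None and domain not in expert.domains:
            continue
        if text is not None and text.lower() not in expert.profile.lower():
            continue
        hits.append(expert)
    return hits

# Illustrative directory entries.
directory = [
    Expert("Paula", "paula@example.org", {"testing", "SOA"},
           "Test automation in service-oriented projects"),
    Expert("Peter", "peter@example.org", {"requirements"},
           "Elicitation workshops and interviews"),
]
# find_experts(directory, domain="SOA") delivers Paula's entry.
```

A real deployment would replace the flat list with the Web-based facility described below, subject to the legal restrictions on personalized skill records.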

The tool support typically provided for expert networks includes:

  • Web-based search facility. Depending on environment and country, there are different limitations to the possibilities of such a mechanism. In Germany, for example, trade unions and workers’ councils will not accept highly detailed, personalized records of abilities and tasks carried out.

  • Yellow pages are a good metaphor for the interface of an expert network. They are often available to the entire company, not only to the expert network. As mentioned above, the search criteria may be restricted in certain environments.

  • Sophisticated tools might provide visual representations of the network, showing certain relationships or abilities graphically. For example, all people with service-oriented architecture (SOA) experience may be colored in blue, with lines between those who are currently working on a common project. Those visual hints help searching projects make an informed choice. This is important for receiving good advice, and it is essential for the experts, who will not be bothered with demands outside their expertise.

  • If expert services are charged internally, there should be a mechanism for supporting this administrative step. It should avoid bureaucratic tasks as much as possible (like official call for tenders, contract, bill, etc.).

In contrast with seemingly similar groups, an expert network can be characterized as follows:

  • An informal group is an expert network only if there is a support structure and explicit membership.

  • The potential members of an expert network are selected with respect to an organization, a task, or a knowledge domain. For example: “All software engineers with a quality assurance role assigned in a past or current project of our company.” Generic or private networks like XING [121] do not qualify as expert networks by that criterion. Often, a company explicitly lists the members of an expert network (extensional characterization).

  • An expert network usually transcends a single project – even a large one. The expert network provides a mix of backgrounds and skills. It is hardly reasonable to call a project team an “expert network.” Only in rare cases like a research and development group will each member have deep expertise that his or her colleagues do not have. A project team is not heterogeneous enough to build a diverse and inspiring expert network.

  • A community of practice [119] slightly differs from expert networks: it will often not be supported by a dedicated infrastructure. An expert network is not an emerging group that may be facilitated by a corporate group – an expert network is an explicitly created organization with the explicit goal of supporting expertise across projects.

  • Despite its cross-project character, there may be clear goals and responsibilities associated with membership in an expert network. Mostly, members appreciate the reputation associated with selection for membership (Fig. 6.6).

Fig. 6.6
figure 6

Expert network includes members of several projects and provides support to all projects

An ontology can obviously be a good basis for the infrastructure of an expert network. It combines clearly defined attributes (slots) with reasoning mechanisms and the ability to deal with large and growing numbers of individuals. In ideal cases, the expert network could be linked to an ontology and a knowledge base of software engineering experiences and knowledge. There is potential for synergy, for example, by defining the content areas and expert domains for common use instead of creating several inconsistent definitions (Fig. 6.7).

Fig. 6.7
figure 7

Combining a software engineering knowledge base with an expert network
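The synergy mentioned above can be illustrated with a small Python sketch: one agreed-upon vocabulary of content areas tags both knowledge items and experts, so a single lookup returns documents and people. All names and entries here are invented for illustration:

```python
# Illustrative sketch: one shared vocabulary of content areas serves both
# the knowledge base and the expert network, avoiding inconsistent
# definitions. All domain names, documents, and people are invented.

DOMAINS = {"SOA", "testing", "requirements"}   # agreed-upon content areas

knowledge_items = {"SOA": ["SOA design checklist"],
                   "testing": ["Unit test guide"]}
experts = {"SOA": ["Ada"], "requirements": ["Ben"]}

def lookup(domain):
    """Return documents and experts for one domain of the shared vocabulary."""
    if domain not in DOMAINS:
        raise KeyError("unknown domain; extend the shared vocabulary first")
    return {"documents": knowledge_items.get(domain, []),
            "experts": experts.get(domain, [])}
```

A query for "SOA" then returns both the checklist and the expert, while an unknown term fails fast instead of silently returning nothing – exactly the consistency benefit of a common domain definition.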

One key characteristic of expert networks is the reluctance to elicit or document knowledge and experience. It is considered sufficient to know in which person’s head that information is located. An expert and a team in demand can then be introduced to each other. Information about the source of experience is disseminated first; when the expert is identified, he or she needs to be asked. The answers will usually be more or less specific to the target project, requiring neither experience engineering nor dissemination. The cycle thus starts and ends with dissemination, while engineering happens within the expert’s head and the group interacting with him or her. This mode of knowledge management can be very effective and efficient, and several companies emphasize it. Note that an expert network exceeds random individual contacts among the workforce: It is far more organized.

6.3 Experience Bases

A repository of knowledge is often called a knowledge base. In parallel, a repository of experiences is called an experience base. Both kinds of bases provide more than storage facilities: An ideal base supports all processes and activities of experience or knowledge management.

6.3.1 Example 6.1 (Protégé as a base)

In Chap. 4, we saw Protégé as an example system for building and using knowledge bases by defining an ontology and populating it with instances. The knowledge management life-cycle resembles Fig. 6.8, following an iteration of knowledge acquisition, knowledge engineering, and knowledge reuse.

Fig. 6.8
figure 8

Life cycles of knowledge and experience: similar steps, different emphasis

The previous section on specific experience elicitation techniques can be adapted for knowledge acquisition, too: There are special interview styles, workshop variants, and combined techniques in which dissemination leads to activating more knowledge. There are more sources to extract knowledge from (documents, books, etc.), and there are some specific sources of experience. LIDs, for example, focus on experience. Case-based techniques stem from knowledge management but can be applied to experiences, too.

There are numerous commercial knowledge management tools. In contrast, experience bases are rarely advertised as such. In many cases, knowledge bases (like Protégé) may be extended to include experiences. However, there are some tools or components that support experience management specifically. This section provides an overview of related functions and features and points out some frequently made mistakes. Respective recommendations are supposed to help you avoid those mistakes and build a successful experience management infrastructure.

6.3.2 Overview of Experience Management Functionalities

Experience management is defined by the experience life-cycle and the need to organize it effectively and efficiently. A tool or technique is considered relevant and related if it either supports one or more activities in the experience life-cycle or if it provides infrastructure and links between those steps.

An ideal EKM tool environment will contain rich knowledge management support (i.e., ontology, presentation, and acquisition tools), dedicated experience management tools, and relationships between the two families of related tools.

Table 6.1 Types of tools

Dissemination should not necessarily distinguish between experience and knowledge. It is more adequate to look at EKM from a user perspective: In principle, software engineers do not care who or what helps them to perform their task at hand. In particular, they are not interested in distinguishing between experience and knowledge – and they should not be required to use two different families of tools to handle them. The success of an EKM initiative will not be measured component by component but by its overall contribution to the software engineering capabilities.

For an overview of related functionalities of experience management, the four activities of the experience life-cycle are examined in Table 6.1. Transitions from one activity to the next are added as additional categories, as experience exploitation needs to push forward from one activity to the next. The table simplifies the situation for the purpose of a better overview: For example, a forum can encourage someone who just became aware of an experience (activated it) to provide it to the community.

However, structured elicitation will not be channeled through a forum. Wikis, questionnaires, or in-depth e-mails to experience managers will be more appropriate tools. For ideal tool support, tools in the different categories should be integrated or orchestrated to feed into each other. The same is true for all activities and transitions in Fig. 6.1.

In Table 6.1, generic kinds of tools are related to those categories. New products that fit these generic types appear and disappear at a fast rate; an Internet search provides a timely snapshot. In this book, however, the emphasis is put on understanding the functionalities and concepts supported rather than on individual products that may soon be outdated. The transformation from tacit to explicit knowledge and vice versa forms an additional dimension. It is a task for experience management to stimulate those transitions in a systematic way. However, very few tools are available for this, apart from highly general teaching and learning tools. Therefore, those transitions are placed below the core activities in Table 6.1.

Table 6.1 can be read as yet another cognitive map – providing an overview of typical support for key experience management activities. Please note that this table contains only computer-based tools. Cognitive techniques and social practices are not listed. Experience bases are constructed specifically for the purpose of managing experiences rather than knowledge. They may contain several of the above-mentioned generic tool functionalities and often try to combine a chain of tools in order to close the life-cycle loop of experience management.

Some aspects have been found to be important when an experience base is constructed. The vision is depicted in Fig. 6.9: An experience base is like the centerpiece in the experience management life-cycle. It stores information gained in each of the four activities and makes the material available in all other activities that might need them. By storing, organizing, and linking all key aspects of experience management, an experience base can turn into an active memory of the initiative. A number of lessons learned for constructing experience bases will now be presented [94].

Fig. 6.9
figure 9

Experience base as a centerpiece of the experience life-cycle [97]

6.3.2.1 Important Quality Aspects of an Experience Base

A number of quality aspects have been found to be particularly important for experience bases:

  • Usability

    • Experience organization must correspond with experience volume

    • Focus contents to a limited domain but do not restrict to experience only

  • Task-oriented

    • Seed the base

    • Link experiences to a work process or product

  • Feedback

    • Encourage feedback and double-loop learning

  • Flexibility for change

    • Use an architecture that accommodates all management functions

    • Generate as much as possible

There are issues related to each criterion in the above list. We will look a little deeper into each of them. They are turned into advice and best practices here, so they can be considered when a new experience base is constructed.

6.3.3 Experience Organization Must Correspond with Experience Volume

Many initiatives start building a tool to manage the anticipated experiences and experience packages. In most cases, efficient storing and search algorithms are developed to deal with a large volume of experiences. A significant amount of development time can be devoted to those issues. This focus is justified when a large number of elements need to be organized in the experience base. Most initiatives assume this will be the case; but experience needs time to be elicited, engineered and stored. At least for some time, there will be rather limited experience in the base.

We saw a similar example in knowledge management: A large number of small pieces of knowledge can be better organized via a formal structuring mechanism such as an ontology. Searching and soft-matching is facilitated by case-based reasoning techniques. They work well in large and well-attributed collections of cases.

However, experience management often deals with only a limited number of larger experience packages. Attributing or describing them precisely for the purpose of putting them into an ontology is often considered overhead. In that situation, complex classifications, index mechanisms, or search machines might not be the highest priority for a successful experience base. There are simpler mechanisms available for searching experience by association. Task-oriented mechanisms seem to be more adequate for experience bases, as will be described later.

Mnemonic 6.1 (Number of entries expected)

Do not just assume there will be thousands of entries. Experience is slow to harvest, so consider light-weight organizations and search mechanisms.

6.3.4 Focus Contents

In principle, experience bases can hold information, experience and knowledge on a wide range of topics. For example, there could be an experience base on “software engineering.” However, when someone has a task to solve, such an experience base will always contain a bit of relevant information – but there will hardly be a lot of information on any topic. Especially during the initial phase, a very broad range of topics tends to lead to shallow contents: There will be a little of everything but not much of anything.

The software engineer in need of advice or experience will probably find little or no related material. But when should he or she stop searching? It is bad for the reputation of the experience base and the experience management initiative if many people find little or nothing. They will probably not come back.

One way of focusing contents and facilitating a search is to build narrow experience bases. In several situations, single-topic experience bases were far more successful than broad ones [97]. Focus topics were requirements engineering, review and inspection techniques, and requirements engineering in a highly specific domain. Those three topics were focused enough to reach a “critical mass” of experience rather soon. Because the stored material can be structured according to the task at hand (see later), writing, assigning, and searching it becomes easier. It is also easier to tell whether there is anything useful at all.

Users appreciate experience bases with a clear mission and scope. They do not like searching related material in different systems or repositories. Therefore, enrich and complement experiences with templates, background information, and all kinds of useful material. From the user perspective, all things that help are welcome. A narrow domain stays manageable even if some additional material is added. Experiences will mostly be turned into best practices. Best practices should contain or link to knowledge, tools, and templates that come in handy when you follow them. Experiences will also be linked as an authentic basis for the recommendations. LIDs [90] is a technique that collects experiences and related material in an appropriate way.

Mnemonic 6.2 (Narrow focus preferred)

Prefer a narrow focus of contents, maybe just one topic. Use simple structuring and search mechanisms that are motivated by the supported tasks of software engineers. Enrich experiences and best practices with different kinds of related information that will help in practice.

6.3.5 Seed the Base

Many ambitious experience managers plan their initiative as follows:

  1. First, there needs to be a powerful tool to hold and search the thousands of experiences.

  2. Then, we will make this powerful tool available to the software engineers.

  3. They will fill the base by entering experiences. Of course, they need to attribute and classify their contributions.

  4. When sufficient experiences have been entered, future software engineers will benefit from the asset of experiences.

We have discussed above that this approach is often unrealistic: The software engineers of item 3 above would face an empty base when they first visit it. Why should they visit an empty collection? Why should they invest time for filling it? Why should they ever come back to it when they know it was empty last time?

This is one of the most frequent patterns in experience and knowledge management: Putting out a “working, but empty” repository. From the viewpoint of users (software engineers, process improvement experts, etc.), such a repository is useless – it does not work for them.

The obvious solution is to seed the base before releasing it. A seed is content that has the potential to grow: It must be useful for some purposes of its intended audience, for example review forms or review motivation slides for the review base.

6.3.5.1 Example 6.2 (Seeding a risk management base)

A company builds a risk management base. To seed it, experience managers collect risk checklists and mitigation activities that worked in the past. Those checklists help review participants and risk managers to carry out their tasks. They are a good and useful seed, but a seed is far from covering the entire topic of risk management. After receiving support, many software engineers may feel inclined to contribute something in return. At the very least, many people will want to correct what they found inappropriate in the checklists. This closes the experience life-cycle: It starts by disseminating checklists and leads to the activation of the engineers’ own experiences.

Obviously, a seed will often consist of factual information, knowledge, or useful presentation of accepted information. A seed does not necessarily have to contain any experience – although experience will increase the potential to grow. When a seed starts to grow, the critical phase of an experience base is over. It is much better to seed the base first and see it grow from the beginning than to start it empty and try to jump-start it later. The first impression it makes on its “customers” is important.

Mnemonic 6.3 (Seeding)

Seed an experience base with information that is useful in practice. It must be sufficient to support practical tasks and should encourage feedback. A seed is successful if it starts to grow by the feedback of software engineers.

6.3.6 Link Experiences to a Work Process or Product

Huge collections of fine-grained and well-attributed experiences or knowledge items are accessible to sophisticated search mechanisms. Such mechanisms use ontologies, case-based reasoning, or similar approaches for classification and searching.

6.3.6.1 Do Not Neglect Associated Material

Experiences and related material, in contrast, often come in a far smaller number (below 100) of experiences or experience packages. There may be related material associated with each experience, but there are only a few entry points for a search. In that situation, experiences and related material should be linked to a process or a product structure. Organizing experience packages around a given process or product structure provides an opportunity for associative search and indexing: The knowledge workers in a domain (e.g., requirements engineers) will be familiar with that domain. Using a given process or domain structure as a guiding structure often allows software engineers to locate relevant information easily – or to confirm that there is nothing relevant in the experience base.
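A minimal sketch of this idea follows; the step names and package titles are invented examples for a risk management process, not entries of an actual base:

```python
# Minimal sketch of process-anchored experience packages.
# Step names and package titles are invented examples.

process_steps = ["identify risks", "assess risks",
                 "plan mitigation", "track risks"]

packages = {
    "assess risks": ["Risk checklist v3",
                     "Lesson: estimate impact in ranges"],
    "plan mitigation": ["Mitigation plan template"],
}

def material_for(step):
    """Associative lookup: navigate by process step, not by keyword search."""
    # An empty result is itself informative: nothing relevant is stored
    # for this step, so the engineer can stop searching.
    return packages.get(step, [])
```

The point of the design is that the index structure (the process) is something the domain expert already knows by heart, so no classification scheme has to be learned.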

6.3.6.2 Example 6.3 (Supported processes)

The supported process may be a software engineering activity like configuration management, testing, or risk management (see Fig. 6.10). A product structure like the architecture of the software is another useful backbone for orientation. Business processes often mediate between the processes of software development and the structure of the emerging software. A business process defines the core of a software product and its functionality. A graphical presentation of the process or product structure is highly advantageous for supporting associative searching and the authoring of new experience packages.

Fig. 6.10
figure 10

Example of an experience base organized around a process (Risk Management Process at DaimlerChrysler). Reprinted from: Schneider, K. Experience magnets – attracting experiences, not just storing them. In Conference on Product Focused Software Process Improvement PROFES 2001, Kaiserslautern, Germany, September 2001. Lecture Notes in Computer Science, vol. 2188. Berlin: Springer

Please note: The visual representation should mainly show the work process or product, not the relationship of the experiences stored. We adopt a user perspective and try to see the repository through the eyes of a process or product expert. Such a person is very familiar with the task at hand – but not necessarily with experience management. Experiences and material must be accessible from the process overview but may even be invisible on the process map.

Mnemonic 6.4 (Visual map)

Humans are good at searching in a two-dimensional space or map. Use a map of their task (a process or product architecture) and link experiences to it. For small to medium-size repositories, this organization has many practical advantages.

6.3.7 Encourage Feedback and Double-Loop Learning

Fig. 6.9 shows the experience base in the center of the experience life-cycle. This vision sets it apart from a generic database, from a mere communication tool, and from many other types of tools. One of the most important characteristics of an experience base is its support for the process of experience management. In short, the experience base should facilitate proceeding from one activity to the next. Activated experience should be easily collected; stored raw experiences should be offered for experience engineering; its results should be forwarded to dissemination.

6.3.7.1 Elicit Experience During Dissemination

As we have discussed earlier, dissemination of experience, best practices, and related information is one of the best opportunities to activate and elicit new experiences. An advanced experience base should contain features to support this transformation, too. Contextualized communication opportunities are among the most sophisticated opportunities for an experience base. If material is disseminated in a form that allows software engineers to respond with a few clicks, chances for feedback increase. Providing many contact channels is another route to feedback.
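One way to picture contextualized communication is a response form that automatically carries the context of the disseminated material along. The following Python sketch uses invented field names to illustrate the principle:

```python
# Sketch of contextualized feedback: the response channel carries the
# context of the disseminated material, so a few clicks suffice.
# All field names and values are illustrative assumptions.

def make_feedback_form(package_id, process_step, version):
    context = {"package": package_id, "step": process_step, "version": version}
    def submit(comment, rating):
        # The experience manager receives the comment together with its
        # full context instead of an uncontextualized e-mail.
        return {**context, "comment": comment, "rating": rating}
    return submit

submit = make_feedback_form("risk-checklist", "assess risks", version=3)
feedback = submit("Item 7 is outdated", rating=2)
```

Because the package, process step, and version travel with the comment, the experience manager can act on the feedback without a follow-up inquiry.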

An experience initiative should expect to continually learn on the level of experience management as well as on the software engineering content level of risk management, process improvement, or testing. Therefore, experience management tools should remain flexible to react to lessons learned on the experience management level [97]. In his master’s thesis, Buchloh [22] developed a construction kit for experience bases built on standard technology. A construction kit approach facilitates fast adaptation and fosters flexible double-loop learning about experience management tools. Experience managers can learn not only about experience content but also about experience methodology and tools. And they can turn lessons learned into adapted tool structures fast.

Mnemonic 6.5 (Contextualized communication)

There are many tools facilitating communication. They can be used as experience feedback components. Providing context information makes them much more powerful.

6.4 Experience and Knowledge Management in Companies

Most large companies started a knowledge management initiative around the turn of the century. Knowledge was acknowledged as an essential asset and as a precondition for sustained success. Software engineering knowledge, including process improvement, was in particular demand. Maintenance, combination, and distribution of knowledge were approached from different angles, depending on the respective company priorities.

A selection of approaches is described below. They will be presented as cases that illustrate certain concepts. Discussion will focus on the remarkable aspects of each respective company.

6.4.1 The NASA Software Engineering Laboratory and its Experience Factory

Software is a key ingredient in NASA projects and missions. The NASA Software Engineering Laboratory (NASA-SEL) is a research and software development organization that provides NASA with high-quality software. Space missions depend on software, so NASA made significant efforts to improve their software engineering abilities beyond usual industry expectations.

Through a long-term collaboration with the University of Maryland [14], NASA-SEL initiated and maintained a process improvement program. At that time, the Capability Maturity Model (CMM) [83] and its European counterpart, SPICE (ISO 15504), did not yet exist. It was the intention of NASA-SEL to improve project predictability, efficiency, and product quality by improving software processes. That same intention was later pursued by several process capability models. However, NASA-SEL used its own data and experience for focused process improvement.

Prof. Victor R. Basili from the University of Maryland introduced two cornerstones of the initiative: the quality improvement paradigm (QIP [10]) and the experience factory [11] concept. QIP is an iterative process of learning in the realm of software quality and software process. QIP (see Fig. 6.11) assumes an iterative process of organizational learning. Quality improvement is achieved by carefully planning improvements, by analyzing results, and by feeding insights back into the next cycle of improvement. Applying the presumed improvements in projects is an integral part of the QIP approach.

Fig. 6.11
figure 11

Quality improvement paradigm (QIP) with integrated measurement [10]

QIP is both goal-oriented and measurement-based. After exploring the situation, setting goals is a crucial step. Software quality cannot improve in an abstract and generic way. It is essential to know the direction and criteria for improvement. In step 3, a process and many related aspects are chosen. In this step, a change to the existing situation is planned. However, only future projects can show whether the planned changes will actually improve according to the predefined criteria. In step 4 of QIP, a smaller cycle is started: One or more projects execute the proposed new process and changes. Project results and behavior are measured. Feedback is attached to the process for each project. That feedback is the basis for QIP step 5, in which all project results, measurements, and experiences are analyzed and compared. The overall insights are packaged and stored for future use. They feed directly into the next improvement cycle, which can benefit from the insights gained and refocus on newly tuned goals.
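The iteration described above can be sketched as a loop in Python. The function names follow the six QIP steps, while all bodies, numbers, and data shapes are invented placeholders, not NASA-SEL’s actual processes or measurements:

```python
# Hedged sketch of one QIP iteration. The six functions mirror the six
# QIP steps; their bodies and all numbers are invented placeholders.

experience_base = []  # packaged insights accumulate here across cycles

def characterize(org):                  # step 1: explore the situation
    return {"org": org, "defect_rate": 5.0}

def set_goals(context):                 # step 2: define measurable goals
    return {"defect_rate": context["defect_rate"] * 0.8}  # aim for -20%

def choose_process(context, goals):     # step 3: plan a process change
    return "introduce code reviews"

def execute(project, process, goals):   # step 4: run pilots and measure
    return {"project": project, "defect_rate": 3.9}

def analyze(measurements, goals):       # step 5: compare against goals
    met = all(m["defect_rate"] <= goals["defect_rate"] for m in measurements)
    return {"goal_met": met, "data": measurements}

def package_and_store(insights):        # step 6: feed the next cycle
    experience_base.append(insights)

context = characterize("SEL")
goals = set_goals(context)
process = choose_process(context, goals)
data = [execute(p, process, goals) for p in ["Project A", "Project B"]]
insights = analyze(data, goals)
package_and_store(insights)
```

The sketch makes the structural point of QIP visible: measurement is planned against goals before the projects run, and the packaged result of one cycle is the input of the next.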

Definition 6.4 (Quality improvement paradigm)

QIP is a measurement-based approach to process improvement. Defining goals in the beginning is the guiding concept, and measurement takes the place of “observations.”

Some observations are not spectacular: They confirm the expectations. Others are surprising and may even trigger emotions. During the analysis, data must be interpreted. This corresponds with drawing a conclusion. All three components of an experience are involved, although measurement is an observation that is structured a priori (before the measured event), whereas accidental “observations” in a project are identified a posteriori (after the event). When you are measuring the number of failures found per day, you have a reason to do so. If you happen to find a new type of error, this is an unplanned event that can later be described as an experience.

If QIP is working well, a growing number of packaged experiences will be created. They must be stored, compared, and maybe transformed into best practices. Basili invented the experience factory as an independent team or unit within an organization (NASA-SEL). The experience factory follows QIP, stores the resulting experience packages in an experience base, and refines them by engineering these findings (Fig. 6.12).

Fig. 6.12
figure 12

The experience factory as a separate organizational unit for experience engineering and management according to Basili [11]

NASA-SEL had sufficient resources for a separate unit. The University of Maryland had experts to run the experience factory and to perform in-depth data comparison. It may be interesting to note that initially many “experience packages” were conventional paper reports with sophisticated measurement plans and data. The experience base was basically a wooden shelf.

Experience packages were sets of well-planned measurement results rather than occasional observations some software engineer happened to make. This first experience factory did not rely on those incidents. It was built on the concept of goal-driven measurement.

When the Internet took over, most organizations running an experience factory probably migrated the experience base to the Web. But the concept of an experience factory is about humans learning from experience – not about networking computers.

An experience factory requires an agreed-upon basis of understanding and of processes. It is difficult or impossible to compare data that comes from uncontrolled or different backgrounds [83]. Chances for reuse and benefit increase when there are many similar projects using similar processes.

Definition 6.5 (Experience factory)

An experience factory is a separate organizational unit dedicated to experience work.

Many knowledge management initiatives install a core group with similar jobs as the experience factory team. This type of EKM initiative requires substantial investments over an extended period of time and cannot be run “on the side” by fully booked software engineers. NASA-SEL was able to invest a lot over an extended period of time. In return, they received custom-made process improvement guidance. A number of other large companies have adopted the concept.

6.4.2 Experience Brokers at Ericsson

Many competitive advantages in mobile phones are implemented in software. Good software quality and efficient software development processes are essential for the success of a company. Fast release cycles put pressure on the software experts. In the late 1990s, Ericsson decided to create a learning software organization. Basili’s experience factory concept was adopted as a model, but implementation would take an unusual form.

Like a classical experience factory, Ericsson created a group in charge of experience exchange within the development organization. There was also an experience base. An additional concept made the initiative unique: the roles of experience brokers and experience communicators.

The most visible role in the experience initiative was the experience broker. An experience broker would walk the aisles of the software development department. He would meet people at the coffee machine, talk to them in the cafeteria, or just say hello in the open office environment. He was invited to meetings, met people in their offices, and had a lot of coffee. His job was to match needs with existing expertise. An experience broker was a seasoned software engineer; he knew what he was listening to and talking about. However, as a broker he knew a number of experience communicators: experts in a specialized field, who also had the ability (and time assigned) to transfer their knowledge and experience. They would visit the project and simply help.

There is a paper [59] that describes this concept very well (Fig. 6.13).

Fig. 6.13
figure 13

Experience broker and communicators spread over projects

It is interesting to identify the elements and techniques in the Ericsson case:

  • The experience base was small and mainly accessed by experience communicators or the experience broker. It was not considered a self-service device for software engineers.

  • The experience communicators had a rich amount of well-organized knowledge and experience in their brains. They were a personalized variant of an experience base including engineered material. This base-in-a-brain would be filled in the traditional human way: by participating in projects, observing situations, and drawing conclusions.

  • As professional experience communicators (a part-time job that consumed only a certain percentage of an expert’s time), they were expected to reflect on their observations. Making experiences explicit was a job requirement. Effective reflection is a rare but valuable ability. Kolb [64] refers to reflection as a mandatory step for experiential learning (see Sect. 2.2).

  • The experience broker used an expert network he knew very well: the profiles of the experience communicators. His main expertise as experience broker was metaknowledge: He knew who knew what.

This realization of the learning organization is fascinating. It is light-weight, relies on communication more than on documentation – which makes it different from the experience factory approach – but it is well-planned and provides active support for the entire experience life-cycle.

6.4.2.1 Example 6.4 (Two-level support in a hardware store)

A similar principle was studied by Reeves [87] in a different domain. He observed a hardware store in Boulder, Colorado: McGuckin’s had a reputation for great customer service with remarkably knowledgeable staff. Fischer and his colleagues found a two-level human knowledge network that resembles the Ericsson setup. When customers had a question about a product or how to use it, they would easily find an employee in a McGuckin’s shirt. This person had general knowledge of the store and its departments. He or she would point to the right department, where a specialized second-level “agent” would offer all the in-depth knowledge and experience a customer might appreciate.

Not every staff member can know all the details of all products. One solution is to use two types of agents: the router/broker and the communicator (as in the McGuckin case). A variant might use communicators only, each specialized in one field but with enough overview knowledge to send customers to the right department. However, it is easy to underestimate the difficulties and demands of a router/broker job. At Ericsson, it required a full-fledged software engineer with additional expertise in experience management and distribution. This is a rare and valuable profile.

There is a price for the speed and ease of communication-based experience transfer: When an experience broker leaves the company, a backbone of the initiative disappears. This risk could be mitigated by a little more written experience documentation or by several part-time brokers. The Ericsson approach was charming in its radical design.

6.4.3 DaimlerChrysler: Electronic Books of Knowledge and the Software Experience Center

Daimler-Benz Corporate Research had started in the late 1990s to explore the potential of experience-based process improvement [96]. Building on Basili’s concept of an experience factory [11], the software process research group developed a family of techniques and tools to support systematic learning from experience.

Many companies started to consider the Capability Maturity Model (CMM) [83] or SPICE (ISO 15504) for process improvement. Climbing from one level to the next turned out to be a difficult and time-consuming endeavor. At Daimler-Benz (and later at DaimlerChrysler), CMM and experience-based approaches were combined in several cases. CMM represented the experiences of a large community but was rather unfocused. Experiences could be used to focus and sort improvement activities [93].

Several generations of experience bases were developed to the prototype stage. Practical experience uncovered misconceptions (cf. Sect. 5.5) and helped researchers to derive improved approaches. Those cycles of double-loop learning led to updated experience bases that were effectively used in practice.

Building experience bases was only one part of supporting systematic learning from experience. In cooperation with the business units, all aspects were studied. Dedicated techniques like LIDs were developed and applied in the business units.

In 1998, Daimler-Benz started the Software Experience Center (SEC) initiative. It intensified earlier work in experience-based process improvement. Corporate research collaborated with three participating business units. Experience activation, elicitation, storage, engineering, and dissemination were supported in close collaboration with projects and teams.

SEC was embedded in the international Software Experience Center consortium. Five global companies scheduled regular experience exchange meetings on software engineering issues. On this level, strategic issues were emphasized. On the project level, operational support by experience reuse was the focus. And in between, SEC contributed to learning across business units [55] (Fig. 6.14).

Fig. 6.14 Three levels of the SEC experiential learning initiative

As described in [92], experience exchange worked best on the company and the project levels. According to its design, SEC directly supported participating business units in a highly individual way. Findings can be generalized and applied in other environments. They substantiate many chapters of this book. This initiative gathered many insights and turned them into techniques [91], tools [94], and strategies [120].

When Daimler-Benz merged with Chrysler, a corporate-wide knowledge management initiative started. There was an exchange of ideas between SEC and that initiative, but the latter focused far more on operational goals. The knowledge management initiative was an important element in merging the companies’ knowledge and making it available to knowledge workers. Merging key knowledge was considered both an important contribution to exploiting synergies and a prerequisite for a successful merger of ideas and products.

6.4.3.1 Knowledge Management with Electronic Books of Knowledge and TechClubs

All elements of a large knowledge management initiative were installed. Two of them were highly visible within the company:

Electronic books of knowledge (EBOKs): EBOKs played the role of predefined, prestructured knowledge bases. They were defined for all important products and several further topics. In our terminology, DaimlerChrysler experts outlined the EBOKs with respect to anticipated needs in the business units. Specialists had to fill the EBOKs, which were accessible over the intranet. However, an EBOK was not an unofficial write-up by a group of volunteers: contents were solicited, and a lot of effort was put into defining and filling that part of the corporate knowledge base.

TechClubs were the teams assigned to the chapters of an EBOK. High-level managers in an engineering unit would be selected together with technical experts to identify and document relevant knowledge. Although TechClubs looked similar to communities of practice, they differed in a number of aspects:

  • Membership was not voluntary; members were assigned. This was considered an honor and a job responsibility for company integration, not an informal add-on.

  • A typical community of practice does not have a responsibility for documenting its findings, as TechClubs had for EBOKs.

  • Cross-learning among members of the TechClub was not a primary goal of the meetings (but has obviously occurred).

EBOKs were prestructured and implemented in Lotus Notes. There was no (visible) ontology or reasoning mechanism, at least not in the beginning. Maybe some tasks were later supported by more formal tools.

The knowledge management initiative organized not only TechClubs with their EBOKs but also many other activities for networking, integration, and knowledge exchange. Some targeted social ties: An internal knowledge management award was co-celebrated via video link in Germany and in the United States. This activity attracted attention as a nonmonetary incentive. It also raised awareness for knowledge management.

DaimlerChrysler had more than one initiative running at a time. They had different goals and different target groups. Nevertheless, the concepts presented in this book can be identified – in different ways – in both initiatives. When you join or even direct a similar initiative, you will be able to map your individual situation to the concepts, concerns, advice, and tools you have seen in this book.

6.5 Internet and Web 2.0

During the past decade, the Internet has changed the world. Information that used to be scattered across the world is now at our fingertips. Search engines provide access to electronic libraries, personal homepages, and company Web sites. Through blogs, Wikis, and social networks, the Internet has turned millions of readers into authors, blurring the distinction between the two groups. Such a revolution should have an impact on the management of experience and knowledge.

In a way, it has. It is no longer necessary to wait for information once you know where it is or who might have it. In former times (that is, more than 15 years ago), documents were usually printed on paper and shipped around the world. Books and scientific publications could be ordered from a library, which would send them by mail. Even with the most elaborate and fast logistics, response time was measured in days rather than minutes. Today, the Internet and e-mail reach almost everyone in the professional world of software engineering. Because of fast and inexpensive Internet connections, most software engineers are no longer concerned about the speed or cost of transfer. If there is an electronic document you want to share, there is no technical reason why the other party should not receive it within half an hour.

6.5.1 Impact of Internet Technologies on EKM Initiatives

But how do you know what document to share with whom? And how do you find a document or a person you want to interact with? When we put those questions back in context, Fig. 1.8 serves as a map and overview one more time.

We will now walk through Fig. 6.15 and discuss differences to the generic situation depicted in Fig. 1.8. The influences of new Internet technologies on experience and knowledge management can be discussed in this context.

Fig. 6.15 The impact of Internet technologies on experience and knowledge management. This figure was derived from Fig. 1.8 but shows more sources and better dissemination. Central activities (shaded) are affected in both positive and negative ways

Access to more bases and more people. There are more information bases and resources available over the Internet. As described above, access to those bases is much faster and cheaper. Most of those bases are not specific to software engineering or to any company. They range from personal homepages of high school students to sophisticated information bases like Wikipedia. But also within a company, the new technology provides faster and easier access to more internal bases. To indicate that change from Fig. 1.8 to Fig. 6.15, the information base symbol has been enlarged.

The widespread use of e-mail and other collaboration media on the Internet has made far more people accessible as potential sources of knowledge and experience. Because this book is concerned only with knowledgeable or experienced software engineers and their partners, only a small subset of all Internet users is relevant for our consideration. Nevertheless, almost every software engineer and many domain experts are now within better reach of an EKM initiative. This is symbolized by the much higher number of people drawn in Fig. 6.15 than in Fig. 1.8.

There is a downside to this opportunity: It has become no less difficult to identify who can provide reliable and competent information for a given software engineering problem. When there are only a hundred employees, identifying the most knowledgeable or experienced source may seem doable. When there are hundreds of thousands of presumed software experts out on the Web, identification becomes an even bigger challenge. The risk of “identifying” a less competent source grows accordingly.

Identification, encouragement, and elicitation. When it comes to experiences, the interactive and collaborative nature of “Web 2.0,” such as blogs and Wikis, enables everyone to share their observations and experiences with everyone else. Contributions can simply be typed via a normal Internet browser. There is no need for expensive software that needs to be bought and installed. Many people around the world feel encouraged to share their adventures and opinions.

Software engineers and domain experts may feel encouraged to work on a Wiki in a distributed environment. This is indicated in Fig. 6.15 by the stronger arrows in the elicitation column. However, it is unlikely that they will provide well-reflected experiences with explicit observations, emotions, and conclusions. Although there is certainly more data and information on all kinds of raw experiences, the amount of useful and reusable software engineering experience is hard to estimate. When a company sets up its own experience exchange tool on top of an existing Internet technology, there should obviously be additional guidelines on how to use the new opportunity. Because resources and information are available outside the company, only very small portions need to be replicated and stored within the EKM initiative. Therefore, the other arrows in the elicitation column are not stronger.

Validation, structuring, and engineering. The middle column (“engineer and store”) of Fig. 6.15 has changed in a surprising way. Whereas the need to validate and engineer all the additional sources and their contributions has grown dramatically, most knowledge management initiatives cannot cope with that challenge. As a result, most symbols in that column shrank rather than grew: Many knowledge engineers do not even attempt to validate or engineer external sources. Unstructured and unvalidated sources offered by a knowledge and experience management initiative, however, will decrease its credibility and value. It may be smarter to ignore many supposed opportunities and focus on those parts of Fig. 1.8 where the smaller but better-controlled range of the initiative can be brought to bear.

Matching. The growing number of potential sources provides more choices for those who seek support. However, decreasing rates of validation and engineering tend to put more of that burden on the shoulders of the intended users. Although knowledge management needs to provide more and bigger stores of information, it also needs to maintain the quality and reliability of those common assets. This is equally true for stored information and for links to people as sources.

Adding value. The first step of adding value through knowledge and experience is dissemination. This is again much easier, faster, and cheaper than it used to be without new Internet technologies. Filtered dissemination of material, mailing lists, special interest groups, and newsletters provide improved opportunities for sharing. This can be used within companies for disciplined dissemination of information. However, using those technical options without discipline and consideration will lead to a flood of messages. It will be considered spam and ruin the reputation of a knowledge management initiative. Being technically able to send more should by no means encourage the initiative to do so without consideration.
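Filtered dissemination, as opposed to undisciplined broadcasting, can be sketched as a simple interest-matching step. The subscriber names, topics, and set-intersection matching below are illustrative assumptions, not a real mailing-list API; the point is only that an item reaches a recipient when declared interests overlap with the item’s topics, and nobody otherwise.

```python
# Illustrative subscriber profiles; all names and topics are invented.
subscribers = {
    "carla": {"testing", "metrics"},
    "dev-list": {"requirements"},
}

def disseminate(item_topics):
    """Deliver an item only where declared interests overlap with its topics."""
    return [name for name, interests in subscribers.items()
            if interests & item_topics]  # set intersection = interest match

print(disseminate({"testing"}))  # prints ['carla']
print(disseminate({"golf"}))     # prints [] - no match, no message, no spam
```

The design choice is deliberate: an empty result means no message is sent at all, which is exactly the discipline the paragraph above calls for.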

6.5.2 Using the New Internet in an Innovative Way

The discussion in the previous section has brought up a number of challenges and risks that come with the obvious opportunities of new collaboration mechanisms on the Internet. Although some of the core tasks of an initiative have become much easier, the sheer amount of information and the “availability” of so many sources can overwhelm knowledge engineers. By offering more input, the new technologies have made the selection, engineering, and matching tasks more difficult – and more important.

In principle, all chapters of this book apply to Internet-based EKM initiatives. The concepts need to be applied to these new tools just as they were once applied to phone lists, printed newsletters, and traditional meetings. The distribution of stakeholders in a current project calls for a technological complement, which can be provided by phone or video conferences or by Internet tools. It is beyond the scope of this book to discuss remote collaboration in detail; success criteria and possible misunderstandings must be considered just as in traditional settings.

A new branch of promising approaches in many companies tries to merge the general opportunities of emerging Internet technologies with the demands of a specific EKM initiative. Important examples are Wiki systems that are tailored to software engineering purposes [106]. A Wiki offers general features for reading and editing a Web page through an Internet browser. Texts can be written, formatted, and modified. Hyperlinks to other pages or other parts of the same page can be embedded, as can figures and tables. Depending on the particular Wiki framework, more or fewer features are available. Wikipedia (www.wikipedia.org) is a well-known Wiki system. It is a generic encyclopedia. Other Wikis exist for organizing a meeting, writing a publication with distributed authors, or collecting experiences.

This is a great technological basis for many of the concepts presented above. To make it useful, however, there must be appropriate structure and seeding, as has been pointed out above. In the case of a software engineering project Wiki, there should be a guiding structure to the Wiki when it is first published. For example, milestones, tasks, and deliverables should be defined as sections. Templates or examples can perhaps be provided for documents, and milestone criteria should be given. If a Wiki is prepared for its future use, including guidelines and best practices that were turned into templates, structures, and recommendations, the best of both worlds can come together: experience, knowledge, and advanced technology. Nevertheless, all hints and warnings provided throughout the book still need to be considered! If a well-prepared Wiki is not used, or not used in the intended way, it quickly degrades and loses its power. Therefore, the processes and conventions around a Wiki make the difference between a nice technological attempt and a serious contribution to experience and knowledge management.
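The seeding idea above can be made concrete with a small sketch. It is a hedged illustration, not a real Wiki framework: the milestone names, the page template, and the page-generation function are all invented for this example. The point is that the guiding structure (one prestructured page per milestone, with slots for criteria, deliverables, and experiences) exists before the first user ever opens the Wiki.

```python
# Illustrative seed data; milestone names and template are assumptions.
MILESTONES = ["M1 Requirements ready", "M2 Design review", "M3 Release"]

# A prestructured page skeleton in a generic wiki-text syntax.
TEMPLATE = (
    "== {title} ==\n"
    "* Exit criteria: (fill in)\n"
    "* Deliverables: (link documents here)\n"
    "* Experiences: (observation, emotion, conclusion)\n"
)

def seed_pages(milestones):
    """Return a page-name -> wiki-text mapping for the initial seed."""
    return {m: TEMPLATE.format(title=m) for m in milestones}

pages = seed_pages(MILESTONES)
print(len(pages))  # prints 3
```

In practice, such a seed would be loaded into the Wiki engine before publication, so that contributors fill in prepared slots rather than facing empty pages.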

Among the most interesting aspects of a shared Wiki is the potential of integrating dedicated tools (e.g., for project management or for working with ontologies). Challenges of making tacit knowledge explicit and of formalizing it for an ontology remain the same. But it is very likely that the future will bring well-seeded Wikis with integrated ontologies fit for company use. As with most technologies, it will take several years before visions turn into company solutions.

6.5.3 Integrating Technology and Workplace Learning

A recurring theme of this book is the necessity to integrate technical solutions with methodologies and techniques that take human participants seriously. Cognitive limitations must be taken into account. The issue of workplace learning (including but not limited to software engineering) was investigated in the APOSDLE project at the Know-Center in Graz, Austria. Many publications and resources can be found at www.aposdle.org. In particular, a background study was presented by Kooken et al. [66]. Lindstaedt et al. [70] discuss the technical implementation of related concepts.

Software engineers are exposed to high workloads and enormous time pressure in many projects. If they are supposed to use experience and knowledge, inventing additional tasks with additional effort is almost a guarantee for failure.

Kelloway and Barling [62] claim that knowledge workers need the ability to carry out knowledge work and learning. Technical support can help in this area. They also need the opportunity to learn and apply knowledge in the workplace. This criterion is only fulfilled if projects are planned to allow learning and participation in experience sharing or communities of practice. Motivation is also important. In a workplace environment, many professionals are initially self-motivated. Management and an EKM initiative must take extreme care not to demotivate those knowledge workers.

Communities of practice serve to exchange knowledge and experience. At the same time, they serve a social purpose and may even work as an incentive for their participants. They offer social recognition in a peer group, which is one precondition for volunteer contributions. Usable and modern tools, such as a tailored Wiki with a useful seed and examples, can be yet another positive motivation factor. Whatever makes software engineering work easier and shows respect and appreciation for the additional work on knowledge and experience is a good step toward successful workplace learning.

6.6 Where We Stand Today

Software engineering is a field in flux, and so is knowledge management. Agile methods and model-based code generation, model-checking, and organic computing are just a few examples of the trends that come and go. However, there are challenges and opportunities in software engineering that remain the same beyond many trends. Making good use of one’s own knowledge and experience is one of them.

6.6.1 The Age of Software

Software drives our companies and factories; it has reached our homes, our hospitals, and our schools. Developing useful software in a predictable and systematic way has become an absolute necessity. However, keeping software projects on track and in sync with customer demands is not an easy task. It requires a sound background in computer science and continuous learning to stay up-to-date with emerging technologies. Processes are among the knowledge areas most essential for disciplined software engineering. They require management commitment, technical skills, and organizational learning. Agile approaches have emphasized the importance of human-to-human communication in software projects.

6.6.2 Experts in Structuring, Modeling – and Learning

In such an environment, the ability to handle knowledge and experience professionally often makes the difference between failure and success. There are many facts to know in software projects. The Software Engineering Body of Knowledge (SWEBOK) provides a good overview, but there are even more knowledge areas a professional software engineer has to handle.

This does not mean – and cannot mean – that a software engineer needs to be an expert in all areas his or her project touches on. Instead, it is a characteristic of software engineering that its experts need to be able to learn quickly and acquire a sufficient level of understanding in a short period of time. They need to be able to communicate effectively with different domain experts. Software is concerned with structure, patterns, and models. Knowledge management attempts to structure and model knowledge and experience. Therefore, software engineers should make good knowledge workers, with a deep understanding of some knowledge management techniques.

Many problems and many opportunities in software projects are not technical in nature. It is the team or the department with its people that needs to be managed, as well as the knowledge that is spread over so many eager co-workers. At some point, knowledge management needs to provide opportunities for knowledgeable and experienced people to effectively exchange what they have. This will only work when the environment provides motivation and support for those who need or want to improve knowledge exploitation – if management allows and encourages opportunities for learning; and if individual software engineers have the ability to contribute and benefit. This book addresses all issues but emphasizes the last one.

A practitioner or student of software engineering needs to have a solid overview of and some insight into the mechanisms and techniques of organizational learning, knowledge management – and experience sharing.

We stress the importance and particularities of experience because we need to use it despite its vague and fluid nature; doing so is an important step in software engineering. Experience is important in all aspects of software engineering, but it is indispensable in ill-defined and evolutionary tasks, such as design, software quality, or requirements elicitation. There is no objective way to construct an ideal solution. Based on experience, a good solution must be developed, discussed, and improved.

6.6.3 A Tool Is Not Enough

There are software tools for many tasks in software engineering, and there are numerous tools on the market to support knowledge management. In large software projects, certain tools are indispensable, such as a configuration management tool and a modern integrated development environment.

For the challenge of organizational learning and knowledge management, tools come in handy, too. However, they will not do the job alone. Unlike configuration management, many issues in knowledge sharing and management are not fully understood. Human interaction is required, and human experts need to exchange experience and knowledge. For that reason, the mere technical ability to interact through a tool or the Internet is just one precondition for effective knowledge management. If knowledge workers (software engineers) have little opportunity to apply interesting techniques, they will not do it. For example, a quality engineer will not invest time and effort in a “Company Quality Blog” if management criticizes doing so during working hours or demonstrates a lack of support. Lack of motivation or opportunity is a frequent show-stopper for organizational learning in the workplace.

The Internet and social networks provide a higher-level infrastructure that can be adopted, specialized, and used within a company initiative. They target various modes of communication, collaboration, and information exchange. Even a newsgroup can be a useful addition to a community of practice. However, a newsgroup alone will not make information and knowledge flow in a smooth and elegant way. Many assumptions, theories, and techniques are required to address the highly specific situation of workplace learning.

Hopefully, this book has contributed to your understanding of the interactions between “normal software engineering work” and the concepts and goals of knowledge management. Experience was added and emphasized as a subtype of knowledge that deserves special attention, as it offers special opportunities. Not many people have an interest in and some expertise with both software engineering and knowledge management. Those who do can make valuable contributions to their companies – and to their own careers.

Use what you know!

6.7 Problems for Chapter 6

Problem 6.7.1: Life cycle

A friend tells you they are using a newsgroup as an experience base. Which of the typical tasks of an experience base can be performed by a newsgroup, and which cannot? Provide arguments for all examples.

Problem 6.7.2: Risk of experience brokers

A situation like the one at Ericsson is risky: An experience broker may leave the company and disrupt the exchange of knowledge and experience. What could be done to mitigate that risk? That means: If a broker actually leaves, how will your suggestion improve the situation?

Problem 6.7.3: Compare

Name three important differences between a community of practice (CoP) and an expert network. What do they have in common?

Problem 6.7.4: Cognitive aspects

The LIDs technique is optimized for “cognitive aspects.” Explain what that means and provide two concrete examples within LIDs.

Problem 6.7.5: Knowledge manager

Many knowledge management visions include the role of a knowledge manager. In the case of a software engineering knowledge base: What background should a knowledge manager have, and what existing role in a project (see Fig. 4.3) might be a good choice?

Problem 6.7.6: Seeding

You have designed a knowledge and experience base about test strategies and their effectiveness. What could you seed this knowledge base with, and where do you get that knowledge from?