
1 Introduction

Western countries today are no longer primarily industrial societies but have developed into knowledge societies. This has a deep impact on the working environment, resulting in a sharp increase in knowledge work [1, 2]. Thus, not only approaches to organizing work, but also the supporting IT systems have to take this into account. It is no longer only well-structured, highly repetitive tasks, typically performed by administrative staff, that have to be considered, but especially unpredictable, collaborative processes.

Managing work has been an important area of research and implementation over the last decades. Approaches based on the assembly line principle introduced in manufacturing by F. W. Taylor in the late 19th century were already applied in office automation during the 1970s, adapted with Workflow Management (WfM) in the 1990s, and are now mainstream with Business Process Management (BPM). Mathias Weske [3] defines a business process as “… a set of activities that are performed in coordination in an organizational and technical environment. These activities jointly realize a business goal. Each business process is enacted by a single organization, but it may interact with business processes performed by other organizations.” Thus, a business process can be well-structured and fully specified or dynamic and loosely specified; there is no constraint concerning these characteristics. However, BPM is often associated with well-structured, highly predictable and thus predefinable processes. Such processes have a high number of repetitions, have already been considered with automation, and are much easier to deal with than highly flexible ones. A contrasting approach to work management has been proposed by Peter Drucker, who introduced the term knowledge worker in 1959 [4]. Knowledge work, and the knowledge worker respectively, have gained much interest since then, e.g., by Thomas Davenport [2]. A knowledge worker typically does not deal with routine tasks but rather organizes his knowledge-intensive, unpredictable work to achieve a certain goal. The question is whether and how it is possible to obtain productivity gains comparable to those evident in manual work. Drucker [5] states that improving the knowledge worker’s productivity is the most important challenge for management in the 21st century. Companies will increasingly be measured by the extent to which they succeed in managing knowledge work.
Nowadays, organizations are more and more regarded as living organisms rather than machines. Therefore, some productivity metrics will not work anymore. Case Management Systems (CMS) have been developed to support knowledge workers, i.e., providing tools to plan, control, and improve the work on a specific case (cp. Sect. 3.2), including the information needed.

The Object Management Group (OMG) proposes two different standards to support these quite opposite types of business processes: the Business Process Model and Notation (BPMN) and the Case Management Model and Notation (CMMN). But reality is neither black nor white. On the one hand, flexibility requirements are increasing heavily in traditional BPM [6]; on the other hand, best practices, business rules, etc. are used to support knowledge work. Thus, the proposed hybrid approach does not seem adequate, especially when taking into account the strong trend towards knowledge work.

2 Knowledge Work and Knowledge Worker

A widely accepted definition states that the term “Business Process implicates an organized group of related activities that together create a result of value to customers” [7]. Starting from this general approach it is evident that process orientation could be a promising way to improve business. And it did—especially where BPM is understood as identifying, describing and improving well-structured, highly predictable, thus predefinable processes with a high number of repetitions.

This way of thinking has brought huge benefits to—for instance—manufacturing processes. F. W. Taylor on the one hand and Henry Ford on the other created the scientific foundations and first implementations of this understanding of managing processes. Briefly speaking, this kind of thinking influenced to a large extent the 50-fold increase in productivity within the past century [5].

But it soon became clear that work in this manner covers only a part of business. In 1959 Peter Drucker distinguished Manual Workers from Knowledge Workers [4]. Since then, the development of knowledge work and the knowledge worker has been commented on by many authors, in recent years e.g. by Thomas Davenport, who defines knowledge workers as “… workers whose main capital is knowledge”. Typical examples include software engineers, architects, engineers, scientists and lawyers, because they are “thinking for a living” [2].

According to several empirical studies, this type of work is going to play the leading role in tomorrow’s work, as can be seen from Fig. 1, which shows the development of routine and non-routine tasks in the U.S. during the last decades.

Fig. 1

Trends in routine and non-routine work in U.S. from 1960–2009 based on [8]

It is clearly visible that the share of workers employed in occupations that make intensive use of non-routine analytic and non-routine interactive tasks increased dramatically during the last four decades. In contrast, job losses concerned clerks, assembly-line workers, low-level accountants, and customer service representatives—jobs in the lower middle of the earnings distribution that were replaced by rules-based processing of information and robotics. Savage [9] describes a knowledge focus as the third wave (after the Agricultural and Industrial Ages) of human socio-economic development. In the Knowledge Age, wealth is based upon the ownership of knowledge and the ability to use it to create or improve goods and services. The point is that future potentials to create and increase value can be achieved (mainly) within this type of work [10].

This requires a closer look at the distinctions among knowledge workers. Although this is rather difficult because of the broad variety of dimensions that have an impact on knowledge work, Davenport [2] reduces them to two (see Fig. 2). The matrix uses the level of work complexity (the degree of interpretation and judgment required in the process) and the level of collaboration, which influences the extent of computer mediation that is possible in a particular job [2]. On closer inspection it is obvious that knowledge work in a stricter sense dominates the right part of this matrix, whereas the left side can be considered “knowledge-based” or “knowledge-applying” work [11]. This distinction is useful because the latter type of work can be automated or outsourced (to low-wage countries, customers, suppliers, etc.) more easily.

Fig. 2

A classification structure for knowledge-intensive processes [2]

Interpreting the term narrowly, “pure” knowledge workers have to develop something new in almost every activity, for which neither the solution nor the solution methods are exactly known. They have to trust their tacit knowledge and learn (rather informally) with every attempt. To a large extent this work is strongly connected with seeking, acquiring, capturing, analyzing, disseminating, sharing, and organizing information [12, 13]. In this context Reinhardt et al. [14] worked out role models of typical knowledge workers such as controller, learner, linker, networker, organizer, and retriever. However, these role models usually involve various types of work to varying degrees. So even supposedly pure knowledge work typically contains some routine elements. This differentiated point of view significantly influences the way in which knowledge work is modeled and supported by IT (cp. Sect. 4).

3 Basic Paradigms in Business Process Management

In the early 1990s, process orientation in enterprise modeling (e.g., value chains, business process reengineering) started to attract strong interest in business. This heavily influenced not only the way people think about their business, but also the supporting IT systems. Vast improvements in the whole IT sector not only quickened the development of such systems, but also stimulated the research communities. Over the years many different approaches and technologies were developed [3, 6], many of them still influencing the discussion on BPM and the supporting systems.

BPM does not imply a certain level of process structure per se. For a long time the focus has been on activity-centric approaches for predefined processes, as they are easier to understand and to automate with IT systems than unpredictable, flexible, barely structured ones. In the following we will discuss two manifestations of BPM—Traditional BPM and Case Management (CM). In this context we will discuss two modeling standards developed by the Object Management Group (OMG), an international, non-profit standards consortium in the area of computer science, also well known for the Unified Modeling Language (UML) and CORBA. The standards relevant in our context are the Business Process Model and Notation (BPMN) in version 2.0 [15] and the Case Management Model and Notation (CMMN) in version 1.0 Beta 1 [16].

3.1 Traditional Business Process Management

Traditional BPM emerged from the paradigm used with highly structured manufacturing processes. The intention of this formal structure was to enable improvements, such as performance gains throughout the whole business process, and to clarify work for business and IT people, in the sense that the whole business process becomes visible through explicit modeling of the process type, monitoring of the process instances, and analysis of the process runs to optimize the model.

Traditional BPM systems have a strong focus on routine work, supporting fully specified, repeatable routine processes. At design time, the model formally predefines the sequence of activities with decisions (conditions, gateways) that direct the sequence flow (alternative, parallel, iterative routing). At runtime a process instance is created based on the process model. Activities are enabled, i.e., made available for handling, based on their position within the model. An activity can only be enabled after the previous activity has been finished. This activity-centric execution strategy focuses on routing (“what should be done”) instead of “what can be done”, which would offer discretion to the users (cp. [17]). Following the work list paradigm, users (i.e., human actors) take over tasks by selecting from a list offered to them by the system [6]. Within the process instance only the data needed to control the process execution is available; thus the context of the process instance is not directly accessible. van der Aalst et al. [17] describe this situation, which often results in errors and inefficiencies, as “context tunneling”.
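The activity-centric enabling strategy and the work list paradigm can be sketched as follows. This is a minimal, illustrative sketch; the class and method names are our own and not part of any standard:

```python
# Minimal sketch of activity-centric process execution: an activity is
# enabled only after its predecessor in the process model has finished.
class ProcessInstance:
    def __init__(self, activities):
        # activities: the predefined sequence taken from the process model
        self.activities = list(activities)
        self.current = 0  # position within the model drives enabling

    def work_list(self):
        """Tasks currently offered to users: only the single enabled one."""
        if self.current < len(self.activities):
            return [self.activities[self.current]]
        return []

    def complete(self, activity):
        if self.work_list() != [activity]:
            raise ValueError(f"{activity!r} is not enabled")
        self.current += 1  # only now does the next activity become enabled


instance = ProcessInstance(["Create Report", "Check Validity", "Print Report"])
print(instance.work_list())          # ['Create Report']
instance.complete("Create Report")
print(instance.work_list())          # ['Check Validity']
```

Note that the work list never reflects what *could* sensibly be done with the available data; it only reflects the position in the predefined sequence.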

There are many approaches that try to implement BPM. One of them is the OMG standard Business Process Model and Notation (BPMN), currently in version 2.0 [15]. This popular meta-model and graphical notation provides a way to specify business processes based on a flowcharting technique, and is thus activity-centric, similar to Unified Modeling Language (UML) Activity Diagrams (AD). BPMN allows work to be modeled as a process [15] in order to introduce structure, improve performance, identify its start, end and intermediate steps [18], clarify participants and their roles, measure the execution of such a process, etc.

The BPMN 2.0 diagram in Fig. 3, originating from [6], is an example of a collaborative process describing the communication between two public processes, just to give a rough idea of what BPMN looks like.

Fig. 3

BPMN collaboration diagram for medical order handling and result reporting (cp. [6])

In the collaboration diagram in Fig. 3, each process is represented in its own pool (Ward and Radiology). The processes have a clear structure—beginning with a start event (circle with a single thin line) and finishing with an end event (circle with a single thick line). The sequence of the tasks (rectangles with rounded corners) is defined by the sequence flow indicated by arrows with continuous lines. The sequence flow can be controlled with gateways. Figure 3 shows simple examples of how to control the sequence flow, such as a simple sequence with the tasks “Validate Report”, “Print Report” and “Sign Printed Report”. A loop can be found between the two gateways surrounding the tasks “Create Report” and “Check Validity”.
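The validation loop described above can be sketched in code; the exclusive gateway corresponds to a condition that routes the flow either back to report creation or onwards. This is an illustrative sketch only, and the function names are hypothetical, not part of BPMN:

```python
# Sketch of the loop in Fig. 3: the gateway condition routes the sequence
# flow back to "Create Report" until "Check Validity" succeeds.
def handle_report(create_report, check_validity, max_iterations=10):
    """create_report/check_validity stand in for the two tasks."""
    for _ in range(max_iterations):
        report = create_report()      # task "Create Report"
        if check_validity(report):    # task "Check Validity" + gateway
            return report             # leave the loop on the "valid" path
    raise RuntimeError("report never became valid")


# Toy run: the report becomes valid on the third attempt.
drafts = iter(["draft 1", "draft 2", "final"])
result = handle_report(lambda: next(drafts), lambda r: r == "final")
print(result)  # final
```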

BPMN provides many more specialized gateways and additional modeling elements to describe more sophisticated processes. The example here not only describes a single process but also the communication between two processes, which is represented by messages (dotted line with envelope icon). The example defines a clear structure, using different means to describe the possible routings. Dynamically reacting to changes, e.g., in the environment, is not considered. This is why knowledge workers often reject BPM, as it tends to be a limiting factor for their inconsistent, unique and creative work. As BPM has been widely accepted, there are also considerations and progress towards supporting processes that require more flexibility. This can be done by reducing the level of detail of the model or by using specific modeling constructs such as ad hoc subprocesses in BPMN. Still, the flexibility needed in many knowledge work scenarios cannot be satisfactorily achieved this way.

3.2 Case Management

In 2005 van der Aalst et al. [17] introduced CM as a new paradigm in BPM for supporting flexible, knowledge-intensive business processes. Since then, CM has gained much interest, also in the scientific communities, resulting in different approaches. The goal is to improve the management of these processes, which are of high importance to the success of an organization. A big problem with knowledge-intensive processes is that they often rely solely on the people working on them, which can pose a great risk for the organization. Thus, identifying best practices, guidelines, and rules, providing the right information at the right time, etc. is important for knowledge transfer within the organization. Furthermore, better planning, controlling, and support for collaboration can greatly increase quality and reduce working time and risk.

To specify the terms case and CM, we consider the following definitions by van der Aalst et al. [17], the Case Management Society of America [19], Forrester Consulting [20], and the CMMN development team [16].

A case describes the problem to be solved, the whole work to be done, e.g., a policy to be written, a product to be manufactured, a legal case to be handled, or a patient to be treated. The case holds all information necessary to handle it. Thus, the information, i.e., structured and semi-structured data, documents, collaboration and communication artifacts, policies, and rules, is available throughout the whole process. The key driver for the case is not a sequence of tasks to be executed as in traditional BPM, but the data needed to achieve a goal.

CM is a “… semi-structured, but also collaborative, dynamic, information-intensive process …” [20], in which the case is the central concept [16]. The focus is on what can be done to achieve a business goal. CM is widespread in the area of health care, dealing with the management of medical cases. But CM is not restricted to this single domain; it is characterized by certain criteria, such as a high degree of flexibility and no predefined process structures, but knowledge-intensive human decisions and tasks. Thus, the term Adaptive Case Management (ACM) is also often used. The progression of the case is typically not predefined by a sequence of activities but is human-driven, i.e., it evolves during run-time due to user decisions. Tasks are enabled based on data. Furthermore, there are means to define relationships between tasks, controlling the enabling of activities. Examples of CM processes are [16, 20] patient care and medical diagnostics in health care, legal cases in jurisdiction, claim processing in insurance, problem resolution in call centers, or mortgage processing in banking. The benefits of CM are not only improvements in visibility (tracking of the whole process including run-time changes) and control over previously manual processes, but also the availability of all information, including its history, throughout the work on the case.

Even though purists like Keith Swenson, vice president of research and development at Fujitsu America Inc. and chief software architect for the Interstage Product Family [21], claimed cases to be non-deterministic, unpredictable, unrepeatable individual occurrences, the need for a certain amount of modeling at design-time, providing best practices and responses to common problems and thus supporting organizational learning, has been widely accepted by the community [16, 19, 20, 22] and is also incorporated in the emerging CMMN standard.

As many CM approaches and tools have been developed, the need arose for a standard supporting CM like BPMN supports traditional BPM. Thus, the OMG launched a request for proposals, the Case Management Process Modeling (CMPM), in 2009 [23]. In January 2013 the OMG published the Beta 1 version of CMMN [16]. Besides the OMG, big players like IBM, Oracle, SAP and TIBCO are involved in the standardization process. CMMN defines the meta-model and notation to represent cases, as well as an interchange format for the models. The building blocks of the standard are comparable to those of BPMN 2.0. The standard is developed on the basis of common elements in current CM products and of current research results [17, 19, 24]. We will concentrate on CMMN for the following considerations concerning CM. Managing a case consists of two phases—design-time and run-time (see Fig. 4).

Fig. 4

Design-time phase modeling and run-time phase planning (cp. [16])

During design time the case model is defined by a case manager. Figure 4 shows plan items, which are part of the initial plan of the case instance, and discretionary items, which build the pool of items the case worker can select from (at his/her discretion) when planning at run-time. Thus, the case worker can adapt the case plan to support evolving situations and take ad hoc decisions. Each case instance is represented by its case file, containing all information. Updates to the context of the case, i.e., its case file, can be made throughout the whole case handling process. When repeatable patterns or best practices can be identified, they can be integrated into the case model to improve future outcomes [16].

Thus, CMMN case models can be much more formalized than the idea of pure CM, i.e., unique cases and unpredictability, implies. However, the overall case cannot be orchestrated by a predefined sequence of tasks.

In the following we outline the essential aspects of CMMN based on [16], omitting many details and simplifying where possible without losing the essence.

A CMMN model basically contains an information model (caseFileModel), a behavior model (casePlanModel), and a set of case-specific roles. The case file holds all information about the case context, necessary for evaluating expressions, raising events and handling case parameters. It is a logical model containing case file items, which can be defined with any information modeling language (e.g., UML, XML, CMIS). In the following we will concentrate on the behavior model. The main building blocks of a case plan model are tasks, stages, events and milestones. There are special tasks: human tasks can be used to describe manual work, process tasks to call, e.g., a BPMN process instance, and case tasks to trigger the creation of another case with its own context (case file). A stage is used as a building block within case instances, containing other plan items such as tasks, milestones or stages and their associated sentries (for event handling). Thus, stages are a means to build hierarchical structures. Events are used to describe anything relevant that happens (internally or externally) during the course of a case, be it transitions in CMMN-defined lifecycles, such as the enabling, activation or termination of a stage or task, the achievement of a milestone, or a timer or user event. Events are handled in a uniform way via sentries. Milestones are used to identify the progress of a case at run-time. They can be defined either by the completion of a set of tasks or by the availability of key deliverables.
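The hierarchical structure of a case plan model, with stages containing tasks, milestones, and nested stages, can be sketched as follows. The class names are illustrative and do not claim to match the CMMN meta-model exactly:

```python
# Sketch of the hierarchical case plan model: a stage contains plan items
# (tasks, milestones, nested stages).
from dataclasses import dataclass, field


@dataclass
class PlanItem:
    name: str


@dataclass
class Task(PlanItem):
    pass


@dataclass
class Milestone(PlanItem):
    pass


@dataclass
class Stage(PlanItem):
    children: list = field(default_factory=list)

    def all_items(self):
        """Flatten the hierarchy, depth-first."""
        for child in self.children:
            yield child
            if isinstance(child, Stage):
                yield from child.all_items()


# A structure loosely resembling the example in Fig. 5:
case_plan = Stage("Write Document", [
    Task("Create Draft"),
    Stage("Review Draft", [Task("Review"), Task("Consolidate Comments")]),
    Milestone("Document Completed"),
])
print([item.name for item in case_plan.all_items()])
```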

Sequential constraints on handling a case can be defined via criteria for enabling (entry) and terminating (exit) plan items. These criteria are represented by sentries. A sentry is a combination of an event (on) and/or a condition (if). Thus, not only events, but also constraints are defined with sentries.
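A sentry's evaluation logic can be sketched as follows. This is an illustrative sketch with our own names; CMMN engines implement this in their own ways:

```python
# Sketch of sentry evaluation: a sentry is satisfied when its "on" part
# (an event that must have occurred) and its "if" part (a condition over
# the case file) both hold; a missing part is treated as always satisfied.
class Sentry:
    def __init__(self, on_event=None, if_condition=None):
        self.on_event = on_event          # e.g. "Create Draft.complete"
        self.if_condition = if_condition  # callable evaluated on the case file

    def is_satisfied(self, occurred_events, case_file):
        if self.on_event is not None and self.on_event not in occurred_events:
            return False
        if self.if_condition is not None and not self.if_condition(case_file):
            return False
        return True


# Hypothetical entry criterion: "Review Draft" may start once the task
# "Create Draft" has completed and a draft exists in the case file.
entry = Sentry(on_event="Create Draft.complete",
               if_condition=lambda cf: cf.get("draft") is not None)
print(entry.is_satisfied({"Create Draft.complete"}, {"draft": "v1"}))  # True
print(entry.is_satisfied(set(), {"draft": "v1"}))                      # False
```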

Furthermore, applicability rules (based on conditions) can be specified with CMMN to evaluate, based on the case context described in the case file, whether items are applicable in the current situation.

The example in Fig. 5 illustrates some of the most important aspects of a CMMN case plan model. Write Document contains several tasks, most of them discretionary (marked by a dashed line). Furthermore, several tasks have an entry criterion sentry, marked by a shallow diamond shape. The dotted lines between the tasks indicate dependencies. The stage Review Draft additionally has an exit criterion, which is connected to the entry criterion of the milestone Document Completed.

Fig. 5

Example case plan model for writing a document incl. reviewing tasks (cp. [16])

As can be seen, there are quite a lot of commonalities between BPMN and CMMN, which one would probably not anticipate when reflecting on traditional BPM and CM. In Sect. 4 we will compare the two approaches with special respect to knowledge work support.

4 Comparison and Interaction of BPMN and CMMN in the Context of Knowledge Work

In the previous section we discussed two approaches in the area of BPM, often characterized as opposite poles, including the respective standardization efforts. In the following, the two approaches will be compared with special emphasis on their ability to support knowledge work. We will concentrate on the aspects discussed in Chap. “Crossing the Boundaries: e-Invoicing/e-Procurement as native ERP features”: complexity of work, level of interdependency, flexibility, and how knowledge is used within the planning and execution phases. Table 1 gives a brief overview of the aspects and their characteristics in the two approaches.

Table 1 Comparing traditional BPM with CM with respect to the OMG standards BPMN 2.0 and CMMN 1.0 Beta 1 (the opposite-pole view)

Complexity of work Highly predictable processes, i.e., highly structured and repeatable ones like those typically implemented in traditional ERP systems, have implicitly been used to communicate best practice between workers, comparable to an automated assembly line. Compared to CM, traditional BPM not only communicates the intended goal but also defines the path of how to achieve it. Both the goal and the path are typically fixed as soon as the process is instantiated. Tasks inside the process (e.g., described in BPMN) are executed by artificial or human actors. The focus of the traditional BPM approach is on managing the lifecycle of clearly predefined processes based on a transparent structure, business rules, event handling, and analytics to improve the process model. In contrast, CM focuses on supporting knowledge-centric, human-driven processes, which are hardly predictable and have rare repetitions and little common ground.

Level of interdependency Traditional BPM as well as CM cope with individual actors as well as collaborative groups. While interfaces between the actors are predefined with the traditional approach (e.g., hand-over of work in BPMN from one lane to another), CM leaves space for run-time changes.

Flexibility Another fundamental difference is the driver for the proceeding of the business process. With traditional BPM the sequence is predefined by the model (i.e., on type level). An activity can only be started after the previous one (defined in the process model) has finished. With a case, the business process is data- and human-driven, i.e., it proceeds based on user decisions at run-time on the basis of the relevant information. Information is available throughout the whole process execution, avoiding context tunneling [17]. Thus, it is not the position of the activity in the process model, but the data values that reflect progress of the process instance [6].

Theoretically, BPMN models could contain all decision options by using data-based gateways. But the complexity of the resulting model would be enormous. Even in simple models a gateway will not be able to capture all possible options respecting human decisions, not to mention that not all options may be known at design time.

Besides executable models, BPMN supports the explicit definition of knowledge obtained by experience, so called best practices, in models that are usually not executable (do not claim Process Execution or BPEL Process Execution Conformance but only Process Modeling Conformance [15]). Such BPMN models could be a complement to CM models for inexperienced workers to guide them through a CM process until they acquire the required knowledge and maturity.

As the need for flexibility heavily grows [6], constructs such as ad hoc subprocesses have been introduced into BPMN to provide some level of run-time flexibility. But still these means are far more limited than within the CM approach.

To cope with the challenge of measuring progress in a data-centric system, CMMN introduces milestones, which can either be achieved by providing certain data or by finishing certain tasks [16]. Thus, the milestone concept introduced with CMMN is not only data-driven.
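The two ways of achieving a milestone can be sketched as follows (an illustrative sketch; function and parameter names are invented):

```python
# Sketch of the CMMN milestone concept: a milestone is achieved either by
# completing a required set of tasks or by the availability of a key
# deliverable in the case file.
def milestone_achieved(required_tasks, completed_tasks, case_file,
                       deliverable=None):
    by_tasks = set(required_tasks) <= set(completed_tasks)
    by_data = deliverable is not None and deliverable in case_file
    return by_tasks or by_data


# Achieved via finished tasks:
print(milestone_achieved(["Review"], ["Review", "Edit"], {}))        # True
# Achieved via data, even though no task has finished:
print(milestone_achieved(["Review"], [], {"report": "final"},
                         deliverable="report"))                      # True
# Not yet achieved:
print(milestone_achieved(["Review"], [], {}, deliverable="report"))  # False
```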

Flexibility also concerns data. In contrast to traditional BPM, CM allows data associated with the case to be changed as long as the case has not been closed (cp. CMMN [23]).

Knowledge work An additional important difference between traditional BPM and CM is which part of the work is characterized as knowledge work. With traditional BPM, knowledge work is typically involved in developing the process models, while working on the process instances is routine work. With CM, building the model (on the type level) as well as planning the concrete process instance and working on it are knowledge work. Thus, knowledge has a much broader and deeper impact with CM than with traditional BPM. Furthermore, with CM not only the data used with the case is captured in the case file but also the activities planned, changed, performed or omitted at run-time. This calls for a more sophisticated role concept with CM. While traditional BPM only cares about the execution of activities, CM also considers planning activities, resulting in role types that provide more flexibility at run-time.

When discussing the differences, we already recognized that few processes are either completely predictable (i.e., can be fully structured and predefined) or completely unpredictable (i.e., need to be highly dynamic). Rather, there is a broad spectrum with different manifestations in between.

In Sect. 5 we describe our considerations of how to deal with this broad spectrum of process characteristics, with special focus on individual and organizational learning, which has not been considered in the discussion so far.

5 BPM for Knowledge Workers

BPM for knowledge workers needs to take a broad spectrum of process characteristics into account, dealing with predictable, well-defined processes as well as unpredictable, rarely-structured ones, which need a high degree of flexibility.

The OMG argued for a hybrid approach already in its request for proposals in 2009 [23] and adheres to it with the CMMN standard [16]. The idea is to combine traditional business processes (e.g., expressed as BPMN diagrams) with case models (e.g., a CMMN model) to introduce more flexibility into pre-specified processes on the one hand and more structure into cases on the other. However, a hybrid approach is not really suitable, especially when considering knowledge work. In the following we explain why an integrated approach is essential.

Conceptual gap Following a hybrid approach leads to a conceptual gap whenever shifting from one paradigm to the other. The different paradigms intrinsic to BPMN and CMMN have a deep impact on the way modelers think about processes. Traditional BPM focuses on a predefined sequence of activities, i.e., the control flow, where each activity has to be finished before the next one can start. CMMN, in contrast, follows a data-centric approach, which means keeping the focus on information and behavior in an integrated manner. This allows for data-driven activation of tasks, leading to completely different run-time behavior: e.g., someone can still be working on task A, having already supplied the data (precondition) to start with task B, so someone else can already start working on B. The conceptual gap is perceptible not only to the modelers but especially to the users of the resulting IT system. With traditional BPM, tasks are offered to users via the work list paradigm, i.e., a list of available tasks is offered by the system. Data-centric systems allow the user to search for tasks, leaving space to take personal strengths, interests, etc. into account.
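The run-time difference just described can be made concrete with a small sketch; the task names and preconditions are invented for illustration. Task B is offered as soon as its data precondition holds, even while task A is still being worked on:

```python
# Sketch of data-driven task activation: availability depends on the case
# data, not on the completion of a predecessor activity.
case_file = {}
in_progress = set()

preconditions = {
    "Task A": lambda cf: True,            # no data precondition
    "Task B": lambda cf: "draft" in cf,   # needs data produced during A
}


def available_tasks():
    """Tasks a user may pick up now, determined purely from the case data."""
    return sorted(t for t, pre in preconditions.items()
                  if pre(case_file) and t not in in_progress)


in_progress.add("Task A")      # someone starts working on A
case_file["draft"] = "v0.1"    # A supplies B's precondition before finishing
print(available_tasks())       # ['Task B'] -- B can start while A still runs
```

Contrast this with the activity-centric strategy, where B could only appear on a work list after A had been completed.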

Knowledge workers typically ask for flexible as well as predefined parts within the same process model. Thus, following a hybrid approach results in conceptual gaps, harming the user experience throughout the system and leading to the problems discussed above.

We consider an integrated model, based on a data-centric approach as an appropriate way to provide the needed flexibility and to allow for guidance through prespecified sequence flows, as activity-centric activation of tasks can also be built with data-centric systems (cp. [25]).

Organizational learning Up to now we have been discussing the approaches from a snapshot perspective, i.e., the process model describes a certain situation, not taking changes into account. When considering organizational learning, changes to the process models, which represent key knowledge assets within an organization, are self-evident. Especially changes in the level of predictability are relevant. On the one hand, working in a new domain or on a new problem can start with completely unpredictable processes that evolve towards well-structured ones; on the other hand, changes in methodology, technology and/or working staff (knowledge, people) require more process flexibility. In both directions, fast response to the changing environment is often crucial for business success [20].

Dealing with these constant changes in a hybrid way, especially concerning the level of predictability, would even intensify the already discussed problems concerning the conceptual gap. Not only are there gaps in the initial model; they are also steadily changing. This results in a challenging task for the modelers, and an even more unsustainable situation for the users. User interaction has to follow the basic paradigm and thus would be steadily changing. However, a consistent interaction design is crucial for the success of an IT system.

Therefore, with our integrated model, we aim to provide a consistent user interaction paradigm, which allows the users to feel comfortable when using the system, even though the system is changing dynamically in other aspects such as available forms, form details, or query parameters.

Besides organizational learning also the individuals and their learning curve have to be taken into account.

Individual learning and heterogeneous staff We have not considered the individuals so far. Rarely does each member of a team working in the same area have the same qualifications, experience, and other individual characteristics relevant for successful work. Thus, different levels of support are needed, e.g., via predefined processes, but also the information available (at once) should be adjustable to the people. Therefore, the basic concept needs to be able to support processes that shift dynamically between more and less specified borders, depending on the users’ roles. This also means that it must be possible for an expert to assist a novice, using additional information sources, ignoring the predefined process, etc., when working on the same process instance. So the solution of some hybrid products, i.e., to start a subprocess either in a BPMN engine or to continue with a sub-case within the same engine, is too restrictive.

We try to overcome this restriction by integrating processes, information, and users with their roles. The big challenge is to provide the needed flexibility without making the whole system too complicated to manage, i.e., to describe, understand, or validate.
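To make the idea of role-dependent support levels within one process instance more concrete, consider the following minimal sketch. All names in it (SupportLevel, Step, visible_information) are our own illustration, not part of BPMN, CMMN, or any existing product:

```python
# Hypothetical sketch: role-dependent support within a single process
# instance. Novices follow the predefined process with full guidance;
# experts may deviate and see additional information sources.
from dataclasses import dataclass, field
from enum import Enum


class SupportLevel(Enum):
    NOVICE = 1   # follow the predefined process, full guidance
    EXPERT = 2   # may deviate, sees additional information sources


@dataclass
class Step:
    name: str
    guidance: str                                       # detailed instructions
    extra_sources: list = field(default_factory=list)   # expert-only material


def visible_information(step: Step, level: SupportLevel) -> dict:
    """Adjust the information shown (at once) to the user's support level."""
    info = {"step": step.name}
    if level is SupportLevel.NOVICE:
        info["guidance"] = step.guidance       # predefined path enforced
        info["may_deviate"] = False
    else:
        info["extra_sources"] = step.extra_sources   # may ignore the path
        info["may_deviate"] = True
    return info


step = Step("assess claim", "Check fields A and B, then forward to review.",
            extra_sources=["fraud statistics", "similar past cases"])
print(visible_information(step, SupportLevel.NOVICE))
print(visible_information(step, SupportLevel.EXPERT))
```

The point of the sketch is that both users operate on the same step of the same instance; only the presented information and the permission to deviate differ, rather than routing novices and experts into separate subprocesses or engines.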

Monitoring and analysis To support working on flexible processes and to steadily improve them, monitoring and analysis are crucial. Monitoring of process instances allows for observing progress. While activity-centric approaches can provide information about the execution states of the activities handled so far, this information will not be sufficient with data-centric approaches. Furthermore, CMMN introduces the concept of milestones. How do we then specify progress for the whole process instance? Even though a hybrid solution is conceivable, we do not regard it as recommendable, as too many different concepts need to be combined to estimate the progress. Especially concerning analysis of the overall process, it is much harder to determine improvement suggestions, as they will not concern only one system but several systems with potentially changing intersection points.

With our integrated model, we build on the data-centric paradigm, but also consider the milestone concept integrated into CMMN, which already combines activity-centric and data-centric properties. However, we have not yet considered many features common to activity-centric solutions.
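One simple way to express progress for a whole process instance in a milestone-based, data-centric setting is the weighted share of achieved milestones. The following is a minimal sketch under that assumption; the names (Milestone, instance_progress) and the weighting scheme are illustrative only and not prescribed by CMMN:

```python
# Hypothetical sketch: estimating process-instance progress from CMMN-style
# milestones in a data-centric setting. Weights model that milestones need
# not contribute equally to perceived progress.
from dataclasses import dataclass


@dataclass
class Milestone:
    name: str
    achieved: bool = False
    weight: float = 1.0


def instance_progress(milestones: list) -> float:
    """Progress as the weighted share of achieved milestones (0.0 to 1.0)."""
    total = sum(m.weight for m in milestones)
    if total == 0:
        return 0.0
    done = sum(m.weight for m in milestones if m.achieved)
    return done / total


ms = [Milestone("case opened", achieved=True),
      Milestone("data complete", achieved=True, weight=2.0),
      Milestone("decision made"),
      Milestone("case closed")]
print(f"{instance_progress(ms):.0%}")  # prints 60%: 3 of 5 weight units achieved
```

Such a measure is paradigm-neutral in the sense discussed above: it does not require knowing the execution states of individual activities, only which milestones the instance has reached.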

Thus, many questions are still open. We will continue our work with a more detailed review of the requirements identified so far and of existing approaches, before going into more detail concerning our integrated model. CMMN 1.0 Beta 1 needs further investigation, especially as no case studies applying this approach in real-world scenarios are available so far. Furthermore, Manfred Reichert and his team provide an interesting advancement of the data-centric approach, the so-called object-aware approach (cp. [6]). They have a very similar focus to ours concerning the integration of data, processes, and roles, and also take aspects such as fine-grained control of data access into account, including progress. This approach thus needs to be studied in more detail.

Furthermore, the demand for business-side control, i.e., that not the IT but the business staff takes care of adapting the process models (types) and instances, is increasing. Forrester Consulting [20] showed in 2010 that with case management systems IT still leads the change process, also in aspects such as business rules, integrating new data, tailoring of screens, analytics, or model change, and that changes typically take 40 to 50 days to be realized. An agile environment needs shorter reaction times. Is the planning role proposed with CMMN sufficient, or how does it need to be designed?

So far we have not explicitly dealt with the implementation model. Obviously, the implementation model also has to support our integrated vision.

6 Summary

After a short characterization of knowledge work and knowledge workers, we presented an overview of the two OMG standards dealing with BPM, namely BPMN version 2.0 and CMMN version 1.0 Beta 1, and compared the two standards with special emphasis on knowledge worker support. We then argued that it is necessary to provide a consistent, integrated business process model to support both well-structured and highly flexible processes. A hybrid approach as proposed by the OMG is not suitable to support a smooth transition within the spectrum from highly predefined to highly flexible processes without paradigm shifts. Furthermore, a couple of additional requirements, e.g., those necessary for supporting different levels of knowledge workers on the same process (instance), ask for additional flexibility not yet available in existing approaches.