1 Introduction

An infrastructure for model-driven development (MDD) has high potential to accelerate the development of software applications. By modeling only the application-specific data structures, processes, and layouts, runnable software systems can be generated. Hence, MDD does not concentrate on technical details but lifts software development to a higher abstraction level. Moreover, the degree of standardization in code as well as in user interfaces increases. Consequently, a high-quality MDD infrastructure can considerably reduce the time to market.

Fig. 1 Generation of role-based mobile apps for different platforms (1), single deployment (2), and runtime interpretation (3) of provider models

Mobile application development faces several specific challenges on top of commonplace software production problems. Popular platforms differ widely in hardware and software aspects and typically show short life and innovation cycles with considerable changes. The market often requires that apps be available for several platforms, which makes time- and cost-intensive multi-platform development a necessity. Available solutions try to circumvent this problem with Web-based approaches, which often struggle with restricted access to the technical equipment of the phone and make less efficient use of the devices than native apps. Furthermore, Web-based solutions require an app to stay online more or less permanently, which may cause considerable costs and restrict usability. MDD has high potential to accelerate the development of native mobile apps and eases adaptation to newly developed characteristics of the underlying platforms.

Although there are already approaches to model-driven development of mobile apps such as, for example, MD\(^2\) [30], our contribution differs considerably in the design and purpose of the language. Our approach focuses on data-driven apps with role-based variants (see Fig. 1). A conference app for managing sessions, talks, and presenters at a conference, for example, distinguishes two roles: conference managers who provide all relevant data and conference participants who just read data. Our approach distinguishes an app model for designing the app with all its behavior, data structures, and user interfaces from one or more provider models for different roles. Each provider model contains role-specific information about data, behavior, and user interface. The entire approach involves three user roles: app developers who create the application, optional providing users who can configure the application, and the mandatory end users who use the mobile app.

The heart and soul of model-driven development is the domain-specific modeling language. Our language covers three aspects of mobile software applications: their data entities and relations; their behavior, including data management, sensor access, use of other apps, etc.; and their user interface. In general, model-driven development means modeling software on a higher abstraction level and generating code that implements the software on a lower abstraction level. This raises the question of how abstract a model can be. With the design of our modeling language, we follow the credo: “Model as abstract as possible and as concrete as needed.” This means the following: standard solutions are modeled very abstractly, while more specific solutions are modeled in more detail. For example, data management via the usual CRUD functionality (create, read, update, and delete operations) may be modeled by a pre-defined model element type, while application-specific behavior is specified on the level of usual control structures.

To efficiently work with the domain-specific modeling language, we provide an Eclipse-based tool environment consisting of a graphical editor with three different views for data, behavior, and user interface models as well as two code generators to Android [9, 21, 38] and iOS [7, 24]. Before developing these two code generators, we studied the design principles of mobile apps on Android and iOS and found many commonalities. Hence, generated apps share the same overall architecture independent of the chosen platform.

We demonstrate the potentials and limits of our MDD infrastructure on a variety of example apps. They include a conference app that guides participants through conferences, a museum app presenting mathematics from an entertaining perspective, a pictorial dictionary helping migrants learn German in a job-related setting, a TV Reminder app informing about the latest broadcast schedule, and SmartPlug, an app for controlling external hardware devices.

This article is an extended version of [48, 50]. While those papers introduce the core ideas of our approach, this article additionally comprises a detailed domain analysis along a feature model for mobile apps, a more detailed description of the MDD infrastructure, code generation not only to Android but also to iOS, and a discussion of example apps to illustrate the approach.

This paper is structured as follows: In the next section, the considered domain of mobile apps is presented along a feature model. In Sect. 3, we present our language design and discuss it along the design guidelines for domain-specific languages presented in [33]. Section 4 presents the developed MDD infrastructure consisting of a graphical model editor and two code generators to Android and iOS. Section 5 presents several example apps. Finally, Sects. 6 and 7 discuss related work and conclude this paper.

2 The mobile applications domain

Mobile apps are developed for diverse purposes—from mere entertainment to serious business applications. While we focus mainly on data-oriented business apps, these may be enriched by entertainment and educational elements, or by sensor and external hardware access.

Requirements elicitation. First, we consider the requirements for a reference app, namely a conference app (see screen shots in Sect. 5.1). A conference app can support participants during their visit to a scientific conference. The app should be the digital counterpart of a printed conference program but support additional use cases for searching and bookmarking events and navigating through the venue. Thus, such an application holds information about the venue, available rooms, planned sessions, the presented papers and talks, the speakers, and the authors of presented papers. Conference participants can browse session overviews and find detailed information about the presented papers and speakers. Since the mentioned data are mostly static, except for favorite sessions saved by participants, the app should be delivered with an initial set of data and be able to work permanently off-line. Participants can create their personal schedule by selecting events from (parallel) sessions ahead of their participation. The conference app would add a selected event to the installed calendar app, and the user can easily check (or receive a notification) before the next event takes place. The conference app can also include multimedia files. For example, a venue can have different rooms on different floors or in different buildings. To find these locations, a participant needs a map integrated with the app. Finally, the conference app should have a layout that conforms to the conference-specific look and feel and is optimized for smartphones (e.g., smaller display sizes). As claimed earlier, the considered apps should support role-based app variability. Conference organizers may also be in the role of providing users. They can use a different instantiation of the conference app to create the initial data set (rooms, sessions, papers, authors, etc.). Thus, they have additional use cases, which provide write access to the mentioned entities. The organizers’ version may use a different style model, because this user group prefers different devices such as tablets or TV sticks with an external keyboard to enter the data more comfortably.

Besides these requirements coming from data-oriented apps, there are further requirements, which will be revisited later when presenting the case studies (see Sect. 5.2f): Mobile devices offer a lot of sensors that can be accessed by a mobile application. A particular case is using the built-in camera to recognize objects and augment live images with different types of virtual objects, which is called augmented reality (AR) [29]. This is useful for industry applications (e.g., issuing work instructions) as well as for the education and tourism sectors. Finally, mobile applications are often not self-contained systems but rather interact with different types of hardware and software. Thus, our apps should be able to access external hardware via standard interfaces (e.g., WiFi, Bluetooth).

Fig. 2 Feature model to classify features that are particularly relevant for the model-driven development of mobile apps (shaded features are supported by our approach)

Feature model. Based on given taxonomies [15, 26, 34] to classify properties that are particularly relevant for the development of mobile apps, we propose the feature model shown in Fig. 2 as a combined result of existing work and properties discovered in our domain analysis.

To summarize, we are heading toward model-driven development of native mobile apps for different platforms (Android, iOS) that support runtime configuration (runtime model) of mobile apps for different user and system contexts. According to the model-driven approach, the app developer specifies the mobile app by creating an app model at design time (design model). The app developer can either use pre-defined model elements (e.g., CRUD) or compose different modeling elements to create more detailed functionality that better matches the requirements. This aims at rapid prototyping with an ensuing refinement. Since device characteristics (e.g., screen size), device types, and the targeted user group are not known at design time, the app can be flexibly configured at runtime. Different user roles (e.g., provider and end user) create the configuration of a mobile application concerning functionality, user interfaces, and data. Otherwise, a default runtime configuration is derived from the design model. We focus on the navigation, education, and production domains rather than gaming and communication applications. The generated apps also support the integration of other services (e.g., calling other apps, using Web services). Given that such apps should be runtime adaptable in terms of user and system contexts, their architecture must support a dynamic instantiation of user interfaces and processes. Different runtime models support this context-aware modeling approach, as described by Degrandsart et al. [25]. Finally, the requirement of working off-line leads to local data management. In order to support different network conditions (as part of the system context) in a dynamic manner, the generated app architecture supports hybrid data management. In particular, the app can work off- and online according to changing network conditions.
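The hybrid data management described above can be sketched as follows. This is an illustrative, language-neutral sketch under assumed names (`HybridStore`, `save`, `set_online`), not the generated app architecture itself: writes always go to the local store, and changes made off-line are synchronized once the network is available again.

```python
class HybridStore:
    """Illustrative sketch of hybrid data management: all writes go to a
    local store; a remote sync is attempted only while online."""

    def __init__(self):
        self.local = {}        # always-available local data
        self.remote = {}       # stands in for a server-side store
        self.pending = []      # keys changed while off-line
        self.online = False

    def save(self, key, value):
        self.local[key] = value
        if self.online:
            self.remote[key] = value
        else:
            self.pending.append(key)   # defer sync until reconnected

    def set_online(self, online):
        self.online = online
        if online:                     # flush deferred changes
            for key in self.pending:
                self.remote[key] = self.local[key]
            self.pending.clear()

store = HybridStore()
store.save("session1", "Opening Keynote")   # off-line: local only
store.set_online(True)                      # reconnect: pending data synced
```

The point of the sketch is that the app keeps working off-line on local data and transparently catches up with the remote store when connectivity returns.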

Throughout our project work, the conference app described above was used as a reference application for the development of an MDD infrastructure. In the beginning, we analyzed and optimized this app in order to approximate a best practice solution in prototypical re-implementations. Thereafter, we used it to test the developed MDD infrastructure by modeling it and comparing the generated app with the original one [49]. Due to space limitations, we have chosen a smaller example to be used as the demonstration object throughout this paper. This example shows a number of important features of our approach:

Fig. 3 Screen shots of the phone book app. a Main menu with Manage Persons process (CRUD functionality). b Main menu with individual Create, Edit, Delete Person processes. c Near to me process. d Call Person process

Running example. One of the core apps for smartphones is a phone book for managing personal contacts. In the following, we show a simple phone book app for adding, editing, and searching contact information about persons. Figure 3 shows selected screen shots of the phone book app, as generated by our infrastructure. Little arrows indicate the order in which views are shown. The first sub-figure (Fig. 3a) shows the main menu containing a standard CRUD process (Manage Person) to create, edit, and delete persons. The standard behavior and user interface for this task have been generated from a simple CRUD process (see Fig. 9b), which the app modeler has created before. Instead of using standard CRUD processes, the app modeler may create more individual processes (see Fig. 9c for an individual creation process) to cover further requirements (e.g., individual navigation and styles). Figure 3b shows individual CRUD processes from the user’s perspective. These variants can be instantiated at runtime. Moreover, the app considers the user location context to find contacts near the current location of the user (Fig. 3c). Phone numbers are connected with the phone app in such a manner that whenever the user selects a phone number, it automatically starts dialing (Fig. 3d).

Fig. 4 Modeling approach

3 Language design

The core of an MDD infrastructure is the domain-specific modeling language. In the following, we first present the main design decisions that guided us to our modeling language for mobile applications. Thereafter, we present the defining meta-model including selected well-formedness rules restricting allowed model structures. To illustrate the language, we show selected parts of a simple phone book app model. Finally, the presented modeling language is discussed along design guidelines for domain-specific languages.

3.1 Design decisions

Based on our domain analysis, we want to support the generation of mobile apps that can be flexibly configured by providing users. This requirement is reflected in our modeling approach by distinguishing two kinds of models: app models specifying all potential facilities of apps and provider models defining the actual app variants. Figure 4 illustrates this general modeling approach. While app models are used to generate Android and iOS projects (1) to be deployed afterward (2), provider models are interpreted by the generated Android and iOS apps (3). Provider models can therefore be used to change functionality without redeploying the app. Provider models can be executed in two ways: usually, a provider model is interpreted at runtime, since it does not, and need not, exist at build time. But it is also possible to make it available at build time by adding it to the resources of the generated app projects. It will then be considered in the build process.

The general approach to the modeling language is component based: an app model consists of a data model defining the underlying class structure, a GUI model containing the definition of pages and style settings for the graphical user interface, and a process model defining the behavior facilities of an app in the form of processes and tasks. The data model and the GUI model are not directly linked, but the process model depends on both sub-models (see Fig. 4) by referring to their elements. The GUI model contains just an abstract definition of pages (see Fig. 7) instead of detailed layout descriptions. The data model indirectly defines the structure and the default layout of the user interface. (Fig. 20 shows an example of an automatically generated user interface [42].)
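The dependency structure of the three sub-models can be sketched as follows. The class names are illustrative, not the actual Ecore definitions: the process model refers to elements of both the data and the GUI model, which themselves remain independent of each other.

```python
from dataclasses import dataclass, field

@dataclass
class DataModel:
    classes: list                    # e.g., ["Person", "Address"]

@dataclass
class GuiModel:
    pages: list                      # abstract page definitions, no layout

@dataclass
class InvokeGui:                     # a process task referring to a GUI page
    page: str

@dataclass
class ProcessModel:
    tasks: list = field(default_factory=list)

@dataclass
class AppModel:
    data: DataModel
    gui: GuiModel
    processes: ProcessModel

app = AppModel(
    data=DataModel(classes=["Person", "Address"]),
    gui=GuiModel(pages=["EditPage", "ViewPage"]),
    processes=ProcessModel(tasks=[InvokeGui(page="EditPage")]),
)
# only the process model references GUI elements; data and GUI stay unlinked
assert all(t.page in app.gui.pages for t in app.processes.tasks)
```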

A provider model contains an object model defining an object structure as an instance of the data model, and a style model defining explicit styles and pages for customized graphical user interfaces. Finally, it contains a process instance model selecting the processes of interest and providing them with actual arguments to specify the customized behavior of the intended app variant. Similarly to the app model, the object and style models are independent of each other but are used by the process instance model.

For the design of the modeling language, we follow the overall credo: “Model your app as abstract as possible and as concrete as needed.” This means that the standard design and behavior of an app can be modeled pretty abstractly, which may be very useful for rapidly prototyping mobile apps. The more individual the design and behavior of the intended app shall be, the more details have to be given in the app model. In particular, all special styles, pages, and processes that may be used in the intended app have to be defined in the app model. Since provider models shall be defined by domain experts, they are completely domain specific and follow the pre-defined app model. Provider models support the development of software product lines in the sense that a set of common features is shared and some role-based variability is supported. Differences between considered app variants are modeled separately by different provider models.

We reuse existing modeling languages as far as possible. This applies, for example, to the definition of data structures. Data modeling has become mature and is well supported by the Eclipse Modeling Framework (EMF) [46]. Hence, it is also used here to define the data model of an app. Generator-specific information (of which there is little so far) is given by annotations.

Fig. 5 Data model of the simple phone book app

The GUI model specifies views along their purposes, such as viewing and editing an object, searching objects from a list and showing search results, performing a login, and choosing a use case from a set. A GUI model is usually not intended to specify the inherent hierarchical structure of UI components as done in rich layout editors like the Interface Builder [41], the Android Common XML Editor [27], and Android Studio [55]. However, the model can be gradually refined to obtain more specificity in the generated app. Style settings are specified independently of views and follow the same design idea, i.e., the more the default look and feel is used, the more abstract the model is.

Processes and tasks are modeled similarly along their purposes, i.e., different kinds of processes are available covering usual purposes such as CRUD functionality (create an object, read all objects, update or edit an object, delete an object) including searching, choosing processes, as well as invoking GUI components, operations, and processes. More specific purposes may be covered by the well-known concept of libraries, i.e., a basic language is extended by language components for different purposes as done for LabView [20].

To support the security and permission concepts of mobile platforms, the process model includes platform-independent permission levels. The permission concept is fine-grained (i.e., on the level of single tasks); nevertheless, some platforms like Android support only coarse-grained permissions (i.e., on the level of applications). Another security-related feature is the user-specific instantiation of processes. Features of an application can potentially be disabled by a restricted process instance model.
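Mapping the fine-grained, task-level permissions onto a coarse-grained platform essentially amounts to taking the union over all tasks of all processes. A minimal sketch with illustrative permission names and process representation (not the generator's actual data structures):

```python
def app_permissions(processes):
    """Collect task-level permissions into one app-level set, as needed
    for a coarse-grained permission declaration (e.g., an Android manifest)."""
    required = set()
    for process in processes:
        for task in process["tasks"]:
            required |= set(task.get("permissions", []))
    return required

processes = [
    {"name": "NearToMe", "tasks": [{"permissions": ["GPS", "NETWORK"]}]},
    {"name": "CallPerson", "tasks": [{"permissions": ["PHONE"]}]},
]
app_permissions(processes)   # the union: GPS, NETWORK, and PHONE
```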

3.2 Language definition

After having presented the main design decisions for our modeling language, we now focus on its meta-model. It is defined on the basis of EMF and consists of three separate Ecore models bundled in one resource set. While the data model is defined by the original Ecore model, two new Ecore models have been defined to model the behavior and user interfaces of mobile apps. In addition, we present an instance model of the language meta-model, namely the app model of the simple phone book app introduced in Sect. 2. We concentrate on selected model parts; the whole app model is presented at [14].

Data model. A given data model is equipped with domain-specific semantics. Data models are not only used to generate the underlying object access but also influence the presentation of data through the user interface. Sub-objects, for example, lead to a tabbed presentation of objects, attribute names are shown as labels (if not redefined), and attribute types define the appropriate kind of edit element, e.g., text fields, check boxes, or spinners. Furthermore, data models determine the behavior of pre-defined CRUD processes in the obvious way (see Fig. 20). Class and attribute names are not always well suited to be viewed in the final app. For example, an attribute name has to be a string without blanks and other separators, while labels in app views may consist of several words, e.g., “Mobile number.” In such a case, an attribute may be annotated with the intended label.
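The derivation of default edit elements from the data model can be illustrated as follows. This is a sketch under assumed type and widget names (the Ecore type names are real, the widget mapping is illustrative, not the generator's actual one): the widget is picked from the attribute type, and the label comes from a `label` annotation, falling back to the attribute name.

```python
# illustrative mapping from Ecore attribute types to edit widgets
WIDGET_FOR_TYPE = {
    "EString": "text_field",
    "EBoolean": "check_box",
    "EEnum": "spinner",
}

def edit_element(attr_name, attr_type, annotations=None):
    """Pick an edit widget from the attribute type and derive the label
    from a 'label' annotation, falling back to the attribute name."""
    annotations = annotations or {}
    label = annotations.get("label", attr_name)
    return {"label": label,
            "widget": WIDGET_FOR_TYPE.get(attr_type, "text_field")}

edit_element("mobileNumber", "EString", {"label": "Mobile number"})
# → {'label': 'Mobile number', 'widget': 'text_field'}
```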

Example

(Data model of the simple phone book app). The simple data model is the Ecore model depicted in Fig. 5. Structuring the contact data into the classes Person and Address has the advantage that not too much information is presented in one view. Phone book is just a container for Persons and is not intended to be viewed.

Listing 1 Custom code annotation implementing callMobileNumber() of class Person

Listing 1 shows a custom code annotation, which implements the method callMobileNumber() of the class Person. The annotated code is automatically incorporated by the code generator.

Fig. 6 Ecore model for defining graphical user interfaces of mobile apps

Graphical user interface model. The main part of the meta-model for graphical user interface models is shown in Fig. 6. Different views in user interfaces of mobile apps are modeled by different kinds of pages (e.g., ViewPage, EditPage, MapPage), each having a pre-defined (generic) structure of UI components and serving a specific purpose. For example, the purpose of an EditPage is to edit an object (e.g., an Address). Our GUI model reuses the existing data model, which holds the description of the objects. Additionally, app modelers can set style and presentation properties. The different style setting elements of our GUI model provide this aspect of presentation. These elements influence the style of Pages (PageStyleSetting) in general and of Menus (MenuStyleSettings), Lists (ListStyleSettings), and Selections (SelectionStyleSettings) in particular. A dialog sub-model does not exist in our modeling approach; this conversational aspect of the user interface is covered by pages that implicitly contain the necessary dialogs.

Example

(User interface model of simple phone book app). The user interface of our phone book app is modeled in Fig. 7. This part of the app model is pretty simple; it just contains a style setting, a menu, and five pages, namely a ProcessSelectorPage, an EditPage, a ViewPage and a SelectableListPage for Person objects, and a MapPage for Address objects. Note that we just add these pages to the model and use them to specify behavior but do not specify their structures.

Fig. 7 User interface model of the simple phone book app. a Pages and style settings of the simple phone book app. b Properties of ProcessSelectorPage. c Properties of EditPage

Fig. 8 Ecore model for defining mobile app behavior

Behavior model. Figure 8 shows the main part of the meta-model for behavior models of mobile apps. This meta-model is influenced by the language design of BPMN [2] and (WS)-BPEL [1]. Since BPMN does not itself provide a built-in model for describing data structures, we reuse the data model provided by EMF. Thus, it is natural to adapt the required BPMN/(WS)-BPEL model elements to our own EMF-based modeling language. Many of the BPMN/(WS)-BPEL language elements are dropped (e.g., the error handling of (WS)-BPEL and the events of BPMN). The standard set of behavior constructs is extended by CRUD functionality on the data model, input/output facilities referencing the GUI model, platform-independent permissions, and CRUD privileges, for which we did not find adequate constructs in BPMN and (WS)-BPEL.

Fig. 9 Process model of the simple phone book app. a Main process. b CRUD process. c Create person process. d Search person process. e Call person process

The main ingredients of a behavior model are processes, which may be defined in a compositional way. In particular, the composition of existing processes promises a scalable effort for process modeling. The model element InvokeProcess calls a sub-process. When invoking a process, the kind of invocation—synchronous or asynchronous—has to be specified. Long-lasting processes (e.g., processor-intensive or network-intensive processes) should be marked as asynchronous; these processes run in the background. Each process has a name and a number of variables that may also function as parameters. A parameter is modeled as a variable with a global scope, contrary to locally scoped variables. The body of a process defines the actual behavior, consisting of a set of tasks ordered by typical control structures and potentially equipped with permissions. Permissions indicate the required rights (e.g., network, file access, GPS) of the app. There is a number of pre-defined tasks covering basic CRUD functionality on objects (e.g., Create, Read, and Delete), control structures (e.g., If, If else, and While), the invocation of an external operation (InvokeOperation) or an already defined process (InvokeProcess), as well as the display of a page (InvokeGUI). While the task CrudGui covers the whole CRUD functionality with corresponding views, Create, Read, and Delete just cover single internal CRUD functionalities. Privileges can limit the object access (e.g., Read only, Create, Update) of the element CrudGui. An InvokeGUI task refers to a page defined in the user interface model. The ProcessSelector points to all processes that should be available in the main menu of the app (see the screen shots in Fig. 3a, b, first screen from the left).
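The compositional execution of processes can be sketched as a small interpreter. This is an illustrative sketch, not the generated runtime: task and process names mirror the model elements above, an `InvokeProcess` task dispatches its sub-process either synchronously or in a background thread, and an `InvokeGUI` task is reduced to logging the page it would show.

```python
import threading

class Interpreter:
    def __init__(self, processes):
        self.processes = processes    # process name -> list of tasks
        self.log = []

    def run(self, name):
        for task in self.processes[name]:
            self.execute(task)

    def execute(self, task):
        if task["type"] == "InvokeProcess":
            if task.get("async"):     # long-lasting: run in the background
                t = threading.Thread(target=self.run,
                                     args=(task["process"],))
                t.start()
                t.join()              # joined here only to keep the demo deterministic
            else:
                self.run(task["process"])
        elif task["type"] == "InvokeGUI":
            self.log.append("show " + task["page"])

processes = {
    "Main": [{"type": "InvokeProcess", "process": "SearchPerson",
              "async": False}],
    "SearchPerson": [{"type": "InvokeGUI", "page": "SelectableListPage"}],
}
interp = Interpreter(processes)
interp.run("Main")
interp.log   # ["show SelectableListPage"]
```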

Example

(Behavior model of the simple phone book app). The behavior of the phone book app is modeled by a process selector as main process that contains processes for all use cases provided. Figure 9a, b shows the processes Main, being a process selector, and CRUDPerson, covering the whole CRUD functionality for contacts. Figure 9c shows an individual process to create persons. Figure 9d shows the definition of a search process: first, a search pattern is created that may be edited in an EditPage; then it is passed to a Read task, resulting in a list of persons being viewed in a SelectableListPage. If a person is selected from that list, their details are shown in a ViewPage. Figure 9e shows how to connect to a phone app to call a person. After searching for a person, the operation callMobileNumber() is invoked on the selected Person object. Just a few lines of code are needed to start the corresponding Android activity or iOS service, i.e., the operation is implemented manually. At [14], the process NearToMe is shown, defining situation-dependent behavior in the sense that all persons in the phone book whose address is near the user's current position are displayed.

Since all three meta-model parts are Ecore models, each model element can be annotated to cover additional generator-relevant information or just comments.

Provider model. In order to reconfigure mobile applications at runtime, we use provider models (i.e., object model, process instance model, and style model), which are interpreted at runtime by the mobile applications. If the providing user does not create such a provider model, the app derives an (empty) default provider model from the app model. It makes no difference whether the provider models are created at design time (and bundled with the generated mobile app) or imported after installation of the mobile application. Consequently, a generated mobile app does not need to be reinstalled to cover device- or user-specific requirements. It is sufficient to load a respective configuration in the form of a provider model.
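Deriving an (empty) default provider model from an app model can be sketched as follows. The dictionary representation is an assumption for illustration only: no objects exist yet, all modeled processes are registered, and default styles apply.

```python
def default_provider_model(app_model):
    """Derive an empty default configuration from an app model: no objects,
    all processes registered, default styles."""
    return {
        "objects": {cls: [] for cls in app_model["classes"]},
        "process_instances": list(app_model["processes"]),
        "style": "default",
    }

app_model = {"classes": ["Person", "Address"],
             "processes": ["Main", "CRUDPerson", "SearchPerson"]}
default_provider_model(app_model)
# → {'objects': {'Person': [], 'Address': []},
#    'process_instances': ['Main', 'CRUDPerson', 'SearchPerson'],
#    'style': 'default'}
```

A providing user would then replace or extend this default, e.g., by populating the object model or restricting the set of process instances.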

Fig. 10 Provider models of the simple phone book app. a Object model. b Process instance model. c Style model. d Object model properties (Person). e Object model properties (Address). f Process instance model properties (ProcessSelector)

Example

(Provider model for the simple phone book app model). The object model of an initial provider model contains an empty phone book only; the process instance model just contains the main process. The object model changes whenever the list of contacts is modified by the user. Figure 10a, d, e shows a non-empty object model as an instance of the data model (see Fig. 5). Figure 10b, f shows the runtime configurations of available processes, which result in the configuration seen in Fig. 3a. In particular, Fig. 10f shows the processes that are available at runtime for a certain user group. This set can be rearranged in any way. The dynamic processing of a provider model is crucial to meet several requirements that arise with different contexts of use.

Well-formedness rules. To obtain consistent app models, we additionally need a number of well-formedness rules. In particular, the consistency between model components has to be taken into account. The main rules are listed below, formulated in natural language. The complete list of rules, formalized as OCL constraints, can be found at [14].

  1.

    There is exactly one process with name Main. This process is the first one to be executed.

  2.

    There is at least one task of type ProcessSelector in the Main process.

  3.

    A Process being registered in a ProcessSelector contains—potentially transitively—at least one task of type InvokeGUI or CrudGui.

  4.

    When invoking a process, the list of arguments has to be consistent with the list of parameters defined for that process w.r.t. number, ordering, and types.

  5.

    For a task InvokeGUI, the number, ordering, and types of input and output data as well as output actions have to be consistent with the type of page invoked. To invoke, for example, a MapPage, two double values are needed as output data; for a LoginPage, two strings are needed to show the user name and password, while a Boolean value as input data represents the result of a login trial.
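Rules 4 and 5 are essentially signature checks. Rule 4, for instance, can be sketched as follows; the dictionary representation of processes and arguments is an assumption for illustration, not the actual OCL formalization:

```python
def invocation_well_formed(arguments, process):
    """Check rule 4: the arguments of an InvokeProcess must match the
    process parameters in number, ordering, and type."""
    params = process["parameters"]
    if len(arguments) != len(params):
        return False
    # positional comparison covers both ordering and typing
    return all(arg["type"] == par["type"]
               for arg, par in zip(arguments, params))

search = {"name": "SearchPerson",
          "parameters": [{"name": "pattern", "type": "Person"}]}
invocation_well_formed([{"type": "Person"}], search)    # True
invocation_well_formed([{"type": "Address"}], search)   # False
```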

Listing 2 OCL output constraint for SelectableListPage
Listing 3 OCL input constraint for SelectableListPage

Listings 2 and 3 show the corresponding OCL constraints for the page type SelectableListPage (an instance of ListablePage). A SelectableListPage provides the selection of one element (e.g., a person) from a list of elements of the same type (e.g., persons). Figure 3d shows such a dialog for choosing a person to be called. The constraint is divided into an output (Listing 2) and an input (Listing 3) constraint. The output constraint describes the parameters to be passed to the page. In this case, the OCL constraint requires a single argument (size()=1), which has to be a list (upperBound=-1). The input constraint describes the parameters returned from the page. As expected, the OCL constraint requires a single return value (size()=1), which is a list element (upperBound=1). It conforms to the type of the list elements (eType) shown by the page.
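Read in a language-neutral way, the two constraints check multiplicities (an upperBound of -1 marks a list, of 1 a single element) and element types. A sketch, assuming a simple dictionary representation of typed parameters rather than the actual Ecore objects:

```python
def selectable_list_page_ok(outputs, inputs):
    """Mirror of the OCL constraints for SelectableListPage: exactly one
    output that is a list, exactly one input that is a single element,
    both with the same eType."""
    out_ok = len(outputs) == 1 and outputs[0]["upperBound"] == -1
    in_ok = len(inputs) == 1 and inputs[0]["upperBound"] == 1
    return out_ok and in_ok and outputs[0]["eType"] == inputs[0]["eType"]

# passing a list of Persons to the page, getting one selected Person back
outs = [{"eType": "Person", "upperBound": -1}]
ins = [{"eType": "Person", "upperBound": 1}]
selectable_list_page_ok(outs, ins)   # True
```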

Example

(Well-formedness of the simple phone book app model). Based on the excerpt of the process model shown in Fig. 9 and the GUI model depicted in Fig. 7, we show the well-formedness of the process SearchPerson in Fig. 9d, in particular of the task ChoosePersonFromResultList, an InvokeGUI task. This task offers a list of results from prior search tasks. The user can choose one result for a detailed view, offered by a SelectableListPage (defined in the GUI model).

The mentioned constraints (Listings 2 and 3) check whether the output type of the InvokeGUI task ChoosePersonFromResultList is a list (of Persons) and the input type is a single object of the same type Person. In this case, the constraints are satisfied and the code generator can generate a valid initialization of the SelectableListPage.

3.3 Discussion

After having presented the main features of our modeling language for mobile applications, we now discuss it with respect to the design guidelines for domain-specific languages stated in [33], the design guidelines for user interface description languages stated in [39, 45], and the feature model introduced in Sect. 2.

Design guidelines for DSMLs. The main purpose of our language is code generation. It shall be used mainly by software developers, perhaps together with domain experts and content providing users. The language is designed to be platform independent, i.e., independent of Android, iOS and other mobile platforms.

A decision between a textual and a graphical concrete syntax does not have to be taken: since we design the language with EMF, we can add a textual syntax with, for example, Xtext [19] or a graphical one with, for example, the Graphical Modeling Framework (GMF) [28, 44]. Currently, a graphical editor is provided, as presented in the next section. Developing a textual editor requires less effort and shall be added in the near future. We decided to reuse EMF for data modeling since it is very mature. Since we define our language with EMF, the Ecore meta-model can also be reused, together with its type system.

Next, we discuss the choice of language elements. Since all generated mobile apps shall share the same architectural design (detailed in the next section), the modeling language does not need to reflect the architecture. However, data structures, behavior, and user interface design are covered. Since we want to raise the abstraction level of the modeling language as high as possible, we have discussed each specific feature of mobile apps carefully to decide whether it can be set automatically by the generator or whether the modeler should care about it. For example, asynchronous execution of an operation is decided indirectly if the operation is classified as long lasting, but can also be set directly. Permissions are completely in the hands of the modeler since they depend on the operations executed. The authors of [33] emphasize that a language must be simple to be useful. Our language follows this guideline by avoiding unnecessary elements and conceptual redundancy, having a very limited number of elements in the core language, and avoiding elements that lead to inefficient code.

The concrete syntax has to be chosen carefully: For data modeling, we adopt the usual notation of class diagrams since it has proven to be very useful. Process models adopt the activity modeling style to define control structures on tasks since well-structured activity diagrams map the usual control structures very well. Notations for pages and tasks use typical forms and icons to increase their descriptiveness and make them easily distinguishable. Models are organized in three separate sub-models w.r.t. different system aspects, i.e., data model, process model, and GUI model. Moreover, data structures can be organized in packages, and processes can be structured hierarchically. However, processes and pages cannot be packaged yet. Not many modeling conventions have been fixed up to now (except for some naming conventions); more will be considered in the future.

There is one part in particular where the abstract and the concrete syntax of our language diverge: the definition of control structures for task execution. While the concrete syntax follows the notation of activity diagrams, the abstract syntax contains binary or ternary operations such as while loops and if clauses. This simplifies the handling of operations during code generation; however, such operations are unwieldy during modeling. There are no places where the chosen layout has any effect on the translation to the abstract syntax. Our language provides the usual modularity and interface concepts known from other languages: packages and interface classes in data models as well as processes and process invocations in behavior models.
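To illustrate the divergence, the following minimal Java sketch shows how a loop that is drawn as an activity diagram in the concrete syntax could be held as nested binary/ternary operations in the abstract syntax. All class and method names are our own illustration, not the actual meta-model:

```java
// Hypothetical abstract-syntax operations (names are ours): control
// flow is stored as nested terms rather than as nodes and edges.
public class AbstractSyntaxSketch {

    interface Task { String describe(); }

    static Task basic(String name) {
        return () -> name;
    }

    // A while loop as a binary operation: condition plus body.
    static Task whileLoop(String condition, Task body) {
        return () -> "while(" + condition + "){ " + body.describe() + " }";
    }

    // An if clause as a ternary operation: condition, then and else branch.
    static Task ifClause(String condition, Task thenTask, Task elseTask) {
        return () -> "if(" + condition + "){ " + thenTask.describe()
                   + " }else{ " + elseTask.describe() + " }";
    }

    public static String example() {
        // In the concrete syntax this is a decision node with a back
        // edge; abstractly it is one nested term a template can traverse.
        return whileLoop("hasNext",
                 ifClause("valid", basic("Read"), basic("Skip"))).describe();
    }

    public static void main(String[] args) {
        System.out.println(example());
    }
}
```

Representing control flow as nested terms lets a code-generation template emit one target-language statement per operation by simple recursion, which is harder on an explicit node-and-edge graph.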

Design guidelines for user interface description languages. According to the design and comparison criteria of user interface description languages (UIDLs) given in [39, 45], our graphical user interface definition reflects most of the stated criteria.

In order to position our GUI model with respect to existing work, we discuss the following criteria: First, the component model criterion requires a separation of the UIDL into sub-models (or aspects) such as a task model, domain model, presentation model, and dialog model. Our app model is structured in this way by design. The task model describes the different tasks to be accomplished by the user. Within our approach, we describe these tasks (e.g., filling out a form, taking a picture, searching some record) by corresponding page types (such as ViewPage, EditPage, and ARPage). Each page has a pre-defined, generic structure of UI components and serves a specific purpose. For example, the purpose of the ARPage is to compare the live image with a pre-defined pattern and to augment it with additional information such as text and images. The ARPage hides the technical details of the AR functionality from the modeler and reuses existing AR frameworks (e.g., MetaioSDK [4]).

The methodology criterion differentiates between (1) the specification of UIs for each of the different contexts of use and (2) the specification of a generic (or abstract) UI description for all the different contexts of use. Our approach provides both variants. A user interface (here a Page) can be reused in different contexts by style models being instances of GUI models. Alternatively, user interfaces (i.e., Pages) can be created for each context of use to, for example, support an individual layout or modify generated code.

Fig. 11
figure 11

Graphical editor for app models (process model being edited)

The tool criterion describes the existence of a translation tool. Such a tool translates a user interface description to a specific language or platform [42]. Our user interface description is automatically translated by the code generator. Related to this, the criteria of platforms and supported languages are also met by our code generators supporting different platforms (Android, iOS) and different languages (XML, Java, Objective-C).

The target criterion describes the ability of the user interface description to express variations according to the desired platform, user group, and environment. Our modeling approach supports user-specific targeting of processes and related user interfaces specified by provider models [48]. Thus, our GUI model has multi-user capability. Finally, we discuss the expressiveness of our GUI model: According to Navarre et al. [39], expressiveness includes data description, state representation, event representation, representation of time, concurrent behavior, and dynamic instantiation. Most criteria of the expressiveness cluster are provided at the code level or have not been in the focus of our design. To sum up, in line with our credo “Model as abstract as possible,” our GUI model is admittedly minimalistic, describing a user interface focused on its purpose.

Feature model. Finally, we discuss how the modeling language reflects the features present in our feature model in Fig. 2. The developed modeling language primarily allows the model-based development of mobile applications, which is a key feature. An additional runtime model supports the dynamic instantiation of user interfaces. Through this multi-level modeling approach, the features of user-specific contexts and device-specific adaptability are covered. The generated mobile application can be configured with runtime models for different types of devices (e.g., smartphones and tablets). The modeling language is appropriate for the domains of business, production, navigation, and education. Since the modeling language is platform independent, the available code generators compile the models to different platforms (currently Android and iOS). In line with the model-driven approach, which does not focus on technical details, the code generator determines the technical details and the architecture of the mobile applications (e.g., native applications with a runtime model interpreter). Dynamic data management (local or central) and responsiveness to changing network connections are handled at the level of code generation.

4 MDD infrastructure for mobile applications

Infrastructures for model-driven software development mainly involve editors and code generators. In this section, we present an MDD infrastructure for mobile applications as a prototypical implementation of the presented modeling language, along with a multi-view graphical editor and code generators for Android and iOS. While the language itself is based on EMF, the graphical editor is based on GMF [28]. Both code generators are written in Xtend [19]. The editor and the code generators are designed as separate Eclipse plug-ins. They use a common implementation of the abstract language syntax, including model validation, which is again packaged in plug-ins. While the generated Android projects can be built directly in Eclipse (using the Android development tools for Eclipse [10]), the generated iOS project folder must initially be imported into the XCode IDE [8] before it can be built. Afterward, all changes are published automatically to the XCode IDE after every run of the iOS code generator.

Table 1 Mapping between the core process model elements and platform-specific constructs

4.1 Graphical editor for app models

The graphical editor for app models comprises three different views for data modeling, process modeling (see Fig. 11), and GUI modeling. The existing Ecore diagram editor has been integrated for data modeling. Figures 7 and 9 show screen shots of edited pages and processes. As expected, changes in one view are immediately propagated to the other ones. While the concrete syntax of control structures for task execution follows the notation of activity diagrams, the abstract syntax contains binary or ternary operations such as while loops and if clauses. This divergence between the abstract and the concrete syntax of our language cannot be covered directly by mapping concrete model elements to abstract ones. Therefore, a slight extension of the modeling language has been defined and is handled by the editor, i.e., concrete models are mapped to extended abstract models that are translated into non-extended ones by a simple model transformation. Through the application of the well-known generation gap pattern [52, pp. 85–101], the standard presentation of GMF-based editors has been adapted to special needs such as special labels and icons.

According to our modeling approach (see Figs. 1, 4), the app modeler can generate runnable projects for different platforms from the platform-independent app model. Once an app model has been edited, it has to be validated before code generation, since the code generator is designed for correct models only. The modeler can either validate the model manually before activating the generator, or the generator validates the input model automatically before starting the generation.

4.2 Architecture of generated apps

Before presenting the mapping of the modeling elements to the constructs of the different target platforms and the code generation to Android and iOS in detail, we describe the overall, platform-independent architecture of generated mobile apps.

The architecture of generated mobile apps reflects the separation of data, process, and GUI aspects in app models (as shown in Fig. 4). Since we generate data-oriented mobile applications, the architecture of each generated mobile app has a data layer. This layer contains the modeled data entities (e.g., persons and addresses) and provides functionality to serialize and deserialize these objects. The data layer forms the model of the application.

The controller layer implements the behavior specified by the process model, i.e., it holds the application logic. It is the intermediary layer between the model and the view. The controller invokes the interactive user dialogs and processes the events returned by them. Since it should be possible to take the process instance model into account at runtime, the controller layer contains an interpreter for process instance models.

The presentation or view layer provides all interactive dialogs. It is the most platform-specific layer because it uses the graphical components provided by the target platform. Additionally, the graphical user interface must be dynamically configurable (e.g., according to device-specific features) at runtime. The dialogs in the view layer do not contain any calculation or navigation logic (except for input validation) and return all events to the controller.

Finally, the architecture of a generated mobile app implements a transaction concept. A process invoked from the initial process selector dialog opens a transaction on the model. If the user returns to this initial process selector dialog by confirming all steps, the transaction is committed. Otherwise, the user can cancel the transaction in any intermediate dialog, and the changes made are consequently lost.
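The transaction concept can be sketched as follows. This is a deliberately simplified, hypothetical snapshot-based realization (entity objects reduced to strings); the generated apps implement the same commit/cancel semantics on the provider model:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the transaction concept: a process works on a
// copy of the object model; confirming all steps commits, cancelling
// in an intermediate dialog discards the changes.
public class ModelTransaction {
    private final List<String> committed = new ArrayList<>();
    private List<String> working;           // snapshot while a process runs

    // Opened when a process is invoked from the process selector dialog.
    public void begin() { working = new ArrayList<>(committed); }

    public void add(String entity) { working.add(entity); }

    // User confirmed all steps back to the process selector dialog.
    public void commit() {
        committed.clear();
        committed.addAll(working);
        working = null;
    }

    // User cancelled in some dialog: changes are lost.
    public void rollback() { working = null; }

    public List<String> entities() { return List.copyOf(committed); }

    public static void main(String[] args) {
        ModelTransaction tx = new ModelTransaction();
        tx.begin(); tx.add("Person: Ada"); tx.commit();
        tx.begin(); tx.add("Person: Bob"); tx.rollback();  // discarded
        System.out.println(tx.entities()); // [Person: Ada]
    }
}
```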

4.3 Mapping of model elements to platform-specific types

Prior to the presentation of the code generators, we explain our mapping between the platform-independent modeling elements and the platform-specific types and technologies.

Mapping of the data model. The mapping of the data model is not platform specific since suitable technologies (e.g., platform-independent relational databases) are available for all targeted platforms. We may use different technologies to map the data model—for example, a file-based system (storing data, for example, in XML/XMI format) or a database system (using, for example, SQLite or MySQL). Consequently, the mapping of the object-oriented data model follows well-known concepts (e.g., object–relational mapping [17, Chapt. 14]) and is not discussed further here.

Mapping of the process model. Table 1 shows the important model elements of the process model (see Fig. 8) and their counterparts on the targeted platforms. We map the Process element to Services. Services have no graphical user interface and can be started and stopped. An InvokeGUI task calls a graphical user interface. Thus, it is mapped to platform-specific user interface constructs (e.g., Activity or UIViewController). The Create, Delete, and Read tasks are mapped to simple classes of the platform-specific programming language. An InvokeOperation task calls a method and does not differ from a normal method access. The mapping of the InvokeProcess task differs slightly between the supported platforms. While Android provides the Intent construct to call Services (or other apps), iOS has no counterpart. Thus, the InvokeProcess task is mapped in iOS to an instantiation of the corresponding service class. This mapping documents the main design decisions w.r.t. code generation, based on the domain experience of app developers and suggestions from the relevant literature. Since we used a set of reference applications (and re-implemented prototypes) to develop our language as well as to obtain appropriate code snippets for the code templates, we reuse this bottom-up mapping from code to language elements in reverse.

Mapping of the GUI model. In contrast to the clear mapping of the main process model elements, GUI model elements (see Fig. 6) cannot be mapped to platform-specific types in such a straightforward way. There are two reasons for this: First, the GUI and the style model (if available) are mainly interpreted by the mobile application at runtime. Thus, a direct mapping of modeling elements to platform-specific constructs is hardly possible. The generated apps contain generic code to interpret the model information; thus, neither the generated declarative descriptions of the GUIs nor the generated code contains hard-coded information about the GUI modeled earlier. Second, the generic code relates to several components of the user interface (e.g., labels, text views). The number of user interface components depends on the type of Page (e.g., ViewPage, EditPage, etc.) and the data model (see Fig. 20). Thus, the amount and location of generic GUI-interpreting code are indirectly affected by the app model.
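The idea of generic, model-interpreting GUI code can be sketched as follows: widgets are derived at runtime from the page type and the entity's attributes, so nothing about the concrete GUI is hard-coded. All names are illustrative assumptions, not the generated code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of generic GUI interpretation: the widget list is
// computed at runtime from the page type and the entity's attributes,
// instead of being hard-coded by the generator.
public class PageInterpreter {
    enum PageType { VIEW_PAGE, EDIT_PAGE }

    // Builds an abstract widget list for an entity's attributes.
    static List<String> buildWidgets(PageType type, List<String> attributes) {
        List<String> widgets = new ArrayList<>();
        for (String attr : attributes) {
            widgets.add("Label(" + attr + ")");
            // A ViewPage renders read-only text, an EditPage input fields.
            widgets.add(type == PageType.EDIT_PAGE
                    ? "TextField(" + attr + ")"
                    : "TextView(" + attr + ")");
        }
        return widgets;
    }

    public static void main(String[] args) {
        System.out.println(buildWidgets(PageType.EDIT_PAGE,
                List.of("title", "authors")));
    }
}
```

In this style, a change to the data model changes the rendered page without touching any generated GUI code, which is why the number and location of interpreting code depend only indirectly on the app model.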

Processing a provider model. The runtime configuration of processes is of particular interest. Some device types, such as an Android TV stick or a tablet, suggest different usage patterns than other device types such as smartphones and wearables, due to their screen size and auxiliary devices (such as a keyboard or pointing device). Given the difficulties of adapting the modeled processes to different usage contexts, we propose to configure them in a flexible way.

For example, content providers of a conference application need to do a lot of editing to provide data. They would use an Android stick or a tablet for editing. This configuration would make no sense for other device types (e.g., with small screen size) or other user groups (e.g., conference participants). For this reason, the content provider can hide these edit processes from the end user by configuring the process instance model.

Given the heterogeneity of the target devices (e.g., in screen size), it is not possible to generate “one” version of the user interface at design time (known as the predictive context) that would fit all contexts of use. To support this multi-target capability, the user interfaces must be adaptable at runtime (also known as the effective context). With regard to the unifying reference framework for multi-target user interfaces [22], our generated apps adapt to effective contexts through their style model. Another approach, which we no longer pursue, is to generate all effective variants of the style model, as proposed in [43, 47].
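Adaptation to the effective context can be sketched as a runtime selection of a style model variant based on device properties read at startup. The threshold and variant names below are illustrative assumptions:

```java
// Sketch of runtime adaptation to the effective context: the style
// model variant is chosen from device properties when the app starts,
// instead of generating one UI per context at design time.
public class StyleSelector {
    // Threshold and variant names are illustrative assumptions.
    static String selectStyle(double screenInches, boolean hasPointer) {
        if (screenInches >= 7.0 || hasPointer) {
            return "tablet-style";   // richer layout, editing-friendly
        }
        return "phone-style";        // compact layout
    }

    public static void main(String[] args) {
        System.out.println(selectStyle(10.1, false)); // tablet-style
        System.out.println(selectStyle(4.7, false));  // phone-style
    }
}
```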

4.4 Code generation for Android

The code generation process begins automatically after changing and saving a model (auto-generation). To avoid the processing of temporarily invalid models while editing them, the modeler can deactivate the auto-generation for the time being. The code generator produces at least two projects (see Fig. 12): an Android project <Project>.Android (e.g., Phonebook.Android) containing the Android app and an Android library project <Project>.Lib (e.g., Phonebook.Lib) which contains the data layer code. The Android library project is created by reusing the existing EMF generator that generates code for the EMF runtime. The generated code and the EMF runtime are directly applicable on the Android platform. The EMF generator becomes a sub-generator of the complete code generator and processes the Ecore data model separately. Alternatively, it is possible to use SQLite instead of EMF. The process and GUI models are translated by separate sub-generators written in Xtend.

Fig. 12
figure 12

Architecture of generated Android apps

The main Android project follows the usual model-view-controller architecture [36, 47] [37, 171 ff.] of Android apps. View components are mostly generated as app resources. The controllers contain the modeled application logic and occur in the form of activities (gui, gui.dialog), fragments (gui.<...>), adapters (adapter), services (gui), asynchronous tasks (asynctask), and simple Java classes (crud). While entity class interfaces are widely dispersed in these controllers, the access to runtime models is done exclusively via the model package. The model package acts as a data access layer and ensures that Android activities do not access runtime models directly. Therefore, the generated application architecture can easily be adapted to other technologies (e.g., relational databases, Web services) for the (de)serialization of provider models by just changing this layer (cf. code generation for iOS).

Further library projects (whose use is optional) encapsulate the functionality of map services (e.g., Google Play Services [11]) and AR functionality (e.g., MetaioSDK [4]).

All these projects are immediately compiled and then ready to start. By default, the memory card of the mobile device contains an initial provider model comprising an empty object model, an initial style setting, and an initial process instance model containing the main process with all the processes assigned to it in the app model. This provider model may be extended at runtime. After app model changes (which result in a regeneration of code), it might become partly invalid, depending on the kind of changes. If, for example, the process model has changed but the data model has not, the object model is still readable, but the process instance model is not. Supporting the automated migration of provider models is left to future research.

4.5 Code generation for iOS

The workflow to generate iOS code is nearly the same as for Android. A slight difference, however, is that the generated project must be exported from Eclipse and imported into the XCode IDE in order to build an iOS app.

Fig. 13
figure 13

Architecture of generated iOS apps

In contrast to the Android generator, the iOS generator currently creates only one project (Fig. 13). The code generator for iOS cannot reuse the EMF generator to process the Ecore data model since EMF-generated code is not applicable on the iOS platform. The missing functionality must be covered by the code generator for iOS instead. Hence, the EMF-equivalent code for iOS comprises entity classes deduced from the data model (ecoreI, ecoreI.impl) and corresponding data access objects (DAOs [54, 154]) to (de)serialize objects. As the model package indicates, generated iOS applications use a relational database (SQLite [12]) to store runtime models. An initial database is created from the data model using Teneo (which is based on Hibernate). This database contains an initial provider model. Similar to the Android platform, the database might become partly invalid after app model changes and the regeneration of code. The remaining packages follow the same architectural design as presented for the Android app. In this sense, they are platform-specific equivalents of the previously described Android packages.

To show maps, the generated iOS applications use the built-in Apple Maps service. A library project such as google-play-services.Lib is not necessary on the iOS platform, though in principle possible. Since Apple provides more and more services similar to Google's (e.g., iCloud/Google Drive, iMessage/Google Messenger), additional third-party libraries are less important for integrating a certain service. The only exceptions are third-party libraries which provide specialized functionality (e.g., AR functionality). Basic AR functionality will be covered by the iOS generator in the near future.

4.6 Discussing generated Android and iOS apps

First, we discuss how far the generated applications cover the features defined in Sect. 2. Then, we review the generated applications—the Android and iOS versions—and compare the generated projects and user interfaces.

Feature model. Having code generators for Android and iOS at hand, we can discuss how the generated apps reflect the features presented in our feature model (Fig. 2). As stated earlier in the discussion of the modeling language, some features are only covered implicitly by the code generators. The platform feature is determined by selecting the corresponding generator (for Android or iOS) because both code generators work independently. Both generators cover the mandatory architecture feature by generating native apps. Apps of both platforms include a runtime model interpreter. The code generators cover the mandatory data management feature in different ways. Apps of both platforms provide local and central data management, but only Android apps can switch between them at runtime. Moreover, Android can use more frameworks (e.g., file-based XMI/XML persistence), while iOS is more limited and mostly uses relational persistence (e.g., SQLite).

Fig. 14
figure 14

Sitemaps of the process CRUD-Person. a User interaction in Android. b User interaction in iOS

Similarity of the generated apps. The review of the generated prototypes is based on the shared app model of the phone book example. Both the Android and the iOS generator process the same app model. The process model includes 12 processes and 47 tasks. The data model contains three classes (see Fig. 5). Finally, the user interface model given in Fig. 7 contains five pages of different types and appropriate style settings. From a structural point of view, our review of the selected app model confirms that both generated apps follow the presented architecture. Apart from technological deviations, we always find technical and logical counterparts in both architectures. From a dynamic point of view, we compare the site maps of the prototypes, the runtime artifacts (i.e., object, process, and style model), and the transactional behavior. Figure 14 shows an excerpt of the site map of the prototypes. For the 10 user-interacting processes, we find equal site maps, as demonstrated in Fig. 14 for the process CRUD-Person. The transactional behavior of both prototypes is nearly the same. The only difference is the underlying persistence framework: the Android prototype uses EMF libraries to load and store runtime models (XML/XMI), while the iOS prototype uses an SQLite database.

5 Case studies

In this section, we demonstrate the potential and limits of our MDD infrastructure on the basis of a variety of mobile applications. The Conference App guides participants through conferences. It was made available to all participants of the MoDELS 2014 conference in Valencia. The Mathematikum App is also a guide; it has been developed for the Mathematikum, a museum which presents mathematics from an entertaining perspective. Both apps are largely generated from given app models. Highly specific behavior that can hardly be abstracted into the models is incorporated by manually written code extending modeled EOperations. In addition, the second app is an example that incorporates AR functionality. The pictorial dictionary app (Wörter für den Beruf), which helps migrants learn German in a job-related setting, is an example of a multimedia app integrating text, photos, and sound. The TV Reminder app visualizes the latest broadcast schedule by communicating with a Web server. In contrast to the data-driven apps introduced above, SmartPlug is an app to control external hardware devices (e.g., manageable power distribution units).

Fig. 15
figure 15

Conference app data model

5.1 Conference app

The Conference App is a typical example of an app whose basic functionality and navigation can be reused independently of specific content. Given the basic conference app as generated from an app model, it can be filled with data, layout style, and behavior to serve a specific conference. A provider model for participants of the MoDELS conference realizes read-only access to papers, sessions, rooms, etc., of this conference. Furthermore, sessions can be marked as favorites. So that a favorite session is not forgotten, a new entry is inserted in the user's calendar. Participants use this native app as a conference guide with a conference-specific look and feel. The app can be reused for other conferences (e.g., MoDELS 2016) without changing the code, simply by replacing the data and potentially the layout style.

Figure 15 shows the complete data model of the conference app. The basic elements of a conference, i.e., sessions, presented papers, and of course persons in their roles as authors or session chairs, are modeled straightforwardly in the Ecore data model. Class Conference is the overall container. It contains Sessions, Persons, Papers, Rooms and Venue. A Session is connected to the Room where it takes place and to all the Papers to be presented in that Session. Moreover, Persons are indicated as session chairs and authors. Note that there are a few operations such as getPlanFilename, initializeNotFavoredSearchPattern, and addToFavorites that are modeled as EOperations and have an Ecore annotation containing the code in their bodies. All other operations are generated automatically.

Fig. 16
figure 16

Main menu with default CRUD processes (for entities of types Institute, Paper, Person, Room, Session and Venue) and the standard navigation of CRUD processes for entities of type Paper

Fig. 17
figure 17

Main menu with default read processes (for entities of types Paper, Person, Room, and Session) and the standard navigation reading an entity of type Session

The CRUD use cases for creating, reading, updating, and deleting institutes, papers (see Fig. 16), persons, rooms, sessions (see Fig. 17), and venues proceed as follows: An entity can be selected from a list of entities (SelectableListPage). Simple selection of an entity just shows its details in a non-editable form (ViewPage). In this case, associations with other entities are shown in a tabular form. Before editing (updating) an entity, the user has to choose an entity from the list of available ones.

A long tap on the screen opens it in edit mode. To edit an entity (create or update), a single view is shown as well, allowing the user to edit all details (EditPage). Associations between entities can be set in a drop-down list (1:1 cardinality) or a list of checkboxes (1:n cardinality).
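The widget rule for associations can be expressed as a small selection function. Following the Ecore convention, an upper bound of -1 denotes an unbounded multiplicity; the widget names are illustrative:

```java
// Sketch of the association-widget rule stated above: a 1:1
// association becomes a drop-down list, a 1:n association a list
// of checkboxes. upperBound = -1 denotes an unbounded multiplicity
// (as in Ecore).
public class AssociationWidget {
    static String widgetFor(int upperBound) {
        // single-valued reference -> pick exactly one target entity
        if (upperBound == 1) return "DropDownList";
        // multi-valued reference -> select any subset of targets
        return "CheckboxList";
    }

    public static void main(String[] args) {
        System.out.println(widgetFor(1));   // DropDownList
        System.out.println(widgetFor(-1));  // CheckboxList
    }
}
```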

The app model contains the necessary processes for all app variants. The actual configuration of processes is done in the provider models, for example, for conference administrators and participants. A provider model is read at runtime and has no influence on the generated code; the generated code (i.e., the app) is exactly the same for all variants.
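Conceptually, a provider model acts as a runtime filter over the fully generated set of processes. A minimal sketch (process and role names are hypothetical, not taken from the conference app model):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of runtime process configuration via a provider model:
// the generated code contains all processes; the provider model
// only decides which of them a role gets to see.
public class ProcessConfiguration {
    // Hypothetical process instance model: process name -> roles
    // that are allowed to see it.
    static final Map<String, List<String>> VISIBLE_FOR = Map.of(
        "ReadSession", List.of("admin", "participant"),
        "EditSession", List.of("admin"));

    // Returns the processes visible to the given role, sorted by name.
    static List<String> processesFor(String role) {
        return VISIBLE_FOR.entrySet().stream()
                .filter(e -> e.getValue().contains(role))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(processesFor("participant")); // [ReadSession]
        System.out.println(processesFor("admin"));       // [EditSession, ReadSession]
    }
}
```

Since the filter runs over data, switching from the participant to the administrator variant requires only a different provider model, never a rebuild.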

Fig. 18
figure 18

Process AddFavorite

Fig. 19
figure 19

Add favorite

Fig. 20
figure 20

Generated graphical user interface of the process CRUDPaper

The diagram in Fig. 18 shows the AddFavorite process in detail. Figure 19 shows that all existing sessions are displayed in a list (SelectableListPage). The user can select a session, which is then added to the favorites. Furthermore, an entry (with the selected session data) is created in the calendar.

Figure 20 shows the generated activity layout for the EditPage EditPaper. As mentioned earlier, the app modeler does not define the inherent hierarchical structure of the UI components. This information is deduced from the data model. The app modeler only specifies that the EditPage EditPaper displays an object of type Paper (see Fig. 15) to be modified. The code generator produces a standard layout for this task. The complete source code, including the generated layout files, is available at [14].

5.2 Mathematikum app

Mathematikum is a museum with a huge collection of exhibits that explain complex mathematical topics in a very playful manner, making them accessible to laypeople and children. To follow this philosophy, the app requires a very intuitive and highly interactive user interface and should not just present mere text-based information statically. To meet this requirement, two game-like activities were implemented as EOperations. One of them is concerned with an exhibit where a ciphered text has to be decoded by clever guessing. Successful guesses are shown to the users. Moreover, they see the text that still has to be deciphered. The second game-like activity uses Caesar's cipher to encrypt short texts and send them by SMS or a messaging app like WhatsApp.

Fig. 21
figure 21

Object detection “Deutschlandtour” (Germany Tour)

Fig. 22
figure 22

Object detection “PI-Kreis” (PI-Circle)

Through augmented reality, users can get interactive guidance for perceiving exhibits. This functionality is used here to interactively explain the traveling salesman problem. When visitors try to solve this problem hands-on by building a tour through Germany, the app may give them tips to find the right solution (see Fig. 21). To demonstrate how \(\pi \) can be approximated, augmented reality features are used to create animated explanation scenarios (see Fig. 22).

5.3 Pictorial dictionary (Wörter für den Beruf)

The aim of this app is to support migrant learners with little or no knowledge of German and low literacy skills in learning vocation-related vocabulary. The content is supported by photos; the app provides pronunciation as well as various means to test the learner’s knowledge. Here again, two role variants are needed: language teachers, who provide the content (words, photos, and sound), and end users, who use the app to improve their job-specific language skills. Figure 23 shows the learner’s version of the app.

Fig. 23
figure 23

Screen shots of learning and test modes

5.4 SmartPlug

The SmartPlug app provides wireless remote control of home appliances and electronics. Users can turn electronic devices on and off that are attached to a manageable power distribution unit (e.g., NETIO 230B [3]) controlled by SmartPlug. SmartPlug logs the electricity usage and can forecast the energy costs based on the switching intervals. Figure 24 shows the main use cases of the app. The app uses manually written code (extending EOperations) and existing libraries (e.g., java.net.Socket) to establish the connection to the external device.
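A hand-written EOperation of this kind essentially boils down to opening a socket to the power distribution unit and sending a text command. The following sketch illustrates the idea with java.net.Socket; the host address, port, and command syntax are assumptions made for illustration only and would have to be taken from the device manual, not from this sketch.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Illustrative sketch of a hand-written EOperation body: switching an
// outlet on a networked power distribution unit over a plain TCP socket.
// The command syntax and the port number are assumptions (telnet-style
// control as offered by NETIO-like devices); the real protocol is
// defined by the device manual.
public class PlugSwitcher {

    // Builds the text command to switch a single outlet on or off.
    static String switchCommand(int outlet, boolean on) {
        return "port " + outlet + " " + (on ? "1" : "0");
    }

    // Opens a socket, sends the command, and returns the device's reply.
    static String sendCommand(String host, int port, String command) throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(command);
            return in.readLine();
        }
    }

    public static void main(String[] args) {
        String cmd = switchCommand(2, true);
        System.out.println(cmd); // port 2 1
        // On a real installation the command would then be sent, e.g.:
        // String reply = sendCommand("192.168.0.50", 1234, cmd);
    }
}
```

Separating command construction from transmission keeps the protocol-specific string in one place and makes the switching logic testable without a device.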

Fig. 24
figure 24

Main menu a with use cases Configure device, Switch devices, Show protocol, and Energy consumption. View Switch devices b shows the registered devices and their current states (in brackets). View Energy consumption c shows the calculated energy consumption broken down by device

5.5 TV Reminder

The TV Reminder app organizes the broadcast times of the user’s favorite TV shows. By browsing the list of all upcoming broadcasts, the user can view and select favorites. The selected favorites are shown in a separate list in ascending chronological order to give a quick overview of the upcoming broadcasts. Additionally, the user may import the selected favorites into the calendar to get a notification when a TV show begins. The app creates a calendar entry in accordance with the broadcast time and duration by using the calendar provider. Contrary to the apps presented above, the TV Reminder does not provide any means to create, update, or delete broadcast elements. The (read-only) object model containing the latest broadcast schedule is obtained from a Web server every time the application starts. Figure 25 shows the main use cases of the app.
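The platform-independent part of this logic, sorting favorites in ascending order of their broadcast time and deriving a calendar entry's end time from broadcast time plus duration, can be sketched as follows. The actual insertion happens via the Android calendar provider on the device; all class and field names below are hypothetical.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the TV Reminder's favorites handling (names are
// hypothetical): favorites are sorted ascending by broadcast time, and a
// calendar entry's end time is begin time plus broadcast duration.
public class Favorites {

    static class Broadcast {
        final String title;
        final LocalDateTime begin;
        final Duration duration;

        Broadcast(String title, LocalDateTime begin, Duration duration) {
            this.title = title;
            this.begin = begin;
            this.duration = duration;
        }

        // End time of the corresponding calendar entry.
        LocalDateTime end() {
            return begin.plus(duration);
        }
    }

    // Returns the favorites sorted ascending by broadcast time.
    static List<Broadcast> sortedFavorites(List<Broadcast> favorites) {
        List<Broadcast> copy = new ArrayList<>(favorites);
        copy.sort(Comparator.comparing(b -> b.begin));
        return copy;
    }

    public static void main(String[] args) {
        Broadcast news = new Broadcast("Evening News",
                LocalDateTime.of(2015, 3, 14, 20, 15), Duration.ofMinutes(30));
        Broadcast movie = new Broadcast("Late Movie",
                LocalDateTime.of(2015, 3, 14, 22, 0), Duration.ofMinutes(90));
        List<Broadcast> sorted = sortedFavorites(List.of(movie, news));
        System.out.println(sorted.get(0).title + " ends at " + news.end());
    }
}
```

On Android, the computed begin and end timestamps would then populate the calendar provider's event fields when the entry is inserted.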

Fig. 25
figure 25

TV Reminder app

5.6 Potentials and limits

Considering the example apps above, we learned that our domain-specific modeling language is best applied if the app focuses on data management, as, for example, the conference app does. Data structures, their UI representation, and the CRUD functionality can be modeled on a high abstraction level. In addition, our case examples show that individual behavior may be added, sensors may be used, external devices may be controlled, etc. The relatively small number of model elements compared to the large amount of generated code demonstrates a boost of productivity for this kind of mobile app. The situation may differ for apps that focus less on data management. The Mathematikum app, for example, contains a larger amount of game-like behavior that is not generated but coded manually. In particular, this app shows that code generation and manual coding can be integrated seamlessly. It also shows the limitations of our MDD approach in its current form. Although some high-level behavior can be specified by abstract modeling elements (e.g., CRUD processes) and simple logic can be modeled using the control structures (e.g., If-else, While, Assign), it is preferable to hand-code more complex behavior. The reason does not necessarily lie in a limited expressiveness of the modeling language. Rather, it is a matter of convenience to code complex algorithms by hand instead of modeling them on a similar abstraction level. Future work is needed to investigate manual code fragments with respect to schematically recurring parts and to extend the modeling approach accordingly.

6 Related work

The model-driven development of mobile applications is a young subject that has not yet been tackled much in the literature. Nevertheless, there are already some approaches, which we compare to ours in the following.

MD\(^2\) [30] is an approach to the cross-platform model-driven development [16] of mobile applications. As in our approach, purely native apps for Android and iOS are generated. Its domain of data-driven business apps differs slightly from ours: while MD\(^2\)-generated apps are based on a kind of app model only, our approach additionally offers provider models. Moreover, the underlying modeling languages differ in various aspects: the view specification of MD\(^2\) is structure oriented and rather detailed, i.e., views are specified on an abstraction layer similar to that of UI editors. In contrast, the GUI language of our approach is purpose oriented and thus lifted to a higher abstraction level. MD\(^2\) controller specifications show some similarities to and some differences from our process specifications. Similarly to our approach, action types for CRUD operations are provided, but it is not clear how additional operations beyond CRUD functionality can be invoked, such as starting a phone call by selecting a phone number. The generated mobile apps follow the MVC architecture pattern as well. While MD\(^2\) translates the data model to plain old Java objects (POJOs) with serialization facilities for server communication and generates a JEE application to be run on a server, our approach also supports off-line execution.

Two further MDD approaches focusing on data-centric apps are applause [6, 18] and ModAgile [13]. Both support cross-platform development, mainly for Android and iOS. In contrast to our approach, behavior is hardly modeled at all, and user interfaces are modeled at a rather fine-grained level.

Another kind of development tool for Android apps is the event-driven approach App Inventor [5], which provides a kind of graphical programming language based on building blocks. Building blocks represent input or output by simple components such as canvases and buttons. They can be enriched with parameters and used in logical structures. Development is concerned with the app’s behavior; the language does not support app development on a higher abstraction level. For example, pure CRUD functionality has to be designed step by step in every detail, while our approach can model CRUD functionality directly for a data model. A further event-driven approach is Arctis [35], which is based on activity diagrams. Like App Inventor, Arctis focuses on rather fine-grained behavior and/or UI specifications and largely neglects the modeling of data structures.

Besides the generation of native apps, there are several approaches to the model-driven development of mobile Web apps, which originated in the generation of Web applications. Although Web apps are platform independent by running in a Web environment, they face some limitations w.r.t. device-specific features due to the use of HTML5 [40, 53]. Approaches to the MDD of Web apps include mobl [31, 32] and a WebML-based solution by WebRatio [23]. Since we are heading toward apps that are off-line most of the time, as demanded by the domain analysis, Web apps are not well suited for that domain. Apps generated with our infrastructure can work both online and off-line: online in the sense of using a centrally installed database, calling Web services, etc., or completely off-line with a directly integrated local database such as SQLite.

To summarize, our approach supports the model-driven development of native apps by high-level modeling of data structures, behavior, and user interfaces while supporting the role-based configuration of app variants and thus differs considerably from approaches aiming at comparable goals.

7 Conclusion

Model-driven development of mobile apps is a promising approach to cope with the fast-paced technology development of several mobile platforms as well as a short time to market with support for several if not all noteworthy platforms. In this paper, a modeling language for mobile applications is presented that allows modeling mobile apps as abstractly as possible and as concretely as needed. Different user roles are not necessarily covered by one mobile app but may lead to several app variants. These may be configured at runtime, i.e., by content-providing domain experts, for (different kinds of) end users. The considered domains are business apps enriched by specific behavior elements such as interaction with sensors and other apps as well as entertainment elements such as little games. Example apps are tourist and conference guides as well as SmartPlugs.

Future work shall cover code generators for further platforms and language extensions toward new features such as flexible sensor handling and advanced augmented reality. To realize apps with transactional processes (e.g., manufacturing processes), further work will also consider the bidirectional communication between generated apps and backend servers. Moreover, generated apps shall be evaluated w.r.t. software quality criteria, especially usability, data and transaction management [51], energy efficiency, and security.