1 Introduction

Technological enhancements have enriched mobile devices with autonomous operating systems and customized user application support. Smart mobile devices (SMDs) are feature-rich, multifunctional computing and communication devices that have become universal replacements for Personal Digital Assistants (PDAs). SMDs combine the computing capabilities of a handheld computer with the communication capabilities of an ordinary cellular phone by providing multimodal connectivity and user application support [1]. SMDs are predicted to become the primary computing devices for various computationally intensive applications [2]. Examples of such applications include natural language translators [3, 4], speech recognizers [5, 6], optical character recognizers, image processors [7, 8], online computationally intensive games, video processing [9] and wearable devices for patients [10]. Such applications demand substantial computing power, storage capacity, and battery lifetime [11], yet SMDs remain limited in memory capacity, processing potential, battery lifetime and wireless network access.

MCC is a recent advancement that aims to extend cloud services to SMDs, augmenting the computing potential of resource-constrained SMDs [12, 13]. MCC addresses the resource limitations of SMDs by employing the resources and services of computational clouds.

Successful practices of cloud computing for stationary computers motivate leveraging cloud resources and services for SMDs. Cloud computing employs different service provision models for delivering cloud resources and services to SMDs [14]. Several online file storage services are available on cloud servers for augmenting the storage potential of client devices, such as Amazon S3 [15], Google Docs [16], MobileMe [17], and Dropbox [18].

Applications that are partitioned at different granularity levels into independent components are called elastic applications. Application partitioning is employed either statically or dynamically [21] for enabling intensive applications on SMDs. However, a number of issues arise [2, 3] in the development, deployment and management of current cloud-based distributed application deployment frameworks for intensive applications, which obstruct the optimization goals of distributed application processing in MCC [19-22]. Current offloading frameworks lack a distributed architecture for intensive mobile applications, which results in runtime partitioning of mobile applications to establish an ad-hoc distributed processing platform. Runtime partitioning of elastic mobile applications consumes additional computing resources for application profiling and solving [23]. The absence of a distributed architecture and the runtime partitioning required for outsourcing intensive components make the distributed platform resource-intensive and time-consuming. The limited resources of SMDs therefore necessitate lightweight procedures for the distributed deployment of intensive mobile applications, with minimal resource utilization when outsourcing the computational load to the cloud server node [21].

This paper proposes a Distributed and Elastic Application Processing (DEAP) model which addresses the issues of traditional application offloading frameworks [24-26]. The availability of centralized resources and centralized monitoring in cloud datacenters motivates explicitly configured client/server applications as an alternative to the complex and resource-starving runtime offloading of intensive mobile applications [20]. DEAP incorporates a distributed architecture that separates the intensive components of the mobile application at design time, which reduces the instances of runtime application partitioning and therefore minimizes its additional overhead. The graphical and mathematical models of DEAP are presented.

We formulate the performance of the elasticity attributes of DEAP by modeling the computational resource utilization of runtime application profiling and optimization. The performance gains of DEAP are evaluated through quantitative analysis of an intensive mobile application using the mathematical model. Analysis of the results shows the optimal and distributed nature of the DEAP model for intensive application deployment in MCC. Intensive mobile applications that are distributed by design and elastic by nature are promising alternatives to current offloading frameworks. Developing distributed applications on the basis of the DEAP framework yields substantial gains in the overall performance of application development, deployment and processing in MCC. The paper is organized as follows.

Section 2 discusses traditional application offloading frameworks. Section 3 presents the DEAP model in graphical and mathematical form to clearly explain the proposed solution. Section 4 evaluates the DEAP model through a quantitative analysis. Finally, simulation results are given in Sect. 5, followed by conclusions and future directions in Sect. 6.

2 Related Work

A number of related distributed and elastic application processing models have been proposed for offloading intensive applications, such as: (a) a decentralized virtual cloud computing environment for mobile devices [27], (b) a local surrogate based distributed computing platform [28], (c) a centralized cloud computing environment for mobile devices [29], and (d) a centralized cloud datacenter based computing environment [26]. Current computational offloading frameworks employ diverse techniques for establishing a runtime distributed application execution platform. For example, the virtual machine (VM) migration based Cloudclone framework [30] seamlessly offloads a cloned image of the applications running on the SMD to a nearby computer. The framework exploits various augmentation strategies for different types of applications and reduces the dynamic transmission overhead of application code by deploying a simple synchronization approach. Similarly, the VM based Cloudlets architecture [22] is deployed for process offloading. The framework exploits the tactic of copying the entire processing environment of the mobile device into a remote cloudlet. A cloudlet is a trustable remote computer which provides outsourced application processing services to the SMD. The VM based CloneCloud framework [31] partitions the application on a thread basis. The framework clones the mobile device's application processing environment onto a remote host, which raises issues of privacy and access control. The Mirror Server [32] augments SMDs by providing several types of services: security (file scanning), storage (file caching) and computation offloading. A mirror server is a powerful server configured at a Telecommunication Service Provider (TSP) that maintains a VM template for each of the different mobile device platforms.
The VM template for each mobile device is kept with default company settings, and a new VM instance is created for each offloaded component of the mobile application. A cloud datacenter based framework [33] employs a virtual machine based application offloading procedure. The framework deploys application level process migration on the Android platform: a fresh VM instance is created on the cloud server, and the state of the application is cloned to the newly created VM instance on the cloud server node.

Similarly, AIDE [34] is a dynamic distributed framework for resource-constrained devices. An application profiling component establishes the feasibility of offloading, and the partitioning component of AIDE partitions the application dynamically by following a partitioning policy. In [35] a middleware framework is proposed for dynamically sharing application processing between the cloud server and the mobile client. The framework implements both static and dynamic partitioning strategies. MAUI [25] uses a dynamic application profiling and partitioning approach to maximize energy saving for mobile devices. The framework is based on method state migration instead of method code migration. MAUI incurs the overhead of application profiling, dynamic partitioning, migration, and reintegration on the mobile device, which consumes computing resources heavily. MAUI does not support remote execution of virtualized methods that call native functions, and it requires programmers to annotate methods as REMOTABLE, an additional effort from the development perspective. Similarly, the elastic application model [26] is a middleware framework for elastic mobile applications. The framework is based on a dynamic distributed processing platform at the application level; weblets are deployed for dynamic partitioning and migration at runtime. The critical aspect is that the framework requires extensive application profiling, dynamic runtime partitioning, migration, reintegration, and rigorous synchronization on mobile devices for offload processing.

Existing computational offloading frameworks for MCC [24-26] are analogous extensions of pervasive computing models [27] or local distributed application execution models [28]. Therefore, current offloading frameworks are deficient in deploying a distributed system architecture for the development and deployment of applications for MCC. Current frameworks focus on establishing a runtime distributed platform, which consumes additional resources in the deployment and management of the distributed application processing platform.

Computing resources of the SMD are utilized in arbitration with cloud servers for the selection of the remote server node, dynamic application profiling, runtime resolution of critical conditions, application migration and reintegration, and rigorous synchronization with cloud servers for the entire duration of the distributed platform [23]. Consequently, current distributed application deployment models employ heavyweight procedures in leveraging the application processing services of computational clouds for SMDs. The mobile nature, compact design, limited computing potential and wireless medium of SMDs necessitate optimal and lightweight procedures for distributed application deployment in MCC.

3 Proposed Distributed and Elastic Application Processing (DEAP) Model

We propose the DEAP model for enabling computationally intensive applications on SMDs. DEAP is distributed from the development and deployment perspectives and elastic in nature. Its distributed features include an explicitly defined distributed architecture for deploying the distributed processing environment, while, considering the importance, versatility and robustness of current elastic application frameworks [25, 26], DEAP also retains the elasticity attributes. The attributes of elastic mobile applications include ad-hoc platform creation, partitioning of intensive components, adaptive offloading, and transparency in distributed application deployment [30]. DEAP is based on the design-time separation of the loosely coupled and tightly coherent components (represented as set S in Sect. 4) from the tightly coupled and loosely coherent components (represented as set C in Sect. 4) of the mobile application, together with the incorporation of elasticity attributes for dynamic processing management on the SMD.

The centralized management and availability of services in cloud datacenters motivate employing explicitly designed client and server components for intensive mobile applications in MCC. Therefore, the DEAP model is based on explicitly defined client and server components of the intensive mobile application. DEAP addresses the lack of a distributed architecture in the remote processing of intensive components of mobile applications. DEAP is designed with two objectives: (1) incorporation of a standardized distributed architecture for the design, development and implementation of intensive applications in MCC; (2) deployment of elasticity attributes for coping with the dynamic application processing load on SMDs. DEAP employs a two-tier architecture by explicitly defining the client and server components of the intensive mobile application at design time. The DEAP Client is composed of the location-aware, tightly coupled or slightly intensive components of the mobile application. The DEAP Server is composed of as many as possible of the loosely coupled, tightly cohesive, location-unaware and intensive components of the mobile application.

The client component of the DEAP architecture is composed of the SMD synchronizer, distributed middleware, profiler, optimizer, migrator and mobile node manager. The DEAP Client is designed to include the location-aware, slightly intensive or tightly coupled components of the intensive mobile application. The DEAP Client utilizes the services of the explicitly defined DEAP Server by using an inter-process communication (IPC) mechanism. Traditionally, a large number of distributed applications deployed over the Internet, such as Email, Facebook and Web applications, provide a thin client framework for client devices.

In such thin client frameworks, client devices provide the user interface while the processing logic is implemented on the server component of the application. However, the DEAP Client is distinct from traditional thin clients because of the enriched smartness of services on the client component of the application. DEAP is distributed because it has explicitly defined client and server components, and elastic because it supports runtime component offloading to the cloud server node.

Considering the mobile nature of SMDs and the intrinsic limitations of the wireless medium, DEAP processes the slightly intensive or tightly coupled components of the mobile application on the SMD, which contributes to the richness and smartness of local services and the offline usability of the mobile application. DEAP proposes two independent operating procedures for implementing the distributed platform for intensive mobile applications in MCC: the Primary Operating Procedure (POP) and the Secondary Operating Procedure (SOP).

The DEAP Client is capable of implementing processing logic on the SMD to enhance the smartness of local services. Figure 1 shows the sequence diagram for the POP of the DEAP client. In the POP, the DEAP client application invokes the preconfigured services of the server application running on the cloud server node. Once the execution of the remotely invoked services is completed, results are returned to the client application running on the mobile device. The synchronizer component of the DEAP model enables synchronization between the application running on the mobile device and the cloud server node. The primary communication mechanism in the POP is to invoke the processing of remote services through an IPC mechanism rather than by sending and receiving messages. However, whenever data transmission is required between the DEAP client and server applications, appropriate logic can be implemented for sending and receiving data. The POP of the DEAP client is a simple and optimal procedure for remote processing of the intensive components of the mobile application. POP exploits the explicitly designed DEAP server running on the cloud node and does not involve the overhead of runtime optimization such as profiling, optimization and migration.
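The POP flow can be sketched in Python. The `DEAPServer`, `DEAPClient` and service names below are hypothetical stand-ins for illustration only, and a direct method call stands in for the IPC mechanism the framework would actually use.

```python
# Minimal sketch of the POP: the client invokes a service that was
# explicitly preconfigured on the server at design time.  Class and
# service names are illustrative assumptions, not the framework's API.

class DEAPServer:
    """Cloud-side component exposing explicitly preconfigured services."""
    def __init__(self):
        self.services = {}

    def register(self, name, fn):
        self.services[name] = fn           # preconfigured at design time

    def invoke(self, name, *args):
        return self.services[name](*args)  # execute intensive component remotely


class DEAPClient:
    """Mobile-side component: local logic plus remote invocation."""
    def __init__(self, server):
        self.server = server               # stands in for the IPC channel

    def run_remote(self, name, *args):
        # POP: invoke the preconfigured remote service and return the
        # synchronized result to the mobile application.
        return self.server.invoke(name, *args)


server = DEAPServer()
server.register("ocr", lambda text: text.upper())  # placeholder intensive task
client = DEAPClient(server)
result = client.run_remote("ocr", "deap")
```

Because the server's services are fixed at design time, no profiling, partitioning or migration step appears anywhere in this flow.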

Fig. 1 Sequence diagram for the primary operating procedure of the DEAP model

Figure 2 shows the sequence diagram for the SOP of the DEAP framework. The SOP of DEAP is employed for coping with the dynamic application processing load on the SMD. The DEAP client employs the SOP when the computing requirements of the client application cannot be fulfilled on the SMD. To cope with such critical situations, the DEAP client activates the Profiler mechanism to identify the larger intensive components of the application, while the Optimizer component resolves the critical situation on the basis of the input provided by the Profiler. Similarly, the Migrator component arbitrates with cloud servers for the runtime migration of intensive components of the DEAP client application. It is important to highlight that the SOP does not utilize the services of the explicitly defined DEAP server; instead, the Migrator component arbitrates with cloud servers to obtain remote execution services on a casual basis.
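The Profiler/Optimizer interplay of the SOP can be mimicked with a short Python sketch. The component list, the `mi` (millions of instructions) values and the greedy selection policy are assumptions for illustration, not the framework's actual algorithm.

```python
def profile(components):
    """Profiler: rank components by processing intensity (MI)."""
    return sorted(components, key=lambda c: c["mi"], reverse=True)

def optimize(components, available_mi):
    """Optimizer (illustrative greedy policy): keep components while the
    SMD's available capacity permits; mark the rest for the Migrator."""
    keep, migrate, load = [], [], 0
    for c in profile(components):
        if load + c["mi"] > available_mi:
            migrate.append(c)          # Migrator offloads these at runtime
        else:
            keep.append(c)
            load += c["mi"]
    return keep, migrate

comps = [{"name": "ui", "mi": 50}, {"name": "filter", "mi": 400},
         {"name": "encode", "mi": 900}]
local, offloaded = optimize(comps, available_mi=500)
```

Here only the component that exceeds the remaining local capacity is handed to the Migrator, matching the SOP's goal of offloading on a casual basis rather than as the default path.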

Fig. 2 Sequence diagram for the secondary operating procedure of the DEAP model

Considering the availability of centralized resources and the management of centralized datacenters in computational clouds, an explicitly configured DEAP server is an appropriate alternative to the ad-hoc surrogate based remote servers of traditional offloading frameworks [27, 28]. Figure 3 shows the architecture of the DEAP model. The DEAP Server is deployed on the cloud server node, which eliminates the overhead of arbitrating for the selection of an appropriate remote host and of migrating intensive partitions of the mobile application. The server component of the DEAP architecture is composed of the cloud synchronizer, distributed middleware and cloud server node manager. The services of the DEAP server are accessible to the DEAP Client in the POP of the DEAP framework. The DEAP server utilizes the services of the distributed middleware for communication with the DEAP client. The Synchronizer component on both the SMD and the cloud server is responsible for synchronization between the DEAP client and server applications, and it provides different types of services in the POP and SOP. In the POP, the synchronizer coordinates the synchronization between the DEAP client application and the DEAP server application. In the SOP, the intensive components of the application are offloaded to the cloud server node by employing application level partitioning in the same way as traditional offloading frameworks [24-26].

Fig. 3 Architecture of the DEAP model

The offloaded components of the application are executed on a temporarily created server node in the cloud datacenter. In this scenario the role of the synchronizer is to coordinate between the elastic DEAP client mobile application and the remote offloaded components of the application. The responsibility of the Synchronizer is to ensure the consistency of transmissions between the DEAP client and the DEAP server in the POP, and between the DEAP client application and the temporarily allocated cloud server in the SOP. The mobile node manager and the server manager are responsible for communication between the SMD and the cloud datacenter. The mobile node manager copes with the challenges of mobility in the wireless environment, and the cloud server manager is responsible for ensuring the provision of cloud services to the SMD.

4 Analytical Model to Evaluate DEAP Model

In this section we present an analytical model to evaluate traditional elastic mobile applications and the DEAP framework. We model the overhead of runtime partitioning by formulating the computational resource utilization of application profiling and partitioning.

Mobile applications have a distinct framework, different from that of traditional applications for personal computers, and application frameworks also differ across mobile operating systems [36]. For instance, applications for the Android platform are composed of different components such as activities, services, content providers and broadcast receivers [37]. These components are the fundamental building blocks of Android applications: each component is a different point through which the system can enter the application, and each helps to define the overall behavior of the application. The following notations are used in the proposed analytical model for the DEAP framework.

Let \({\text{X}}\) be an intensive mobile application consisting of \({\text{n}} \in {\text{N}}\) operations or components such that \({\text{X}} \in {\text{U}}\) and \(\left| {\text{X}} \right| = {\text{n}}\), where \({\text{U}}\) represents the universal set and \({\text{N}}\) represents the set of natural numbers. Let \({\text{x}}_{\text{i}} \left( {{\text{i}} = 1,2, \ldots ,{\text{n}}} \right)\) be any single component in \({\text{X}}\); then \({\text{x}}_{\text{i}}\) can be defined as \(\left\{ {{\text{x}}_{\text{i}} \in {\text{U }}: \forall {\text{i}} \in {\text{N}}, \left| {\text{U}} \right| \ge 1} \right\}\) for \({\text{X}} = \left\{ {{\text{x}}_{1} ,{\text{x}}_{2} , \ldots , {\text{x}}_{\text{n}} } \right\}\).

The computational intensity of the application is the sum of the computational requirements of all the components in \({\text{X}}\). CloudSim [38] is a simulation toolkit for modeling such problems in cloud computing environments and for evaluating resource provisioning algorithms. This study adopts a similar means to model the memory (RAM) requirement and the processing intensity, i.e., the CPU requirement in millions of instructions (MI), of the application.

Let \({\text{T}}_{\text{m}}\) and \({\text{T}}_{\text{p}}\) be the total memory and processing requirements respectively, and \({\text{Y}}\) and \({\text{Z}}\) be the corresponding sets of memory sizes and processing lengths for \({\text{X}}\). Then \({\text{Y}} = \{ {\text{y}}_{\text{i}} \in {\text{N}}: {\text{y}}_{\text{i}} > 0, \forall {\text{i}}\}\), where \({\text{y}}_{\text{i}}\) is the memory (RAM) size of the \({\text{ith}}\) single component \({\text{x}}_{\text{i}} \left( {\forall {\text{i}}} \right)\). The total memory requirement of the application can then be defined as,

$${\text{T}}_{\text{m}} = \mathop \sum \limits_{{{\text{i}} = 1}}^{\text{n}} {\text{y}}_{\text{i}} ,\quad {\rm for}\quad {\text{y}}_{\text{i}} \in {\text{Y}}\quad {\rm and}\quad \left| {\text{Y}} \right| \ge 1.$$
(1)

Similarly, if \({\text{z}}_{\text{i}}\) represents the processing length (amount of CPU) of the \({\text{ith}}\) component \({\text{x}}_{\text{i}}\), then \({\text{Z}} = \{ {\text{z}}_{\text{i}} \in {\text{R}}: {\text{z}}_{\text{i}} > 0, \forall {\text{i}}\}\), where \({\text{R}}\) represents the set of real numbers. Hence the total processing requirement of the application can be defined as,

$${\text{T}}_{\text{p}} = \mathop \sum \limits_{{{\text{i}} = 1}}^{\text{n}} {\text{z}}_{\text{i}} ,\quad {\text{for}}\quad {\text{z}}_{\text{i}} \in {\text{Z}}\quad {\text{and}}\quad \left| {\text{Z}} \right| \ge 1.$$
(2)
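Equations (1) and (2) can be checked with a short Python sketch; the component values below are illustrative, not measurements.

```python
# Totals from Eqs. (1)-(2): each component x_i carries a memory size y_i
# (RAM, e.g. in MB, a natural number) and a processing length z_i (CPU,
# in MI, a positive real).  Values are illustrative assumptions.
components = [
    {"y": 12, "z": 150.0},
    {"y": 30, "z": 820.5},
    {"y": 8,  "z": 90.25},
]

T_m = sum(c["y"] for c in components)  # Eq. (1): total memory requirement
T_p = sum(c["z"] for c in components)  # Eq. (2): total processing requirement
```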

4.1 DEAP Model Based Mobile Application

The DEAP model is based on the design-time separation of the components of the application. A simplified depiction of a DEAP model based intensive mobile application is presented in Fig. 4. In the DEAP model, the overall intensive mobile application \(\left( {\text{X}} \right)\) is divided into two defined categories: the client application (\({\text{X}}_{\text{c}}\)) and the server application (\({\text{X}}_{\text{s}}\)). For each category, the model organizes the components of the application on the basis of their computational requirements, either memory size (i.e., \({\text{Y}}_{\text{c}}\) for \({\text{X}}_{\text{c}}\) and \({\text{Y}}_{\text{s}}\) for \({\text{X}}_{\text{s}}\)) or processing length (i.e., \({\text{Z}}_{\text{c}}\) for \({\text{X}}_{\text{c}}\) and \({\text{Z}}_{\text{s}}\) for \({\text{X}}_{\text{s}}\)). Typically, the client application is composed of the small intensive and tightly coupled components, whereas the server application is composed of the big intensive and loosely coupled components of \({\text{X}}\).

Fig. 4 A typical depiction of a DEAP model based mobile application

Let there be \({\text{n}}_{1}\) components in \({\text{X}}_{\text{c}}\) and \({\text{n}}_{2}\) components in \({\text{X}}_{\text{s}}\) such that \({\text{n}}_{1} + {\text{n}}_{2} = {\text{n}}\), \({\text{X}}_{\text{c}} \cup {\text{X}}_{\text{s}} = {\text{X}} \in {\text{U}}\) and \({\text{X}}_{\text{c}} \cap {\text{X}}_{\text{s}} = \emptyset\). Then, using the respective notations, the following expressions can be defined for the different aspects of the DEAP model.

For the DEAP client mobile application, let \({\text{T}}_{\text{m}}^{\text{c}}\) and \({\text{T}}_{\text{p}}^{\text{c}}\) be the total memory and processing requirements respectively, and \({\text{y}}_{\text{k}}\) and \({\text{z}}_{\text{k}}\) be the respective memory size and processing length of the \({\text{kth}}\) single component \({\text{x}}_{\text{k}} \in {\text{X}}_{\text{c}}\). Then, \({\text{Y}}_{\text{c}} = \{ {\text{y}}_{\text{k}} \in {\text{N}}: {\text{y}}_{\text{k}} > 0, 1 \le {\text{k}} \le {\text{n}}_{1} \}\) and \({\text{Z}}_{\text{c}} = \left\{ {{\text{z}}_{\text{k}} \in {\text{R}}: {\text{z}}_{\text{k}} > 0, 1 \le {\text{k}} \le {\text{n}}_{1} } \right\}\). So, \({\text{T}}_{\text{m}}^{\text{c}}\) and \({\text{T}}_{\text{p}}^{\text{c}}\) are determined as,

$${\text{T}}_{\text{m}}^{\text{c}} = \mathop \sum \limits_{{{\text{k}} = 1}}^{{{\text{n}}_{1} }} {\text{y}}_{\text{k}} ,\quad {\text{for}}\quad {\text{y}}_{\text{k}} \in {\text{Y}}_{\text{c}} \quad {\text{and}}\quad \left| {{\text{Y}}_{\text{c}} } \right| \ge 1,$$
(3)

and

$${\text{T}}_{\text{p}}^{\text{c}} = \mathop \sum \limits_{{{\text{k}} = 1}}^{{{\text{n}}_{1} }} {\text{z}}_{\text{k}} ,{\text{ for}}\quad {\text{z}}_{\text{k}} \in {\text{Z}}_{\text{c}} \quad {\text{and}}\quad \left| {{\text{Z}}_{\text{c}} } \right| \ge 1.$$
(4)

Similarly for the DEAP server mobile application, if \({\text{T}}_{\text{m}}^{\text{s}}\) and \({\text{T}}_{\text{p}}^{\text{s}}\) represent the total memory and processing requirements respectively, then they can be determined as,

$${\text{T}}_{\text{m}}^{\text{s}} = \mathop \sum \limits_{{{\text{l}} = 1}}^{{{\text{n}}_{2} }} {\text{y}}_{\text{l}} , \, \quad {\text{for}}\quad {\text{y}}_{\text{l}} \in {\text{Y}}_{\text{s}} \quad {\text{and}}\quad \left| {{\text{Y}}_{\text{s}} } \right| \ge 1$$
(5)

and

$${\text{T}}_{\text{p}}^{\text{s}} = \mathop \sum \limits_{{{\text{l}} = 1}}^{{{\text{n}}_{2} }} {\text{z}}_{\text{l}} ,\quad {\text{for}}\quad {\text{z}}_{\text{l}} \in {\text{Z}}_{\text{s}} \quad {\text{and}}\quad \left| {{\text{Z}}_{\text{s}} } \right| \ge 1.$$
(6)

where \({\text{Y}}_{\text{s}} = \{ {\text{y}}_{\text{l}} \in {\text{N}}: {\text{y}}_{\text{l}} > 0, 1 \le {\text{l}} \le {\text{n}}_{2} \}\) for the memory size \({\text{y}}_{\text{l}}\) of the \({\text{lth}}\) single component \({\text{x}}_{\text{l}} \in {\text{X}}_{\text{s}}\), and \({\text{Z}}_{\text{s}} = \left\{ {{\text{z}}_{\text{l}} \in {\text{R}}: {\text{z}}_{\text{l}} > 0, 1 \le {\text{l}} \le {\text{n}}_{2} } \right\}\) for the processing length \({\text{z}}_{\text{l}}\) of the \({\text{lth}}\) single component \({\text{x}}_{\text{l}} \in {\text{X}}_{\text{s}}\).

As \({\text{X}}_{\text{c}} \subset {\text{X}} \in {\text{U}}\), \({\text{X}}_{\text{s}} \subset {\text{X}} \in {\text{U}}\), \({\text{X}}_{\text{c}} \cup {\text{X}}_{\text{s}} = {\text{X}} \in {\text{U}}\) and \({\text{X}}_{\text{c}} \cap {\text{X}}_{\text{s}} = \emptyset\), the memory size of the standalone mobile application is equal to the sum of the memory sizes of the client and server applications. Equation (1) can now be expressed as,

$${\text{T}}_{\text{m}} = \mathop \sum \limits_{{{\text{i}} = 1}}^{\text{n}} {\text{y}}_{\text{i}} = \mathop \sum \limits_{{{\text{k}} = 1}}^{{{\text{n}}_{1} }} {\text{y}}_{\text{k}} + \mathop \sum \limits_{{{\text{l}} = 1}}^{{{\text{n}}_{2} }} {\text{y}}_{\text{l}} \quad \left( {{\text{since}}\;{\text{n}} = {\text{n}}_{1} + {\text{n}}_{2} } \right).$$

Hence, by employing Eqs. (3) and (5), the overall total memory size of the standalone intensive mobile application is determined as,

$${\text{T}}_{\text{m}} = {\text{T}}_{\text{m}}^{\text{c}} + {\text{T}}_{\text{m}}^{\text{s}}$$
(7)

This standard Eq. (7) is useful for validating the total memory size of client application (i.e., \({\text{T}}_{\text{m}}^{\text{c}} = {\text{T}}_{\text{m}} - {\text{T}}_{\text{m}}^{\text{s}}\)) and the total memory size of server application (i.e., \({\text{T}}_{\text{m}}^{\text{s}} = {\text{T}}_{\text{m}} - {\text{T}}_{\text{m}}^{\text{c}}\)).
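A minimal Python sketch of the design-time split, using illustrative component sizes, verifies the identity in Eq. (7) and both validation forms:

```python
# Design-time split of X into the DEAP client (X_c) and server (X_s)
# parts.  The per-component memory sizes (y) and processing lengths (z)
# below are illustrative assumptions.
X_c = [{"y": 10, "z": 100.0}, {"y": 6, "z": 40.0}]     # n1 = 2 components
X_s = [{"y": 64, "z": 2500.0}, {"y": 32, "z": 900.0}]  # n2 = 2 components

T_m_c = sum(c["y"] for c in X_c)  # Eq. (3): client memory requirement
T_m_s = sum(c["y"] for c in X_s)  # Eq. (5): server memory requirement
T_m   = T_m_c + T_m_s             # Eq. (7): standalone application memory
```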

Moreover, let \({\text{u}}\) represent the percentage of memory saved by outsourcing the intensive components of the application at design time. Then, using (7), \({\text{u}}\) can be calculated as,

$${\text{u}} = \frac{{{\text{T}}_{\text{m}}^{\text{s}} }}{{{\text{T}}_{\text{m}} }} \times 100 .$$
(8)

Similarly, by using Eqs. (2), (4) and (6) the overall total processing length of the standalone mobile application can be determined by the following standard equation,

$${\text{T}}_{\text{p}} = {\text{T}}_{\text{p}}^{\text{c}} + {\text{T}}_{\text{p}}^{\text{s}}$$
(9)

Equation (9) is helpful for validating the total processing length of the client application (i.e., \({\text{T}}_{\text{p}}^{\text{c}} = {\text{T}}_{\text{p}} - {\text{T}}_{\text{p}}^{\text{s}}\)) and the total processing length of the server application (i.e., \({\text{T}}_{\text{p}}^{\text{s}} = {\text{T}}_{\text{p}} - {\text{T}}_{\text{p}}^{\text{c}}\)).

Similarly, let \({\text{v}}\) represent the percentage of CPU saved by outsourcing the intensive components of the application at design time. Then, using (9), \({\text{v}}\) can be estimated as,

$${\text{v}} = \frac{{{\text{T}}_{\text{p}}^{\text{s}} }}{{{\text{T}}_{\text{p}} }} \times 100 .$$
(10)
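With illustrative totals, Eqs. (8) and (10) reduce to two one-line computations in Python; the numbers below are assumed values, not measured workloads.

```python
# Percentage of memory (u, Eq. 8) and CPU (v, Eq. 10) saved on the SMD by
# placing the server-side components in the cloud.  Totals are illustrative.
T_m, T_m_s = 112, 96          # total and server-side memory (MB)
T_p, T_p_s = 3540.0, 3400.0   # total and server-side processing (MI)

u = T_m_s / T_m * 100  # Eq. (8): percentage of memory saved
v = T_p_s / T_p * 100  # Eq. (10): percentage of CPU saved
```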

4.2 Modeling Profiler Overhead

The elasticity aspect of the DEAP framework employs runtime component offloading for coping with dynamic processing loads on the SMD. However, runtime component offloading is implemented only as the secondary operating procedure. Investigation of application processing on Android devices indicates that the resource utilization of application profiling depends on the number of components in the mobile application and on their computational intensity [23]: profiling a larger intensive component of the mobile application results in higher resource utilization for a longer period of time. The overhead of a single instance of profiler activation can therefore be modeled on the basis of the number of components in the mobile application together with a runtime profiling rate.

Let \({\text{a}}_{\text{i}}\) be the runtime profiler activation overhead of the \({\text{ith}} \left( {\forall {\text{i}}} \right)\) single instance for the \({\text{n}}_{\text{i}}\) components profiled in the mobile application, and let \({\text{r}}_{1}\) be an objective rate for the average overhead of runtime application profiling. Then \({\text{a}}_{\text{i}}\) can be defined as \({\text{a}}_{\text{i}} = {\text{n}}_{\text{i}} \times {\text{r}}_{1}\) for \({\text{n}}_{\text{i}} \ge 1\) and \(0 < {\text{r}}_{1} \le 1\). The total overhead of profiler activation at runtime can now be obtained as,

$${\text{a}} = \mathop \sum \limits_{{{\text{i}} = 1}}^{\text{n}} {\text{a}}_{\text{i}} ,\quad {\text{where}}\quad {\text{a}}_{\text{i}} > 0, \forall {\text{i}}.$$
(11)
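Equation (11) can be sketched directly; the profiling rate `r1` and the per-activation component counts are assumed values for illustration.

```python
# Profiler overhead of Eq. (11): the i-th activation profiles n_i
# components at an average per-component rate r1 (0 < r1 <= 1).
r1 = 0.05                    # assumed runtime profiling rate
activations = [4, 7, 3]      # n_i components profiled per activation instance

a_i = [n_i * r1 for n_i in activations]  # per-instance overheads a_i = n_i * r1
a = sum(a_i)                             # Eq. (11): total profiler overhead
```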

4.3 Modeling Application Partitioning Overhead

The overhead of runtime application partitioning depends on the computational intensity of the components being partitioned at runtime. Typically, the optimizer uses an additional arbitrary rate of CPU in proportion to the intensity of the component for each instance of optimizer activation. The overhead of each instance of optimizer activation also varies with the memory size or processing length of the intensive component of the mobile application. Thus, the runtime application partitioning overhead for a single instance can be modeled on the basis of the computational intensity of the components of the mobile application together with a runtime application partitioning rate.

Let \({\text{b}}_{\text{i}}\) be the processing overhead of the \({\text{ith}}\) single instance, and let \({\text{r}}_{2}\) be an objective rate of the processing overhead of optimizer activation according to the computational intensity of the operation. In this case \({\text{b}}_{\text{i}}\) can be expressed as \({\text{b}}_{\text{i}} = { \hbox{max} }\left\{ {{\text{y}}_{\text{i}} ,{\text{z}}_{\text{i}} } \right\} \times {\text{r}}_{2}\) for \({\text{y}}_{\text{i}} > 0, {\text{z}}_{\text{i}} > 0 \left( {\forall {\text{i}}} \right)\) and \(0 < {\text{r}}_{2} \le 1\). Hence, the total overhead of optimizer activation at runtime can be determined as,

$${\text{b}} = \mathop \sum \limits_{{{\text{i}} = 1}}^{\text{n}} {\text{b}}_{\text{i}} ,\quad {\text{where}}\quad {\text{b}}_{\text{i}} > 0, \forall {\text{i}}.$$
(12)

Moreover, the total overhead of runtime optimization is the sum of the total profiler overhead \({\text{a}}\) and the total optimizer overhead \({\text{b}}\) for partitioning the \({\text{n}}\) intensive components of the mobile application. If \({\text{t}}\) represents the total overhead of runtime partitioning for the mobile application, then the total overhead of runtime optimization for \({\text{X}}\) can be estimated as,

$${\text{t}} = {\text{a}} + {\text{b}}$$
(13)
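Equations 12 and 13 can be sketched the same way; again the names are illustrative, and each component is represented as the \(({\text{y}}_{\text{i}}, {\text{z}}_{\text{i}})\) pair of memory size and processing length used in the definition of \({\text{b}}_{\text{i}}\):

```python
def optimizer_overhead(components, r2):
    """Eq. 12: b = sum_i max(y_i, z_i) * r2 for an overhead rate
    0 < r2 <= 1; each component is a (y_i, z_i) pair of memory
    size and processing length."""
    if not 0 < r2 <= 1:
        raise ValueError("r2 must satisfy 0 < r2 <= 1")
    return sum(max(y, z) * r2 for y, z in components)


def total_runtime_overhead(a, b):
    """Eq. 13: t = a + b, the combined profiler and optimizer overhead."""
    return a + b
```

With the component figures used later in the evaluation (for example, 402.7 MI and 286.4 MI processing lengths at a 5% rate), the sketch reproduces the optimizer-overhead arithmetic of Sect. 5.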

5 Evaluation and Validation

The intensive mobile application is represented in terms of the computational requirements of the application, which are expressed as the memory size and processing length of its components. We investigate the computing resources requirements for the same mobile application in two different scenarios, A and B. Scenario A describes computational offloading employing the contemporary offloading techniques [24, 39], wherein runtime partitioning is used for outsourcing computational load to a remote server node. Scenario B describes leveraging the application processing services of computational clouds while employing the proposed DEAP model.

Scenario A represents the partitioning of the elastic application X at runtime, in which the application is partitioned at runtime to cope with the critical condition of excessive computational requirements. Scenario B represents a distributed elastic application wherein the mobile application is organized on the basis of the DEAP model, between an explicitly defined DEAP client application \({\text{X}}_{\text{c}}\) and DEAP server application \({\text{X}}_{\text{s}}\). In order to achieve richness of services and enhance offline usability, the DEAP client is enriched with the application processing logic of the slightly intensive components of the application, whereas the DEAP server is configured with the highly intensive components of the application. Therefore, the DEAP client is a framework distinct from traditional thin client applications.
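The design-time split that scenario B describes can be pictured with a minimal sketch (not the authors' implementation): lightly intensive components live in the DEAP client, while intensive components are preconfigured on the server and reached through stubs instead of being migrated at runtime. All class and method names here are hypothetical.

```python
class DeapServer:
    """Hosts the preconfigured intensive components (the X_s partition)."""

    def sort(self, values):
        # Intensive component: executed on the cloud server node.
        return sorted(values)


class DeapClient:
    """Holds the lightly intensive components (the X_c partition) locally
    and delegates intensive calls to the server instead of migrating code."""

    def __init__(self, server):
        self.server = server

    def search(self, values, key):
        # Lightly intensive component: executed locally on the SMD,
        # which preserves offline usability for this operation.
        return key in values

    def sort(self, values):
        # Stub for an intensive component: only the inputs travel.
        return self.server.sort(values)
```

A client built over a server instance then serves both light and intensive requests through a single application interface, which is the sense in which the DEAP client differs from a traditional thin client.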

Let τp denote the CPU speed and τm the RAM capacity of the SMD. The application is tested on Android devices with an ARMv7 CPU (600 MHz processor, 470.22 BogoMIPS) and 512 MB RAM. Therefore, τp = 470.22 MIPS and τm = 512 MB. The computational intensity of the mobile application is determined by using a prototype application for Android devices. The mobile application is composed of the following five intensive service components:

  1. A sorting service component which implements the logic of bubble sort on a linear list of 20,000 integer values. The computational requirements of the sorting service include 7.8 KB memory and a 402.7 MI processing length.

  2. A matrix multiplication service component which implements the logic of computing the product of two 300 × 300 two-dimensional arrays of integer values. The computational requirements of the matrix multiplication service include 8.4 KB RAM and a 286.4 MI processing length.

  3. A power computing service which implements the logic of computing \({\text{b}}^{\text{e}}\), where \({\text{b}} = 2\) and \({\text{e}} = 10^{8}\). The computational requirements of the power compute service include 7.8 KB RAM and a 279.9 MI processing length.

  4. A factorial compute service which computes the factorial of n, where n = 40. The computational requirements of the factorial compute service include 7.9 KB RAM and a 51.7 MI processing length.

  5. A searching service component which searches a linear list of length n, where n = 10,000. The computational requirements of the searching service component include 8.2 KB RAM and a 61.135 MI processing length.

By using Eq. 1 the total memory size (\({\text{T}}_{\text{m}}\)) of the mobile application is 40.1 KB. Similarly, by using Eq. 2 the total processing length (\({\text{T}}_{\text{p}}\)) of the mobile application is 1081.9 MI. As \({\text{T}}_{\text{p}} > \tau_{\text{p}}\), component offloading is required to reduce the processing load on the SMD. The Profiler and Optimizer are required to be activated three times for separating the larger intensive components (\({\text{x}}_{3}\)–\({\text{x}}_{5}\)) from the mobile application. Profiling on the Android platform shows that the Profiler overhead is 37% and the Optimizer overhead is 5% of the computational intensity of the component being partitioned at runtime. By using Eq. 11 the total Profiler overhead is computed as a = 358.5 MI, and by using Eq. 12 the total Optimizer overhead is computed as b = 48.45 MI. Equation 13 indicates that the runtime partitioning overhead is the sum of the total profiling and optimizing overheads; hence, by Eq. 13 the total runtime partitioning overhead is t = 407 MI. Runtime partitioning reduces the processing load of the application on the SMD to 112.9 MI and the memory allocation to 16.1 KB. However, runtime partitioning requires an additional 407 MI for application partitioning, which shows the additional resources utilization in the computational load distribution process.
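The scenario A arithmetic can be replayed directly from the component figures listed above. This is only a numeric check, assuming (per the measurements reported in the text) profiler and optimizer overheads of 37% and 5% of the intensity of each component partitioned at runtime, with the three most intensive components (sorting, matrix multiplication and power computing) being the ones offloaded:

```python
# (memory KB, processing length MI) for the five service components
SERVICES = {
    "sort":      (7.8, 402.7),
    "matmul":    (8.4, 286.4),
    "power":     (7.8, 279.9),
    "factorial": (7.9, 51.7),
    "search":    (8.2, 61.135),
}

total_mem = sum(m for m, _ in SERVICES.values())  # T_m ~ 40.1 KB  (Eq. 1)
total_mi = sum(p for _, p in SERVICES.values())   # T_p ~ 1081.9 MI (Eq. 2)

# The three most intensive components are separated at runtime.
offloaded_mi = sum(SERVICES[s][1] for s in ("sort", "matmul", "power"))

a = 0.37 * offloaded_mi  # total Profiler overhead   (Eq. 11) ~ 358.5 MI
b = 0.05 * offloaded_mi  # total Optimizer overhead  (Eq. 12) ~ 48.45 MI
t = a + b                # runtime partitioning cost (Eq. 13) ~ 407 MI

residual_mi = total_mi - offloaded_mi  # load left on the SMD ~ 112.9 MI
```

The computed values match the reported figures to rounding.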

By employing the DEAP model the intensive mobile application is classified into the DEAP client application \({\text{X}}_{\text{c}}\) and the DEAP server application \({\text{X}}_{\text{s}}\). Let the client application be composed of three components (\({\text{x}}_{1}\)–\({\text{x}}_{3}\)) and the server application be composed of two resources intensive components (\({\text{x}}_{4}\)–\({\text{x}}_{5}\)), which are separated at design time. Here \({\text{X}}_{\text{c}}\) is the finite set of components in the DEAP client application and \({\text{X}}_{\text{s}}\) is the finite set of components in the DEAP server application, where \({\text{X}}_{\text{c}} \cup {\text{X}}_{\text{s}} = {\text{X}} \in {\text{U}}\) and \({\text{X}}_{\text{c}} \cap {\text{X}}_{\text{s}} = \emptyset\). By using Eq. 3 the total memory size of the DEAP client application is computed as \({\text{T}}_{\text{m}}^{\text{c}}\) = 23.9 KB, and by Eq. 4 the total processing length of the DEAP client application is computed as \({\text{T}}_{\text{p}}^{\text{c}}\) = 392.8 MI.

Suppose a single component of the mobile application is offloaded at runtime; the Profiler mechanism is therefore activated for partitioning the intensive component of the mobile application. In this scenario the overhead of runtime profiling is 103.6 MI and the overhead of optimization is 13.9 MI for partitioning a single component of the mobile application. By using Eq. 13 the total overhead (φ) of runtime partitioning is 117.6 MI. As \({\text{X}}_{\text{s}}\) is the finite set of components configured in the DEAP server application, by using Eq. 5 the memory size (\({\text{T}}_{\text{m}}^{\text{s}}\)) of the DEAP server application is computed as 16.2 KB, and by using Eq. 6 the processing length (\({\text{T}}_{\text{p}}^{\text{s}}\)) of the DEAP server application is computed as 689.1 MI.

Analysis indicates that in both scenario A and scenario B, partitioning of the application is employed for reducing the computation load on the SMD. By Eq. 7 the total memory size of the elastic application is equal to the sum of the memory sizes of the DEAP client and DEAP server applications: the total memory size of the elastic application is \({\text{T}}_{\text{m}}\) = 40.1 KB, whereas the total memory size of the DEAP client application is \({\text{T}}_{\text{m}}^{\text{c}}\) = 23.9 KB and the total memory size of the DEAP server is \({\text{T}}_{\text{m}}^{\text{s}}\) = 16.2 KB. Since \({\text{T}}_{\text{m}} = {\text{T}}_{\text{m}}^{\text{c}} + {\text{T}}_{\text{m}}^{\text{s}}\), Eqs. 7, 8, 9 and 10 are validated.

Similarly, by Eq. 9 the total processing length of the elastic application is equal to the sum of the processing lengths of the DEAP client and DEAP server applications. The total processing length of the elastic application is \({\text{T}}_{\text{p}}\) = 1081.9 MI, whereas the total processing length of the DEAP client application is \({\text{T}}_{\text{p}}^{\text{c}}\) = 392.8 MI and that of the DEAP server application is \({\text{T}}_{\text{p}}^{\text{s}}\) = 689.1 MI. DEAP based deployment of the mobile application reduces memory allocation on the SMD by 40.4% and CPU load by 63.7%. In both scenarios (A and B) the computational load offloaded to the cloud server node is identical. However, the additional resources utilization in computational offloading is reduced by 74.6% in scenario B because most of the overhead of runtime partitioning of the intensive components of the mobile application is eliminated, which indicates the lightweight nature of the DEAP framework.
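The scenario B figures can be checked the same way: summing the client and server parts reproduces Eqs. 7 and 9 and the reported reductions. Component values are those listed earlier; the variable names are illustrative:

```python
# (memory KB, processing length MI): power, factorial and search form
# the DEAP client partition; sort and matrix multiplication form the
# DEAP server partition.
CLIENT = [(7.8, 279.9), (7.9, 51.7), (8.2, 61.135)]
SERVER = [(7.8, 402.7), (8.4, 286.4)]

mem_c = sum(m for m, _ in CLIENT)  # T_m^c ~ 23.9 KB (Eq. 3)
mem_s = sum(m for m, _ in SERVER)  # T_m^s ~ 16.2 KB (Eq. 5)
mi_c = sum(p for _, p in CLIENT)   # T_p^c ~ 392.8 MI (Eq. 4)
mi_s = sum(p for _, p in SERVER)   # T_p^s ~ 689.1 MI (Eq. 6)

mem_total = mem_c + mem_s  # T_m = 40.1 KB   (Eq. 7 check)
mi_total = mi_c + mi_s     # T_p ~ 1081.9 MI (Eq. 9 check)

mem_saving = mem_s / mem_total  # ~40.4% less memory allocated on the SMD
cpu_saving = mi_s / mi_total    # ~63.7% less CPU load on the SMD
```

Both ratios match the 40.4% memory and 63.7% CPU reductions reported above.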

Figure 5 shows the total CPU allocation in scenario A, wherein computational offloading is employed using the existing techniques [24, 39], and in scenario B, wherein the DEAP model is employed for computational offloading. The analysis indicates that in scenario A the processing load of the application on the SMD is 1488.9 MI, whereas in scenario B the processing load of the DEAP client application is 496.4 MI. In both scenarios two components of the application are executed on the SMD, whereas three components are outsourced to cloud server nodes. This shows a 66.7% reduction in CPU utilization for processing the identical workload on the SMD in scenario B as compared to scenario A. The decrease in CPU utilization in DEAP model based computational offloading results from minimizing the instances of runtime component offloading, which reduces the overhead of additional CPU utilization in runtime application profiling and optimization.

Fig. 5

Comparison of total CPU utilization on SMD in different scenarios

Figure 6 compares the Profiler overhead in scenarios A and B. The Profiler overhead is found to be 358.5 MI in scenario A, wherein three intensive components of the application are offloaded at runtime, and 103.6 MI in scenario B, wherein a single intensive component of the application is offloaded at runtime. Thus the Profiler overhead in scenario B is only about 28.9% of that in scenario A, because of the larger number of components involved in profiling the elastic application in scenario A.

Fig. 6

Comparison of runtime profiling overhead

Similarly, Fig. 7 compares the additional overhead of CPU utilization in application partitioning in scenarios A and B. Additional CPU utilization in application partitioning is observed to be 48.5 MI in scenario A, whereas it is found to be 14 MI in scenario B; that is, the Optimizer overhead in scenario B is only about 28.9% of that in scenario A. The Optimizer overhead is larger in scenario A because a larger number of components with higher computational intensity are partitioned in scenario A.

Fig. 7

Comparison of runtime optimization overhead

Moreover, in [23] we investigated the overhead of application profiling and partitioning. It is concluded that offloading the intensive components of the application at runtime places additional load on the mobile device, which increases the utilization of computing resources on the SMD. As a result, the execution time of the locally executing components of the mobile application is affected adversely. It is found that CPU utilization increases by 8% and the execution time of the locally executing services increases by 32.6%. It is also found that during the runtime component offloading process CPU utilization increases by 9.8%, which shows the additional overhead of application partitioning and component offloading at runtime.

We tested a prototype application on the Android platform in a real MCC environment for evaluating the size of data transmission and the significance of the DEAP framework. The prototype application is composed of the sort service, matrix multiplication service and power compute service components. Each component of the application is evaluated with 30 different computational intensities. Figure 8 compares the size of data transmission when employing the latest existing techniques [24, 39] and the DEAP framework for computational offloading. Data transmission in runtime component offloading for applications on the Android platform involves the application package file (.apk) and the preferences file (.xml). The application package is the compiled binary file of the application, whereas the preferences file contains data saved while interrupting the running instance of the application. Preferences files are transmitted between the mobile device and the remote server node for exchanging data. It is found that with runtime component offloading 13 MB of data is transmitted in offloading all three components of the application, whereas with the DEAP framework 2 MB of data is transmitted over the network. Thus, by reducing the instances of runtime component offloading, the size of data transmission is reduced by up to 84% in the DEAP framework.

Fig. 8

Comparison of the size of data transmission in traditional and DEAP framework based computational offloading

For evaluating the significance of the DEAP framework, the prototype mobile application is executed on the local mobile device, with the traditional offloading techniques [24, 39], and with DEAP framework based computational offloading. Figure 9 compares the memory allocation on the mobile device in local and remote execution of the application. It is found that 31.8 MB of RAM is allocated for processing the application on the local mobile device, 45 MB of RAM is allocated when employing runtime component offloading, and 9.1 MB of RAM is allocated when employing the DEAP framework for computational offloading. Thus, by employing the DEAP framework for computational offloading, memory utilization is reduced by 79.8% as compared to the existing techniques [24, 39] and by 71.5% as compared to the execution of the application on the local mobile device.

Fig. 9

Comparison of memory allocation on mobile device in local and remote execution of the application

Figure 10 compares the CPU utilization on the mobile device for different components of the prototype application in local and DEAP based remote execution. It shows that by employing computational offloading the application processing load on the mobile device is decreased, which results in a reduction of CPU utilization on the mobile device. In executing the prototype application on the mobile device, the CPU utilization for the different components of the application is found to be 48.7% for the sort service, 45.5% for the matrix multiplication service and 48% for the power computing service. By deploying the DEAP framework for computational offloading, the CPU utilization is found to be 25.5% for the sort service, 35.4% for the matrix multiplication service and 3% for the power compute service. Thus, in deploying DEAP for computational offloading, CPU utilization on the mobile device decreases by 47% for the sort service, 22% for the matrix multiplication service and 93.8% for the power compute service. Similarly, the duration of CPU utilization on the mobile device decreases by 55.8% for the sort service, 9.95% for the matrix multiplication service and 94% for the power computing service. Deploying DEAP framework based computational offloading decreases the average CPU utilization on the mobile device by 55.5% and the duration of CPU utilization by 54%, which signifies the usefulness of computational offloading for MCC.
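The per-component reductions quoted above follow directly from the measured utilization figures; the small sketch below recomputes them as 1 − DEAP/local, matching the reported numbers to rounding:

```python
LOCAL = {"sort": 48.7, "matmul": 45.5, "power": 48.0}  # % CPU, local run
DEAP = {"sort": 25.5, "matmul": 35.4, "power": 3.0}    # % CPU, DEAP offloading

# Percentage reduction in CPU utilization per component.
reduction = {k: (1 - DEAP[k] / LOCAL[k]) * 100 for k in LOCAL}
# sort ~47.6%, matmul ~22.2%, power ~93.8%
```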

Fig. 10

Comparison of CPU utilization on mobile device in local and DEAP based remote execution

The configuration of preconfigured server components has the following threefold advantages:

  1. The separation of the intensive components of the mobile application (which are feasible for cloud based execution) at design time minimizes the occurrences of runtime application partitioning. Therefore, the overhead of additional CPU utilization in runtime component partitioning is reduced.

  2. The configuration of intensive services/components on cloud server nodes reduces the data transmission overhead. Since the intensive components are preconfigured, the client mobile application only needs to activate remote services by using the IPC mechanism instead of runtime component migration. As a result, the overhead of component migration is reduced.

  3. The services of the cloud server can be utilized anytime and anywhere by mobile client applications on an on-demand basis. Therefore, the availability of distributed services is increased.
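The second advantage can be illustrated with a toy registry that stands in for the IPC mechanism: the client ships only a service name and its arguments, never the component's binary. The decorator and registry names below are hypothetical, not the paper's API.

```python
REMOTE_SERVICES = {}


def remote_service(fn):
    """Register a preconfigured server-side component under its name."""
    REMOTE_SERVICES[fn.__name__] = fn
    return fn


@remote_service
def power(base, exponent):
    # Preconfigured intensive component living on the cloud server node.
    return base ** exponent


def invoke(name, *args):
    """Client-side stub: an IPC-style call carrying only name + inputs,
    so no .apk migration is needed at runtime."""
    return REMOTE_SERVICES[name](*args)
```

Here `invoke("power", 2, 10)` activates the preconfigured remote component with only the call payload crossing the network, which is why the data transmitted is a fraction of what full component migration requires.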

6 Conclusion

We have proposed a distributed and elastic mobile application model as a lightweight alternative to elastic mobile applications. The DEAP framework mitigates the deficiencies of current offloading frameworks, which are elastic in nature but lack a distributed design perspective, making the development and deployment of a distributed platform resources intensive on SMDs. The incorporation of a distributed application architecture facilitates simple application development procedures and a lightweight mechanism for accessing preconfigured services in cloud datacenters on an on-demand basis. The dual operating nature of the DEAP framework contributes to the versatility and robustness of the distributed and elastic model for intensive mobile applications in MCC. DEAP implements access to preconfigured services in cloud datacenters as the primary operating procedure, whereas runtime component offloading is implemented as the secondary operating procedure for coping with the dynamic processing load on smart mobile devices. The elasticity attributes of the DEAP client enable resources constraint SMDs to cope with the challenges of dynamic application processing loads. Further, the elastic nature of the DEAP client contributes to the objectives of offline usability, smart clients and rich internet applications for MCC. Analysis of the DEAP framework shows that the separation of the intensive components of the application at design time is a significant approach for reducing additional resources utilization in computational offloading for MCC. We conclude that the DEAP framework is a lightweight and optimal model for leveraging the application processing services of computational clouds in MCC.