1 Introduction

The success of Cloud computing is mainly due to capabilities such as the on-demand, QoS-guaranteed, as-a-service provisioning of a wide range of flexible, customizable, resilient and cost-effective infrastructure, platforms and applications.

Cloud is already an effective technology: plenty of real (business) applications and services have been developed so far, and a new digital economy has arisen from it, focused on the utility perspective that considers IT resources as commodities to be provided as a service according to customer needs and preferences [12]. Despite the overwhelming success and widespread adoption of Cloud and similar or related paradigms such as service-oriented engineering, software-defined and virtualized ecosystems, ubiquitous and autonomic computing, there is still room for improving and extending them with novel ideas, towards new directions.

One such avenue is crowdsourcing, as well as resource sharing or volunteer approaches, aimed at exploiting the power and the wisdom of crowds, involving people that may (voluntarily) contribute to a given IT-related project or application by sharing their own (computing, storage, network, sensing, data) resources. Specifically, to adopt this idea in Cloud computing contexts, we have to think of a Cloud system as a collector of resources shared by single contributors, companies, and/or communities contributing to the assembly of an IT infrastructure, bottom-up, following the volunteer computing approach [1, 9]. This is at the basis of Cloud@Home [7], a project aimed at implementing a volunteer Cloud infrastructure on top of resources shared (for free or for a fee) by their owners or administrators, providing them to users through an on-demand, service-oriented interface.

The main challenge of Cloud@Home is to deal with volunteer contributions, requiring mechanisms both at the node and at the Cloud-infrastructure level for engaging, enrolling, indexing, discovering and managing the contributed nodes as a whole. In particular, the node churn issue due to the random and unpredictable join and leave of contributors should be properly addressed. The goal of this paper is to propose a solution dealing with all the above issues at different levels. To this purpose, a reference architecture is specified, including the modules of the Cloud@Home software stack that provide mechanisms addressing these issues. Then, in the design of the Cloud@Home stack, we start from existing solutions already in place and working. In particular, we base our implementation on a well-known open source software for creating and managing private and public Clouds, OpenStack [17], a de-facto standard Cloud management software. This way, existing and effective solutions for most Cloud computing issues, such as security, privacy, accounting, and indexing, can be reused as they are or extended to the problem at hand, i.e. dealing with contributor dynamics.

With regard to the existing literature, Cloud@Home was the first and foremost attempt towards a volunteer-based Cloud infrastructure. After that, the idea of volunteer Clouds emerged as one of the most interesting topics in Cloud computing, as for example in [6], where a form of dispersed Cloud or “nebulas” based on voluntarily shared resources is proposed, mainly for testing environments. Another interesting attempt in this direction is BoincVM [16], a platform based on BOINC to harness volunteer resources into a computing infrastructure for CPU-intensive scientific applications. In P2P Cloud [3, 11] a totally distributed, peer-to-peer paradigm is proposed to gather and manage a volunteer Cloud infrastructure. Similarly, Crowd computing [13, 14] aims at implementing a computing infrastructure involving “crowds” of contributors, in a crowdsourcing approach. They mainly share their “power”, i.e., their computing nodes as in volunteer computing [1] or their devices as in participatory, opportunistic and mobile crowdsensing approaches [10]. However, the client-server model limits the suitability of these paradigms for Cloud computing scenarios.

These works mainly propose new approaches, based on resource sharing, contribution and crowdsourcing, providing some high-level directions for dealing with the related issues. None of them gets into implementation details. In this paper we go a step further, towards a real and effective implementation of a volunteer, P2P, desktop Cloud infrastructure, proposing a software stack based on OpenStack, properly customized and extended to deal with contribution issues. To the best of our knowledge this is the first actual attempt in this direction, i.e., a Cloud@Home OpenStack.

Details on that are provided in the remainder of the paper, organized as follows: Sect. 2 provides an overview of preliminary concepts, i.e. Cloud@Home and OpenStack. Then, Sect. 3 proposes the Cloud@Home reference architecture, identifying its main functionalities and modules. The Cloud@Home software stack design through OpenStack is described in Sect. 4, while details on its preliminary implementation are provided in Sect. 5. Some remarks and objectives for future work are reported in Sect. 6.

2 Overview

2.1 Cloud@Home

The Cloud@Home goal is to use “domestic” computing resources to build desktop Clouds made of voluntarily contributed resources. Therefore, following the volunteer computing wave [1], across Grid computing and desktop Grids [2, 5], we think about desktop Cloud platforms able to engage and retain contributors for providing virtual (processing, storage, networking, sensing) resources as a service, in the Infrastructure as a Service (IaaS) fashion. This novel, revised view of Cloud computing could perfectly fit with private and community needs, but our real, long-term challenge is to exploit it in hybrid and especially in business contexts towards public deployment models.

Fig. 1. Cloud@Home scenario.

On this premise, the overall scenario we have in mind is highlighted in Fig. 1. Three actors are identified: the C@H service provider, building up the Cloud infrastructure by engaging contributing nodes; contributors that share their resources; and end users, interacting with the Cloud as customers, submitting requests to “rent” (virtual) resources from C@H.

This way, the infrastructure is mainly made of contributing nodes shared by their owners or admins, acting as Cloud@Home contributors. The C@H provider gathers and collects these resources, managing, abstracting and virtualizing them to be provided as a service, through a specific C@H management system. These resources are therefore provided to end users as virtual ones (machines, storage) by the management system. There are no strict boundaries between the contributor and end user roles, just different duties, and a contributor may at the same time be an end user and vice versa.

Fig. 2. OpenStack: conceptual architecture.

2.2 OpenStack

OpenStack is an open source Cloud computing platform mostly used for the development and deployment of IaaS solutions, managed by a non-profit foundation and supported by more than 200 companies and 10,000 community members in nearly 100 countries. OpenStack is highly flexible, since it supports most of the existing hypervisors, thus enabling a variety of virtualization modes and usage scenarios. OpenStack features a growing number of components to build up its services, a core subset of which is depicted in Fig. 2, where arrows describe the relationships and interactions among subsystems. Deploying each and every one of the depicted components is not strictly required to implement a fully working Cloud environment. In particular, Swift and Cinder are needed only if object and block storage services are required by the users of the cloud. The same applies to Horizon: users may do without a Web UI and resort to a CLI instead of the graphical dashboard. Heat is needed insofar as orchestration services are required, and Ceilometer only for billing purposes or other monitoring duties. That leaves only Keystone, Glance, Nova and Neutron as core services, where Keystone provides authentication and authorization facilities, Neutron is in charge of all the networking mechanisms and Nova of the instantiation and lifecycle management of compute virtual machines. Glance provides Nova VMs with the requested images.
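As a concrete illustration of how the four core services cooperate, the following minimal sketch (assuming the openstacksdk library and a clouds.yaml entry; the cloud, image, flavor and network names are placeholders) authenticates through Keystone, looks up a Glance image, a Nova flavor and a Neutron network, and boots a VM via Nova.

```python
# Minimal sketch of the core-service interplay via openstacksdk.
# Cloud name and resource names below are placeholders.
import openstack

conn = openstack.connect(cloud="cloudathome")      # Keystone: credentials from clouds.yaml

image = conn.image.find_image("cirros")            # Glance: VM image
flavor = conn.compute.find_flavor("m1.tiny")       # Nova: instance size
network = conn.network.find_network("private")     # Neutron: tenant network

server = conn.compute.create_server(
    name="c-at-h-test",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)      # Nova: wait until ACTIVE
print(server.status)
```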

3 Cloud@Home Reference Architecture

To implement the Cloud@Home vision, a specific solution framework is required. It has to deal with the interactions among the volunteered resources as well as with the desktop Cloud management and other (federated) Clouds, taking into account their contribution dynamics. Indeed, a contributor can join and leave the system randomly and unpredictably, thus implying node churn. Therefore, a solution should be adaptive to such random events, providing elastic mechanisms that react promptly to them, transparently to the end users. Furthermore, interoperability [4] is one of the issues to address in Cloud@Home, as well as placement and orchestration, also taking into account end users' QoS requirements on resource provisioning.

Apart from the basic functionalities, the main (non-functional) properties Cloud@Home services have to provide are:

  • scalability - the impact of a variation in the number of contributing resources on the performance and on the other QoS requirements agreed by the parties has to be hidden from the customer by the system;

  • adaptability - churn management: the algorithm has to be able to detect changes in the logical organization of nodes and to react to these changes in real time (a minimal sketch of such churn handling is given after this list);

  • elasticity - the algorithm has to specify reconfiguration policies to optimize the Cloud@Home infrastructure after contributors’ join and leave;

  • dependability, resilience, fault tolerance, security and privacy - to deal with the degradation of performance and availability of the whole infrastructure due to the unpredictable join and leave of contributors, redundancy techniques and job status tracking and monitoring have to be developed, as well as security and privacy mechanisms, since virtualization provides isolation of services but does not protect from local access, i.e., insider threats and abuses.
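To make the adaptability and elasticity requirements more concrete, the following purely illustrative sketch (not part of the Cloud@Home codebase; all classes and callbacks are hypothetical) shows a possible churn-handling loop: it reacts to the join and leave of contributing nodes and restores a target replication factor for the virtual resources hosted by a departed node.

```python
# Purely illustrative churn-handling sketch: reacts to contributor join/leave
# events and restores a target replication factor for affected virtual resources.
# ChurnManager, scheduler.rebalance/pick_node/replicate are hypothetical names.
from collections import defaultdict

class ChurnManager:
    def __init__(self, target_replicas=2):
        self.target_replicas = target_replicas
        self.placements = defaultdict(set)   # virtual resource id -> set of node ids

    def on_join(self, node_id, scheduler):
        # New capacity: the scheduler may rebalance load onto the new node.
        scheduler.rebalance(node_id)

    def on_leave(self, node_id, scheduler):
        # A contributor left (gracefully or by churn): re-replicate what it hosted.
        for vr_id, nodes in self.placements.items():
            nodes.discard(node_id)
            missing = self.target_replicas - len(nodes)
            for _ in range(max(missing, 0)):
                new_node = scheduler.pick_node(exclude=nodes)
                scheduler.replicate(vr_id, new_node)
                nodes.add(new_node)
```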

Fig. 3. The Cloud@Home stack reference architecture.

According to these (functional and non-functional) requirements, and based on the well-known Cloud service layering [8, 15], the Cloud@Home stack reference architecture shown in Fig. 3 has been identified. Specifically, the Cloud@Home physical layer is composed of a “Cloud” of geographically distributed contributing nodes. They provide the upper layers with the (physical and virtual) resources for implementing the Cloud@Home infrastructure services. Each node usually includes the operating system, protocols, packages, libraries, compilers, programming and development environments, etc. Moreover, to adequately manage physical resources, a virtual resource manager has to be installed on the contributing node to virtualize them. Abstraction and virtualization of physical resources provide a uniform, interoperable and customizable view of Cloud services and resources. This way, at the infrastructure layer, the Cloud@Home stack groups mechanisms and tools for virtualizing physical (computing, storage, sensing, networking, etc.) resources into virtual resources (VRs).

The infrastructure layer provides mechanisms, policies and tools for locally and globally managing the Cloud resources to implement the Cloud@Home service. It mainly provides end users with facilities to manage the Cloud@Home system. Indeed, the infrastructure layer is in charge of resource and service management (enrolling, discovery, allocation, coordination, monitoring, placement, scheduling, etc.) from the Cloud perspective. It also provides enhanced mechanisms and policies for improving the quality of service (QoS), dealing with churn, managing complex/multiple resource requests in an orchestrated way, and interacting and brokering with other Clouds.

Specifically, this implements a two-level Software Defined Infrastructure (SDI) model, where basic mechanisms and tools at “data plane” are provided to the “control plane” for implementing advanced management facilities, ranging from orchestration to QoS and churn management.

The core modules of the Cloud@Home Management System are reported in the layered model of Fig. 3. A security and privacy cross-layer module is included in this Cloud@Home reference architecture to provide mechanisms addressing the related issues. The deployment of the core modules, either on the contributing node or on the infrastructure side, is also highlighted; they are detailed in the following according to this deployment, bottom up.

3.1 Node-Side

Considering a generic Cloud service built on top of a contributing resource, the node-side Cloud@Home framework provides tools for managing virtual resources considering, on one hand, contribution policies from contributors and, on the other, requests and requirements coming from the higher Cloud side. To this purpose, two modules are identified and deployed on the contributing node, at the bottom of the stack: the Hypervisor and the Node Manager.

Fig. 4. The Cloud@Home hypervisor modules.

The Hypervisor is a core component of Cloud@Home, since it introduces layers of abstraction and mechanisms for virtualization in the contributing resource. It mainly provides the primitives, i.e. the API, for managing a virtual resource. In the case of processing resources it is the Virtual Machine Manager, in the case of storage resources it could be a distributed file system module, and for sensors and actuators it corresponds to the SAaaS Hypervisor [7]. It is composed of three main building blocks, i.e. the Adapter, the Abstraction Unit and the Virtualization Unit, as reported in Fig. 4.

At the bottom there is the Adapter, which plays several distinct roles, such as converting high-level directives into native commands, processing requests for reconfiguration of the resource, and providing mechanisms for establishing an out-of-band channel to the system, for direct interaction with the resources.

The Abstraction Unit operates on top of the Adapter, mainly implementing the abstraction of the underlying physical-hardware resources towards open and well-known standards and interfaces, also dealing with networking issues. The Virtualization Unit is named after the Virtual Machine Monitor to highlight its role as manager of the lifecycle of virtualized resource instances. This includes APIs and functionalities for virtual instance creation, reaping and repurposing, as well as for the discovery and tuning of boot-time (statically defined) and run-time (dynamic) parameters in accordance with contextualization requests. It can work either directly on the Adapter, in case the resource itself provides generic, standard, abstracted interfaces, or on the Abstraction Unit otherwise.
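As an illustration of how these building blocks may fit together for a processing resource, the following sketch (an assumption of ours, not the actual Cloud@Home code; class names and interfaces are invented for clarity) uses libvirt as one possible backend for the Adapter, with the Virtualization Unit driving instance lifecycle through it.

```python
# Illustrative sketch of Adapter and Virtualization Unit for a processing
# resource, with libvirt as one possible Adapter backend. All class names
# and method signatures are assumptions made for the sake of the example.
import abc
import libvirt

class Adapter(abc.ABC):
    """Converts high-level directives into native commands of the resource."""
    @abc.abstractmethod
    def start(self, descriptor: str): ...
    @abc.abstractmethod
    def stop(self, name: str): ...

class LibvirtAdapter(Adapter):
    def __init__(self, uri="qemu:///system"):
        self.conn = libvirt.open(uri)
    def start(self, descriptor):
        return self.conn.createXML(descriptor, 0)     # boot a domain from its XML
    def stop(self, name):
        self.conn.lookupByName(name).destroy()

class VirtualizationUnit:
    """Manages the lifecycle of virtual resource instances through an Adapter
    (or an Abstraction Unit exposing the same interface)."""
    def __init__(self, adapter: Adapter):
        self.adapter = adapter
    def create_instance(self, descriptor):
        return self.adapter.start(descriptor)
    def reap_instance(self, name):
        self.adapter.stop(name)
```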

Fig. 5. The Cloud@Home node manager modules.

The other component of the Cloud@Home stack deployed on a node is the Node Manager, which can be considered the brain of a node. Indeed, it implements a first step towards a volunteer Cloud, merging local and global mechanisms and policies. It is the bridge between virtual nodes and the Cloud, allowing the node to join a Cloud@Home infrastructure and expose its resources as services. This is implemented in a collaborative and decentralized way, interacting with neighboring nodes and adopting autonomic self-management approaches.

The main blocks composing a Node Manager are shown in Fig. 5. The Provisioning System implements functions for allocating, managing, migrating and destroying a virtual resource on the node. The Monitoring System keeps the local resources under control. Together, the Provisioning and Monitoring Systems establish whether a virtual resource allocation request can be satisfied or should be rejected, also alerting the higher Cloud level on crashes or shortages of resources in the node.
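As an example of the kind of admission check the Monitoring System may perform together with the Provisioning System, the following sketch (thresholds and the request format are our assumptions) uses the psutil library to accept a virtual resource request only if the node currently has enough spare CPU and memory.

```python
# Illustrative admission-control sketch for the Monitoring System: a request
# for a virtual resource is accepted only if the node has enough spare capacity.
# The 80% CPU threshold and the request format are assumptions.
import psutil

def can_allocate(req_vcpus, req_mem_bytes, max_cpu_load=80.0):
    cpu_load = psutil.cpu_percent(interval=1)        # % CPU busy over 1 s
    free_mem = psutil.virtual_memory().available     # bytes currently available
    enough_cpu = cpu_load < max_cpu_load and req_vcpus <= psutil.cpu_count()
    enough_mem = req_mem_bytes <= free_mem
    return enough_cpu and enough_mem

# Example: a request for 2 vCPUs and 1 GiB of RAM
print(can_allocate(2, 1 * 1024**3))
```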

The Policy Coordinator selects and enforces the node management strategy, taking into account Cloud policies coming from the higher level and contribution directives, based on the current status of the node. To perform this task, the Policy Coordinator interacts with the Cloud layer and with the Subscription Manager, coordinating their inputs. Specifically, the Subscription Manager is in charge of storing and carrying out the subscriptions of the node to all the Cloud@Home infrastructures it contributes to, since a contributor can be involved in more than one Cloud@Home. For each of them, a contribution profile should be specified, also allowing the system to choose which Cloud@Home to contribute to in case of overlapping incoming requests from different sources. Moreover, it also locally manages the credits assigned by the different Cloud credit reward systems (if any), transferring and exchanging them as required.

Then, the Cloud Overlayer provides mechanisms and tools for joining and leaving a Cloud@Home. From a node perspective, the Cloud@Home can be considered as an overlay, opportunistic, P2P network built on top of the node resources. A possible way of implementing such a mechanism is through distributed hash tables or similar peer-to-peer approaches, which also provide the concept of neighborhood as well as enrolment, indexing and discovery facilities. This way the Cloud Overlayer is just a client for such systems, also allowing interaction with the high-level management system, if required. Different implementations are possible; by choosing the P2P Cloud one, the Cloud@Home system is autonomous and can also provide basic Cloud functionalities without requiring advanced, high-level mechanisms.
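As one possible, hedged implementation of the Cloud Overlayer enrolment step, the following sketch uses the third-party kademlia Python package to join a DHT-based overlay and publish the node's contribution profile; the bootstrap address and the key/value layout are placeholders.

```python
# Possible Cloud Overlayer sketch based on a Kademlia DHT (kademlia package).
# Bootstrap node address, port and key/value scheme are placeholders.
import asyncio
from kademlia.network import Server

async def join_cloud(node_id, profile_url):
    overlay = Server()
    await overlay.listen(8468)                                       # local DHT port
    await overlay.bootstrap([("bootstrap.cloudathome.example", 8468)])
    await overlay.set(f"c@h:node:{node_id}", profile_url)            # enrol/index the node
    return overlay

asyncio.run(join_cloud("node42", "https://node42.example/profile"))
```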

3.2 Cloud Side

On the Cloud side there are mainly mechanisms and tools for managing the Cloud infrastructure as a whole. To this purpose, a Software Defined Infrastructure approach has been adopted, splitting basic Cloud mechanisms and functionalities from policies.

Fig. 6. The Cloud@Home cloud enabler modules.

At the bottom, the Cloud Enabler can be considered the server/Cloud-side counterpart of the Cloud Overlayer on the node, implementing basic mechanisms and tools for the (centralized) management of the Cloud@Home infrastructure. Its main modules are depicted in Fig. 6: the Indexing, Discovery and Monitoring (IDM) service and the Placement and Scheduling (P&S) one. The former is in charge of enrolling, indexing, and monitoring contributing nodes. The P&S is a peripheral resource broker of the Cloud@Home infrastructure, allocating resources to incoming requests; moreover, it is in charge of moving and managing services and data (for example, VM migrations).

Fig. 7. The Cloud@Home policy manager modules.

On top of the basic infrastructure mechanisms, advanced ones are implemented in the Policy Manager. It is composed of the five modules shown in Fig. 7. The Resource Engine is the heart of Cloud@Home, acting as a resource coordinator at the Cloud infrastructure layer. To achieve this goal, the Resource Engine adopts a hierarchical policy in synergy with the P&S, also interacting with all the other Policy Manager components.

The Incentive Mechanism aims at increasing the availability and reliability of volunteered resources by assigning credits and rewards or penalties to contributing nodes in a P2P-volunteer fashion. The SLA Manager enforces the SLAs negotiated and agreed by the parties (providers and end users), if any. It aims at implementing more reliable services on an infrastructure made up of resources contributed on an otherwise best-effort basis. Therefore it also specifies the Cloud policies to be actuated in case of SLA violations. The QoS Manager provides the service quality management framework at the single-Cloud level, through metrics and means to measure the underlying Cloud@Home infrastructure, directly interacting with the IDM service and, through the latter, with the contributing nodes.

The Cloud broker collects and manages information about the available Clouds and the services they provide (both functional and non-functional parameters, such as QoS, costs and reliability, request formats’ specifications for Cloud@Home-foreign Clouds translations, etc.). Moreover, the Cloud broker collaborates with the resource engine to fulfil resource discovery, becoming “inter-clouds”.

4 Cloud@Home OpenStack-Based Architecture

In our effort to extend OpenStack to support the Cloud@Home volunteer-powered Cloud model, we map here the concepts and the modules explored in the previous sections in light of the current status of the OpenStack architecture.

Fig. 8. The Cloud@Home logical architecture.

In particular, following a derived logical architecture of Nova as depicted in Fig. 8 and starting from the bottom up, we may essentially identify in libvirt (or any other Nova plugin-enabled VMM) the role of the Cloud@Home Hypervisor. Apart from the Hypervisor, the Queue is also depicted in green, to highlight components which are part of the architecture but not specific to the OpenStack framework. Magenta-colored blocks are instead Nova-specific components. Nova-api, nova-console, nova-consoleauth and nova-cert are not involved in the mapping, because they are either interfaces (-api), authorization-related (-consoleauth for access to the console of any instance, and -cert for client certificates), or specific to certain technologies (-console, as a proxy for Xen-based consoles).

The Cloud@Home Provisioning System would be covered by functionalities exposed by the Nova Compute service, and the Monitoring System would map to a monitoring subsystem for nodes, which we describe later on. It is important to remark that another monitoring subsystem, Ceilometer, is available for OpenStack, but in a different role, i.e., metering for billing purposes, and thus focused on VM-relevant metrics rather than on the health of the hosting subsystem, i.e., the Compute nodes.

The Cloud@Home Policy Coordinator, instead, is not easy to map to the current OpenStack architecture, as it implies that node-mandated restrictions to contribution are in place and need to be mediated with Cloud-level directives. A simplified scheme may nevertheless be devised, where the contributor statically chooses certain constraints, e.g., temporal ones and resource quotas, letting the subscription phase expose the subset of instance flavors compatible with those and switching node (or specific resource) availability on or off accordingly. The same considerations apply to the Subscription Manager, which entails for a node to potentially be part of multiple Clouds, and to the Cloud Overlayer as well, in this case implying a different, Controller-less (or distributed-Controller) flavor of OpenStack, thus relying on a fully P2P topology.

Within the Cloud@Home Cloud Enabler, the IDM essentially enables the enrollment of volunteering nodes into the Cloud and should actually “enable” nodes to be exploited as Compute ones, i.e., should provision the dynamic pool of resources by deploying essential node-side services, e.g., Nova Compute, on the remote hosts. The Cloud@Home P&S may be considered more or less overlapping with the Nova Scheduler, and the Resource Engine may thus play the role of a (generalized) Nova Conductor. Above the Resource Engine, we may consider all components to be outside the scope of the current (and announced) OpenStack architectural choices and efforts, and thus material for future investigations.

A note has to be made about our omission of further details on the storage/networking subsystems and the corresponding workflows: this is intentional, as our design of the Cloud@Home framework on top of OpenStack is to be considered either totally transparent (in the case of networking) or not relevant (as with storage), since the choice remains the same, i.e., node-local images or volumes vs. remote ones. The only difference lies in the expected performance of remote image/volume-backed instances, hence the need to warn the user about the implications of such a choice when served with volunteered nodes, and to craft suitable SLAs accordingly.

Moreover, while the Scheduler may be left as is, and the policies just implemented by resorting to suitable (reliability/churn-aware) filters and weights, further investigation may point to a more granular mechanism under the guise of a hierarchy of schedulers or, conversely, a specific segregation of the pool of nodes in suitable aggregates, e.g., OpenStack availability zones or possibly even cells, the latter oriented to more granular pooling, but unfortunately not supporting inter-cell migration yet.
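For instance, a churn-aware policy may be plugged into the existing Filter Scheduler as a custom host filter, along the lines of the following sketch; note that where the reliability estimate comes from (here, hypothetical host aggregate metadata fed by the C@H monitoring subsystem) and the 0.7 threshold are our assumptions, not existing Nova features.

```python
# Hedged sketch of a churn-aware scheduling policy as a custom Nova host filter:
# hosts whose estimated reliability falls below a threshold are skipped.
# The reliability source (aggregate metadata key) and the threshold are assumptions.
from nova.scheduler import filters

def estimated_reliability(host_state):
    # Hypothetical: read a score published by the C@H monitoring subsystem into
    # host aggregate metadata; default to "reliable" if no score is present.
    for agg in getattr(host_state, "aggregates", []):
        score = agg.metadata.get("c_at_h_reliability")
        if score is not None:
            return float(score)
    return 1.0

class ChurnAwareFilter(filters.BaseHostFilter):
    """Skip volunteer hosts that are deemed too likely to leave soon."""

    def host_passes(self, host_state, spec_obj):
        return estimated_reliability(host_state) >= 0.7
```

Such a filter would then be enabled through the scheduler's filter configuration option (whose exact name varies across OpenStack releases), alongside the default filters.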

5 Preliminary Implementation

In an effort not to lose generality where feasible, we envisioned volunteering nodes running heterogeneous host operating systems (e.g., Windows, macOS, Linux), thus leading to a nested approach to virtualization: a VM is deployed, in the host OS-native (or otherwise preferred) VMM, as a Compute node, in turn able to accommodate the instantiation of either fully virtualized (e.g., Linux KVM-based) VMs as user-requested instances, or even (better) containerized ones, by means of, e.g., the Nova LXC/LXD plugin. It is now clearer that the monitoring system needs to actually track the status and health of the first-level, remotely hosted VMs, as exemplified above.

From a deployment perspective, this means that the IDM would operate on what are abstracted as remote nodes but actually consist of “virtualized” bare metal exposed by the host OS VMM (and a suitable virtual network). We believe Fuel may be a suitable candidate for such automatic deployment and provisioning of the additional compute nodes, as it is already one of the community-blessed frameworks for deploying whole OpenStack instances on bare metal. In particular, whereas Fuel is especially meant for the deployment of an instance from scratch, adding nodes to an existing (up and running) instance is still possible, as long as it was deployed by Fuel itself, so that the deployment recipe is ready and the Fuel monitoring subsystem can track the availability of the underlying resources, i.e., the virtualized bare metal.

Moreover, Fuel comprises a dashboard for visual point-and-click administration, but its core, Nailgun, exposes REST APIs and a CLI, so administrator interaction, whether graphical or through the APIs, is not actually required after a Cloud instance has been set up by means of an initial configuration phase or the upload of a template.

The node-side core of the Subscription Manager may thus consist of an out-of-band, minimal service running on the host OS, i.e., a bootstrapping executable which, upon first execution, starts up the first-level VM and sets up one TAP-based GRE tunnel between the VMM bridge and each endpoint in the Cloud, corresponding to any unique and essential centralized service behind that endpoint, e.g., at least the Nova Compute service in charge of the node, as well as the Neutron controller, if hosted on separate machines. Obviously, this step includes registering both the VM and (at least the control) channel for execution at boot time on subsequent reboots, as well as for always-on operation, via the relevant OS-dependent VMM and networking facilities.
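A minimal sketch of the tunnel-setup step on a Linux host follows, driving iproute2 via subprocess to create an L2 (gretap) tunnel and attach it to the VMM bridge; interface and bridge names and the addresses are placeholders, and Windows/macOS hosts would need their own equivalents.

```python
# Hedged sketch of the tunnel-setup step of the bootstrapping executable on a
# Linux host: create one L2 GRE (gretap) tunnel towards a Cloud endpoint and
# attach it to the VMM bridge. Names and addresses below are placeholders.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

def setup_gre_tunnel(name, local_ip, remote_ip, bridge="virbr0"):
    run("ip", "link", "add", name, "type", "gretap",
        "local", local_ip, "remote", remote_ip)        # TAP-like (L2) GRE tunnel
    run("ip", "link", "set", name, "up")
    run("ip", "link", "set", name, "master", bridge)   # plug it into the VMM bridge

# e.g., one tunnel towards the Nova Compute endpoint in charge of this node
setup_gre_tunnel("gretap-nova", "192.0.2.10", "203.0.113.5")
```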

In particular, with regard to the control channel for command streams and monitoring services, we modeled such a facility as WebSocket-based. The Web Application Messaging Protocol (WAMP), our choice of asynchronous transport and delivery system for message-encapsulated commands, is a sub-protocol of WebSocket, in turn a standard protocol providing a full-duplex TCP communication channel over a single HTTP-based persistent connection. WAMP specifies communication semantics for messages sent over WebSocket, and is natively based on WebSocket (even if it also allows for different transport protocols), providing both publish/subscribe (pub/sub) and remote procedure call (RPC) mechanisms. A WAMP router is responsible for brokering pub/sub messages and routing remote calls, together with their results/errors.

Figure 9 shows the C@H node-side architecture. The Subscription Manager interacts with the Cloud by connecting to a centralized WAMP router through a WebSocket full-duplex channel, sending and receiving data to/from the Cloud and executing commands provided by the users via the Cloud. Such commands are mostly related to the host-level Virtual Machine Monitor subsystem, and in particular about monitoring its state and ensuring the first-level VM is up and running at all times. Moreover, a set of WebSocket tunneling libraries allows the Subscription Manager to also act as a WebSocket reverse tunneling server, connecting to a specific WebSocket server running in the Cloud. This enables internal (host-level) services to be directly accessed by external users through the WebSocket tunnel whose incoming traffic is automatically forwarded to the relevant resident processes, e.g., hypervisor services such as the remote video console, either unmediated or through a specific local proxy service.
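The following sketch outlines the node-side WAMP endpoint using the Autobahn library: it registers an RPC the Cloud can call to query the first-level VM state and periodically publishes monitoring data. The router URL, realm, URIs and the check_vm() helper are placeholders of ours.

```python
# Hedged sketch of the node-side WAMP session (Autobahn, asyncio flavor).
# Router URL, realm, URIs and check_vm() are placeholders/assumptions.
import asyncio
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

def check_vm():
    # Hypothetical hook into the host-level VMM (e.g., via libvirt or the
    # VMM's CLI) verifying that the first-level VM is up and running.
    return {"first_level_vm": "running"}

class NodeSession(ApplicationSession):
    async def onJoin(self, details):
        await self.register(check_vm, "c_at_h.node42.vm_status")  # RPC for the Cloud
        while True:
            self.publish("c_at_h.monitoring", check_vm())         # pub/sub heartbeat
            await asyncio.sleep(30)

if __name__ == "__main__":
    runner = ApplicationRunner(url="ws://wamp.example.org:8080/ws", realm="cloudathome")
    runner.run(NodeSession)
```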

The aforementioned GRE tunnels, needed for communication with centralized services, get instantiated over WebSocket-based reverse tunnels which get activated on demand. Outgoing traffic is redirected to the WebSocket tunnel and eventually reaches the relevant Cloud endpoints.

As soon as the “bare metal” setup is ready, the Subscription Manager should let the IDM (i.e., Nailgun) mark the new node as ready to be deployed, by polling for its presence through the Nailgun APIs as soon as it is reachable, then request the IDM to set its role as a Compute node, and at last trigger deployment of the modifications to the Cloud due to the node just being added.
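A rough sketch of this workflow against Nailgun's REST API is given below; the endpoint paths and payloads are indicative only (they vary across Fuel releases) and should be read as assumptions rather than a reference.

```python
# Rough sketch of the enrolment workflow against Nailgun's REST API.
# Endpoint paths and payloads are ASSUMED/indicative and depend on the Fuel release.
import time
import requests

NAILGUN = "http://fuel-master:8000/api"   # placeholder Fuel master address

def wait_for_node(mac):
    # 1. poll the discovered-node list until the new "bare metal" shows up
    while True:
        for node in requests.get(f"{NAILGUN}/nodes").json():
            if node.get("mac") == mac:
                return node
        time.sleep(10)

def enroll_as_compute(node, cluster_id):
    # 2. assign the node to the cluster with the Compute role (assumed endpoint)
    requests.post(f"{NAILGUN}/clusters/{cluster_id}/assignment/",
                  json=[{"id": node["id"], "roles": ["compute"]}])
    # 3. trigger deployment of the pending changes (assumed endpoint)
    requests.put(f"{NAILGUN}/clusters/{cluster_id}/changes")
```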

Fuel may thus also tackle churn through an automated node “evacuation” process, i.e., migrating VMs when some form of advance warning is conveyed from the volunteer to the Cloud (“the node is going to shut down in 10 min from now”), or otherwise instantiating suitable replacements in place of (abruptly) missing ones. Evacuation is an already available primitive in OpenStack, but it is currently meant to be operated (manually or via scripts) by an administrator. We therefore envision the design of an extension to Nailgun which monitors the centrally available logs and reacts to one or more nodes either signaling imminent churn or outright going missing, the reaction being a full evacuation workflow.

Fig. 9. Cloud@Home node-side architecture.

6 Conclusions

In this paper, a novel Cloud paradigm is proposed, merging volunteer-crowd computing with service-oriented infrastructure. It shifts the traditional Cloud paradigm, enabled by the availability of datacenters and server farms, a step forward into an ecosystem able to connect any device or node contributing to a complex IT infrastructure by sharing its processing, storage, networking and/or sensing resources. To implement the Cloud@Home paradigm thus proposed, a reference architecture of a software stack has been first defined and then mapped on top of the OpenStack framework. In the design of the Cloud@Home stack we chose to start from the de-facto standard for IaaS Cloud frameworks, OpenStack, adapting and customizing the related modules to the volunteer contribution context. This allowed us to demonstrate the feasibility of the Cloud@Home vision, leveraging off-the-shelf, open source components. Further efforts are still required to implement a full-fledged Cloud@Home system, which mainly has to manage node churn: on one hand, improving node reliability by motivating and retaining contributors through specific incentive mechanisms; on the other, implementing mechanisms for dealing with service level agreements and the related quality of service guarantees to meet end user/customer requirements.