
1.1 Introduction

Cloud Computing refers to the delivery of computing as a service, instead of as a traditional product. According to the definition of the National Institute of Standards and Technology (NIST) [1, 2], Cloud Computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. In a Cloud Computing environment, decentralized consumers are provided with flexible and measurable services from the resource pool. The essence of the Cloud concept can be summarized as the capability of providing distributed, fast-responding, on-demand and quantifiable services.

In the past few decades, many novel technologies have been proposed to improve the environment of the manufacturing business, e.g. collaborative manufacturing, virtual manufacturing and agile manufacturing. Amongst these, Cloud Computing provides a promising solution and an even broader interpretation of the concept of “Design anywhere and make anywhere (DAMA)”.

In the first half of this chapter, Cloud Computing technology related to the manufacturing perspective is discussed, followed by a review of the related research work. In the second half, a Cloud-based platform is proposed to achieve an interoperable distributed manufacturing environment.

1.2 Cloud Manufacturing: Cloud Computing with a Manufacturing Perspective

Cloud Computing includes both the software services delivered to the user and the systems and hardware that provide the required services. The former is defined as Software as a Service (SaaS) and the latter as Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Under the broad concept of the Cloud, everything is treated as a Service (XaaS).

1.2.1 From Cloud Computing to Cloud Manufacturing

The Cloud model can be classified into four deployment types, i.e. private, community, public and hybrid Clouds. To support these concepts and architectures, platforms are provided by major software vendors, such as Amazon’s Elastic Compute Cloud (EC2) [3], Google’s App Engine [4], Microsoft’s Azure [5] and Sun’s Cloud [6]. According to recent Forrester research [7], the global Cloud Computing business reached $40.7 billion in 2010, affecting a $948 billion Information and Communications Technology (ICT) market. Even though some obstacles remain, for instance unpredictability and confidentiality concerns, the Cloud Computing market is still predicted to keep growing and developing in the future [8].

As a service-centric solution, Cloud Computing offers users and enterprises benefits such as low cost, flexibility, mobility and automation. In a Cloud Computing environment, an enterprise does not need to purchase software/hardware that is rarely used. Based on the “pay-as-you-go” principle, costs can be reduced, including maintenance and labour expenses. For the service provider, the updating and re-production procedure is simplified as well: by updating the code in the provider’s Cloud, traditional shipping and re-packaging costs can be avoided. The user is provided with more flexible computing methods: companies have more choices for processing a specific task, and more freedom to launch or terminate a service with fewer complications. Users and employees can access applications and information via various devices connected to the network, which gives the organization an environment for distributed collaboration. In an industrial scenario, users can obtain real-time device status, for instance storage information, from networked sensors without visiting the site. For both Cloud users and providers, publishing and updating applications is made easier: by updating the server, every personal or enterprise user receives the latest service without any additional effort.

Nowadays, a manufacturing business may not survive in the competitive market without the support of computer-aided capabilities (CAx) and Information Technology (IT). Cloud technology can improve the environment of product design, manufacturing process management, enterprise resource planning, and manufacturing resource management by providing a globally optimized solution. As mentioned above, a broader DAMA can be described as completing design and manufacturing procedures via Clouds, while the users and business partners are loosely connected by a Cloud-centralized network. It is therefore natural to bring the Cloud concept into the manufacturing business. Figure 1.1 illustrates a typical market-oriented Cloud architecture. It is possible to map manufacturing resources to this Cloud-based architecture and deploy computing capabilities and hardware in a service-centric environment.

Fig. 1.1 Market-oriented Cloud Architecture [9]

To address Cloud technology in the manufacturing context, Xu proposed the definition of Cloud Manufacturing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable manufacturing resources (e.g., manufacturing software tools, manufacturing equipment, and manufacturing capabilities) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [10]. In that work, a Cloud Manufacturing system framework consisting of four layers is proposed: the manufacturing resource layer, the virtual service layer, the global service layer and the application layer (Fig. 1.2).

Fig. 1.2 Layered Architecture of a Cloud Manufacturing System

1.2.2 Manufacturing Resource Requirements

Manufacturing resources can be integrated in the Manufacturing Resource Layer throughout the lifecycle of the product. This includes both physical manufacturing resources, e.g. materials, machines, equipment and devices, and manufacturing capabilities existing in the software/hardware environment, e.g. product documents, computing capability, simulation/analysis tools, etc. At this layer, resources are modelled in a harmonized manner, ready to be delivered and re-used. For product data, portability and longevity have to be guaranteed in the Storage Cloud. Open data formats and open Application Programming Interfaces (APIs) can be used. Among the solutions proposed so far, STEP (the STandard for the Exchange of Product model data [11]) is one of the most successful formats up to the task. STEP provides mechanisms for describing product information throughout the lifecycle. As a neutral data format, STEP also provides different Application Protocols (APs) for specific utilities and applications. For instance, AP203 [12] is one of the most popular product design data formats used for data exchange between different CAD software tools. Furthermore, AP224 [13] provides feature-based definitions for a STEP-compliant CAD/CAPP/CAM system. As an extension of STEP, STEP-NC has been developed to model the machining capability of CNCs. Meanwhile, the capability and variety of heterogeneous resources also need to be described. Research has been carried out to describe manufacturing resources in a scalable and standardized methodology [14–16]. Hence, both the provider and the user are able to compare different solutions and make a reasonable choice.
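As a rough illustration of the neutral format (an illustrative sketch, not an excerpt from any real product model), a STEP Part 21 file declares the governing Application Protocol schema in its header section and lists numbered entity instances in its data section. The AP203-style fragment below uses hypothetical file and product names.

    ISO-10303-21;
    HEADER;
      FILE_DESCRIPTION(('example bracket, design iteration 2'), '2;1');
      FILE_NAME('bracket.stp', '2013-01-01T10:00:00', ('designer'), ('org'), '', '', '');
      FILE_SCHEMA(('CONFIG_CONTROL_DESIGN'));  /* the AP203 schema */
    ENDSEC;
    DATA;
      #10 = PRODUCT('BRACKET', 'bracket', 'example part', (#20));
      #20 = PRODUCT_CONTEXT('', #30, 'mechanical');
      #30 = APPLICATION_CONTEXT('configuration controlled 3D design');
    ENDSEC;
    END-ISO-10303-21;

A downstream tool reading such a file can determine from FILE_SCHEMA which AP governs the instances that follow, without any knowledge of the authoring CAD system.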

1.2.3 Virtual Service Requirements

The key functions of this layer are to (a) identify manufacturing resources, (b) virtualize them, and (c) package them as Cloud Manufacturing services. Compared with a typical Cloud Computing environment, it is much more challenging to realize these functions for a Cloud Manufacturing application.

A number of technologies can be used for identifying (or tagging) manufacturing resources [17–19], e.g. RFID, computational RFID, wireless sensor networks (WSN), the Internet of Things, Cyber-Physical Systems, GPS, sensor data classification, clustering and analysis, and adapter technologies. Some of these have been proposed for connection with Cloud architectures [20, 21].

Virtualization of manufacturing resources requires a high-quality representation of the physical or logical resources. Manufacturing hardware can be virtualized/simulated by Virtual Machine Monitoring or Virtual Machine Managing systems [22]. As an agent-based monitoring tool, MTConnect™ provides a standardized open protocol for data exchange over the network [23–25]. By using MTConnect™ agents, machine tools and other process equipment can communicate with each other. Figure 1.3 shows how MTConnect™ can be utilized to feed data into the event Cloud and ultimately generate the virtualization output.

Fig. 1.3 Complex reasoning engine via MTConnect [23]
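To make the data-feeding idea concrete, the sketch below polls a hypothetical MTConnect™ agent's current endpoint over HTTP and lists the reported observations. It is a minimal illustration only: the agent URL and port are assumptions, not part of the cited work.

    import java.io.InputStream;
    import java.net.URL;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    /** Minimal sketch: poll an MTConnect agent's "current" endpoint and list observations. */
    public class MTConnectPoller {
        public static void main(String[] args) throws Exception {
            // Hypothetical agent address; a real deployment would use the shop-floor agent URL.
            String agentUrl = "http://mtconnect-agent.example.com:5000/current";
            try (InputStream in = new URL(agentUrl).openStream()) {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(in);
                // Every Sample/Event/Condition observation carries a dataItemId attribute.
                NodeList nodes = doc.getElementsByTagName("*");
                for (int i = 0; i < nodes.getLength(); i++) {
                    Element e = (Element) nodes.item(i);
                    if (e.hasAttribute("dataItemId")) {
                        System.out.printf("%s [%s] = %s%n",
                                e.getAttribute("dataItemId"),
                                e.getAttribute("timestamp"),
                                e.getTextContent().trim());
                    }
                }
            }
        }
    }

Polling a stream like this is what allows machine status to be virtualized and pushed into the event Cloud without visiting the shop floor.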

1.2.4 Global Service Requirement

A suite of Cloud applications (e.g. PaaS) is deployed at this layer. Physical devices and machines are connected by Internet-related technologies, e.g. RFID, intelligent sensors and wireless sensor networks. At this layer, the user's requests, especially enterprise requirements, should be evaluated and realized. The service should have the flexibility that allows the user to choose, modify and terminate it. Essential advice and diagnostic reports should be available when needed. Virtual services, for instance computing capability, manufacturing capabilities, partial infrastructure management and configurations, should be delivered to this layer to fulfil customer requests.

For the Cloud provider, supervision and pricing systems should be implemented according to the user's or enterprise's needs. It is necessary to analyse each participant's specific requirements, allocate the required services and deliver an optimum solution to the end user. Quality of Service (QoS) and pricing systems are then needed [26]. The service provider is responsible for maintaining the computing environment, executing the manufacturing tasks and taking care of the outputs in the virtual environment, as well as guaranteeing the quality of both the services and the products.

1.2.5 Application Requirement

The application layer provides terminals to end-users and brokers. The user is able to access the virtual applications/resources via Graphical User Interfaces (GUIs). Different from enterprise-level solutions, an activity-based cost-calculation mechanism can be introduced at this level. The provider can apply different measurement or metering mechanisms while offering different kinds of services. Additionally, for both the Application and the Global Service Layer, a fault-tolerance mechanism is also necessary. When a software/hardware fault appears, this mechanism should keep the service working by advising the user of solutions or alternative options.

In the Application Layer of a Cloud Manufacturing system, confidentiality and security requirements are deemed major considerations [27]. User/company sensitive data can be protected by technologies such as:

  • Data compression/encryption mechanisms

  • Firewall/packet filters for communications amongst layers and users

  • Virtual LANs offering remote and scoped communications.

1.2.6 Research Contributions to Cloud Manufacturing

Cloud Manufacturing is still a new concept. Yet, studies of distributed manufacturing, collaborative manufacturing and virtual enterprise have given rise to new technologies that are capable of supporting Cloud Manufacturing. The rest of this section discusses some of these research works.

1.2.7 Service-Oriented Manufacturing Environment

Brecher et al. proposed a module-based, configurable manufacturing platform based on Service-Oriented Architecture (SOA). In this system, called the open Computer-Based Manufacturing (openCBM) environment [28], software tools along the CAD-CAM-CNC chain are integrated and harmonized (Fig. 1.4). To implement the architecture and integrate inspection tasks into a sequence of machining operations, STEP standards are utilized to preserve the results of manufacturing processes, which are fed back to the process planning stage [29]. The openCBM platform is organized through a service-oriented architecture providing the abstraction and tools to model the information and connect the models. It is much like the Platform as a Service concept delivered to the end user directly at the Application Layer. The service provider is deployed at the Manufacturing Resource Layer, and the service is ready to be organized at the Global Service Layer.

Fig. 1.4 OpenCBM working environment [28]

Li et al. [30] introduced a four-layer application service integration platform that is able to bridge multiple Clouds and internal Information Systems (intra-IS). Interactions across organizational boundaries are supported by a collaboration point, which acts as an interface providing data exchange, command transfer, monitoring and so forth. Implemented in a small metal product manufacturer, the system integrated manufacturing business processes with the help of collaboration agents. This research work examined the possibility of integrating existing manufacturing applications in the Cloud Computing environment.

Wang and Xu proposed a Distributed Interoperable Manufacturing Platform (DIMP) as an integrative environment for existing and future CAD/CAM/CNC applications [31, 32]. It is also based on the SOA concept. In the platform, the requests and tasks from the users are modelled and collected, based on which a series of service applications are organized and delivered back to the user. These service applications can therefore be defined as a “Virtual Service Combination”, which echoes the enterprise requirements at the Global Service Layer. Meanwhile, modularized resources are effective at the Manufacturing Resource Layer of Cloud Manufacturing. Moreover, both STEP and STEP-NC data models are utilized as the central data schema. With “coupling” technologies, STEP and STEP-NC data models can be connected to commercial CAD/CAM software suites, giving the system much-needed portability in the storage domain.

1.2.8 System Providing IaaS

Nassehi et al. [33] proposed a framework to address the incompatibility issues arising in heterogeneous CAx environments. Much like an IaaS structure, software tools are connected to the platform through specific interfaces and act as “Plug-and-Play” applications. The integrated applications work with an Intercommunication Bus, a Manufacturing Data Warehouse and a Manufacturing Knowledge Base, as shown in Fig. 1.5 [34]. In this architecture, mobile agents are utilized to support the communication bus and CAx interfaces, while STEP is utilized as the neutral data format for the different applications.

Fig. 1.5 Universal manufacturing structure

More recently, Mokhtar and Houshmand [35] reported a similar manufacturing platform, combined with axiomatic design theory, to realize interoperability and production optimization. The axiomatic design methodology is used to generate a systematic roadmap for an optimum combination of data exchange via direct (using STEP) or indirect (using bidirectional interfaces) solutions in the CAx environment. This research work provides some insight into how manufacturing resources (devices and software tools) can be described and encapsulated, and how the resources can be utilized and organized at the Global Service Layer.

Laguionie et al. [36, 37] studied a manufacturing system that can integrate a multi-process numerical chain. This system is called the STEP-NC Platform for Advanced and Intelligent Manufacturing (SPAIM). Manufacturing processes are connected in the system through a standardized data exchange carrier, i.e. STEP-NC (Fig. 1.6). The figure shows how the manufacturing processes can be integrated via a neutral data format and how the process-planning-to-manufacturing procedure may be organized at the IaaS level.

Fig. 1.6 STEP-NC multi-process manufacturing concept

1.2.9 Modularized Simulation SaaS

To achieve a run-time configurable integration environment for engineering simulations, van der Velde [38] reported a plug-and-play framework for the construction of modular simulation software. In this framework (Fig. 1.7), the user (at the Application Layer, as in a Cloud Manufacturing system) selects a target of simulation and assigns the performer of the simulation, called a “component”, before running the selected components. These components are effectively software entities (or otherwise known as SaaS, as in Cloud Computing/Manufacturing). They are modularized, self-contained, mobile and pluggable. After the simulation, the output is post-processed through the components. In such an architecture, software modules are detected, loaded and used at run-time, with the framework (i.e. the Global Service Layer) needing no prior knowledge of the type and availability of components, thus providing true plug-and-play capabilities.

Fig. 1.7 Main elements of the plug-and-play framework
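The run-time detection of components with no prior knowledge of their types can be illustrated with Java's standard ServiceLoader mechanism. The SimulationComponent interface and the chaining loop below are a hedged sketch of the idea, not the framework reported in [38].

    import java.util.ServiceLoader;

    /** Contract that every pluggable simulation component implements (illustrative only). */
    interface SimulationComponent {
        String name();
        double run(double input);   // simplified: one scalar in, one scalar out
    }

    public class PlugAndPlayRunner {
        public static void main(String[] args) {
            // Components are discovered at run time from META-INF/services entries on the
            // classpath; the runner needs no compile-time knowledge of their concrete types.
            ServiceLoader<SimulationComponent> components =
                    ServiceLoader.load(SimulationComponent.class);
            double value = 1.0;
            for (SimulationComponent c : components) {
                value = c.run(value);                       // chain the discovered components
                System.out.println(c.name() + " -> " + value);
            }
        }
    }

A new component is added simply by dropping a jar that declares an implementation on the classpath, which mirrors the plug-and-play behaviour described above.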

Lee et al. proposed a Web-based simulation system using a neutral schema. In this approach, a vendor-independent data model is designed to support an interoperable simulation environment coordinating multiple simulation applications. In this four-layer architecture, interfaces are developed to interact with specific simulation applications, which receive simulation models automatically generated via a Web service layer. Thus, commercial simulation tools meshed with interfaces can work collaboratively in a distributed environment.

So far, no system can facilitate a complete Cloud Manufacturing environment. A possible solution may be to bridge existing advanced manufacturing models with a Cloud Computing environment. In the next section, a Cloud-based manufacturing system is proposed based on the previous research work.

1.3 An Interoperable Cloud-Based Manufacturing System

As mentioned above, Cloud Manufacturing provides more opportunities and avenues to business. Moving from production-centric manufacturing to a service-oriented solution, the authors propose an interoperable manufacturing environment called ICMS, which adopts the Cloud Manufacturing concept (Fig. 1.8). In this section, the ICMS structure is discussed in detail, followed by initial case studies.

Fig. 1.8 Interoperable Cloud Manufacturing System

1.3.1 Manufacturing Resource Layer

As discussed above, both physical manufacturing resources (e.g. devices, machines, sensors and materials) and non-physical capabilities (e.g. product documents, data and software) need to be included in the Cloud. In ICMS, the manufacturing capabilities are modularized as “Virtual Function Blocks (VFBs)” [32] and applied to the Manufacturing Resource Layer.

1.3.2 Module Application Cloud

With the help of VFBs, the manufacturing capabilities, including software tools (e.g. CAx applications) and hardware devices, are packaged by mobile agents as self-contained modules. Once defined, these applications are launched to complete the task requested by the user. Allocated by the control centre, these VFBs can be easily controlled by manipulating the incoming and outgoing data/event flows. In short, these individual VFBs work autonomously and can be considered as “black boxes”. If there is any update or modification to the capability itself, the algorithm of the wrapper agent can be adjusted accordingly, keeping the VFB autonomous.

In particular, production machines can be represented and integrated as VFBs as well. Upon a user inquiry into machining task details, the requirement package is sent to the service provider. The provider then responds with an appropriate machining process plan. In this case, the input is the product documents (data-in) and materials (event-in), while the output is the machined product (event-out) and the updated product data.
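A minimal Java sketch of this black-box behaviour is given below; the interface and class names are illustrative assumptions rather than the actual ICMS implementation. The wrapper receives an event-in together with its data, runs the wrapped capability, and raises an event-out carrying the updated product data back to the Supervisory Module.

    /** Sketch of a Virtual Function Block wrapper (hypothetical names, not the ICMS code). */
    interface VirtualFunctionBlock {
        void onEventIn(String event, byte[] dataIn);   // trigger: e.g. material ready, file saved
        boolean isBusy();
    }

    /** A machining VFB: consumes product data and raw material, emits the machined result. */
    class MachiningVFB implements VirtualFunctionBlock {
        private final java.util.function.BiConsumer<String, byte[]> eventOut;
        private boolean busy;

        MachiningVFB(java.util.function.BiConsumer<String, byte[]> eventOut) {
            this.eventOut = eventOut;                  // callback to the Supervisory Module
        }

        @Override public void onEventIn(String event, byte[] productData) {
            busy = true;
            byte[] updatedProductData = machine(productData);      // wrapped capability runs here
            busy = false;
            eventOut.accept("PART_FINISHED", updatedProductData);  // event-out + data-out
        }

        @Override public boolean isBusy() { return busy; }

        private byte[] machine(byte[] in) { return in; }   // placeholder for the real process
    }

Because the module is driven purely by its event and data flows, the supervising layer can launch, monitor and shut it down without knowing anything about the wrapped application.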

1.3.3 Storage Cloud

In the Storage Cloud, all the product documents, standards, intellectual properties, etc. are stored in remote databases. To provide an advanced mechanism for feeding the right amount of data to a specific service domain, a data-exchange environment is proposed to support the Storage Cloud [39]. Besides the product information, the manufacturing resource data are kept in the database as well. A quantifiable resource description model is utilized to describe the functionality type, so that both the provider and the user are able to choose the best module to finish a specific process. In addition, the capability and availability of each machine/device are kept in the database. In this way, the service provider is able to maintain an agile service in response to the user's request, with trustworthy facilities and solutions.

To model the products and the related manufacturing resources, STEP/STEP-NC is chosen as the main data format. As mentioned above, STEP/STEP-NC provides specific APs for specific applications, activities or environments, and these APs are built on the same Integrated Resources. Thus, the portability and longevity of the data are guaranteed. In the Storage Cloud, a backup database is also in place, saving dynamic data in a timely manner to guard against data loss and to serve urgent requests during high traffic.

To recap, the Manufacturing Resource Layer provides both physical and intellectual manufacturing capabilities. Both software packages and hardware devices are meshed as autonomous modules and implemented in the Cloud. The use of standardized product/resource formats maximizes the portability of manufacturing documents and guarantees a smooth data flow. Additionally, a central server is placed at this layer providing the computing capability for the Supervisory Module on the Virtual Service Layer.

1.3.4 Virtual Service Layer

As the core of the Virtual Service Layer, the Supervisory Module plays the role of coordinator of the service procedures. Consisting of an Interface Agent, a Broker Agent, a Supervision Agent and a Firewall, the Supervisory Module bridges service providers and users (Fig. 1.9).

Fig. 1.9 Virtual service layer

The Interface Agent provides a Human Machine Interface (HMI). A service-request interface is developed to handle the service information required by the user. The service description is organized in a standard document, which is compliant with the format in the Module Database, and then sent to the Broker Agent.

Based on this service document, the Broker Agent analyses the user's demand, which is mapped to the module description data stored in the Resource Database. If there is no specific module allocation from the user, the Broker Agent advises an optimal option to fulfil the user's needs and generates a complete document containing the service/application details. The Broker Agent's procedure can be understood as “Request, Find and Provide”. Before the service document is sent back to the user, the capability and availability of the resources are verified by the agent. If negotiation is needed (e.g. waiting for machine availability or modifying the plan), the Broker Agent feeds the result back to the user and asks for an alternative solution.
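The “Request, Find and Provide” step can be sketched as a simple matching routine over the module descriptions held in the Resource Database; the class and field names below are assumptions made for illustration.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    /** Hedged sketch of the Broker Agent's "Request, Find and Provide" step;
     *  class and field names are illustrative, not the ICMS implementation. */
    class BrokerAgentSketch {

        static class ModuleDescription {
            final String id;
            final String serviceType;   // e.g. "STEP-NC simulation", "5-axis milling"
            final boolean available;
            final double cost;
            ModuleDescription(String id, String serviceType, boolean available, double cost) {
                this.id = id; this.serviceType = serviceType;
                this.available = available; this.cost = cost;
            }
        }

        /** Map the user's demand to module descriptions and pick the cheapest available match. */
        Optional<ModuleDescription> findAndProvide(String requestedServiceType,
                                                   List<ModuleDescription> resourceDatabase) {
            return resourceDatabase.stream()
                    .filter(m -> m.serviceType.equalsIgnoreCase(requestedServiceType))
                    .filter(m -> m.available)                         // verify availability
                    .min(Comparator.comparingDouble((ModuleDescription m) -> m.cost));
        }
    }

In the real system the selection would also involve negotiation, scheduling and pricing; the sketch only shows the capability and availability check that precedes the response to the user.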

After the service document is approved by the user, the Supervision Agent organizes the related modules in the warehouse and merges them as a “Virtual Service Combination”. Thanks to the VFB concept, the software/hardware tools are easily controlled and monitored. The service is delivered to the user based on the task list defined in the service document. The Supervision Agent is able to easily launch or shut down a service by controlling the event flow of the VFBs. After a user finishes a task on one module, the Supervision Agent detects the event-out and delivers the module that the user may need next. Besides its role as the central service controller, the Supervision Agent also contains a pricing mechanism. As mentioned previously, different algorithms can be deployed based on the type of service requested. For instance, the “pay-as-you-go” principle can be applied for the Cloud storage service based on the amount of data the user/enterprise archives, while credit authorization/advance payment can be requested from the user for reasons such as the preparation of precious materials.

As one of the major considerations of Cloud technology, the Firewall Module is used for the security of users, providers and ICMS itself. The functionality of the Firewall Module includes identity, protection and privacy.

  • Identity management is needed for both users and providers. By setting up different authorization levels, each participant can access different sets of data documented in the Storage Cloud, as well as different applications in the warehouse. For instance, the service provider is able to modify and update the software/device configurations implemented in the Cloud, while the end-user can only work with the applications in his or her working domain. For an Enterprise customer, the company manages a privilege regime, such as being able to partially set up the infrastructure at the Global Service Layer.

  • For protection, the firewall takes care of both data security and hardware protection. The Firewall agent guarantees that data and software code do not leak, and that hardware devices can only be accessed by people with an approved identity. For the manufacturing industry, remote-control scenarios need to be protected properly. Before a remote service, for instance web-based machining, is launched, it has to be confirmed both by the identity management module of the Firewall and by an availability message from the on-site provider.

  • Privacy refers to both critical information and operation records. The ICMS firewall protects confidential data (e.g. credit card information, contracts and personal details) from unauthorized access. Meanwhile, the activity of users/providers working with the system cannot be collected or utilized by any unauthorized third party.

Through the Firewall, the Virtual Service Combination is delivered to the Interface Agent, where the user is able to manipulate the application. For different types of users, ICMS provides the Global Service Layer for Enterprise customers and the Application Layer for end users.

1.3.5 Global Service Layer

For the Enterprise solution, the virtual service is presented through a drag-and-drop interface. The Enterprise administrator is capable of organizing services and processes via graphical tools. The results defined by graphical flow charts are mapped to the service documents and delivered to the control kernel. In this way, an enterprise user with the appropriate access rights can request services easily. Any modification applied to the service plan is reflected in the service document before it is fed back to the Virtual Service Layer.

At this layer, workflow and process control logic is in place to meet the enterprise's management needs. The enterprise administrator has the authority to launch, change or terminate a service or procedure, which gives the customer full flexibility. Reporting, recording and diagnostic mechanisms provide records and analysis of service operations. For the manufacturing industry, collaborative design/process planning is commonly required. In ICMS, the collaboration mechanism is supported by standardized process/resource documents together with the data collection/archiving method mentioned earlier. After being processed by the applications, the latest data are kept in the Storage Cloud, with previous versions saved in the Backup Database. Data traceability and reliability are therefore guaranteed. APIs can be provided at this layer too, so that users can connect ICMS applications to their existing applications.

1.3.6 Application Layer

At this layer, the interface between the user and ICMS is provided. It offers both graphical services and a virtual working environment. First, the user is identified by the Firewall. After the identity is confirmed, the user is asked what kind of service he or she requires. Based on the user's description, the information is kept in the initial service document and sent to the Broker Agent at the Virtual Service Layer. The Broker Agent scans the service document and maps the descriptions to the available module functions. Once the module(s) fulfilling the inquiry have been chosen by the Broker Agent, an updated service document is sent back to the user for confirmation. Once approved by the user, the Supervision Agent packages the modules as a “Virtual Service Combination” and provides the service in the virtual manufacturing environment. Controlled by the Firewall, the user can only retrieve the resources he/she is authorized to use, without disturbing others' working domains. For specific physical devices such as machines and robots, an exclusive agreement applies, which means no other user is able to access the device or make changes while it is occupied.

To present the service information of VFBs, data models are needed to describe such information. Even though ISO 10303-1746 [40] provides data structures describing a software information product, more information is required. Figure 1.10 shows a VFB data model in EXPRESS-G [41].

Fig. 1.10 VFB module

The top level of a VFB has an its_service entity to describe the service that the VFB can offer at the functionality level. Note that one single VFB could have multiple specifications or software/hardware applications integrated across the system. Thus, both multi-functionality and reusability are guaranteed in ICMS. In this figure, a software service is chosen as an example. The detailed service information is defined through Software_view_definition and its attributes its_id, its_provider, description, defined_version, initial_context, additional_context, its_time_stamp, name, additional_characterization, its_event_flow and its_data_flow.

Entity Id stores identity information for every single service so that services can be tagged easily. Working with Software_version, Id enables full traceability of the service and version history of a VFB.

Entity Provider describes the provider of a service. Via this entity, the detailed specification of the provider can easily be found and utilized. The system administrator can set up authority rules accordingly. Meanwhile, the user can obtain more information about potential business partners during a survey.

Entity Security_setup defines the data representing the security level and authorization information, which is processed by the Firewall. After the user is identified, the firewall manages the authorization process by comparing the VFB security configuration with the user's privilege domain.

Entity View_definition_context stores the technical definition of the service a VFB provides. Through the further definitions of application_domain and life_cycle_stage, the scope of a software tool can be defined in detail. Multiple definitions are also available in this entity, realized by additional_context; supplementary descriptions of the complex functionality of integrated software can thus be modelled without information loss.

Entity Event describes the attributes event_in and event_out, which support the event-driven concept of a VFB. Entity Data_flow defines the input and output data of each service. A copy of the input data is saved and linked to attribute data_in after the service is initialized, while an output copy is saved after the service terminates. Entities Event_flow and Data_flow are defined in a structure similar to the one used in the Service Document, where the event-trigger input/output information is stored. With this shared definition, communications between the Supervisory Module and the VFB warehouse are streamlined. Thanks to these interfaces, the Broker Agent can access the Resource Database directly and make an optimal selection with the information gathered there.

As mentioned before, ICMS provides mechanisms for feeding the right amount of data to a specific working domain. Entity Data_domain defines and records both the relationships between entities within the data subset that a service is working with, and the connections around that subset. Based on the service description, the information requested by the user is tagged and extracted according to the type of connections. A more detailed explanation of this mechanism is given in [39].

Entity Time_stamp records the times when the service is commenced and terminated. With the attribute date_and_time, which is defined in ISO 10303-41 [42], the history of services and projects can be traced from the chronological data.

Attribute description offers a more detailed software profile in a flexible form. The provider is able to add a more detailed introduction/description of the software application, while the user is able to record experience, comments and feedback after terminating the service.
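Taken together, the entities described above can be written in EXPRESS text form. The following schema is a sketch reconstructed from the prose description, not the exact model of Fig. 1.10; for brevity, several entities (Id, Provider, Security_setup, Data_domain, Time_stamp) are reduced to STRING attributes.

    SCHEMA vfb_module_sketch;

    ENTITY vfb;
      its_service : software_view_definition;   (* software service chosen as an example *)
    END_ENTITY;

    ENTITY software_view_definition;
      its_id                      : STRING;
      its_provider                : STRING;
      description                 : STRING;
      defined_version             : STRING;
      initial_context             : view_definition_context;
      additional_context          : LIST [0:?] OF view_definition_context;
      its_time_stamp              : STRING;
      name                        : STRING;
      additional_characterization : OPTIONAL STRING;
      its_event_flow              : event_flow;
      its_data_flow               : data_flow;
    END_ENTITY;

    ENTITY view_definition_context;
      application_domain : STRING;
      life_cycle_stage   : STRING;
    END_ENTITY;

    ENTITY event_flow;
      event_in  : STRING;
      event_out : STRING;
    END_ENTITY;

    ENTITY data_flow;
      data_in  : STRING;
      data_out : STRING;
    END_ENTITY;

    END_SCHEMA;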

As mentioned above, the metering and fee-calculation mechanism works directly with activities at this layer. Consumption of, for example, electricity, material, gas and storage space (both physical and data storage) is monitored and metered by the Supervisory Module. This can be understood as a SaaS structure deployed at this layer. The user identification procedure takes place once the user connects to the system.

1.4 Case Study

To evaluate the VFB and its service integration philosophy, a case study has been carried out. As illustrated in Fig. 1.11, software tools and CNC machines are packaged and integrated as a Virtual Service Combination. Based on the Service Document delivered by the Supervisory Module, the STEP-NC simulator, Creo and the CNC application are launched for the user one after another to fulfil his/her request. Using Java agent programs, VFBs wrap these applications by controlling the event/data flow. For example, when the user finishes with the STEP-NC simulator and saves the file (Data-Out) in the database, this action is defined as the “Event-Out” of this VFB and triggers the VFB's algorithm. The wrapper agent inside the VFB shuts down the application and feeds the service progress back to the Supervision Module. After the Supervision Module sends back the next command line based on the service document, the next VFB, which is the Creo module, is launched, and the agents keep listening to the user activity until the next trigger event comes along. In this way, the applications are easily controlled and integrated in the Cloud.

Fig. 1.11 VFB service integration
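The trigger-detection step of the wrapper agent can be sketched with the standard Java WatchService API: the agent watches the shared output directory, treats the appearance of the saved file as the Event-Out and reports back to the Supervision Module. The directory path, file extension and reporting call are illustrative assumptions.

    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;

    /** Sketch of a wrapper agent waiting for the user's "save" action to raise Event-Out. */
    public class WrapperAgentSketch {
        public static void main(String[] args) throws Exception {
            Path outputDir = Paths.get("/cloud/storage/jobs/job-42");   // hypothetical path
            WatchService watcher = FileSystems.getDefault().newWatchService();
            outputDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                                        StandardWatchEventKinds.ENTRY_MODIFY);

            WatchKey key = watcher.take();               // block until the user saves a file
            for (WatchEvent<?> event : key.pollEvents()) {
                Path saved = (Path) event.context();
                if (saved.toString().endsWith(".stpnc")) {           // assumed STEP-NC output
                    // Event-Out: report progress so the Supervision Module can launch the next VFB.
                    System.out.println("EVENT_OUT: STEP-NC file saved -> " + saved);
                    // shutDownWrappedApplication();  // hypothetical helper, omitted here
                }
            }
            key.reset();
            watcher.close();
        }
    }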

When a VFB is initially defined in the application Cloud, a description document is generated in the Resource Database. By using a data structure compliant with the STEP standard, data portability between VFB resources and service documents is realized. Thus, the Supervision Module is capable of searching for and choosing appropriate VFB solutions by mapping the user's request to the VFB database.

In particular, one VFB can be employed as a pre-/post-processor within another VFB. Referring to the IEC standard [43], when a VFB algorithm and the control of its execution are expressed entirely in terms of interconnected VFBs, the structure is called a composite VFB. In this case study, it is assumed that the CNC machine cannot natively accept the STEP-NC data format. Hence, a STEP-NC/CNC interpreter is allocated to translate the STEP-NC file into machine command code before it is sent to the CNC application. This shows that the integration of applications using VFBs guarantees the mobility and re-usability of manufacturing resources.

In addition, a Web-based communication structure is built for ICMS (Fig. 1.12). A socket-server connection bridges the user's device and the Central Server. A user with access rights is able to extract documents from the database remotely over the Internet and archive them afterwards. The transmission speed is affected by the network traffic; it reached 2 MB/s during the implementation test. Thanks to the portability of the Java language, the system can easily be implemented in different operating environments.

Fig. 1.12 Data transmission and web-based messenger

Moreover, a real-time text-based messaging system has also been developed. For communication between a user and a provider, or for collaborative work amongst participants, the system provides a messenger module enabling users to send simplified data or messages over the Internet. Users can establish point-to-point real-time communication through the interface with the ICMS infrastructure.
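A minimal client-side sketch of the socket bridge is shown below. The host name, port and the one-line request/response exchange are assumptions made for illustration; the actual ICMS wire format is not given in this chapter.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    /** Minimal client-side sketch of the socket-server bridge to the Central Server. */
    public class IcmsSocketClientSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical server address and a simple line-based request/response exchange.
            try (Socket socket = new Socket("icms-central-server.example.com", 9000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println("GET_DOCUMENT bracket.stpnc");   // request a document from storage
                String reply = in.readLine();                // first line of the server's reply
                System.out.println("Server replied: " + reply);
            }
        }
    }

The same connection style can carry the messenger traffic, since both document requests and text messages reduce to short line-based exchanges over TCP.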

1.5 Conclusions

Cloud Computing brings new opportunities and possibilities to the manufacturing industry. In the competitive market environment, enterprises can reap a multitude of benefits from implementing the Cloud concept in their businesses. In this chapter, the authors have reviewed Cloud Computing technologies and the related manufacturing research. By using the Cloud, manufacturing resource management, enterprise management and information management can be greatly enhanced.

In the second half of this chapter, the authors proposed a Cloud-based system, namely ICMS. In this service-centric system, business processes are modelled by a standardized service document, which is mapped to manufacturing resources modularized in the application Cloud. Thus, customized needs are fulfilled by a service-centric approach, with the help of the interaction between service supervision and resource integration.

The system provides an interoperable environment integrating not only software tools but also physical manufacturing devices in the form of VFBs. Thus, manufacturing resources are merged into Virtual Service Combinations according to the user's original request. The reusability of VFBs can help both users and providers realize a task solution efficiently. Moreover, vendor-independent data models are deployed to improve the accessibility of the manufacturing resources and product/project documents. Compared with the native file formats of applications, the STEP/STEP-NC formats provide better portability, longevity and visibility. From a Cloud perspective, these data models enable an information-exchange environment without additional time- and cost-consuming tasks.

Cloud technologies provide the opportunity to re-shape manufacturing businesses. They provide more flexibility and interactivity between users and providers. ICMS is suitable for such a globalized and decentralized environment. Standardized resource integration and service modelling help implement existing and future applications in the Cloud Manufacturing environment.