
1 Introduction

Connected intelligence [1] is an essential 6G vision and critical for robot continual learning [2]. An effective solution for connecting intelligence must be scalable across networks and, at the same time, personalized to vertical applications. These contradictory requirements are among the main barriers to grounding the 6G vision in use cases within robotics and other verticals. This paper presents realistic scenarios for deploying robot continual learning in real-world environments under connected intelligence. It demonstrates the potential of realizing the 6G vision under a new network softwarization framework called the 5G-ERA middleware [3]. The use cases of the new network applications enable a native integration between AI and the network, addressing the scalability and transparency issues of both robotics and 6G networking at the same time.

2 Background

2.1 Network Applications

Network Applications are defined as “a set of services that provide specific functionalities to the verticals and their associated use cases” [4]. They form a separate middleware layer of common services that simplifies the implementation and deployment of autonomous robots on a large scale [5]. With a middleware embracing the service-based architecture and cloud-native design paradigms, Network Applications such as the 5G-ERA Middleware [6] can be used to take current state-of-the-art robots, deployed mainly for local applications, towards large-scale robot deployment under a Platform as a Service paradigm.

2.2 Robot Continual Learning

Learning is an essential capability of autonomous robots and the driving force behind the deployment of 6G connected intelligence. In the past decade, the success of robot learning has been predominantly within a data-driven paradigm. A robot’s continual learning is considered to be the way to increase its level of autonomy and push the limits of its cognition [7]. Learning at the post-deployment stage is crucial for deployable robotics, although it is hard to implement due to the limited human supervision available, also known as the small sample problem. Hybrid intelligence [8] suggested sharing symbolic knowledge and data-driven knowledge among robots and using the shared intelligence as “additional” supervision to compensate for the small samples. The consensus in the community is that symbolic knowledge is more suitable for problems that need abstract reasoning, while data-driven knowledge is better for problems that need interaction with the world or extraction of patterns from massive, disordered data [9]. Together, the pair may cover the continual learning problem, although attempts at their integration have so far been largely fruitless.

To realize the 6G vision, network applications and continual learning are combined in this paper to build up potential use cases of robot applications under connected intelligence.

3 Network Applications Under the 5G-ERA Middleware

5G-ERA is an ecosystem and development environment for robot developers to build large-scale distributed robotics and, at the same time, for network service providers to deliver their services more efficiently [3]. The 5G-ERA Middleware allows robots from different vertical sectors to use 5G and 6G digital skills to enhance their autonomy. The Middleware is a virtual platform between vertical applications managed by the Robot Operating System (ROS) [10] and 5G infrastructure managed by orchestrators such as OSM [11]. It realizes an intent-based network using cloud-native design. The 5G-ERA Middleware can be instantiated in the core network, either on Edge machines or in the cloud. The implementation allows robots to request the instantiation of the cloud-native resources that will support the execution of a task. The main components of the Middleware are:

  • Gateway – It redirects traffic across the Middleware system, rerouting requests to the microservices within it. It also handles the authentication and authorization process.

  • Action Planner – It integrates the semantic knowledge of the vertical into resource planning and knowledge recommendation. It is also part of the vertical-level life cycle management implemented by the Middleware.

  • Resource Planner – It is responsible for testbed-level resource placement.

  • Orchestrator – It orchestrates the deployment of the distributed vertical applications and is responsible for the vertical-level lifecycle management of the deployed services.

  • Redis Interface – It is the backend for synchronization, allowing users to retrieve, insert, and update data from/into the Redis server.

The 5G-ERA Middleware is developed under the Platform as a Service (PaaS) paradigm to integrate robotic applications and the networking facility. It connects robots to cloud-native systems to maximize the capabilities of 5G networks by simplifying the design and deployment processes for dynamically offloading critical tasks from the robots to external Clouds or Edges. At the same time, it recommends potential knowledge and resources to robots for connected intelligence. The Middleware provides a flexible and intelligent selection of the needed actions and resources to integrate with the semantic knowledge.
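As a concrete illustration, the kind of intent-based request a robot might hand to the Middleware for task offloading can be sketched as below. All field names, values, and the function itself are assumptions made for illustration; they do not reflect the actual 5G-ERA Middleware API.

```python
# Hypothetical sketch of an intent-based task request to the 5G-ERA Middleware
# Gateway. Every field name here is an illustrative assumption, not the real API.
import json

def build_task_request(robot_id, task_name, priority, needs_gpu):
    """Assemble a task-deployment intent for the Middleware to plan against."""
    return {
        "robot_id": robot_id,
        "task": task_name,             # semantic task name, e.g. a ROS action
        "intent": {
            "priority": priority,      # hint for the Resource Planner
            "requires_gpu": needs_gpu, # hint for Edge vs. Cloud placement
        },
    }

request = build_task_request("robot-42", "open_cap", "high", True)
payload = json.dumps(request)          # body that would be POSTed to the Gateway
```

In this picture, the Gateway would authenticate such a request and pass the intent on to the Action and Resource Planners for placement.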

4 Use Case Scenarios

The use case presented in the paper is focused on the deployment of robots for continual learning under connected intelligence. Robot deployment includes all the steps, processes, and activities required to adapt a robot to its intended real-world environment. In a lab environment, robots can learn and adapt but require detailed support and guidance. Such a constructed environment can hardly be scaled to deployment under real-world settings due to limited supervision and unexpected situations. The following scenarios showcase use cases of network applications under distributed real-world settings, in which robots tackle unexpected situations, adapt continuously, and are verified in the real world by additional AI services shared over the networks.

4.1 Scenario 1: Knowledge is not Known to the Robot but Already Available in the Cloud

The use case under this scenario is for Network Applications to address an unexpected situation in robot manipulation by making pro-active analogies. By default, knowledge is not synchronised (connected) between the robots and the cloud. In this specific scenario, skills such as opening caps have already been defined in the cloud (with actions such as Pull, Flip, and Twist), although some of them (Flip and Twist) were designed by third parties and are hence not yet known to the robot. At the same time, the robot does not have a concept of “twisting” and therefore cannot retrieve the knowledge without additional intervention.

To link the fragmented knowledge, the use cases of the network applications are defined step by step:

  1)

    Samples (pictures and videos) from the unexpected situation will be collected by the robot and uploaded to the Edge for help. Meanwhile, the robot will try to explain the corresponding circumstances as far as possible (e.g., this is a bottle-opening task in a kitchen; pulling was tried with no success). This contextual information will be transferred together with the raw data samples to the Edge.

  2)

    A 5G-ERA middleware will have already been deployed within the Edge to personalise the connected intelligence for the current robot settings. It can make analogies based on the situation.

  3)

    The information broker inside the task planner will analyse the context and make recommendations based on a multi-objective ranking system. A group of AI “experts”, trained simultaneously within the Edge on the current robot settings, will track the spatial and temporal relationships of the tasks. They are designed to improve the precision of the recommendations. References to potentially useful knowledge will be retrieved from the cloud by the broker.

  4)

    Recommendations raised by the task planner will be confirmed either semi-autonomously or fully autonomously by the deployed robot. This is similar to videos recommended by YouTube: they may or may not be relevant to the ongoing task. The mechanism is used to filter and prioritise relevant knowledge sets from the recommendations.

    Fig. 1.

    Samples of the recommended meta dataset. Top: Meta-sample-set-A on flip-open caps; Bottom: Meta-sample-set-B on twist-open caps

    • In semi-autonomous mode, the recommended knowledge will be visualized and confirmed by a human operator under enhanced/mixed reality. As shown in Fig. 1, Meta-sample-set-A, which contains procedural knowledge for flipping open caps, is rejected; Meta-sample-set-B, on twist-open caps, is accepted. Alternatively, the recommended knowledge can be meta-tested against the samples submitted by the robot as a compatibility check. Similar to video playback, this is a process of experience replay. Users can cancel the replay, or confirm or reject it, at any time. The decision will be remembered by the AI “experts” to further improve the broker in the 5G-ERA middleware.

    • In fully autonomous mode, the recommended knowledge will be meta-tested for similarity, as above. It must be noted that the experience replay need not be accurate. In this example, both the flip-open and twist-open models could be considered compatible with the live samples after the replay.

  5)

    By now, pre-designed meta-models with some specific knowledge have been selected for the ongoing task. The task is thus reduced to a standard few-shot learning problem. The design patterns presented in Sect. 5 will be applied to integrate local data and cloud meta-models into a solution for the targeted problem.

It should be noted that, even after the verification, the proposed solutions can still be wrong. Trial-and-error must be carried out further by robots in the field to reject unrealistic recommendations.
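The multi-objective ranking performed by the information broker in step 3 can be pictured as a weighted scoring over candidate knowledge sets, as in the sketch below. The objectives (context, spatial, temporal), the weights, and the candidate scores are all invented for illustration and do not reflect the actual 5G-ERA broker.

```python
# Illustrative sketch of a multi-objective ranking over candidate knowledge
# sets. Criteria names, weights, and scores are assumptions, not the 5G-ERA
# implementation; scores are taken to lie in [0, 1].

def rank_candidates(candidates, weights):
    """Rank knowledge sets by a weighted sum over per-objective scores."""
    def total(c):
        return sum(weights[k] * c["scores"][k] for k in weights)
    return sorted(candidates, key=total, reverse=True)

candidates = [
    {"name": "pull-open",  "scores": {"context": 0.9, "spatial": 0.2, "temporal": 0.3}},
    {"name": "flip-open",  "scores": {"context": 0.6, "spatial": 0.7, "temporal": 0.6}},
    {"name": "twist-open", "scores": {"context": 0.8, "spatial": 0.8, "temporal": 0.9}},
]
weights = {"context": 0.5, "spatial": 0.25, "temporal": 0.25}
ranked = rank_candidates(candidates, weights)   # best candidate comes first
```

The top-ranked entries would then go to the confirmation step (semi-autonomous or fully autonomous) described above.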

Within this scenario, knowledge is distributed between the robot and the cloud. The intelligence of the robot and the cloud is connected online by a meta-based computational approach. The scenario is a direct example of knowledge sharing through connected intelligence. In a centralised knowledge system with a predefined ontology, adding new but unstructured knowledge is very hard: any attempt to adapt the ontology to the new knowledge requires strong generalisation. The scalability of a centralised knowledge architecture is therefore limited. The distributed approach illustrated in this scenario uses analogies rather than induction or deduction, so there is little need for an ontology or for generalisation. Meta-models are used as bridges for synchronising expected and unexpected situations.

4.2 Scenario 2: Neither the Robot nor the Cloud Has a Full Knowledge

Under this scenario, the skill for the specific robot task has been defined neither locally on the robot nor globally on the cloud. It needs to be programmed live through learning-by-demonstration.

In this case, there is either no recommendation from the AI “experts”, or all recommendations were rejected. Since the expected skill is not available on the cloud, some kind of meta-based learning-by-demonstration needs to be triggered. Compared to the traditional learning-by-doing approach, meta-based demonstration focuses on a structured way of reusing prior knowledge for fast learning. Meta-models will be dynamically generated to align the unexpected scenes with known skills. The process of the scenario is summarized as follows:

  1)

    AI “experts” on the Edge will further analyse the context and provide recommendations of generic knowledge such as “Grasp”, “Navigate” and “Release”. That knowledge was established beforehand as generic skills for novel situations. To this end, model selection from the pre-learned meta-learning models is first realized by measuring the similarity between the new task scene and the previous scenes used for meta-training. Thanks to the meta-learning representation, the similarities between different scenes can be obtained from scene images or frames. Specifically, for the first trial several scene images are collected and input to the Edge, and a few (e.g., three) models with the highest similarity scores are selected, with which the robot tries to complete the given task, simultaneously yielding scene frames. Subsequently, an optimal model can be obtained based on these frames for the next trial.

  2)

    Following the same routine as in Scenario 1, the robot will learn the new manipulation task from limited demonstrations through prior knowledge, verified by trial-and-error. More specifically, if the optimal model selected in the last step cannot enable the robot to complete the task successfully, the model can be improved by adding the new scene data to the meta-training dataset through a trial-and-error process. In this way, the robot becomes able to handle the new task situations.

  3)

    Finally, both the robot and the cloud will be synchronized with the new knowledge of “twisting” for future use by other robots. The meta-based knowledge representation enables the new skill to be reused as part of the connected intelligence.

Scenario 2 is illustrated in Fig. 2. Meta knowledge will be retrieved by the 5G-ERA middleware for fast learning and knowledge synchronization. As part of the learning protocol defined in the lab, declarative knowledge for manipulation such as “Grasp”, “Navigate” and “Release” will be dynamically recognized by meta-learning for knowledge synchronization. Hence, reuse of the models is not limited to the original designer but extends to third-party manipulation models.
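The similarity-driven model selection in step 1 can be sketched as follows. Plain cosine similarity over toy scene vectors stands in for the meta-learned representation, and the model names and embeddings are invented for illustration.

```python
# Minimal sketch of similarity-based model selection: scene embeddings (toy
# vectors here; produced by the meta-learned representation in practice) are
# compared by cosine similarity and the top-k meta-models are picked for the
# first trial. All names and numbers are illustrative assumptions.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_models(new_scene, model_scenes, k=3):
    """Return the k meta-model names whose training scenes are most similar."""
    scored = [(name, cosine(new_scene, emb)) for name, emb in model_scenes.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [name for name, _ in scored[:k]]

model_scenes = {
    "grasp":    [0.9, 0.1, 0.0],
    "navigate": [0.0, 0.9, 0.1],
    "release":  [0.8, 0.2, 0.1],
    "pour":     [0.1, 0.1, 0.9],
}
top = select_models([1.0, 0.1, 0.05], model_scenes, k=3)  # e.g. three candidates
```

The robot would then attempt the task with each selected model, and the frames gathered during those trials would narrow the choice to a single optimal model for the next trial.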

Fig. 2.

Left: a learning-by-demonstration iteration. Right: meta-samples for PR2 “Grasp”; a grasp-meta-model can be generated accordingly

Under this scenario, new knowledge is generated by restructuring and reorganising existing knowledge at the declarative level. The transparency enabled by meta-learning helps skills to be aligned between known and unknown situations.

5 Design Patterns for Robot Continual Learning Under the Connected Intelligence

To realize the proposed scenarios as use cases, design patterns for meta-learning are implemented to align live data captured by the robot with the pre-defined declarative knowledge. From a knowledge-sharing perspective, meta-learning is used as a synchronization protocol between new observations and old experience. From the AI perspective, analogies are generated computationally. The processes defining Network Applications for continual learning under connected intelligence are reflected by the following design patterns:

Knowledge Update Pattern (Supervised)

Meta-model A will be updated with some “small samples” from domain B via meta-testing. A revised knowledge meta-model AB is generated.

This is a common approach in few-shot learning (Fig. 3).

Fig. 3.

Knowledge update pattern
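A minimal toy instance of this pattern: meta-model A is reduced to class prototypes (mean embeddings), and a few labelled samples from domain B shift or extend those prototypes, yielding the revised model AB. The blending factor, vectors, and class names are illustrative assumptions, not the paper's actual meta-models.

```python
# Toy sketch of the supervised knowledge update pattern using class prototypes
# in place of a full meta-model. All vectors and labels are made-up examples.

def update_prototypes(protos_a, small_samples_b, alpha=0.5):
    """Blend domain-A prototypes with means of the domain-B few-shot samples."""
    protos_ab = dict(protos_a)
    for label, vecs in small_samples_b.items():
        mean_b = [sum(col) / len(vecs) for col in zip(*vecs)]
        if label in protos_ab:
            old = protos_ab[label]
            protos_ab[label] = [(1 - alpha) * a + alpha * b
                                for a, b in zip(old, mean_b)]
        else:
            protos_ab[label] = mean_b        # a brand-new class from domain B
    return protos_ab

def classify(x, protos):
    """Nearest-prototype classification (squared Euclidean distance)."""
    return min(protos, key=lambda c: sum((a - b) ** 2 for a, b in zip(protos[c], x)))

protos_a = {"pull": [0.0, 0.0], "flip": [1.0, 0.0]}   # meta-model A
small_b = {"twist": [[0.0, 1.0], [0.2, 0.8]]}         # few-shot samples from B
protos_ab = update_prototypes(protos_a, small_b)      # revised model AB
label = classify([0.1, 0.9], protos_ab)
```

Here the revised model AB covers the new "twist" class while retaining the original classes from A, which is the essence of the pattern.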

Experience Replay Pattern

Experience of A will be replayed to recognize some unlabelled data from domain B. Knowledge from domain A and domain B is shared within the meta-model (AB) (Figs. 4 and 5).

Fig. 4.

Experience replay pattern
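The replay step can be sketched by running a domain-A classifier over unlabelled domain-B samples and absorbing the ones it recognises with a small enough distance. The distance threshold and the prototype classifier are illustrative assumptions, not the actual replay mechanism.

```python
# Sketch of the experience replay pattern: domain-A experience (here, class
# prototypes) is "replayed" over unlabelled domain-B data; recognised samples
# are absorbed into the shared model AB, the rest stay unknown. The threshold
# and all vectors are made-up for illustration.

def replay(protos_a, unlabeled_b, max_dist=0.5):
    """Label each domain-B sample with the nearest domain-A class if close enough."""
    absorbed, unknown = [], []
    for x in unlabeled_b:
        dists = {c: sum((a - b) ** 2 for a, b in zip(p, x))
                 for c, p in protos_a.items()}
        best = min(dists, key=dists.get)
        (absorbed if dists[best] <= max_dist else unknown).append((x, best))
    return absorbed, unknown

protos_a = {"flip": [1.0, 0.0], "twist": [0.0, 1.0]}   # domain-A experience
unlabeled_b = [[0.9, 0.1], [0.1, 1.1], [3.0, 3.0]]     # raw domain-B samples
absorbed, unknown = replay(protos_a, unlabeled_b)
```

Samples left in `unknown` are exactly the cases the later patterns (semi-supervised update, learning-by-demonstration) would have to handle.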

Fig. 5.

Knowledge update pattern (semi-supervised)

Knowledge Update Pattern (Semi-supervised)

In an unexpected situation without a human supervisor, the captured live data are not fully understood by the AI and can therefore only be partially labelled. Semi-supervised meta-learning is applied in this pattern. Intermediate models are constructed to search for pseudo-labels of the unlabelled data so that the model generalizes well on the labelled data. The pattern is formulated as a nested optimization problem to identify the optimal sharing between knowledge in domain A and domain B.
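A heavily simplified stand-in for this nested optimisation is shown below: the inner step re-assigns pseudo-labels with the current model, the outer step refits the model on labelled plus pseudo-labelled data, and a final check confirms the model still generalises to the labelled data. Prototype classifiers and all data replace real meta-models and are illustrative assumptions.

```python
# Toy sketch of the semi-supervised knowledge update: alternate pseudo-labelling
# (inner step) with a prototype refit (outer step). A simplification of the
# nested optimisation described in the text; all data are made-up examples.

def nearest(x, protos):
    """Assign x to the class with the nearest prototype (squared Euclidean)."""
    return min(protos, key=lambda c: sum((a - b) ** 2 for a, b in zip(protos[c], x)))

def semi_supervised_update(labeled, unlabeled, rounds=3):
    """Search for pseudo-labels that keep the model consistent with labelled data."""
    pseudo, protos = [], {}
    for _ in range(rounds):
        buckets = {}
        for x, y in labeled + pseudo:             # outer step: refit prototypes
            buckets.setdefault(y, []).append(x)
        protos = {y: [sum(col) / len(xs) for col in zip(*xs)]
                  for y, xs in buckets.items()}
        pseudo = [(x, nearest(x, protos)) for x in unlabeled]  # inner step
    # sanity check: the final model must still classify the labelled data
    accuracy = sum(nearest(x, protos) == y for x, y in labeled) / len(labeled)
    return protos, pseudo, accuracy

labeled = [([0.0, 0.0], "pull"), ([1.0, 0.0], "flip")]
unlabeled = [[0.05, 0.05], [0.95, 0.05]]
protos_ab, pseudo, accuracy = semi_supervised_update(labeled, unlabeled)
```

In the real pattern the outer objective is generalisation on the labelled data and the inner objective is the pseudo-label assignment; the alternation above only mimics that structure.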

Analogy Pattern

Analogies are based on similarity checks between an unknown domain B and an existing domain A via experience replay. Analogies are used by the 5G-ERA middleware for recommending knowledge sets without a pre-defined ontology. This leads to abstract knowledge and reasoning for sharing knowledge and connecting intelligence. The pattern checks the similarity between the unexpected B and the prior experience A. An analogy between A and B will be produced if the similarity is high (Fig. 6).

Fig. 6.

Analogy pattern
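The pattern reduces to a thresholded similarity check, which can be sketched as below. The similarity measure, threshold, and sample vectors are all illustrative choices rather than the middleware's actual mechanism.

```python
# Minimal sketch of the analogy pattern: replay domain-A experience against
# samples from unknown domain B and declare an analogy when the mean pairwise
# similarity clears a threshold. Measure and threshold are assumptions.

def similarity(a, b):
    """A simple similarity in (0, 1]: closer vectors score higher."""
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def make_analogy(samples_a, samples_b, threshold=0.8):
    """Return ('analogy', score) if A and B look alike, otherwise None."""
    scores = [similarity(a, b) for a in samples_a for b in samples_b]
    mean = sum(scores) / len(scores)
    return ("analogy", mean) if mean >= threshold else None

samples_a = [[1.0, 0.0], [0.9, 0.1]]     # prior experience A
samples_b = [[0.95, 0.05]]               # unexpected situation B
result = make_analogy(samples_a, samples_b)          # high similarity
no_match = make_analogy(samples_a, [[5.0, 5.0]])     # dissimilar domain
```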

Knowledge Aggregation

Labelled data and meta-models will be aggregated using the knowledge update pattern (semi-supervised) in unexpected situations, or the knowledge update pattern (supervised) in expected situations (Fig. 7).

Fig. 7.

Knowledge Aggregation
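The aggregation logic is essentially a dispatch between the two update patterns, which can be sketched as below; the placeholder update functions and field names are assumptions standing in for the real patterns.

```python
# Sketch of knowledge aggregation: dispatch labelled data and a meta-model to
# the semi-supervised update in unexpected situations, or to the supervised
# update in expected ones. The update functions are illustrative placeholders.

def aggregate(meta_model, data, expected, supervised_update, semi_supervised_update):
    """Choose the knowledge update pattern based on whether the situation is expected."""
    update = supervised_update if expected else semi_supervised_update
    return update(meta_model, data)

def supervised(model, data):
    return {**model, "mode": "supervised", "n": len(data)}

def semi(model, data):
    return {**model, "mode": "semi-supervised", "n": len(data)}

merged = aggregate({"skill": "twist"}, ["sample-1", "sample-2"],
                   expected=True, supervised_update=supervised,
                   semi_supervised_update=semi)
```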

Combining the design patterns for connected intelligence, a distributed machine learning platform will be constructed. The reference design is illustrated in Fig. 8. It is delivered by the 5G-ERA PaaS middleware as a network computing fabric from the resource orchestration perspective. Personalized AI for the distributed robotics is delivered under the proposed design patterns towards continual learning.

Fig. 8.

Edge-to-Cloud pipeline for connected intelligence using the proposed design pattern.

6 Conclusion and Future Work

With 5G and 6G communication, Cloud Computing, and distributed AI coming together to facilitate the 6G vision, how we collaborate, connect, and interact under 6G is to be further clarified.

The paper is designed to discuss and promote use cases of Network Applications that need to be implemented for connected robots, steering digital transitions through human-centered, data-driven technologies and innovations. Typical scenarios in robot continual learning are identified and integrated into design patterns of network applications under connected intelligence. This will lead to technology that scales up network-enhanced robot deployment in broader domains, enables frictionless integration of network and communication concepts into native robot design, and shapes how various communities work together to develop these novel solutions.