1 Introduction

Reinforcement Learning (RL) [37] methods are gaining popularity in the area of control because they allow control systems to be built without detailed modeling of the underlying dynamics: the controller learns how to maximize the control objective by interacting with the environment. This is a clear advantage over traditional analytical control techniques, which require a detailed formal model that may be difficult to construct for complex non-linear systems. The quality of such analytical approaches relies on the quality of the model itself and thus requires a good understanding of the problem at hand. In the RL approach, parameter tuning is replaced by iterative adaptation to a stochastic environment. Some systems (e.g., Multi-component Robotic Systems [13]) are best approached from a multi-agent perspective in order to better exploit the computational capabilities and robustness of distributed control systems. Multi-Agent Reinforcement Learning (MARL) is the extension of single-agent RL to multi-agent scenarios. MARL methods have already been applied successfully to several multi-agent control scenarios [17, 36, 41, 43].

2 Reinforcement Learning

2.1 Single-agent Reinforcement Learning

Markov Decision Process. Single-agent RL methods use Markov Decision Processes (MDPs) to model the interaction between the agent and the environment. An MDP \(\left\langle S,A,P,R\right\rangle \) is defined by the set of states (S), the set of actions from which the agent can choose (A), a transition function (P) that determines the state transitions produced by actions, and a reward function (R) that gives a numerical value assessing how good a state transition was. S can be a finite set of states (e.g., a cell number in a grid-world) or a vector of real values (e.g., the x and y coordinates read from a GPS receiver). The goal of the agent is to learn a policy \(\pi \) that maximizes the expected return by interacting with the environment. The state-action value function \(Q^{\pi }\left( s,a\right) \) is the expected return of taking action a in state s and following \(\pi \) thereafter.
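
A minimal sketch of this interaction loop (a hypothetical one-dimensional grid-world, with all names chosen for illustration) makes the tuple concrete:

```python
import random

# Hypothetical 1-D grid-world MDP illustrating the <S, A, P, R> tuple: states
# are cell indices, actions move left or right, and the reward assesses each
# transition.
N_STATES = 5
ACTIONS = [-1, +1]            # A: move left / right
GOAL = N_STATES - 1

def step(state, action):
    """Transition function P and reward function R combined:
    returns (next_state, reward)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# One episode under a uniformly random policy pi.
state = 0
while state != GOAL:
    action = random.choice(ACTIONS)
    state, reward = step(state, action)
```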

Policy learning methods. There are basically three classes of RL methods [7]: value iteration, policy iteration, and policy search methods. Value iteration methods (such as Q-Learning) generally learn the optimal state-action value function \(Q^{*}\) and then derive the optimal policy from it. Policy iteration methods follow some policy, evaluate its value by learning \(V^{\pi }\) (e.g., with Least-Squares Temporal Difference [6, 42]), and then aim to improve \(\pi \). Actor-critic methods belong to this class: an actor implements a parametrized policy and a critic learns its value function. Value updates are fed back to the actor, which can use them to improve its policy (e.g., via Natural Policy Gradient [5]). Finally, policy search methods search the policy space directly for the policy that maximizes the expected return for any possible initial state.
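
As an illustration of the value-iteration class, a hedged sketch of the tabular Q-Learning update, assuming an MDP interface such as the grid-world above:

```python
import random
from collections import defaultdict

# Tabular Q-Learning (value-iteration class); ACTIONS and step() are assumed
# to come from an MDP like the grid-world sketch above.
ACTIONS = [-1, +1]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1    # learning rate, discount, exploration
Q = defaultdict(float)                    # Q[(s, a)]: state-action value estimates

def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```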

2.2 Multi-agent Reinforcement Learning

Stochastic Games. The interaction model in Multi-agent Reinforcement Learning (MARL) is the Stochastic Game (SG), which is defined by the number of agents (n) and the tuple \(\left\langle S,A_{1},\ldots ,A_{n},P,R_{1},\ldots ,R_{n}\right\rangle \). Each i-th agent chooses actions from its own local action space \(A_{i}\) and receives its own reward \(R_{i}\). Multi-agent systems can be competitive, cooperative, or mixed. In fully cooperative systems, all the agents share the same goals and hence the same reward signal: \(R_{1}=R_{2}=\ldots =R_{n}\). MARL algorithms can also be classified depending on whether or not they use models of the other agents. In this paper, we focus on model-free methods because we expect them to scale better to multi-dimensional control problems.
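
A hedged sketch of a fully cooperative SG (a hypothetical two-agent pushing task) showing the local action sets and the shared reward:

```python
# Hypothetical 2-agent fully cooperative SG: each agent i draws from its own
# local action set A_i, the transition depends on the joint action, and the
# reward is shared (R_1 = R_2).
A1 = ["stay", "push"]
A2 = ["stay", "push"]

def sg_step(state, joint_action):
    a1, a2 = joint_action
    # Toy dynamics: the load advances only if both agents push simultaneously.
    next_state = state + 1 if (a1 == "push" and a2 == "push") else state
    shared_reward = 1.0 if next_state > state else 0.0
    return next_state, (shared_reward, shared_reward)   # (R_1, R_2), identical
```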

Distributed Q-Learning (D-QL) [26] is an example of independent learning. Each agent assumes that the remaining agents will behave optimally, thus projecting the virtual centralized state-action values \(Q\left( s,\mathbf {a}\right) \) (\(\mathbf {a}\in \mathbf {A}\)) onto its own local action space \(Q_{i}\left( s,a\right) \), \(a\in A_{i}\). An instance of MARL algorithms in which agents are aware of the other agents' choices is Team Q-Learning (Team-QL) [28], where agents learn the joint state-action values \(Q\left( s,\mathbf {a}\right) \). The algorithm uses the Q-Learning update rule, but with the joint actions \(\mathbf {a}\) and \(\mathbf {a}'\) instead of a and \(a'\). It converges to the optimal values under an additional assumption: a unique optimal joint action exists in each state. This implicit coordination mechanism ensures that agents will exploit the Q-function in a coordinated manner. Other implicit coordination mechanisms based on heuristics [8, 23] or on models of the other agents [30, 40] can be found in the literature. MARL methods that are aware of the other agents' actions eliminate the non-stationarity caused by the other agents' policy changes, but the learning problem becomes more complex.
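
The Team-QL update can be sketched as follows; the two-agent action sets and all names are illustrative assumptions, not taken from [28]:

```python
import itertools
from collections import defaultdict

# Team-QL sketch: the ordinary Q-Learning rule applied over *joint* actions
# (two agents with two local actions each).
A1, A2 = ["stay", "push"], ["stay", "push"]
JOINT_ACTIONS = list(itertools.product(A1, A2))   # all joint actions a = (a_1, a_2)
ALPHA, GAMMA = 0.1, 0.95
Q = defaultdict(float)                            # Q[(s, a)] over joint actions

def team_ql_update(s, joint_a, r, s_next):
    # Same rule as single-agent Q-Learning, but the max runs over joint actions a'.
    best_next = max(Q[(s_next, a_prime)] for a_prime in JOINT_ACTIONS)
    Q[(s, joint_a)] += ALPHA * (r + GAMMA * best_next - Q[(s, joint_a)])
```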

In order to reduce this complexity, it can be assumed that agents need not coordinate in every state with every other agent, but only with some of them. Under this assumption, the agents first learn when and with which agents to coordinate, and then use an explicit coordination mechanism [20, 24] to select a joint action that maximizes the expected return. Coordinated Reinforcement Learning (Coordinated-RL) [20] builds a Coordination Graph (CG) for each state that defines with whom each agent needs to coordinate. An agent's local state-action values thus depend only on its own local action and those taken by the agents connected to it through the CG. Agents can maximize the global value using a message-based coordination mechanism. An improved version of this algorithm based on an edge-based decomposition of the CG instead of an agent-based decomposition was proposed in [24]; this method scales linearly with the number of agents. The downside of these methods is having to learn and store the CG, and the additional processing time introduced by the coordination mechanism. Hierarchical Reinforcement Learning (HRL) is another interesting approach to reducing the complexity of a task by decomposing it into a hierarchical structure of subtasks. Single-agent MAXQ [11] allows agents to learn low-level subtasks and the higher-level tasks built on them concurrently. This idea is extended to multi-agent problems in [19]: Cooperative HRL assumes costless communication, while COM-Cooperative HRL considers communication as part of the problem, adding a communication level to the subtask hierarchy so that the agent can learn when to communicate. Decomposing the task into a hierarchy of subtasks is not always trivial, and the decomposition itself determines how well the system can approach the optimal solution. This has led research towards automated task decomposition, both in single-agent [22, 29] and multi-agent [10] environments.
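
The edge-based CG decomposition can be sketched as follows (a hypothetical three-agent chain; the local payoffs are stubs, not learned values):

```python
import itertools

# Edge-based CG decomposition sketch for 3 hypothetical agents with edges
# (0,1) and (1,2): agent 0 never needs to coordinate directly with agent 2.
EDGES = [(0, 1), (1, 2)]
LOCAL_ACTIONS = [0, 1]                    # each agent's local action set
N_AGENTS = 3

def edge_value(state, i, j, a_i, a_j):
    """Learned local payoff Q_ij(s, a_i, a_j); a constant stub here."""
    return 0.0

def global_value(state, joint_action):
    # Q(s, a) decomposes into a sum of local terms over the CG edges.
    return sum(edge_value(state, i, j, joint_action[i], joint_action[j])
               for (i, j) in EDGES)

def best_joint_action(state):
    # Brute-force maximization for illustration; in practice message passing
    # or variable elimination exploits the graph structure instead.
    return max(itertools.product(LOCAL_ACTIONS, repeat=N_AGENTS),
               key=lambda a: global_value(state, a))
```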

3 Control Applications of MARL

Recent years have seen a number of novel MARL applications: traffic light control [1, 4, 25, 35], robotic hose maneuvering [17], micro-grid control [27], structure prediction of proteins [9], route choosing [2], supply chains [43], and management of the cognitive radio spectrum [41].

Advantages. MARL-based approaches to control offer some inherent advantages over traditional control methods. For example, MARL algorithms can adapt to changes in the environment thanks to their learning nature. Another big advantage over analytical approaches is that model-free algorithms do not require the dynamics of the environment to be fully understood, thus enabling the control of more complex systems. There is still quite a gap between single-agent RL and MARL techniques, but we expect more of the work currently restricted to the single-agent case to be extended to multi-agent scenarios in the near future. An example is Multi-objective Reinforcement Learning [12], which aims to maximize several, possibly conflicting, objectives at the same time.

Challenges. We have not found in the literature MARL algorithms able to deal with continuous state-action spaces [21]. Continuous controllers have been shown to outperform algorithms with discretized states and actions in general feedback control tasks [15]. Several methods have been developed in the single-agent RL paradigm to deal with control tasks involving continuous action spaces: actor-critic learning architectures [5, 18, 21, 32], policy search methods [3], and parametrized controllers [34]. A lot of effort has been devoted in recent years to obtaining efficient policy gradients [5] and data-efficient value estimation methods [6, 31]. For a complete review of the state of the art in single-agent RL using Value Function Approximation (VFA), we refer the reader to [42]. MARL algorithms, on the other hand, are mostly based on Q-Learning and hence estimate the state-action value function. General VFA methods can be used to approximate this state-action value function, which allows continuous states [7], but greedy selection of the action with the highest value always returns the action corresponding to the center of one feature. This limits the ability of Q-Learning to output continuous actions.
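
This limitation can be illustrated with a hedged sketch of a linear VFA over radial basis features and a discretized action set (all values are hypothetical): the greedy action is always one of the predefined commands, never an intermediate continuous one.

```python
import numpy as np

# Linear VFA for Q(s, a) with a *discretized* action set: greedy selection can
# only ever return one of these fixed action values.
ACTION_SET = np.linspace(-1.0, 1.0, 5)        # discretized torque commands
CENTERS = np.linspace(0.0, 1.0, 10)           # RBF feature centers over the state

def features(s):
    return np.exp(-((s - CENTERS) ** 2) / 0.02)   # radial basis features

W = np.zeros((len(ACTION_SET), len(CENTERS)))     # one weight vector per action

def q_value(s, a_idx):
    return W[a_idx] @ features(s)

def greedy_action(s):
    # Returns one of the 5 fixed action values; intermediate (continuous)
    # actions are never produced.
    return ACTION_SET[int(np.argmax([q_value(s, i) for i in range(len(ACTION_SET))]))]
```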

Most of the MARL applications to realistic control problems found in the literature so far are either uncoupled systems of agents operating with no influence on each other [2, 41], or loosely coupled tasks such as traffic light control [1, 4]. Some loosely coupled problems may be better approached using systems of unaware agents. Regarding fully coupled systems, in which agents' actions affect the other agents' decisions, only a few instances can be found [16, 17, 25]. These systems require full observation and learning on the joint state-action space, which does not scale well to real-world environments. Between unaware multi-agent systems and learning on the full joint state-action space, there are alternative approaches, such as exploiting the coordination requirements of the task using CGs (e.g., Coordinated-RL) or decomposing the task into a hierarchy of subtasks. Both CGs and task hierarchies can be designed by hand in small-scale or clearly structured tasks [25], but manual design is not feasible in more complex or unstructured problems. Some advances have been made towards the automatic learning of Coordination Graphs [10] and hierarchies of tasks [27], but none is applicable to continuous state or action spaces. It is not clear either how well these methods will scale to more complex MARL problems. CG-based algorithms require communication at each time step. A variable-elimination procedure was proposed in [20] to give an exact solution to the joint-action value maximization problem; the number of messages exchanged at each decision step depends on the topology of the specific CG. To alleviate this problem, two anytime algorithms were proposed in [39] to approximate the optimal joint action within a predetermined time: Coordinated Ascent and Max-Plus. Whether these methods can provide an acceptable solution in complex real-time control scenarios within an acceptable time frame remains an open question.
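
For illustration, a hedged sketch of Max-Plus style message passing on a small pairwise CG; the random payoff tables and the fixed sweep count are assumptions for the sketch, not the formulation of [39]:

```python
import numpy as np

# Max-Plus style message passing on a chain of three agents.
EDGES = [(0, 1), (1, 2)]
N_AGENTS, N_ACTIONS = 3, 2
rng = np.random.default_rng(0)
f_pair = {e: rng.random((N_ACTIONS, N_ACTIONS)) for e in EDGES}   # Q_ij(a_i, a_j)

def neighbours(i):
    return [b if a == i else a for (a, b) in EDGES if i in (a, b)]

def payoff(i, j, a_i, a_j):
    return f_pair[(i, j)][a_i, a_j] if (i, j) in f_pair else f_pair[(j, i)][a_j, a_i]

# mu[i][j][a_j]: message from agent i to neighbour j about each of j's actions.
mu = {i: {j: np.zeros(N_ACTIONS) for j in neighbours(i)} for i in range(N_AGENTS)}

for _ in range(10):                                   # a few synchronous sweeps
    for i in range(N_AGENTS):
        for j in neighbours(i):
            incoming = sum((mu[k][i] for k in neighbours(i) if k != j),
                           np.zeros(N_ACTIONS))
            mu[i][j] = np.array([max(payoff(i, j, a_i, a_j) + incoming[a_i]
                                     for a_i in range(N_ACTIONS))
                                 for a_j in range(N_ACTIONS)])

# Each agent picks its action from the sum of incoming messages (anytime:
# this read-out is valid after any number of sweeps).
joint_action = [int(np.argmax(sum((mu[k][i] for k in neighbours(i)),
                                  np.zeros(N_ACTIONS))))
                for i in range(N_AGENTS)]
```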

Another important open challenge is learning an initial policy from scratch in large real-world applications, where it is not affordable to allow agents thousands of trials before they can start completing the task (e.g., robotic maneuvering tasks [16]). There are several approaches to this problem, all based on some form of transfer learning [38]: agents can first be trained in a simulated environment and then be allowed to face the real task [14], they can be initialized to resemble some initial policy (e.g., a PID controller) available to the system designer [18], or agents may be trained to imitate an expert performing the task [32].
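
A hedged sketch of the second option, assuming a hypothetical linear actor fitted by least squares to a hand-tuned PID controller before any RL interaction (not the method of any specific cited work):

```python
import numpy as np

# Supervised initialization of an actor from a PID "expert".
KP, KD = 2.0, 0.5                            # hand-tuned PID gains (illustrative)

def pid_action(error, d_error):
    return KP * error + KD * d_error

# Collect supervised (state, action) pairs from the PID controller.
states = np.random.uniform(-1, 1, size=(1000, 2))          # [error, d_error]
targets = np.array([pid_action(e, de) for e, de in states])

# Least-squares fit of a linear actor pi(s) = w . s; RL fine-tuning starts here.
w, *_ = np.linalg.lstsq(states, targets, rcond=None)
```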

Proposals for Future Research. MARL methods are mostly based on general heterogeneous Stochastic Games and, thus, work under very broad assumptions. From a control perspective, though, one can further assume fully cooperative tasks and homogeneous learning agents. Such systems might be better approached from a distributed point of view. Consider a multiple-output continuous actor-critic architecture: the critic estimates an approximated value function, and the actor consists of several VFAs, each representing a different output of the controller. When an improvement in the value function is detected, the actor updates its policies towards the last explored action. This same idea can be translated to a multi-agent system in which each agent keeps an instance of the critic learning the value function and an actor producing a subset of the system's outputs. Agents would only need to coordinate exploration and exploitation, which could be achieved by using consensus [33] to share and update the exploration parameters following some preset schedule. This learning structure would allow the state of the art in single-agent model-free learning to be reused. The need for full observation of the state could also be alleviated by deferred updates of the critic/actor: agents can follow their policies using their local actions and still-incomplete states, and defer the update of the critic and actor until all the state variables have been received.
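
A hedged sketch of this proposed structure, with all update rules, parameter values, and names as illustrative assumptions rather than an established algorithm:

```python
import numpy as np

# Each agent holds its own critic and an actor for one control output; the
# policy moves towards the last explored action only when the TD error signals
# an improvement, and agents agree on a common exploration level via consensus.
N_AGENTS, STATE_DIM = 3, 4
GAMMA, ALPHA_CRITIC, ALPHA_ACTOR = 0.95, 0.01, 0.001

class Agent:
    def __init__(self):
        self.v = np.zeros(STATE_DIM)   # critic: linear value estimate V(s) = v . s
        self.w = np.zeros(STATE_DIM)   # actor: one continuous output u_i(s) = w . s
        self.sigma = 0.3               # local exploration parameter

    def act(self, s):
        return self.w @ s + np.random.randn() * self.sigma   # explore around policy

    def update(self, s, u, r, s_next):
        td_error = r + GAMMA * (self.v @ s_next) - (self.v @ s)
        self.v += ALPHA_CRITIC * td_error * s
        if td_error > 0:               # value improved: pull policy towards u
            self.w += ALPHA_ACTOR * (u - self.w @ s) * s

agents = [Agent() for _ in range(N_AGENTS)]

def consensus_on_exploration(agents):
    # One average-consensus step over a fully connected communication graph.
    mean_sigma = np.mean([a.sigma for a in agents])
    for a in agents:
        a.sigma = mean_sigma
```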

4 Conclusions

In this paper, we have reviewed the basics of MARL and some recent works in this field applied to control systems. MARL offers some advantages over traditional analytical control techniques; the most important is that the system designer need not fully understand the system or have an accurate model of it. MARL-based methods also pose some interesting challenges when applied to real-world control problems, since most of the algorithms have been developed with small environments in mind. In this respect, we point out that the main gap between single-agent and MARL algorithms to date is the ability to deal with continuous state and action spaces.