
Historical debates about the necessity and wisdom of employing symbolic reasoning in autonomous mobile robots notwithstanding (Chap. 13), there seems to be agreement now that some part or layer in the control system of such robots should or could include reasoning. Hybrid control architectures (Chap. 12) would be typical software structures for solving the difficult problem of amalgamating the contributions of such a reasoning layer with those of other parts of the controller, yet guaranteeing control cycle times sufficiently short that the robot can act safely in a dynamic environment.

Symbolic reasoning is understood here in the classical sense of artificial intelligence (AI), i.e., deliberation based on symbols as in first-order predicate logic (FOPL) or Bayesian probability theory, but typically with restrictions and/or extensions thereof, which have to be traded off against each other to achieve a suitable combination of expressivity and inference speed.

1 Why Should a Robot Use AI-Type Reasoning?

If robots are to accomplish tasks that are too complicated for specifying the appropriate behavior for all possible situations beforehand, they should be able to reason for themselves about the appropriate way to perform the task – even if the task is specified vaguely. In this setting, reasoning means inferring the needed knowledge from what is spelled out explicitly in the available knowledge. It further means automated reasoning using methods from AI, including knowledge representation (KR) and inference, heuristic search, and machine learning.

To perform their course of action appropriately, robots need to reason about the actions they intend to perform, their intended effects, the unwanted side effects, whether they are executable in the respective situations, the goals the actions are to accomplish, and so on. Consider, for example, the seemingly simple task of picking up an object from a table. To do so, the robot has to decide where to go in order to pick up the object, which hand(s) to use, how to reach for the object, which type of grasp to apply, where to place the gripper, how much grasp force to apply, how much lift force, how to lift the object, where to hold it, and so on.

If programmers have to specify the decisions for every conceivable combination of object and task, the control program for picking up an object becomes very complex. But even this program would in most cases not suffice to generate competent robot behavior, because how the pickup action should be performed depends on the context, too – the state of the object, the task to be performed, the scene that the object is located in. If the object is a glass filled with juice, it has to be held upright; if it is a bottle the robot intends to fill a glass with, the robot should not grasp the top of the bottle. If the bottle is to be picked up in the middle of a cluttered scene, then grasping the top of the bottle might be the best choice. If the scene is cluttered and the purpose is filling a glass, then the robot might even have to re-grasp after picking up the object.

The situation gets even more complicated if the task is not a single action, but a compound activity such as cleaning the table. In this case, AI reasoning methods can be employed to enable programmers to specify the course of activity very compactly: for every object on the table put it where it belongs. For such a vague plan to be a sufficient activity prescription, the robot has to infer the information needed to decide on the appropriate parameterizations of the pickup actions and it has to infer where objects belong depending on whether they are to be used again, are dirty, perishable, etc. Competently cleaning the table might also require the robot to think about the order in which it wants to put away the objects. It might decide to group the carry actions according to the place where the items have to be placed, or clean up perishable items first. It might reason about whether to stack items, get a tray and use that, and whether to leave doors of cupboards open.

Thus, an important task of AI reasoning techniques is: given a vague instruction, infer what is the appropriate action, what is the appropriate object to be acted on, what is the appropriate order of action executions, and what is the appropriate way of performing every action.

To deal with these issues, autonomous robots can represent and reason about aspects including the robots’ capabilities, their environments, the objects they are to act on, their actions and the effects they cause, and other agents in the environment.

Important kinds of reasoning that robots should be capable of therefore include the following:

  • Prediction (often called temporal projection): inferring what will (probably) happen if the intended course of action is executed

  • Envisioning: inferring (all) possible events and effects that will happen if a plan gets executed in a hypothetical situation

  • Diagnosis: inferring what caused a particular event or effect in plan execution

  • Query answering: given some knowledge preconditions for plan execution (e. g., a robot has to know the combination of a safe in order to open it), inferring pieces of knowledge that satisfy these knowledge preconditions.

2 Knowledge Representation and Processing

Reasoning requires that the reasoner – in our case, a robot – has an explicit representation of parts or aspects of its environment to reason about. Two questions come up immediately: what are suitable formats for such an explicit representation, and where does the represented knowledge come from?

The second question refers to the problem of generating and maintaining in real time a symbolic description of the robot's environment, or at least of some part of it, based, possibly, on a previous symbolic description and on recent environment information as delivered by sensors and by communication with other agents. In full generality, this problem is currently unsolved, involving AI fundamentals such as symbol grounding [14.1] and object anchoring [14.2]. So practical symbolic reasoning in a robot is restricted to that part of its knowledge that can be kept sufficiently recent. This includes, obviously, static knowledge about the environment, such as the topological elements and their relations in a building; transient knowledge available in symbolic form, such as in facility management databases; and, most challenging, symbolic data that has to be distilled from sensor data. Object recognition (Chap. 33) is a crucial ingredient for solving the latter; these issues will not be treated here.

This section deals with answers to the first question, i. e., formalisms suitable for representing knowledge. Suitability has two aspects here that are to be treated as two sides of one coin. One is epistemological adequacy: Does the formalism allow the targeted aspects of the environment to be expressed compactly and precisely? The other is computational adequacy: Does the formalism allow typical inferences to be drawn effectively and efficiently? There is a tradeoff between the two adequacies, given that very rich, expressive, and hence epistemologically appealing formalisms are typically accompanied by intractable or even undecidable inference problems, and vice versa. KR, then [14.3, p. xiii]:

is the field of AI that focuses on the design of formalisms that are both epistemologically and computationally adequate for expressing knowledge about a particular domain.

The plural in formalisms does not come by chance: there is no such thing as the KR language. The reason is twofold.

First, the archetype of KR languages, FOPL, is undecidable, i.e., any reasoner dealing with a language at least as expressive as FOPL is not guaranteed to terminate on a single query, let alone terminate fast. However, full FOPL expressivity is not always needed; for example, representing a finite domain or making restricted use of quantification may lead to decidable, and even tractable, representation languages. Second, some applications of KR and reasoning, like quite a number of applications in Robotics, require evidence rather than truth to be represented, and FOPL in particular is not well suited for that. So there are good reasons, both of pragmatics and of epistemological adequacy, for having several representation languages – and that has happened in AI.

Table 14.1 lists in its rows four broad types of KR languages, which have been in use in Robotics. Description logics (DLs) are in fact a specific family of relational languages, of which there are decidable and tractable members. The table gives in its columns the Assertions of the language types, i.e., the language elements used to represent some domain; the Queries, i.e., the type of problems that a reasoner in a particular language is supposed to solve; the Subject that a particular language of the respective type is normally used for representing; and Examples of such languages. The remainder of this section sketches generic KR in logic and in a probabilistic language, including the respective types of inferences. Specific robotics-related instances of reasoning on the symbol level are described in the following section, in particular, reasoning about temporal and spatial objects and relations, which are pervasive in a robot's reasoning.

Table 14.1 Abstract categorization of main relational languages as used in Robotics (see text for explanations)

2.1 Logic-Based Reasoning

As stated above, FOPL does not qualify for a KR formalism in the above-mentioned sense of being both epistemologically and computationally adequate for typical application domains – in many cases, it is neither of the two. Yet, it is the background for understanding conceptually and mathematically general relational languages.

In this short text, we can only introduce KR in logic by way of example, assuming familiarity with the basic notions of FOPL. For more comprehensive and formal treatments, we refer to the broad body of literature: every typical AI textbook introduces FOPL, and [14.4, 14.5] are no exceptions; [14.6] is a principled introduction to logics; [14.7] a practical one, and [14.8] a concise mathematical one.

Representing knowledge in some variant of logic makes it possible to draw inferences from that knowledge using provably sound and/or complete calculi. Automated deduction is a subfield of logic and AI that offers a healthy number of very powerful implemented deduction systems that can readily be used. Reference [14.4, Chap. 9] gives an introduction; [14.9] is a traditional handbook covering all the classical topics of logical reasoning.

Relying on logical inference, a robot could deduce a large number of facts that might otherwise be hard or impossible to get at. For example, assume a robot perceives, by acquiring and interpreting sensor data, that the door D509 in an office building is currently closed: $\mathit{Closed}(D509)$, expressed in some ad hoc FOPL language, assuming intuitive interpretations of symbols. Let us further assume that the robot's base of static knowledge about the building contains the sentences

$$\begin{aligned}
& \mathit{Connects}(D509, C5, R509),\quad \mathit{Connects}(D508, C5, R508),\quad \mathit{Connects}(D508a, R508, R509),\\
& \forall d, l_1, l_2.\,[\mathit{Connects}(d, l_1, l_2) \Rightarrow \mathit{Connects}(d, l_2, l_1)],\\
& \forall d.\,[\mathit{Closed}(d) \Leftrightarrow \neg\mathit{Open}(d)],\\
& \forall l.\,[\mathit{At}(l) \Rightarrow \mathit{Accessible}(l)],\\
& \forall l_1, l_2.\,[\mathit{Accessible}(l_1) \Rightarrow (\exists d.\,[\mathit{Connects}(d, l_1, l_2) \wedge \mathit{Open}(d)] \Rightarrow \mathit{Accessible}(l_2))].
\end{aligned}$$
(14.1)

Constants $D_i$ and variable $d$ denote doors; $R_i$ denote rooms; $C_i$ corridors; variables $l, l_i$ denote locations, i.e., rooms and corridors. Assume that the robot's localization tells it its current room or corridor location as $\mathit{At}(\cdot)$.

Then, assuming the robot knows it is $\mathit{At}(C5)$, observing $\mathit{Closed}(D509)$ entails $\neg\mathit{Open}(D509)$, and, more interestingly, that $\mathit{Accessible}(R509)$ is true only if $\mathit{Open}(D508) \wedge \mathit{Open}(D508a)$ is true. So, performing, for example, a delivery task to room R509, the robot may replan its route through R508, unless at least one of $D508$, $D508a$ is known to be closed. If the status of one or both of them is unknown (neither $\mathit{Open}(\cdot)$ nor $\mathit{Closed}(\cdot)$ is entailed by the current knowledge base), then accessibility of R509 can be neither proven nor disproven. That would leave open the option of planning the route through $D508$ and $D508a$, gathering their required statuses on site.
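To make this inference concrete, here is a minimal Python sketch that specializes the theory (14.1) to a reachability check over known door statuses; the data structures and the optimistic treatment of doors with unknown status are illustrative assumptions, not part of the chapter's formalism.

```python
# Static building knowledge from (14.1): Connects(door, location1, location2).
CONNECTS = [("D509", "C5", "R509"), ("D508", "C5", "R508"),
            ("D508a", "R508", "R509")]
CLOSED = {"D509"}          # known-closed doors; others are open or unknown

def accessible(start, goal):
    """Locations reachable from `start` through doors not known to be closed."""
    reached, frontier = {start}, [start]
    while frontier:
        loc = frontier.pop()
        for door, l1, l2 in CONNECTS:
            if door in CLOSED:
                continue
            for a, b in ((l1, l2), (l2, l1)):   # Connects is symmetric, cf. (14.1)
                if a == loc and b not in reached:
                    reached.add(b)
                    frontier.append(b)
    return goal in reached

print(accessible("C5", "R509"))  # True, but only via D508 and D508a
```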

So, as this little example may show, FOPL is a powerful representation and reasoning tool. Moreover, its theory is very well understood. In particular, it is well known that consequence in FOPL is in general undecidable; that means a sound and complete deduction algorithm cannot even be guaranteed to terminate on a particular inference attempt, let alone to deliver speedy results.

However, that does not mean logic as a representation and inference tool needs to be completely disposed of. Many applications do not require the full expressivity of FOPL. Moreover, there are many interesting language subsets of FOPL that are decidable and can be handled by algorithms that are efficient in most practical cases. So considerable effort by the KR community has gone into identifying FOPL subsets and fitting inference procedures that qualify for being both epistemologically and computationally adequate for a broad number of applications. We will consider two of them in more detail: propositional logic and description logics; we will then go briefly into using logics for high-level robot control.

2.1.1 Propositional Theories

In quite a number of practical cases, FOPL theories (sets of formulas) represent in fact finite domains. Logically speaking, they have finite Herbrand universes or can at least be recast in a form such that they have. In this case, it may still be handy to express the domain theory in FOPL syntax, but all that goes beyond a purely propositional theory is just for notational convenience. For example, an axiom stating that a robot can only be in one location (room or corridor) at a time

$$\forall l_1, l_2.\,[\mathit{At}(l_1) \Rightarrow (\neg\mathit{At}(l_2) \vee l_1 = l_2)]$$
(14.2)

can be flattened for a finite building into the clumsy but equivalent form handling all locations explicitly; for example, in a six-storey building

$$\begin{aligned}
\mathit{At}(R001) &\Rightarrow [\neg\mathit{At}(R002) \wedge \cdots \wedge \neg\mathit{At}(R514) \wedge \neg\mathit{At}(C0) \wedge \cdots \wedge \neg\mathit{At}(C5)],\\
\mathit{At}(R002) &\Rightarrow [\neg\mathit{At}(R001) \wedge \cdots \wedge \neg\mathit{At}(R514) \wedge \neg\mathit{At}(C0) \wedge \cdots \wedge \neg\mathit{At}(C5)],\\
&\;\;\vdots
\end{aligned}$$
(14.3)

where every ground (variable-free) instance of a predicate, such as the At instances above, is to be considered a propositional variable, with textual identity determining variable identity.

The good news here is that the flat theory corresponding to a FOPL theory over a finite Herbrand universe is propositional and hence decidable. Moreover, under some practical conditions – e.g., if the variables in the FOPL theory are sorted (that is, the information is available that the variables $l_1, l_2$, e.g., range over rooms and corridors) – it can even be generated mechanically from the more compact FOPL syntax.
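As an illustration, the following sketch mechanically grounds the sorted axiom (14.2) for a small finite set of locations; the implications of (14.3) correspond to the pairwise exclusion clauses generated here. The location names are assumptions for illustration.

```python
from itertools import combinations

# A tiny sample of the building's locations; the full six-storey building
# would make the blowup discussed below very tangible.
LOCATIONS = ["R001", "R002", "R514", "C0", "C5"]

def at_most_one_location():
    """Ground (14.2): for each pair l1 != l2 of locations, emit the
    propositional clause (not At(l1)) or (not At(l2))."""
    return [(f"-At({l1})", f"-At({l2})")
            for l1, l2 in combinations(LOCATIONS, 2)]

clauses = at_most_one_location()
print(len(clauses), clauses[0])  # 10 clauses for 5 locations: O(n^2) growth
```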

The potentially bad news is that the now propositional theory may of course consist of a huge number of propositional sentences: in general, as all combinations of variable substitutions in all FOPL sentences need to be generated, the growth is multiexponential in the domain sizes of the FOPL variables.

However, the technology for propositional satisfiability checking or model checking is making considerable progress, allowing propositional theories with thousands or even tens of thousands of variables to be handled in the order of seconds of computation time on regular hardware. The respective methods are particularly efficient if many models exist for a satisfiable propositional formula or if the truth or falsity of many ground facts is known a priori (such as $\mathit{At}(C5)$ for a robot by independent localization). Both are often the case in practical KR.

References [14.4, Chap. 7.6] and [14.9, Chap. 24] give introductions to model checking. One modern family of methods is based on the classical Davis–Putnam (DPLL) algorithm [14.10]. It attempts to construct systematically a propositional model of a given theory, efficiently propagating interpretation constraints for variables. Another family of algorithms applies local search techniques, attempting to generate interpretations using random (Monte Carlo) variable assignments. Reference [14.11] is a Web page maintained for the annual satisfiability checking competitions held together with the annual SAT (Theory and Applications of Satisfiability Testing) conferences; it gives a continuously updated overview of current top-performing satisfiability checking systems.
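The following is a minimal recursive DPLL sketch in the spirit described above: unit propagation to a fixed point, then branching on a literal. Clauses are sets of nonzero integers with negative integers denoting negated variables (DIMACS style); the naive literal-selection heuristic is an assumption made for brevity.

```python
def simplify(clauses, lit):
    """Assign lit True: drop satisfied clauses, shorten the others.
    Returns None if an empty clause (a conflict) arises."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = c - {-lit}
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses):
    if clauses is None:
        return False
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    while unit is not None:
        clauses = simplify(clauses, unit)
        if clauses is None:
            return False
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    if not clauses:
        return True                       # all clauses satisfied
    lit = next(iter(clauses[0]))          # branch: try both truth values
    return dpll(simplify(clauses, lit)) or dpll(simplify(clauses, -lit))

# 1 = At(C5), 2 = At(R509); the robot cannot be at both locations.
print(dpll([{1}, {-1, -2}]))       # True: e.g., At(C5) and not At(R509)
print(dpll([{1}, {2}, {-1, -2}]))  # False: being at both is inconsistent
```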

2.1.2 Description Logics

DLs have emerged as rigorous formalizations of somewhat intuitive KR forms of the AI of the 1970s, namely, semantic networks and frame systems. Logically speaking, DLs, of which there is a considerable variety, form a certain family of subsets of FOPL, some of which are decidable or even tractable.

There are two parts of representing knowledge about a particular domain using a DL language. First, the upper ontology of the domain is formulated. It introduces the general domain concepts as well as relations between these concepts. A particularly interesting type of relations occurring in all ontologies is the superclass-subclass relation. Given that most DLs – as part of the means for enforcing decidability – strictly disallow cyclic relations between concepts to be specified, the superclass relation imposes a hierarchical taxonomy on concepts, which serves for defining property inheritance, much like in object-oriented programming. This first part of a DL-based domain representation is, for historical reasons, often called the T-Box (or terminological knowledge).

A concept in a DL language corresponds to a unary predicate. To give an example, an ontology of robot navigation domains might include concepts like Door, Location, and so on. The concept hierarchy is built by defining concept equality ($=$) or a subconcept property, e.g., $\mathit{Room} \sqsubseteq \mathit{Location}$ and $\mathit{Corridor} \sqsubseteq \mathit{Location}$. Concepts can be combined using concept conjunction, disjunction, and negation ($\sqcap$, $\sqcup$, $\neg$, respectively), allowing, e.g., concept definitions like $\mathit{Location} = \mathit{Room} \sqcup \mathit{Corridor}$, $\mathit{Door} = \mathit{Closed} \sqcup \mathit{Open}$, and $\mathit{Open} = \neg\mathit{Closed}$.

Roles in a DL language correspond to binary predicates, such as leadsTo for a door and a location. Inverses, intersections, and unions of roles are defined as expected, where $\mathit{leadsTo} = \mathit{leadsFrom}^{-1}$ is an example of defining a role as an inverse. Roles can be composed, such as in defining $\mathit{adjacent} = \mathit{leadsFrom} \circ \mathit{leadsTo}$ (location $l$ is adjacent to $m$ if and only if some door connects them). Finally, concepts and roles can be combined for defining new concepts and roles. In particular, it is possible to quantify over role-fillers, i.e., the individual objects (see next) that can be consistently substituted for role arguments. For example, one could define $\mathit{BlockedLoc} = \mathit{Location} \sqcap \neg\exists\,\mathit{leadsFrom}.\mathit{Open}$ (assuming the intuitive binding rules of the operators). Different variants of DLs differ in the operators they make available; the set of available operators and additional constraints on definitions shape both the expressivity and the computability of the respective DL variant – the spectrum ranges from undecidable to tractable. See [14.3] for details.

As the second part of domain representation using DLs, individual objects have to be introduced into the language of concepts and roles. This part of the domain representation is called the A-Box (or assertional knowledge). For example, $\mathit{Room}(R509)$, $\mathit{leadsTo}(D509, R509)$, and $\mathit{Closed}(D509)$ could be asserted.

DLs have a number of reasoning services to offer, which are based on logical inference in a given T-Box and A-Box. They include consistency of concept definitions, subsumption and disjointness of concepts, consistency of the A-Box with respect to the T-Box, and concept and role instances, all of which are decidable in many DLs. Given the T-Box and A-Box rudiments above, it would, e.g., be concluded that everything is consistent and that $\mathit{BlockedLoc}(R509)$ holds (note that only door $D509$ is known here to lead to $R509$!). These DL inferences are theoretically intractable, but run efficiently in most practical cases.
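As a toy illustration of T-Box/A-Box reasoning, the following sketch implements subsumption over an acyclic concept hierarchy and instance checking for the navigation ontology above. It handles only atomic concepts and subconcept axioms; a real DL reasoner additionally handles conjunction, negation, and role restrictions such as the one defining BlockedLoc.

```python
SUBCLASS = {               # T-Box: direct subconcept axioms (child -> parents)
    "Room": {"Location"},
    "Corridor": {"Location"},
}
ABOX = {"R509": {"Room"}, "C5": {"Corridor"}}   # A-Box: asserted concepts

def subsumers(concept):
    """All concepts subsuming `concept`, following the (acyclic) hierarchy."""
    result = {concept}
    for parent in SUBCLASS.get(concept, ()):
        result |= subsumers(parent)
    return result

def instance_of(individual, concept):
    return any(concept in subsumers(c) for c in ABOX[individual])

print(instance_of("R509", "Location"))  # True: Room is subsumed by Location
```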

Reference [14.3] provides a comprehensive overview of DLs. In 2004, the WWW consortium (W3C) defined the Web Ontology Language (OWL) [14.12] as a technical basis for the Semantic Web, which was followed by OWL 2 in 2009 [14.13]. Part of the language is OWL-DL, a classical DL in the sense just described. OWL-DL ontologies are publicly available over the Web. Reference [14.14] gives a tutorial introduction.

2.1.3 Logics for High-Level Robot Control

Robot domains are dynamic by nature, including at least one physical agent, namely, the robot. Capturing that in logic-based formalisms is possible and has been undertaken, as introduced in [14.4, Chap. 12.3]. However, it poses some conceptual and technical problems, which we will describe at the end of this section on Logics.

Very briefly, one of them comes from the requirement of expressing concisely and reasoning efficiently with logical models of events in general, and actions in particular. Logically speaking, an individual action changes the truth value of a limited number of facts, leaving everything else as it was. For example, modeling on some abstract level the action of going from one location to another as atomic, it changes the robot's current location before and after; depending on the modeling, it may also change the battery state and the mileage count; but it does not change the layout of the building or the name of the president. The problem of formalizing actions concisely in a logical language, so that facts that may or may not have changed after applying a sequence of actions can be inferred efficiently, has been termed the frame problem. It has received much attention in the literature; there are now a number of practical solutions to it.

Another problem concerns knowledge-base update made necessary by independently changing facts. Consider, for example, a robot sitting – as it is told by its self-localization – in front of some door D, which is believed to be open, but the robot perceives a closed door. Logically, this is a contradiction. Now there are in theory several ways to make the knowledge and the perception consistent. One is to assume that D has been closed since learning that it was open – probably the most intuitive explanation. Logically just as good would be, e. g., that the perception is faulty, or that the robot has been teleported in front of a known closed door. Among these explanations, some are more intuitively plausible than others; logically, some would require less formulas of the knowledge base to be withdrawn and should therefore be preferred. Ideally, after replacing an old piece of information with a new one, one would have to make sure that the consequences of a retracted formula are no longer believed.

Theoretically, these problems are arbitrarily difficult. Practically, they can be sufficiently restricted to allow solutions within a neat logical framework. Typical solutions would introduce some notion of state or holding period for formulas, following the classical example of the situation calculus [14.15]. That allows change to be tracked. Typical solutions would also give up completeness of inference, resorting to a Prolog-like inference engine. Three examples of such solutions with reported applications in robot control are GOLOG [14.16], the event calculus [14.17], and FLUX [14.18]. Another family of solutions would model the course of time in a more fine-grained way than switching between states. Temporal logics are appropriate here, and the temporal action logic (TAL) is an example, which has even been integrated into a performant planner [14.19]. It leads to a more sophisticated form of temporal reasoning that will be treated in Sect. 14.3.

2.2 Probabilistic Reasoning

KR formalisms based on logics are worth considering whenever factual knowledge is to be represented, from which consequences are to be queried. However, part of the knowledge that a robot might use about its environment does not really have this character.

Uncertainty is one of these traits, or rather, a whole family of them, as uncertainty is in itself an overloaded concept. Lack of knowledge is one of its aspects. Logics can handle this insofar as truth or falsity of some facts may remain undetermined. If too much is undetermined in a knowledge base, logics will no longer be able to make interesting deductions though, as everything is possible logically. Yet, the different possibilities may differ considerably in likelihood, according to intuition. The point here is to represent and reason about evidence.

The field of KR has adopted Bayesian probability as a means for representing and reasoning with evidence, using correlations between facts rather than strict implications. Note that this approach amalgamates different sources of lack of precision and completeness of knowledge. Some piece of knowledge may be unknown because it is in principle unknowable, or because it was judged too costly to build a precise theory or determine all information relevant for making a sound deduction. In either case, working with probabilities rather than binary truth values can serve as an approximation. Note that the notion of truth is still the same here as in classical logics: objectively, a fact is supposed to be either true or false; probability just models the subjective evidence that the fact is true.

We next describe two popular and powerful representation and processing formats for probabilistic knowledge: Bayesian networks (BNs) and Markov decision processes (MDPs). In analogy to Sect. 14.2.1, we here assume familiarity with the basic notions of probability theory. Reference [14.4, Chap. 13] gives an excellent basic introduction; for a more comprehensive treatment, there is a large variety of introductory textbooks around, [14.20] being an example.

2.2.1 Bayesian Networks

Inference in Bayesian probability theory basically means to infer the probability of some event of interest, given prior and dependent other relevant probabilities. Practically, an important type of probabilistic inference is diagnostic reasoning from observed effects back to hidden causes, given rules that specify conditional probabilities from causes to effects. So the problem is, for a potential cause $C$ and an observed effect $E$: given prior probabilities $P(C)$ and $P(E)$ and the conditional probability $P(E \mid C)$, determine the posterior $P(C \mid E)$. The solution is of course given by the Bayes rule as

$$P(C \mid E) = \frac{P(E \mid C)\,P(C)}{P(E)}.$$
(14.4)

However, as in logical reasoning, the theoretically appealing principle turns out to be impractical if applied naively. Consider that not just one effect $E$ may be observed, but $E_1, \ldots, E_n$; moreover, not all of them are conditionally independent. A generalized form of the Bayes rule to calculate the correct posterior is straightforward, but who can specify all the conditional probabilities involved – in the worst case $O(2^n)$ of them, where $n$ may easily range in the hundreds in practical cases?

Until the late 1980s, this problem was more or less circumvented. One way to do so was to treat the $E_i$ in bad faith as independent: simply take the $n$ individual conditional probabilities $P(E_i \mid C)$ and use them to approximate the full joint probability distribution straightforwardly. Reference [14.4, Chap. 14] reviews this and other alternative approaches.

The solution in use ever since it appeared [14.21] is Bayesian networks (BNs), more recently subsumed under the more general concept of graphical models. The idea is to represent the random variables as nodes in a directed acyclic graph, where a node is directly preceded by a set of parent nodes if and only if it is directly conditionally dependent on the corresponding parent variables. So the huge full joint probability distribution is broken down into many, typically very small, local joint probability distributions without loss of information; the trick is to use the typically many known conditional independences among variables to reduce the representational and inferential effort drastically.

Figure 14.1 shows a simple BN expressing that $D$ is dependent on $B$ and $C$, with probabilities given by a conditional probability table which specifies locally the joint probability distribution. In addition, the structure says that $D$ is independent of $A$ given $B$ and $C$ (a node is independent of its ancestors, given its parents), and that $B$ is independent of $C$ given $A$, i.e., $P(C \mid A, B) = P(C \mid A)$ and $P(B \mid A, C) = P(B \mid A)$.

Fig. 14.1 Structure of a simple Bayesian network. It is associated with conditional probability tables. (The one for D, dependent on B and on C, is omitted)

Network inference can be interpreted in both directions. Bottom up, a BN enables an observed phenomenon to be explained using the known conditional probabilities (diagnosis). For example, if $D$ is observed, the likelihoods of its known causes ($B$ or $C$) can be inferred. Top down, a BN enables evidence to be propagated to compute the probability of a phenomenon, e.g., the probability of $D$ given $A$ (causation).
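The following sketch performs exact inference by enumeration in a BN with the structure of Fig. 14.1 ($A$ is parent of $B$ and $C$, which are parents of $D$). All numbers in the conditional probability tables are invented for illustration, including the table for $D$ that the figure omits.

```python
P_A = 0.3                                  # P(A = true)
P_B = {True: 0.8, False: 0.1}              # P(B = true | A)
P_C = {True: 0.4, False: 0.5}              # P(C = true | A)
P_D = {(True, True): 0.9, (True, False): 0.6,
       (False, True): 0.7, (False, False): 0.05}  # P(D = true | B, C)

def joint(a, b, c, d):
    """P(A=a, B=b, C=c, D=d), factored along the network structure."""
    p = P_A if a else 1 - P_A
    p *= P_B[a] if b else 1 - P_B[a]
    p *= P_C[a] if c else 1 - P_C[a]
    p *= P_D[(b, c)] if d else 1 - P_D[(b, c)]
    return p

def prob_d_given_a(a_val):
    """Top-down (causal) query P(D = true | A = a_val): sum out B and C."""
    bools = (True, False)
    num = sum(joint(a_val, b, c, True) for b in bools for c in bools)
    den = sum(joint(a_val, b, c, d) for b in bools for c in bools for d in bools)
    return num / den

print(prob_d_given_a(True))   # evidence on A propagated down to D
```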

For systems evolving over time for which the Markov property holds, i.e., a state of a system variable depends on no variables earlier than those of the previous state, a BN representation takes on a particular form, a dynamic Bayesian network (DBN). Assuming that the dependences among variables within a state remain constant over time, and assuming further that a variable $x$ at different states is represented as a family of variables $x_1, \ldots, x_t, \ldots$, a DBN unfolded into a BN has a particular structure, as sketched by example in Fig. 14.2. The variables representing a new state correspond to a new BN slice, which reproduces the structure of the previous state representation and whose nodes depend only on each other and on nodes representing the previous state. This locality allows inference in DBNs to be efficient.

Fig. 14.2 Unfolding of a simple DBN over two time slices representing two successive states. Variables $x$ denote states, $u$ denote actions, and $z$ denote measurements. The structure of each individual slice (signaled by identical variable subscripts) is invariant over time

2.2.2 Markov Decision Processes

In the context of robot control, uncertainty is induced by perception, action, and incompleteness of prior knowledge about the environment, all of which are related. Perception uncertainty results from sensor noise, from occlusion, from interpretation ambiguities, and other effects. Action uncertainty results from approximate action models (e. g., wheel slippage), unsuccessful action (e. g., grasping failure), or other practically unavoidable effects.

Figure 14.3 depicts this uncertainty-laden interaction between the robot and its environment.

Fig. 14.3 The robot acts on its environment, producing a new state with a given probability. The robot's knowledge about the state results from its observations and is also uncertain. Its decisions are taken based on its beliefs

Probabilistic representations are used to cope with these uncertainties. Planning being a sequential decision problem, MDPs and, in the case of partial observability of the environment, partially observable Markov decision processes (POMDPs) [14.22] are widely used to address them. The Markov assumption, which is the hallmark of this approach, is that a state $s_n$ of the system depends only on the previous state $s_{n-1}$ and the action that led to $s_n$, and not on any earlier states.

Under this assumption, an MDP is formally defined by four components $\langle S, A, T, R\rangle$, where $S$ is a finite set of states, $A$ is a finite set of actions, and $T : S \times A \times S \to [0,1]$ is the transition function defining the probability of state change upon application of a given action at time $t$,

$$T(s, a, s') = P(s_{t+1} = s' \mid s_t = s,\, a_t = a).$$
(14.5)

$T$ is known and provided as a table of transition probabilities. Finally, $R(s, a)$, with $R : S \times A \to \mathbb{R}$, is defined as the reward received by the system after achieving action $a$ leading to state $s$. Often, the reward is associated with the state only ($R(s)$). In the sequential process, the rewards are supposed to be additive. Solving an MDP is an optimization problem in which the sum of the expected rewards, called the utility, is maximized. MDPs suppose that the rewards are already known. When this is not the case, the robot can acquire them through experience. One technique to do so is reinforcement learning, as described in Chap. 15. Figure 14.4 shows an example of a simple MDP.

Fig. 14.4 A representation of an MDP. In this example, states A, B, C, D have associated rewards (−1, −10, −50, −100). There are three possible actions in each state, u, s, d, each having a given probability (shown on the arrows) to lead to another state

There are two main methods to solve an MDP. In the value iteration (VI) method, the first step is to compute the utilities $U(s)$ for all states, and the second step is to compute the optimal policy $\pi^*(s)$, which provides, for each state, the optimal action. Computing the utilities iteratively is based on the Bellman equation [14.23]

$$U_{i+1}(s) = R(s) + \gamma \max_a \sum_{s'} T(s, a, s')\, U_i(s'),$$
(14.6)

where $0 < \gamma < 1$ is a discount factor. After computing the utilities, the optimal policy is the one that provides the best action (i.e., the one maximizing the utility) for each state. This is computed by $\pi^*(s) = \operatorname{arg\,max}_a \sum_{s'} T(s, a, s')\, U(s')$. So the result of the MDP is the best local decision. A policy is an a priori computation of the best sequence of actions, which is guided by the maximization of a global reward instead of just reaching a goal with minimal cost, as in classical planning. So a policy is not a plan of actions that has to be strictly executed: since it already takes uncertainties in action execution into account, the policy can be pursued whatever the outcome of each action execution.
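Below is a minimal value-iteration sketch implementing (14.6) on a toy MDP loosely inspired by Fig. 14.4. The rewards follow the figure's caption, but the two actions per state and all transition probabilities are invented for illustration.

```python
GAMMA = 0.9                                    # discount factor
R = {"A": -1, "B": -10, "C": -50, "D": -100}   # rewards per state (Fig. 14.4)
# T[s][a]: list of (next_state, probability) pairs for action a in state s.
T = {
    "A": {"u": [("B", 0.8), ("A", 0.2)], "d": [("C", 1.0)]},
    "B": {"u": [("A", 0.5), ("D", 0.5)], "d": [("B", 1.0)]},
    "C": {"u": [("A", 0.9), ("D", 0.1)], "d": [("C", 1.0)]},
    "D": {"u": [("D", 1.0)], "d": [("A", 1.0)]},
}

def value_iteration(eps=1e-6):
    """Iterate the Bellman update (14.6) until the utilities converge."""
    U = {s: 0.0 for s in R}
    while True:
        U_new = {s: R[s] + GAMMA * max(sum(p * U[s2] for s2, p in T[s][a])
                                       for a in T[s])
                 for s in R}
        if max(abs(U_new[s] - U[s]) for s in R) < eps:
            return U_new
        U = U_new

def best_policy(U):
    """pi*(s) = argmax_a sum_s' T(s, a, s') U(s')."""
    return {s: max(T[s], key=lambda a: sum(p * U[s2] for s2, p in T[s][a]))
            for s in R}

print(best_policy(value_iteration()))   # optimal action per state
```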

The second method is called policy iteration (PI). Here one starts with an initial (e.g., random) policy $\pi$ and, through an iterative algorithm, tries to improve it gradually by looking for the actions that augment utility.

In conclusion, in a deterministic setting, the solution to a planning problem is a sequence of actions. In an observable stochastic setting, the solution is a policy defined by the best local decision.

2.2.3 Partially Observable Markov Decision Processes

In the (realistic) case where the robot has no full access to the environment, decisions must be made under partial observability. To that end, the MDP formalism is enriched with an observation function that provides uncertain knowledge. A POMDP is defined by six components $\langle S, A, T, R, \Omega, O\rangle$. The first four describe an MDP. $\Omega$ is a finite set of observations $o$. $O : S \times A \times \Omega \to [0,1]$ is the observation model $O(s', a, o)$ (derived from the robot's sensor models) that provides the probability of obtaining observation $o$ after action $a$, which has led to state $s'$. The robot's belief state is the probability distribution over all states, i.e., all the probabilities of being in each of the states. The belief of being in a given state is usually denoted $b(s)$. The belief is updated after an action and observation by

$$b'(s') = \alpha\, O(s', a, o) \sum_{s} T(s, a, s')\, b(s),$$
(14.7)

where α is a normalization constant.
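The following sketch implements the belief update (14.7) for a two-state toy domain (a door that is open or closed); the action model T, the sensor model O, and all numbers are invented for illustration.

```python
STATES = ("open", "closed")
# T[(s, a)] -> {s': prob}: a "push" action sometimes opens a closed door.
T = {
    ("open", "push"):   {"open": 1.0, "closed": 0.0},
    ("closed", "push"): {"open": 0.6, "closed": 0.4},
}
# O[(s', a)] -> {o: prob}: the door sensor is right 80% of the time.
O = {
    ("open", "push"):   {"see_open": 0.8, "see_closed": 0.2},
    ("closed", "push"): {"see_open": 0.2, "see_closed": 0.8},
}

def belief_update(b, a, o):
    """b'(s') = alpha * O(s', a, o) * sum_s T(s, a, s') b(s)   -- (14.7)"""
    b_new = {s2: O[(s2, a)][o] * sum(T[(s, a)][s2] * b[s] for s in STATES)
             for s2 in STATES}
    alpha = 1.0 / sum(b_new.values())        # normalization constant
    return {s: alpha * p for s, p in b_new.items()}

b = {"open": 0.5, "closed": 0.5}
print(belief_update(b, "push", "see_open"))  # belief shifts toward "open"
```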

Although partial observability is a typical case for robots acting in mundane environments, and hence POMDPs are the more adequate formalism for modeling action and observation, POMDP-based models are often avoided. The reason is that calculating optimal POMDP policies is impracticably complex. Mostly, solving a POMDP is based on approximate or derived methods, such as working with the corresponding belief MDP of a POMDP, which results from considering the (fully observable) belief function induced by the POMDP.

A good example of using Markov processes for modeling and solving robotic decision-making problems, combining task planning and motion planning, is provided in [14.24].

3 Reasoning and Decision Making

The main motivation for developing reasoning capacities in robotics is to enable robots to make decisions about their future actions. This is why planning was one of the main early research topics in AI, within the SHAKEY project, resulting in the seminal STRIPS planner [14.25]. Designing a planning system requires addressing three main questions:

  • How to represent the world?

  • How to represent actions?

  • How to guide the plan search process?

According to the formalism used to answer these questions, the planning system will cope differently with the constraints of the real world, and this is an essential issue in robotics. As mentioned above, early planning techniques were based on first-order predicate logic (FOPL) for representing knowledge, and the limitations of FOPL when facing the uncertainties of the real world have led to the adoption of probabilistic representations and probabilistic reasoning to cope with real situations.

World Representations. Classically, when reasoning at the symbolic level, knowledge about the world is represented as states which are sets of logical propositions such as Ontable(Cup). Under the closed world assumption, the facts that have the value FALSE are not represented in the states of the world. In order to deal with uncertainties, instead of having TRUE/FALSE values, the propositions have a probability distribution of being true.

Representation of Actions. In classical planning, as introduced by STRIPS, actions modify the state of the world and are defined by three predicate lists. The predicates that must be true for the action to be feasible are the preconditions. The set of predicates that become TRUE after the action is executed is the ADD list. The set of predicates that become FALSE after action execution is the DELETE list. The planner includes a unification procedure to determine which predicates correspond to the propositions describing the world states. Taking uncertainties in action execution into account leads to including a probabilistic characterization of their outcome.
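A minimal sketch of STRIPS-style action application follows; the pickup action, its predicate names, and the set-based state representation are illustrative assumptions (states are sets of ground propositions, with the closed-world assumption that anything absent is FALSE).

```python
from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    preconditions: FrozenSet[str]   # must all hold for the action to apply
    add: FrozenSet[str]             # become TRUE after execution
    delete: FrozenSet[str]          # become FALSE after execution

pickup_cup = Action(
    name="Pickup(Cup)",
    preconditions=frozenset({"Ontable(Cup)", "HandEmpty"}),
    add=frozenset({"Holding(Cup)"}),
    delete=frozenset({"Ontable(Cup)", "HandEmpty"}),
)

def applicable(state: frozenset, action: Action) -> bool:
    return action.preconditions <= state

def apply_action(state: frozenset, action: Action) -> frozenset:
    assert applicable(state, action), f"{action.name} not applicable"
    return (state - action.delete) | action.add

state = frozenset({"Ontable(Cup)", "HandEmpty"})
print(apply_action(state, pickup_cup))   # frozenset({'Holding(Cup)'})
```

The same successor function also supports prediction (temporal projection) as listed in Sect. 14.1: applying a sequence of actions to a state predicts the resulting state.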

Search. Finding the most adequate action is based on a search algorithm in the state space. The state space is an implicit graph given by the initial state and a successor function that provides the states immediately adjacent to a given state. Each action, which represents the transition from one state to its successor, has a given cost. The well-known A* algorithm [14.26] was proposed in 1968, again within the SHAKEY project, to address the search problem. The planning algorithm, based on this search, recursively selects the action that would produce the current goal state (or a subgoal thereof), and under certain conditions the optimal (i.e., least-cost) solution is found.
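A compact A* sketch over an implicit state graph follows; the toy 4-connected grid and the Manhattan-distance heuristic are illustrative assumptions standing in for a planner's successor function and an admissible heuristic.

```python
import heapq

def a_star(start, goal, successors, h):
    """Best-first search on f = g + h; h must be admissible for optimality."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return path
        for s2, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(frontier, (g2 + h(s2), g2, s2, path + [s2]))
    return None                                  # goal unreachable

def successors(s):                               # toy 5x5 grid, unit costs
    x, y = s
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

goal = (4, 3)
print(a_star((0, 0), goal, successors,
             lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])))
```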

This classical planning scheme is not able to cope with uncertainties because it is based on FOPL representations; the only way it can do so is through plan mending or replanning (Sect. 14.4). When probabilistic representations are used, the plan search adopts a very different scheme, as explained in Sect. 14.2.2.2.

However, reasoning in AI and robotics involves much more than sequential planning. Robots act in the real world in which action duration determines success or failure. They have to consider events occurring at given instants and situations that unfold over time. Temporal relations and an appropriate temporal logic are necessary to represent and reason about time. Section 14.3.2 discusses temporal reasoning.

Another central issue is reasoning about space. What does it mean exactly when we represent a piece of knowledge by the predicate Ontable(Cup), or Near(Robot,Table)? How can spatial relationships be expressed symbolically from sensor data? This involves specific formalisms as well, which are developed in Sect. 14.3.3.

3.1 Using Multiple Knowledge Representation Formalisms

Automated planning carries an immediate appeal for robotic applications: the actions a robot should perform to achieve a goal are obtained as a result of reasoning in a model which can be changed to suit different environments, physical capabilities, and tasks. Yet reasoning about action per se is not a silver bullet for robot programming. As noted earlier, obtaining competent robot behavior by reasoning in a model crucially depends on the epistemological adequacy of the modeling language. As a case in point, let us elaborate on our initial example: imagine a waiter robot with two arms that has to serve meals to guests in a restaurant. The robot would have to reason about time in order to infer that cold beverages should be delivered before they get warm. Food needs to be served in front of guests, hence the robot should be capable of some form of spatial reasoning. It should be capable of reasoning about resources, e. g., infer that it cannot hold more than two parts of the meal simultaneously with its two arms, or that its tray can only accommodate a limited number of dishes. A DL ontology might inform the robot that all mugs in the cupboard are clean – hence, committing to a particular mug at planning time is not necessary – or that a cup or a mug both work well for serving coffee. Most of this knowledge is difficult or impossible to model in classical planning models [14.27], which are limited to describing the causal relations that exist between actions and predicates in FOPL.

The types of knowledge a robot should possess ultimately depend on the application. It should be however evident that – short of encoding this knowledge in FOPL, which is computationally inadequate – several models expressed in different formalisms are necessary in most meaningful domains. Note also that any plan containing actions that are to be executed by a robot should also be translated to actionable metric terms that are understandable by the robot. For instance, specific positions in which objects are to be placed, locations to navigate to, and action dispatch times should be indicated in the plan. Finally, the knowledge represented in different models a robot should use for reasoning (the causal model being only one of these) often presents nontrivial interdependences: the capacity of the robot’s tray (resource model) determines the number of trips a robot has to perform to and from the table it is clearing (causal reasoning), the type of meal (ontological model) may affect the spatial layout of the cutlery (spatial reasoning), and the time it takes for a beverage to get cold (temporal model) will affect the order of goal achievement (causal reasoning). The development of problem solving algorithms that account for these interdependences is an active topic of research. An overview of recent results in this direction is provided in Sect. 14.3.4.

Of particular interest to this discussion are temporal and spatial KR formalisms. Several temporal and spatial logics possess properties that make them both epistemological and computationally adequate for a variety of robotic problems. Below, we outline the principal temporal and spatial KR formalisms that are relevant to robotics.

3.2 Reasoning About Time

Linear temporal logic (LTL) [14.28] is a decidable propositional logic which provides a means to describe how execution paths will evolve in the future. The temporal operators $\bigcirc$ (next), $\Box$ (always), $\Diamond$ (eventually), $\mathcal{U}$ (until), and $\mathcal{R}$ (release) are used to represent conditions on the state of a system. For instance, the robot will eventually place the cup on the table (reachability), the robot will never serve a cold coffee (safety), and the robot will always reach its charging station before its battery is discharged (liveness). A fragment of LTL (specifically, syntactically co-safe LTL formulae [14.29]) has become relevant in robotics because of its ability to predicate upon discrete temporal properties of motions. For example, an exploration robot may be required to visit locations A, B, and C in a predetermined sequential order; or to obey conditions like avoid C unless A or B have been visited, and always avoid D. Given such temporal goal specifications, it is possible to compute motions that satisfy these goals with probabilistic roadmaps (PRMs) [14.30, 14.31].

LTL formulae express qualitative temporal specifications, thus offering the domain expert a means to specify conditions on execution at a high level of abstraction. LTL does not, however, consider that robot actions have a temporal extent, and that the temporal extents of different actions may be required to have specific relations. The qualitative temporal logics point algebra (PA) [14.32] and interval algebra (IA) [14.33] can be used to capture (and enforce) such relations. Specifically, they allow qualitative temporal relations among temporal variables to be represented. Variables in PA represent time points, which can be used to represent events, such as the robot has reached the table, or start/end times of actions, such as the robot starts to place a mug on the table. In IA, variables represent time intervals, e.g., the robot navigates to the table, or the robot places the mug on the table. The basic PA relations are the three temporal relations $\{<, >, =\}$. All unions of these relations ($\le$, $\ge$, $\ne$, the universal relation, and the empty relation $\emptyset$) are also valid PA constraints. In IA, constraints represent temporal relations among temporal intervals. The basic IA relations $B_{\mathrm{IA}}$ are the 13 possible temporal relations between intervals, namely precedes (p), meets (m), overlaps (o), during (d), starts (s), finishes (f), their inverses (e.g., p$^{-1}$), and equals ($\equiv$) (Fig. 14.5). A constraint in IA is a disjunction of basic relations $\{r_1, \ldots, r_n\} \subseteq B_{\mathrm{IA}}$. Going back to our example, a relevant piece of temporal knowledge for our waiter robot is that the action of picking up the mug temporally overlaps the fact that it is holding the mug, i.e., the IA relation

$$\mathit{Pick}(\mathit{Mug})\ \{o\}\ \mathit{Holding}(\mathit{Mug}).$$
(14.8)

This knowledge in fact represents the qualitative state of affairs in the presence of a successful pick-up action, whereas the contingency in which the mug slips from the robot’s gripper (an all too common occurrence with today’s robots!) can be expressed by the temporal relation

$$\mathit{Pick}(\mathit{Mug})\ \{d^{-1}\}\ \mathit{Holding}(\mathit{Mug}).$$
(14.9)
Fig. 14.5 The 13 basic qualitative relations in interval algebra. $[l, u]$ indicate metric bounds that can be attached to the basic relations to refine the metric semantics of the relations

A set of variables and constraints in PA or IA constitutes a constraint satisfaction problem (CSP) [14.34]. A fundamental reasoning problem in temporal CSPs is to test the satisfiability of the CSP – assessing whether a substitution of temporal values for variables exists that satisfies all constraints.

A related problem is that of computing the minimal representation of a given CSP, that is, the equivalent CSP formed by the strongest implied constraints. Both problems are tractable for PA and for several fragments of IA [14.35]. A particularly useful example of a tractable subalgebra of IA is the set of convex IA relations [14.36]. For example, $\{b, m, o\}$ is a convex IA relation, whereas $\{b, o\}$ is not. These good computational properties can be leveraged by robots during plan execution. Ensuring robust execution in the case of our waiter robot picking up a mug need not be hard-coded as an ad-hoc status-checking procedure in the implementation of the pick-up action. Instead, it can be modeled as a simple relation as above, and its verification can be cast as a CSP containing the relations

$$\mathit{Pick}(\mathit{Mug})\ \{o\}\ \mathit{Holding}(\mathit{Mug}),\quad \mathit{Pick}(\mathit{Mug})\ \{d^{-1}\}\ \mathit{Holding}(\mathit{Mug}).$$
(14.10)

The first constraint represents the requirement we expect to hold in nominal execution, whereas the second constraint results from perception and proprioception (more on this later). The above CSP is not satisfiable: the observed and executed situations are not consistent with the modeled temporal relation representing successful plan execution. This determination can be made in polynomial time (with respect to the number of variables in the CSP) with a particular form of inference, called path consistency  [14.37].

Although convenient for specifying high-level requirements on robot plans, PA and IA cannot capture metric information. For this, we need metric temporal constraints, which allow durations and relative temporal distances between variables to be predicated upon. An important metric temporal problem is the temporal constraint satisfaction problem (TCSP). As in PA, variables represent time points. Constraints represent disjunctions of bounds on the temporal distance between pairs of time points: given two time points $a$ and $b$, the constraint

$$a\ [l_1, u_1][l_2, u_2]\ldots[l_n, u_n]\ b$$

states that $l_i \le b - a \le u_i$ for one or more $i \in \{1, \ldots, n\}$. A TCSP is satisfiable iff at least one disjunct (pair of inequalities) per constraint holds. Unlike PA and IA, variables in a TCSP are associated with an explicit set of possible assignments of time values, called the domain. Computing the minimal representation consists of restricting the domains of time points to the smallest intervals of time containing values that satisfy all constraints. A particular restriction of the TCSP, called the simple temporal problem (STP), has the property that domains are contiguous intervals of time. The condition for remaining in the STP fragment of the TCSP is that constraints have only one disjunct, i.e., are of the form

$$a\ [l, u]\ b.$$

Whereas computational problems for the general TCSP are NP-hard, the satisfiability and minimal representation problems for the STP are tractable. Both problems are solved by path consistency inference [14.38], and highly optimized algorithms have been developed for the satisfiability and minimal representation problems [14.39, 14.40].
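The following sketch makes the STP machinery concrete: path consistency via Floyd–Warshall on the distance graph, where a constraint $a\,[l,u]\,b$ becomes the edges $b - a \le u$ and $a - b \le -l$. This is a minimal illustration, not one of the optimized algorithms cited above.

```python
INF = float("inf")

def stp_minimal(n, constraints):
    """n time points 0..n-1; constraints are (a, b, l, u) meaning l <= b-a <= u.
    Returns the minimal distance matrix, or None if the STP is inconsistent."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for a, b, l, u in constraints:
        d[a][b] = min(d[a][b], u)     # b - a <= u
        d[b][a] = min(d[b][a], -l)    # a - b <= -l
    for k in range(n):                # Floyd-Warshall tightening
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(n)):
        return None                   # negative cycle: unsatisfiable
    return d                          # d[i][j]: tightest bound on t_j - t_i
```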

The semantics of basic PA and IA can be understood through the use of metric temporal constraints. For instance, the constraint

$$\mathit{Pick}(\mathit{Mug})\ \{o\}\ \mathit{Holding}(\mathit{Mug})$$
(14.11)

is equivalent to the metric constraints

$$\begin{aligned}
&\mathit{Pick}(\mathit{Mug})^-\ [1, \infty)\ \mathit{Holding}(\mathit{Mug})^-,\\
&\mathit{Holding}(\mathit{Mug})^-\ [1, \infty)\ \mathit{Pick}(\mathit{Mug})^+,\\
&\mathit{Pick}(\mathit{Mug})^+\ [1, \infty)\ \mathit{Holding}(\mathit{Mug})^+,
\end{aligned}$$
(14.12)

where $(\cdot)^-$ and $(\cdot)^+$ represent, respectively, the start and end times of the corresponding intervals. Moreover, metric bounds can be used to define a metric extension of IA in which we can express high-level relations between intervals with additional metric bounds (Fig. 14.5). For instance, $A\ \{p[5, 13]\}\ B$ states that interval $A$ should end at least 5 and at most 13 time units before interval $B$ starts. In addition to binary constraints, it is also common practice to define the two unary constraints $\mathit{Release}[l, u]\ A$ and $\mathit{Deadline}[l, u]\ A$, stating, respectively, that $A$ starts between $l$ and $u$ time units after the origin of time, and that $A$ ends between $l$ and $u$ time units after the origin of time.

Overall, constraint-based temporal calculi can represent both qualitative and metric time. Provided that appropriate fragments of IA are used in modeling, the reasoning tasks can be conveniently reduced to path consistency inference in an STP. This is an attractive feature for robot reasoning: it provides a means to anchor observations and proprioception in time. In the example above, the robot must infer from sensing that the relation

$$\mathit{Pick}(\mathit{Mug})\ \{d^{-1}\}\ \mathit{Holding}(\mathit{Mug})$$
(14.13)

holds. An implementation of this capability can be achieved by representing as time points both observed and planned behavior, and constraining these time points so as to reflect the precise times in which they were observed. Assuming the current time is 20, that the robot started to pick the mug at time 5, and that the gripper reports that it is holding an object between times 10 and 18, the following STP models the robot’s situation

$$\begin{aligned}
&\mathit{Pick}(\mathit{Mug})\ \{o\,[1, \infty)[1, \infty)[1, \infty)\}\ \mathit{Holding}(\mathit{Mug}),\\
&\mathit{Release}[5, 5]\ \mathit{Pick}(\mathit{Mug}),\quad \mathit{Deadline}[20, 20]\ \mathit{Pick}(\mathit{Mug}),\\
&\mathit{Release}[10, 10]\ \mathit{Holding}(\mathit{Mug}),\quad \mathit{Deadline}[18, 18]\ \mathit{Holding}(\mathit{Mug}).
\end{aligned}$$
(14.14)

The above STP is not satisfiable, reflecting the fact that execution has not gone according to plan. Again, this state of affairs can be ascertained in low-order polynomial time, hence providing a simple mechanism for online fault detection.
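Reusing the stp_minimal sketch from above, the fault can be detected mechanically. Encoding (14.14) with time point 0 as the origin and the three $[1, \infty)$ gaps of the metric overlaps relation (cf. (14.12)), the solver reports inconsistency; the indices and encoding are illustrative assumptions.

```python
# Time points: 0 = origin, 1 = Pick-, 2 = Pick+, 3 = Holding-, 4 = Holding+.
constraints = [
    (1, 3, 1, INF),    # Holding starts at least 1 after Pick starts
    (3, 2, 1, INF),    # Pick ends at least 1 after Holding starts
    (2, 4, 1, INF),    # Holding ends at least 1 after Pick ends
    (0, 1, 5, 5),      # Release[5,5]    Pick(Mug)
    (0, 2, 20, 20),    # Deadline[20,20] Pick(Mug)
    (0, 3, 10, 10),    # Release[10,10]  Holding(Mug)
    (0, 4, 18, 18),    # Deadline[18,18] Holding(Mug)
]
print(stp_minimal(5, constraints))   # None: execution deviated from the plan
```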

The use of temporal constraint reasoning in planning and plan execution goes beyond diagnosing failures. The fact that intervals are grounded on a metric temporal CSP (an STP) provides a means to maintain knowledge about when actions should be dispatched, and whether they should be delayed to accommodate unexpected contingencies. For instance, we can express that the robot should start moving back to the counter once the mug has been held for at least 3 s with the constraint

$$\mathit{Holding}(\mathit{Mug})\ \{o\,[3, \infty)[1, \infty)[1, \infty)\}\ \mathit{Move}(\mathit{Table}, \mathit{Counter}).$$
(14.15)

A delay in the start time of the $\mathit{Holding}(\mathit{Mug})$ predicate will propagate to the start time of the $\mathit{Move}(\mathit{Table}, \mathit{Counter})$ action, progressively pushing its start time into the future until the robot starts holding the mug. Correct propagation of exogenous events is ensured by maintaining in the STP the requirements of the plan together with constraints representing when events are observed, when actions are dispatched, and when they terminate. A simple procedure for maintaining these constraints is the following:

  • When an action is dispatched at time $t$, a $\mathit{Release}[t, t]$ constraint is imposed on the corresponding interval, constraining its start time to the current time.

  • The fact that the action is still executing at time $t+m$ is modeled with the constraint $\mathit{Deadline}[t+m+1, \infty)$, modeling the fact that the action will end some time in the future.

  • When the lower-level executive signals at time $t+m+n$ that an action has finished, the constraint $\mathit{Deadline}[t+m+n, t+m+n]$ is added to the STP, thus constraining the action's end time.

Every time a constraint is added, the minimal STP is recomputed – that is, the domains of the time points (start and end times of actions and predicates) are updated. In so doing, we ensure that the lower bound of time points representing the start times of actions will always correspond to the earliest admissible time at which actions should be dispatched to the robot’s executive layer. This guarantees that execution will uphold all temporal constraints modeled in the temporal CSP. These may include additional requirements on the plan, such as specifications regarding necessary orderings of actions, actions that must occur concurrently, and synchronizations with exogenous events.

The computational adequacy of temporal CSPs entails that reasoning about temporal models can be performed online during execution. Thus, reasoning about time is a way to enable fault diagnosis, guarantee timely action dispatching, and enforce temporal specifications on robot plans. Mechanisms based on temporal constraint reasoning have been used as a tool for this purpose in many settings involving execution of plans on real physical systems [14.41, 14.42]. Similar techniques have been used to account for uncontrollable events in plan execution [14.43], inferring context from sensor traces [14.44], and integrated context inference and planning [14.45]. A good overview of the fundamental principles of temporal constraint reasoning underlying these results is given in Dechter’s book on constraint processing [14.46].

3.3 Reasoning About Space

Spatial KR formalisms serve the purpose of specifying desired spatial relations in a scene. As for temporal models, they can be used to make sense of sensor traces, to enforce conditions during plan execution, as well as to drive the planning process itself. Most work has focused on the use of qualitative spatial relations for scene understanding (e.g., in the context of perceptual anchoring [14.47]). Structural pattern recognition, important applications of which can be found in medical domains, employs cognitive vision techniques to match qualitative spatial knowledge (representing a specified structure) to a perceived context [14.48, 14.49]. In many applications, qualitative relations do not belong to a well-defined calculus, but rather are tailored to capture specific features (e.g., distance, orientation, and shape) which are useful for pattern specification and recognition in particular applications. As is the case for temporal knowledge, a well-defined spatial calculus is useful because of its provable formal properties – e.g., tractability of specific reasoning problems, expressiveness of the language, and so on. Well-founded spatial calculi permit logical reasoning, and, not surprisingly, they are grounded on similar principles as PA and IA. We illustrate these principles here and show simple examples of their use in robotic applications.

The main entities of interest in qualitative spatial calculi are objects, or, rather, the regions (or points) of physical space that they occupy. Spatial calculi provide a means to represent the relations among these regions. There exist several well-known and well-studied qualitative calculi. Each of these calculi focuses on one category of spatial concepts – e. g., topology, direction, and distance. Region connection calculus (RCC) [14.50] is used for representing and reasoning with topological relations. Cardinal direction calculus (CDC) [14.51] is an approach based on directional relations. As for temporal calculi, these calculi use the language of constraints to represent spatial properties, and classical constraint-based reasoning techniques like path consistency [14.52] to ascertain consistency.

Of particular interest here is RCC, which is grounded on eight spatial relations describing the connectedness of regions: disconnected (DC), externally connected (EC), tangential proper part (TPP), nontangential proper part (NTPP), partially overlapping (PO), equal (≡), and the inverses TPP⁻¹ and NTPP⁻¹ (Fig. 14.6).

Fig. 14.6 The eight basic qualitative relations in RCC

The full algebra deriving from the eight basic RCC relations is called RCC-8. An important restriction of RCC-8 is RCC-5, obtained by subsuming the relations DC and EC into one relation, and NTPP and TPP into another. The satisfiability and minimal CSP problems are NP-hard for both RCC-8 and RCC-5 – however, these problems are tractable if we restrict the language to the basic relations (as for IA), and a large number of tractable fragments of RCC-8 and RCC-5 are known [14.53]. A good example of where these properties are useful is in the problem of anchoring modeled relations to observed spatial layouts. This is exemplified in the work by Cohn et al. [14.54] on video sequence analysis, where the problem is to construct qualitative representations of the evolution of spatial relations in video sequences. The qualitative descriptions are obtained online, due to the computational adequacy of the qualitative spatial calculus used (a derivative of RCC called CORE-9).

Qualitative spatial reasoning can be used, much like qualitative temporal reasoning, to robustify plan execution. We may, for instance, require that mugs should be placed on top of saucers (Mug NTPP Saucer). As we have shown for temporal reasoning, the task of verifying the correct state of affairs can be cast to a CSP which is guaranteed to be feasible iff the desired spatial layout is observed.
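As an illustration of such a check, the following sketch (an illustrative stand-in, not the anchoring machinery of the cited works) classifies two axis-aligned bounding boxes – as an off-the-shelf perception module might report them – into one of the eight RCC-8 relations, and tests a Mug NTPP Saucer requirement; the box coordinates are assumptions for the example.

```python
def rcc8(a, b):
    """Classify two axis-aligned boxes (x1, y1, x2, y2) into an RCC-8 relation."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if a == b:
        return "EQ"
    # closed boxes fully apart -> DC; touching only at the boundary -> EC
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "DC"
    if ax2 == bx1 or bx2 == ax1 or ay2 == by1 or by2 == ay1:
        return "EC"
    a_in_b = ax1 >= bx1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2
    b_in_a = bx1 >= ax1 and by1 >= ay1 and bx2 <= ax2 and by2 <= ay2
    if a_in_b:   # proper part: tangential iff some boundary coincides
        touches = ax1 == bx1 or ay1 == by1 or ax2 == bx2 or ay2 == by2
        return "TPP" if touches else "NTPP"
    if b_in_a:
        touches = bx1 == ax1 or by1 == ay1 or bx2 == ax2 or by2 == ay2
        return "TPP-1" if touches else "NTPP-1"
    return "PO"

# Monitoring the requirement Mug NTPP Saucer from perceived bounding boxes:
saucer = (0.0, 0.0, 16.0, 16.0)
mug = (4.0, 4.0, 12.0, 12.0)
assert rcc8(mug, saucer) == "NTPP"   # requirement satisfied
```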

However, note that all we can express with RCC constraints are topological relations between objects. RCC cannot express notions like left-of, above, and so on, as the only assumption made on variables is that they are convex regions of space. Conversely, rectangle algebra (RA) [14.55] expresses more than topological relations – at the cost of assuming that objects have a particular shape. Specifically, RA is an extension of IA to two dimensions. Its variables thus represent axis-parallel rectangles, and relations are pairs of IA relations, one for each axis.

RA can be augmented, like IA, with metric bounds. The resulting calculus, called ARA+ [14.56], subsumes both topology and cardinal relations. For instance (Fig. 14.7), the relation

B ⟨p[5,13], p⟩ A
(14.16)

states that B precedes A on both the x and y axes, and that the distance along the x axis between A and B should be at least 5 and at most 13. This ARA+ relation subsumes the qualitative CDC relation A is Northeast of B, as well as the RCC relation A {DC} B. Note that rectangular regions are compatible with the representation of segmented objects in most off-the-shelf perception modules. This property facilitates the process of going from sensor data to symbols. Toward this aim, ARA+ provides the unary constraints At[l1,u1][l2,u2][l3,u3][l4,u4] and Size[l1,u1][l2,u2]. The At constraint bounds the placement of a rectangle in 2-D space, while the Size constraint bounds the lengths of its sides. Note that the At constraint performs a function similar to the Release and Deadline constraints used in the metric extension of IA, which denote specified or perceived absolute placements of intervals in time. It is thus intuitive to see how the At constraint can be implemented with Release and Deadline constraints in the augmented IA CSPs used to represent the projections of rectangles on the two axes.

Fig. 14.7 The ARA+ relation B ⟨p[5,13], p⟩ A, which subsumes the RCC relation A {DC} B and the CDC relation A Northeast of B
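To illustrate how RA relations arise from the two projections, the following sketch (illustrative code; the interval and rectangle encodings are our own choices) computes the Allen (IA) relation on each axis and pairs them into the RA relation of two rectangles, reproducing the B ⟨p, p⟩ A situation of Fig. 14.7.

```python
def allen(a, b):
    """Allen (IA) relation between intervals a = (a1, a2) and b = (b1, b2)."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "p"    # precedes
    if a2 == b1: return "m"    # meets
    if a1 == b1 and a2 == b2: return "="          # equals
    if a1 == b1: return "s" if a2 < b2 else "si"  # starts / started-by
    if a2 == b2: return "f" if a1 > b1 else "fi"  # finishes / finished-by
    if b1 < a1 and a2 < b2: return "d"    # during
    if a1 < b1 and b2 < a2: return "di"   # contains
    if a1 < b1 < a2 < b2:   return "o"    # overlaps
    if b1 < a1 < b2 < a2:   return "oi"   # overlapped-by
    if b2 < a1:  return "pi"   # preceded-by
    return "mi"                # met-by (b2 == a1 is the only remaining case)

def ra_relation(r, s):
    """RA relation of axis-parallel rectangles (x1, x2, y1, y2):
    a pair of IA relations, one per axis."""
    return (allen((r[0], r[1]), (s[0], s[1])),
            allen((r[2], r[3]), (s[2], s[3])))

# B precedes A on both axes: B <p, p> A, which entails
# A {DC} B in RCC and "A Northeast of B" in CDC.
B = (0, 4, 0, 3)
A = (9, 14, 6, 10)
print(ra_relation(B, A))   # -> ('p', 'p')
```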

We can employ ARA+ constraints to represent desired placements of objects, both in qualitative and metric terms. For instance, the specification of a well-set table for our waiter robot could be

Fork ⟨d[5,+∞)[5,+∞), d[5,+∞)[5,+∞)⟩ Table,
Knife ⟨d[5,+∞)[5,+∞), d[5,+∞)[5,+∞)⟩ Table,
Fork ⟨p, d⟩ Mug,
Mug ⟨p, d⟩ Knife,
Mug Size[8,8][8,8],
Fork Size[2,2][15,15],
Knife Size[2,2][15,15],
(14.17)

that is, forks and knives should be at least 5 cm from the edge of the table, forks should be located on the left of mugs and knives on the right, the size of forks and knives is 2 × 15 cm², and the size of mugs is 8 × 8 cm².

The above relations can be maintained, much as we have illustrated for temporal reasoning, in a lower-level metric constraint-based representation – in this case, consisting of two STPs, one for each axis of the reference frame. It has been shown [14.56] that these two STPs are satisfiable iff the ARA+ CSP is satisfiable, and that the minimal STPs contain all the admissible placements of objects. The minimal STPs can thus be used to extract an admissible placement of the objects in the scene. Technically, this is done by hallucinating the objects that are to be placed in the scene, that is, representing in the spatial CSP both observed objects and those that are intended to be placed into the scene. In the example above, supposing the robot must place the mug on a table which already has a fork and a knife on it, the hallucinated object is the mug, while the fork and knife would be constrained by At constraints representing their observed locations. The bounds of all variables in the minimal STPs represent all the admissible placements of the objects. Those of the fork and knife will not have been refined in the minimal representation, as they were already fixed by the At constraints resulting from observation; conversely, the bounds of the Mug rectangle are refined in the minimal representation to contain only those points that are admissible with respect to the other constraints in the spatial CSP.

So far, we have shown how employing constraint-based temporal and spatial KR formalisms allows us to cast fault detection as a tractable problem that can be solved online. Another important aspect of plan execution that is facilitated by these representations is fault identification and repair [14.57]. It is easy to see why an explicit representation formulated in terms of constraints facilitates these tasks: suppose that the fork and knife are placed on the table correctly from the qualitative point of view – i. e., the fork is left of the knife – but that they are only 5 cm apart. The resulting spatial CSP would not be satisfiable, as it would be impossible to fit the mug (whose bounding box is 8 × 8 cm²) between the two pieces of cutlery, as prescribed by the constraints (Fork ⟨p, d⟩ Mug) and (Mug ⟨p, d⟩ Knife). The spatial CSP can be used at this point as a tool to perform culprit analysis: the same CSP without the At constraint modeling the observed placement of the fork represents the situation in which we assume that the fork can be moved; if this CSP is satisfiable, then we know that replacing the fork is a way to achieve a well-set table.
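A minimal sketch of this culprit analysis follows; it uses a brute-force one-dimensional stand-in for the x-axis STP rather than a real STP solver, and all widths, positions, and names are assumptions made for the example.

```python
# A brute-force stand-in for the x-axis STP of the table-setting example.
# Widths in cm; "p" constraints require strict precedence along x.
# All numbers and names are illustrative, not taken from the chapter.
from itertools import product

WIDTH = {"Fork": 2, "Mug": 8, "Knife": 2}
TABLE = range(0, 41)   # candidate start positions on a 40 cm strip

def satisfiable(at):
    """Check Fork p Mug p Knife given optional fixed placements `at`
    (object -> observed start position). Returns a witness or None."""
    objs = ["Fork", "Mug", "Knife"]
    domains = [[at[o]] if o in at else list(TABLE) for o in objs]
    for fork, mug, knife in product(*domains):
        if (fork + WIDTH["Fork"] < mug and       # Fork precedes Mug
                mug + WIDTH["Mug"] < knife):     # Mug precedes Knife
            return {"Fork": fork, "Mug": mug, "Knife": knife}
    return None

observed = {"Fork": 10, "Knife": 17}     # only 5 cm between fork and knife
print(satisfiable(observed))             # -> None: the mug does not fit

# Culprit analysis: relax one At constraint at a time and re-check.
# (Here either relaxation works; the chapter's example moves the fork.)
for culprit in observed:
    relaxed = {o: p for o, p in observed.items() if o != culprit}
    if satisfiable(relaxed):
        print("moving the", culprit, "restores a well-set table")
```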

Spatial reasoning has not received as much attention as temporal reasoning in the context of robotic applications. The AI and robotics research communities are, however, quickly clustering around this important topic. Spatial calculi are a well-studied topic in AI, albeit with fewer applications in robotics than temporal reasoning. As robots become more competent in manipulation and navigation tasks, the issue of representing and enforcing high-level requirements on space comes to the fore. As always, it is the combination of epistemological and computational adequacy of significant fragments of spatial KR formalisms that makes them useful in robot applications. Indeed, the potential of qualitative and metric spatial reasoning goes beyond planning and plan execution, with active research directions in human–robot interaction [14.58, 14.59, 14.60] and object search [14.61].

3.4 Reasoning About Multiple KR Formalisms

Despite their simplicity, the examples we have used to illustrate the basic concepts of temporal and spatial reasoning point to the fact that reasoning about action, time, space, and any other knowledge possessed by the robot must occur jointly. For instance, the decision to place the mug on the table may depend on the spatial layout of other objects on the table, or on whether there is a more urgent task to be carried out before doing so. It is easy to see that even this simple task cannot ignore kinematic and geometric constraints, which may call for approaching the table from a different side, or for moving another object before placing the mug. Recent work in integrated task and motion planning is a response to the need for integrating these forms of reasoning into planning. Today, this is very much seen as the next big challenge, and consistent research results in this specific area of hybrid reasoning are closing the gap between classical planning and real robotic applications [14.62, 14.63, 14.64, 14.65, 14.66, 14.67].

We argue, however, that kinematic and geometric constraints are only two of the important types of knowledge necessary for achieving competent robot behavior. Comparatively less work exists on integrating other types of knowledge. DL reasoning has been employed to generate high-level plans [14.68] and to refine planning domains [14.69], and Galindo et al. [14.70] show how to enhance the task planning process with spatial and ontological models. The largest volume of work in integrating diverse forms of reasoning into planning focuses on integrating planning, temporal, and resource reasoning – e. g., planning with metric temporal reasoning [14.71], qualitative temporal models [14.44], combined qualitative and metric temporal reasoning [14.72], and resource scheduling [14.73, 14.74, 14.75].

Although these advancements contribute to widening the expressive power of models for robot reasoning, they do not suggest general methods for jointly reasoning about heterogeneous KR formalisms. As noted earlier, the types of knowledge a robot should possess ultimately depend on the application, which makes this an important basic research question. A few examples pointing in the direction of general solutions do exist, although work in the area is sparse. These include planning modulo theories [14.76], an extension of satisfiability modulo theories (SMT) [14.77], which enriches classical planning models with semantics belonging to arbitrary first-order theories. SMT is itself an example of a general hybridization scheme, and has been employed in a robotic context by Nedunuri et al. [14.78] to enforce metric requirements on motion and manipulation tasks. The approach known in the literature as meta-constraint reasoning is similar in structure to SMT, but is grounded on the more general notion of CSP. Meta-constraint reasoning has been used to develop solvers for highly expressive planning domains that include qualitative and metric time, resources, and logical constraints [14.79]. In the context of robotic applications, the approach has been used for online planning with qualitative and metric time, spatial relations in ARA+, and resources [14.57], as well as for online configuration planning for multirobot systems [14.80].

3.5 Knowledge Representation Systems for Robots

In autonomous robot control systems, the kinds of information that we have discussed in the previous subsections are not only represented and reasoned about but also used subsymbolically for generating perception-guided activity. For example, some information such as the appearance and poses of the objects on the table might be generated by the perception system of the robot through the interpretation of sensor data. Other information such as the position of the robot in the environment might be estimated using state estimation algorithms and stored in dedicated data structures. Also, symbolic action descriptions have to be translated into low-level parameterizations of control routines in order to generate behavior and cause effects.

These aspects of embodying abstract and symbolic reasoning in robots are at least partly addressed by system-level research on KR and reasoning for robots. The resulting systems serve as integrated question-answering components in a robot’s control program that gather information from different sources and employ a range of inference methods for controlling and parameterizing control routines.

Examples of recent systems include the ontology-based unified robot knowledge (OUR-K) framework [14.81], mostly used for navigation actions, the ORO system [14.82] that focuses on knowledge for human–robot interaction and dialog grounding, and the KnowRob system [14.83] providing deep knowledge for robotic manipulation tasks. The semantic layer [14.84] of the PEIS Ecology [14.85] serves both autonomous robots and ambient intelligent environments. In the Ke Jia project [14.86], Answer Set Programming is used as the representation for integrating KR, natural language processing, task planning, and motion planning on a mobile platform. Classical cognitive architectures such as Soar [14.87] and ACT-R also integrate modules for storing knowledge [14.88] with inference capabilities, but have only rarely been used on robots [14.89].

One field that has seen much progress in the past years is the creation and representation of environment models, often termed semantic maps. Advances in object recognition have made it possible to detect and perceptually interpret single objects in environments, which enables robots to store higher-level semantic information about them in their respective maps. The term semantic is used for a range of different interpretation capabilities: from a mere segmentation and categorization of the environment into different region types to the segmentation of objects, their categorization, and the interpretation of their properties [14.90]; from the co-occurrence of objects in scenes [14.91] to environment representations in description logics [14.92]; and from statistical relational environment models [14.93] to an embedding of spatial information into an encyclopedic and common-sense knowledge base [14.94]. Many of the reasoning tasks that we have discussed in Sect. 14.3.3 have implicitly assumed expressive semantic maps that can make symbolic place descriptions such as on the table or in the cupboard effective.

Knowledge representation aspects also become increasingly important for the instruction of robots in natural language as well as for human–robot interaction. For understanding directives involving objects in the environment [14.95] and for following spoken directions [14.96, 14.97, 14.98, 14.99], a robot might have to ground the words it hears or reads into its percepts and actions. The natural-language text may be the result of direct interaction with a human, but can also be obtained from resources on the Web, for example, for mining object locality knowledge [14.100] or extracting task instructions from websites [14.101].

Web-enabled robotics is another branch of robotics that has recently become increasingly important. Modern robot control systems use the World Wide Web as an information resource [14.102], for sharing knowledge and skills [14.103], for telepresence applications [14.104], to employ web services [14.105], and to use computational resources provided through the web [14.106].

4 Plan-Based Robot Control

Plan-based control is an information processing approach for controlling autonomous robots that aims at making robot control systems more general – in at least three respects. First, it enables robots to successfully carry out multiple, diverse, and possibly interfering tasks. Second, it can improve the course of action by adapting the control software for the respective execution context. Third, plan-based control can enable robots to perform novel tasks without requiring the collection of extensive experience.

In the plan-based approach, robots generate actions and behavior by synthesizing, maintaining, and executing plans, where plans are robot control programs that a robot can not only execute but also reason about and manipulate [14.107]. Thus plan-based controllers can manage and adapt plans in order to better achieve complex and changing tasks [14.108]. The use of plans enables robots to flexibly interleave complex and interacting tasks, exploit opportunities, quickly plan their courses of action, and, if necessary, revise their intended activities.

Plans in plan-based control have two roles. They are (1) executable prescriptions that can be interpreted by the robot to generate behavior in order to accomplish its jobs and (2) representations that can be synthesized and revised by the robot to meet the robot’s criterion of utility. Besides having means for representing and generating plans, plan-based controllers are also equipped with tools that enable computational processes to (1) predict what might happen when a robot controller gets executed and return the result as an execution scenario, (2) infer what might be wrong with a robot controller given an execution scenario, and (3) perform complex revisions on robot controllers.

In the most general terms, robot planning can be considered to be the automatic generation, refinement, revision, and optimization of robot plans [14.107]. As a computational problem, it can be formulated as follows: given an abstract plan P and a description of the current situation, find an executable realization Q that maximizes some objective function V. In this problem formulation, an abstract plan might be to go shopping and clean up the apartment, to win a robot soccer game, to monitor the traffic in a particular area using an autonomous helicopter, or, for a museum tour-guide robot, to inform and entertain the visitors of the museum.

Robot planning algorithms aim at finding appropriate plans by hypothesizing possible courses of action and predicting what could happen when the plans get executed. Based on these predictions, the robot decides whether the plan will meet its objectives.

In the remainder of this section we will first discuss conceptual models underlying robot planning and the representation of robot plans. We will then address the automated synthesis of robot plans, and finally discuss mechanisms for revising plans and for optimizing them based on experience.

4.1 Models of Plan-Based Control

The generation of plans and the prediction of what will happen is based on models of the robot, its actions, and the world. The most commonly used models are state-transition systems (also called discrete-event systems or problem spaces) [14.27].

Using state-transition systems, the behavior and the effects of robot activity can be represented as a triple Σ = (S, A, γ), where:

  • S = {s₁, s₂, …} is a set of states

  • A = {a₁, a₂, …} is a set of actions

  • γ : S × A → S is a state-transition function.

Thus, the action repertoire of a robot can be stated as a graph where the nodes represent the states of the environment and links of the form

sᵢ —A→ sⱼ

represent the fact that the state sᵢ can be transformed into the state sⱼ by the robot executing action A.

Imagine a robot has to stack three different objects, e. g., a cup, a saucer, and a set. In the initial state, all objects are placed on the table surface. The robot can pick up an object (provided that there is no other object on top of it) and can place it either on the surface or on top of another object (provided, again, that there is nothing already on top of it). If we further assume that each object can be placed on any other object, and that only a single object can be moved at a time (and not a stack of two), then we get the state space depicted in Fig. 14.8.

Fig. 14.8 Example: problem space for stacking three objects
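The state space of Fig. 14.8 can be enumerated mechanically. The following sketch (illustrative code; the three object names are placeholders of our choosing) explores all states reachable by single pick-and-place moves and finds the 13 distinct configurations of three distinguishable objects.

```python
# Enumerate the state space of the three-object stacking example by
# breadth-first exploration. A state is a frozenset of stacks, each stack
# a tuple listing objects bottom-up. Object names are generic placeholders.
from collections import deque

def successors(state):
    for stack in state:
        top = stack[-1]                       # only a clear object can move
        rest = state - {stack}
        shrunk = rest | ({stack[:-1]} if len(stack) > 1 else set())
        # place the moved object on the table ...
        yield frozenset(shrunk | {(top,)})
        # ... or on top of any other stack
        for target in rest:
            yield frozenset((shrunk - {target}) | {target + (top,)})

initial = frozenset({("cup",), ("saucer",), ("plate",)})
seen, frontier = {initial}, deque([initial])
while frontier:
    s = frontier.popleft()
    for nxt in successors(s):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
print(len(seen))   # -> 13 distinct states for three distinguishable objects
```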

The state-transition model abstracts away aspects of robot activity such as the structure of actions and the events, behavior, and effects that are caused during action execution. Actions are considered to cause atomic state transitions.

Even with these abstractions, a number of variations of planning problems are investigated. These variations result from different trade-offs one can make with respect to the computational complexity of the planning problem, the strength of assumptions one makes about the capabilities of the robot, and the realism with which one wants to represent the behavior of the robot and the effects of its actions.

An important subclass of state-transition models are those that characterize actions by specifying their preconditions and effects [14.109, 14.110, 14.25]. Preconditions state the conditions under which the respective action is executable and has the specified effects. The effects represent how the world changes when the action is executed.

If these models are stated as axioms in a logical representation language (Sect. 14.2) planning methods can be particularly powerful: they can compute sequences or partially ordered sets of actions that are provably working plans from the logical point of view. This means that planning algorithms can find plans that are guaranteed to transform any state satisfying a given state description into a state that satisfies a given goal, if such a plan exists. Unfortunately, in many robot applications action models that are axiomatized this way are too unrealistic.
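A minimal sketch of such a precondition/effect model, in the STRIPS spirit, is given below; the predicate and action names are our own choices for the stacking example, not a fragment of any particular planning language.

```python
# A STRIPS-style action schema, sketched for the stacking domain. A state
# is a set of ground atoms; an action is applicable if its preconditions
# hold, and executing it deletes and adds atoms. Names are illustrative.
def unstackable(state, x, y):
    """Preconditions of unstack(x, y): x sits on y and nothing is on x."""
    return {("on", x, y), ("clear", x)} <= state

def unstack(state, x, y):
    """Move x from on top of y to the table (delete and add effects)."""
    assert unstackable(state, x, y)
    return (state - {("on", x, y)}) | {("ontable", x), ("clear", y)}

s0 = {("on", "cup", "saucer"), ("clear", "cup"), ("ontable", "saucer")}
s1 = unstack(s0, "cup", "saucer")
print(("clear", "saucer") in s1)   # -> True: the effect holds
```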

Many extensions of the state-transition model deal with providing the expressive power needed to state action models more realistically. The first one extends the state-transition model by allowing state transitions to be caused by a combination of an action and an event (γ : S × A × E → S) rather than the action alone. This modification allows us to represent action failures by stating that a failure event occurs while the action is executed. It also allows us to approximate environments that are dynamically changing. The second extension is to change the state-transition function such that it maps into a subset of states rather than a single state (γ : S × A → 2^S). This modification allows the robot to reason through the nondeterminism that often cannot be avoided when autonomous robots perform realistic activities. Other researchers extend models used for plan-based control by including aspects that allow the robot to reason about its resources, geometric aspects of manipulation actions [14.111, 14.24], sensing actions, and so on.

Besides the state-transition model researchers also use hybrid system models to reason about robot activities [14.112, 14.113, 14.114]. In the hybrid system model the action repertoires of robots are specified as continuous variable dynamical systems with a phased operation. Within each phase, called control mode, the system evolves continuously according to the dynamical law of that mode, called continuous flow. Thus, the state of the hybrid system can be thought of as a pair – the control mode and the continuous state. The control mode identifies a flow, and the continuous flow identifies a position in it. Also associated with each control mode are so-called jump conditions, specifying the conditions that the discrete state and the continuous state together must satisfy to enable a transition to another control mode. The transitions can cause abrupt changes of the discrete as well as the continuous state. The jump relation specifies the valid settings of the system variables that might occur during a jump. Then, until the next transition, the continuous state evolves according to the flow identified by the new control mode.

The advantage of a hybrid-system-based conceptualization over state-based ones is that hybrid systems can adequately model concurrent processes with interfering continuous effects. They also allow for discrete changes in process parameterization, which we need to model the activation, deactivation, and reparameterization of control processes through concurrent reactive plans (Sect. 14.4.2). In addition, hybrid system based conceptualizations can model the procedural meaning of waiting for and reacting to asynchronous events.

While the increased expressive power allows us to model much more faithfully the behavior that today's autonomous manipulation robots generate, the models are also in most cases too complex for completely synthesizing action plans from primitive actions. Many of the modeling issues to be addressed at the intersection of action planning and control are discussed in [14.115, 14.116].

4.2 Representation of Plans

When the robot is performing its activity, it is under the control of a plan. The most common forms of action plans in autonomous robot control are:

  • State-transition plans, totally or partially ordered sets of atomic actions.

  • Policies, mappings from (perceived) environment states to actions.

  • Reactive plans that specify how the robot is to respond to sensory events and execution failures in order to accomplish its jobs.

4.2.1 State-Transition Plans

Robot action plans that are generated according to the state-transition model are ordered sets of actions, often called plan steps. The assumptions of the state-transition model, in particular that plan steps are atomic and that their execution can be abstracted away, are not satisfied in many robot applications. To execute plan steps successfully, actions have to include subactions such as detecting the object to be fetched in a scene or detecting that the object has been successfully grasped before lifting the arm. As autonomous robot activity is far from perfect, the robot might not be able to generate the necessary execution events, or it might generate the events but fail to achieve the desired effects. For example, an object might slip out of the robot's hand after grasping it.

As a consequence, state-transition plans can often only serve as guidelines for generating robust and flexible robot behavior [14.117]. To close the gap to low-level robot control, researchers have proposed multilayer software architectures for autonomous robot activity, in particular 3T (3-tiered) architectures [14.118] (Chap. 12). 3T architectures run planning and execution at different software layers and on different time scales, with a sequencing layer synchronizing between the two. Each layer uses a different form of plan or behavior specification language. The planning layer typically uses a problem-space plan, whereas the execution layer employs feedback control routines that can be activated and deactivated. The intermediate layer typically uses a simple form of a reactive plan language (see below). Even though 3T architectures have been shown to produce competent activity in some application domains, important issues remain to be addressed for more complex applications. One of them is that the behavior generated by the sequencing layer might deviate substantially from the action model used for planning, which can cause plans not to work as expected by the planning component's action models. As the planning layer abstracts away from control structures, it also lacks the expressiveness to specify robust and flexible behavior.

4.2.2 MDP Policies

A second category of plans are policies, which are mappings from discretized robot states into robot actions. Such policies are computed as solutions of Markov decision problems (Sect. 14.2.2.2). The policies computed by MDPs aim at robustness and at optimizing the average performance. They are less appropriate when the consequences of decisions lie far in the future. The decision of how to grasp an object, for example, might considerably affect the ease with which the object can be placed later in the activity episode. Extensions of MDP policies such as the combination of programming and reinforcement learning [14.119, 14.120] and the structuring of policies with so-called options [14.121, 14.122] partly address these issues.
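To give the flavor of how such policies are computed, the following sketch runs value iteration on a toy corridor MDP and extracts a policy as a state-to-action mapping; the states, rewards, slip probability, and discount factor are all invented for the example.

```python
# Value iteration on a toy MDP: states 0..3 in a corridor, actions L/R,
# a slip probability of 0.1 (the robot stays put), reward only at state 3.
# All parameters are illustrative.
GAMMA, SLIP, N = 0.95, 0.1, 4

def reward(s):
    return 1.0 if s == N - 1 else 0.0

def step(s, a):              # intended successor of action a in state s
    return max(0, min(N - 1, s + (1 if a == "R" else -1)))

def q(V, s, a):              # expected value: intended move, or slip (stay)
    nxt = step(s, a)
    return ((1 - SLIP) * (reward(nxt) + GAMMA * V[nxt])
            + SLIP * (reward(s) + GAMMA * V[s]))

V = [0.0] * N
for _ in range(100):                       # iterate until (near) convergence
    V = [max(q(V, s, a) for a in "LR") for s in range(N)]
policy = {s: max("LR", key=lambda a: q(V, s, a)) for s in range(N)}
print(policy)    # -> {0: 'R', 1: 'R', 2: 'R', 3: 'R'}
```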

4.2.3 Concurrent Reactive Plan Languages

Concurrent reactive plan languages are the result of applying programming language design methods to the specification of robot behavior. Concurrent reactive plans specify how the robot is to respond to events, including perception and failure events, in order to achieve its goals [14.123]. Such plan languages offer rich sets of control structures that support the structuring and transparent synchronization and monitoring of concurrent control processes, the detection and recovery from execution failures, the context-dependent execution of subplans, the prompt reaction to asynchronous events, and data structures for the descriptions of objects to be manipulated [14.124, 14.125]. Some reactive plan languages also incorporate explicit representations of the beliefs, goals, and intentions of the robot [14.126, 14.127, 14.128] and even provide powerful reasoning mechanisms to adapt the plan execution to the respective situation context [14.129].
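The following sketch conveys only the flavor of such a plan, with Python threads standing in for the constructs of a real concurrent reactive plan language; the events, actions, timings, and probabilities are all invented for the illustration. A transport routine runs concurrently with a grasp monitor and reacts to a slip event by re-grasping.

```python
# Flavor of a concurrent reactive plan: a transport action runs while a
# monitor watches for a slip event and triggers recovery. Python threads
# stand in for the constructs of a real CRP language; everything here is
# an illustrative sketch, not a fragment of any existing plan language.
import threading, time, random

slip_detected = threading.Event()

def monitor_grasp():
    """Failure monitor: signals when the object slips out of the gripper."""
    time.sleep(random.uniform(0.0, 0.5))   # simulated sensor trace
    if random.random() < 0.5:              # the slip may or may not occur
        slip_detected.set()

def transport_mug():
    for waypoint in range(10):             # move in small, interruptible steps
        if slip_detected.is_set():
            print("slip detected: re-grasping, then resuming transport")
            slip_detected.clear()          # recovery restores the grasp
        time.sleep(0.05)                   # simulated motion to next waypoint
    print("mug delivered")

threading.Thread(target=monitor_grasp, daemon=True).start()
transport_mug()
```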

While concurrent reactive plan languages enable robots to accomplish their tasks competently, they are too expressive for automatically generating robust, flexible high-performance plans from first principles. They typically require the use of plan libraries and transformational planning techniques to deal with the computational complexity caused by the expressiveness of the plan languages (Sect. 14.4.4).

4.3 Generating Robot Action Plans

The most common form of reasoning performed on plans is the synthesis of plans for given problems. In general, planning is concerned with reasoning about intended courses of action before executing them in order to avoid failures in achieving given goals, to prevent undesired side effects, or to improve the performance of the robot in accomplishing its goals.

There are many different variants of planning problems, categories of plans that can be generated, and methods that can be applied to planning problems. Comprehensive overviews can be found in textbooks [14.27, 14.4] and overview articles [14.130, 14.131]. A wide range of planning problems is addressed within the international AI planning competition (IPC) [14.132] using standardized problem specification languages [14.109, 14.110]. Several implementations of planning algorithms participating in this competition are available as open-source software.

In state-transition planning, the task of the planning system is to find an ordered set of actions that transforms any state satisfying the initial state description into a state satisfying the given goals.

The first class of planning systems are systems that enumerate the states that can be reached by prefixes (or suffixes) of hypothetical action sequences until a goal state is reached. Planning systems that fall into this category include the STRIPS planning algorithm [14.25] and newer, more efficient algorithms such as the FF (fast-forward) planning algorithm [14.133], as well as various variants of these algorithms.
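A minimal sketch of such a forward state-enumeration planner, applied to the stacking domain of Fig. 14.8, is given below. It is breadth-first and heuristic-free, so it illustrates the search scheme rather than FF's relaxed-plan heuristic; all names and the goal are our own choices.

```python
# Forward state-space search in the spirit of the planners above: breadth-
# first enumeration of reachable states until the goal holds. Illustrative
# stacking domain; a state is a frozenset of bottom-up stacks.
from collections import deque

def actions(state):
    """Ground pick-and-place actions applicable in `state`."""
    for stack in state:
        rest = state - {stack}
        shrunk = rest | ({stack[:-1]} if len(stack) > 1 else set())
        yield ("put-on-table", stack[-1]), frozenset(shrunk | {(stack[-1],)})
        for tgt in rest:
            yield ("stack", stack[-1], tgt[-1]), \
                  frozenset((shrunk - {tgt}) | {tgt + (stack[-1],)})

def plan(initial, goal):
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                  # all goal stacks are present
            return steps
        for act, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [act]))
    return None

initial = frozenset({("plate", "cup"), ("saucer",)})
goal = frozenset({("plate", "saucer", "cup")})
print(plan(initial, goal))
# -> [('put-on-table', 'cup'), ('stack', 'saucer', 'plate'),
#     ('stack', 'cup', 'saucer')]
```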

Another class of algorithms enumerates abstract plans that represent sets of plans. The idea underlying this approach is that by reasoning about an abstract plan, the planning algorithm can compactly reason about different plan instances at the same time through their common causal structure. For example, a partial-order plan represents all possible completions of the plan. POP (partial-order planning) is a family of algorithms that is primarily used for teaching action planning [14.130], while GraphPlan [14.134] is an algorithm designed for planning efficiency [14.131].

The algorithms above are domain-independent: they compute plans by analyzing whether preconditions of plan steps are achieved through the effects of prior actions, using general algorithms. In contrast, domain-dependent planners use domain knowledge to constrain and prune the search space to be explored in a planning process. An example of such domain knowledge is that stacks of objects are built bottom-up. Domain-dependent planners are often much more efficient than their domain-independent counterparts. Examples of domain-dependent planners are the TL (temporal logic) planner [14.135] and the TAL planners [14.19].

Another category of planners designed to deal with more complex planning problems are HTN (hierarchical task network) planners. HTN planners are equipped with libraries of plan schemata that specify how abstract plan steps can be decomposed into more complex subplans. Examples of such planners are the SHOP planners [14.136] and extensions thereof that are combined with description logics and applied to autonomous robot control [14.69].
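A minimal sketch of HTN-style decomposition follows; the task network, methods, and names are invented for illustration, and real HTN planners such as SHOP additionally track world state and action preconditions.

```python
# An HTN-style decomposition sketch: abstract tasks are rewritten via
# methods from a plan library until only primitive actions remain.
# Method and task names are illustrative, not from any cited system.
METHODS = {
    "clear-table": [["carry", "put-away"]],          # one decomposition
    "carry":       [["pick-up", "move-to-cupboard"]],
}
PRIMITIVE = {"pick-up", "move-to-cupboard", "put-away"}

def decompose(tasks):
    """Left-to-right decomposition; returns a primitive action sequence."""
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    if head in PRIMITIVE:
        tail = decompose(rest)
        return None if tail is None else [head] + tail
    for method in METHODS.get(head, []):             # try each method in turn
        result = decompose(method + rest)
        if result is not None:
            return result
    return None                                      # no applicable method

print(decompose(["clear-table"]))
# -> ['pick-up', 'move-to-cupboard', 'put-away']
```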

The planning approaches above generate plans that have similar expressiveness in terms of the behavior they can specify. The planners mainly differ with respect to the size of the planning problems they can handle and their efficiency in solving planning problems.

4.4 Revising and Improving Plans

Another dimension along which robot action planning methods can be scaled is the expressiveness of the plans they generate in terms of specifying competent robot behavior. A subclass of planning methods that is suitable for generating concurrent reactive plans is transformational planning.

The underlying idea of transformational planning is to paste together plans from libraries of plan schemata and revise them for the specified task [14.137] rather than creating completely new plans from scratch. Transformational planning has also been referred to as plan repair or plan debugging [14.138, 14.139] and is often performed within a generate-test-debug control strategy [14.140]. Plan transformation is a promising approach in applications where default plans that have a substantial chance of being successful can be constructed easily, tested by projecting them, analyzed with respect to their expected execution failures, and where failures can be used to index the appropriate plan revisions.

Transformational planning subsumes classical planning methods such as partial-order planning in the sense that the operations of the POP algorithms can be simply expressed as transformations. In addition to the transformations that correspond to classical planning, transformational planning typically employs collections of transformations including domain-specific ones, heuristic transformations, and transformations for plan optimization [14.141]. In cases where the planners employ transformations that cannot be guaranteed to work, they rely on powerful plan projection methods for testing the generated plan hypotheses [14.142].

Transformational planning is a promising approach for autonomous robots performing complex manipulation tasks because it makes it easy for programmers to specify how plans should be modified to avoid certain expected behavior flaws. For example, a transformation could tell the robot to stack items in order to transport them more efficiently subject to additional constraints such as that it should not stack too many items because it might drop them. Or, it might stack one kind of object on another one but not vice versa – typically it is safe to stack cups on plates but not plates on cups.

Transformational planning can also effectively facilitate learning, not only of plans [14.139] but also of planning capabilities – for example, learning for a particular robot and environment what the robot can stack and carry and what it cannot.

4.5 Other Reasoning Methods

There are a number of other reasoning methods that plan-based robot control needs to employ in order to produce competent object manipulation capabilities [14.143]. One of the main reasons is that the tasks and subtasks that a robot is to perform are typically underspecified: the robot might be asked to fetch the screwdriver from the table, but when it gets there, it finds several different screwdrivers. Or, the robot might be asked to crack an egg and has to decide where to crack it, where to put the yolk and egg white, what to do with the shell, etc. When fetching objects, it has to reason about abstract states such as the visibility and reachability of objects. We can only give a couple of examples here.

The first reasoning problem is that of object identity resolution, the inference task of deciding whether two internal representations refer to the same object in the real world. When receiving the command to pick up the red cup from the table, the robot must at some point decide that a data structure returned by its perception system and the language description refer to the same object. Only by making this inference can the robot conclude that it can achieve the given task by picking up a particular object that it sees. In logic-based representations this problem is even harder, because symbols used in the planning problem specification denote objects in the outside world. In this setting, object identity resolution becomes the symbol grounding problem [14.1], i. e., the problem of determining the right or, at least, the most likely object in a given environment for each constant and each definite description in the logical representation. The most powerful inference methods for reasoning about object identity resolution are first-order probabilistic reasoning mechanisms as described in [14.144].

Other important inference methods are needed to model some predicates used in the activity models more realistically. Predicates that deserve such special treatment include visibility and reachability. For example, the Asymov planner [14.111] reasons about action preconditions that include reachability conditions and tests reachability through the existence of motion plans. Other planning systems reason about the next best views that a robot should take in order to perform complex perception tasks. Sophisticated action planners that include action and motion planning employing more general geometric reasoning capabilities can be found in [14.66]. Cognition-enabled plans employ such reasoning methods in order to parameterize actions with respect to the effects that should be achieved and avoided [14.62].

5 Conclusions and Further Reading

The aim of this chapter was, first, to motivate the purpose and benefit of employing general reasoning methods in robot control; second, to sketch some of the generic reasoning methods from AI that are well-understood, available, and most likely of use on a robot; and third, to go into plan-based robot control in some more detail, as it is a particularly obvious form of employing reasoning to the end of improving robot action.

Yet, the chapter has only sketched the tip of an iceberg. Employing symbolic reasoning in an autonomous mobile robot requires having and updating on the robot a symbolic model of the world, or at least of that part of the world which is relevant for the current task – but who knows beforehand exactly what is and what is not relevant for some task? It further requires the ability to interpret the current sensor data stream on a semantic level, transforming it into a symbolic description in terms of categories that are meaningful in the KR. Assuming that it is not practical to equip a robot with all knowledge that is likely to be relevant for each and every situation it may find itself in, it requires some form of learning, generalization, and/or the ability to look up unknown knowledge from available sources. These and further requirements for practically using AI reasoning methods on a robot cannot be fulfilled completely with current state-of-the-art AI methods and technology; as a consequence, currently existing robot prototypes employing AI reasoning methods in their control systems are limited in terms of generality and adaptability.

It is no recent insight that employing AI reasoning methods on robots is both a chance and a challenge. In fact, some of the early influential work in AI was driven by the idea of designing integrated reasoning robots, such as SHAKEY [14.145]; as a consequence, early AI textbooks like [14.146] would naturally include robotics as a target field for using AI reasoning methods. The field of AI has since focused on applications other than robotics. Recent efforts in integrating reasoning systems with autonomous robots have done much to bring the original robot-inspired vision back into focus, as witnessed by a series of workshops on AI-Based Robotics (e. g., IROS 2013) or AI and Robotics (AAAI 2014). It is exactly the family of challenges and chances that this chapter is about that is discussed at these events.

5.1 Further Reading

For a more complete coverage of the generic methods described here, as well as the rest of AI, we refer to AI textbooks, of which the one by Russell and Norvig [14.4] is comprehensive and currently the most widely used. Among the sources for first-hand material, let us mention the two journals Artificial Intelligence and Journal of Artificial Intelligence Research (JAIR) [14.147], which both cover well the topics addressed here. The main international conference in the field is IJCAI (International Joint Conference on Artificial Intelligence); ECAI (European Conference on Artificial Intelligence [14.148]) and AAAI (Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence [14.149]) are other major international symposia on artificial intelligence in general. Regarding planning in particular, ICAPS (International Conference on Automated Planning and Scheduling [14.150]) is the main international conference.

Reference [14.151] reviews roughly the same topic as this chapter, but structures it differently, along the notion of deliberation functions.