1 Introduction

The vision of pervasive computing is to provide invisible computing environments so that a user can utilize services at any time and anywhere [1]. Context-awareness is a key concept in pervasive computing. In context-aware pervasive computing every user may have several computing devices, where information can be collected using tiny resource-bounded devices such as PDAs, smart phones, and wireless sensor nodes [2]. These systems interact with human users, often exhibit complex adaptive behaviours, and are highly decentralised, and they can naturally be implemented as multi-agent systems. An agent is a piece of software that is reactive and pro-active, and that is capable of autonomous action in its environment to meet its design objectives.

In the literature, various logical frameworks have been developed for the modelling and verification of multi-agent systems [3]. However, such frameworks may not be very suitable for modelling context-aware applications, because most of the existing frameworks use propositional logic as a simple knowledge representation language, which is often not adequate for modelling real-life complex systems. For example, propositional logic cannot directly talk about properties of individuals or relations between individuals. Much research in pervasive computing has focused on incorporating context-awareness features into pervasive applications by adapting semantic web technology (see e.g., [4–6]), where description logic (\( DL \))-based ontology languages are often used for context representation and reasoning. \( DL \) is a decidable fragment of first order logic (\( FOL \)). In [6], it has been shown how context-aware systems can be modelled as resource-bounded rule-based systems using ontologies. In that paper, the resources required by the agents to solve a given problem were time and communication bandwidth, but not the space required for reasoning. Since context-aware systems often run on resource-limited devices, the memory requirement is an important factor for their reasoning. In this paper, we propose a logical framework based on the earlier work of Alechina and colleagues [7–9]; the resulting logic \(\mathcal {L}_\mathcal{{OCRS}}\) allows us to describe a set of ontology-driven rule-based reasoning agents with bounds on time, memory, and communication. In addition to incorporating space (memory) requirements for reasoning into [7], \(\mathcal {L}_\mathcal{{OCRS}}\) also uses first order Horn clause rules derived from OWL 2 RL ontologies. While the frameworks presented in [7, 8] provide a useful basis for experimentation with both the logical representation and verification of heterogeneous agents, it has become clear that a more expressive logical language is required if these frameworks are to be used for real-world context-aware agents. Though the logic developed in [9] is based on \( FOL \), memory bounds have not been imposed in that framework. The proposed framework allows us to determine how much time (measured as rule-firing cycles) is required to generate certain contexts, how many messages must be exchanged among agents, and how much space (memory) an agent requires for reasoning. For verification, we show how an \(\mathcal {L}_\mathcal{{OCRS}}\) model can be encoded using the Maude LTL model checker [10] and how interesting resource-bounded properties can be verified.

The remainder of the paper is organized as follows. In Sect. 2, we discuss how contexts are represented using OWL 2 RL and SWRL. In Sect. 3, we describe our model of communicating context-aware multi-agent systems. In Sect. 4, we develop the logic \(\mathcal {L}_\mathcal{{OCRS}}\). In Sect. 5, we present an example system and experimental results, and we conclude in Sect. 6.

2 Context Modelling

We view context as any information that can be used to identify the status of an entity. An entity can be a person, a place, or a physical or computing object. This context is relevant to a user and an application, and reflects the relationships among them [11]. A context can be formally defined as a (subject, predicate, object) triple that states a fact about the subject, where the subject is an entity in the environment, the object is a value or another entity, and the predicate is a relationship between the subject and the object. According to [11], “if a piece of information can be used to characterize the situation of a participant in an interaction, then that information is context”. For example, we can represent the context “Fiona is the caregiver of Tracy” as (Fiona, isCareGiverOf, Tracy). Here, the caregiver of a patient is dynamically identified based on the care status of the caregiver. For context modelling we use OWL 2 RL, a profile of the new OWL 2 standard based on \(pD^*\) [12] and description logic programs (DLP) [13]. We choose OWL 2 RL because it is more expressive than RDFS and is suitable for the design and development of rule-based systems. An OWL 2 RL ontology can be translated into a set of Horn clause rules based on [13]. Furthermore, we express more complex rule-based concepts using SWRL [14], which allows us to write rules using OWL concepts. In our framework, a context-aware system is composed of a set of rule-based agents, and the firing of rules that infer new facts may determine context changes, representing the overall behaviour of the system.
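To illustrate the DLP-style translation, a subclass axiom and a role-domain axiom (illustrative examples in the spirit of the scenario ontology, not quoted from it) map to Horn clauses as follows: the axiom \(CareGiver \sqsubseteq Person\) yields the rule \(CareGiver(x) \rightarrow Person(x)\), and the domain axiom \(\exists isCareGiverOf.\top \sqsubseteq CareGiver\) yields \(isCareGiverOf(x,y) \rightarrow CareGiver(x)\).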

For illustration, we construct an ontology-based context-aware model for a healthcare epilepsy scenario adapted from [15]. The scenario is based on the monitoring of epileptic patients to detect epileptic seizures. An epileptic alarm may activate several actions, such as warning the patient about potential danger, informing the patient’s caregivers so that they can take appropriate actions, and sending SMS messages to the patient’s relatives who are currently near the patient.

Fig. 1. A fragment of the epileptic patients’ monitoring ontology

“The goal of the epileptic patients’ monitoring context-aware system is to detect the seizures, and to react in the following ways: (i) notify the epileptic patient of an upcoming seizure; and (ii) notify his/her nearby caregivers of an upcoming seizure of the patient by showing a map with the location of the patient. The caregivers who receive the notification for help should be (i) assigned as one of the caregivers of that particular patient; (ii) available for helping; and (iii) physically close to the patient. Upon a notification for help, caregivers may either accept or reject the request for helping the epileptic patient. When a particular caregiver accepts to help, the other caregivers who had received the notification for help are informed that a certain caregiver has already accepted to help that patient” [15]. Using Protégé [16], we build an OWL 2 RL ontology to capture the static behaviour of the system. A fragment of this ontology is depicted in Fig. 1. The dynamic aspect of the system is captured using SWRL rules. A snapshot of some SWRL rules is given in Fig. 2. In order to design a context-aware rule-based system from the above ontology, we extract Horn clause rules using the technique described in [6]. We show an example system encoding in Maude in Sect. 5 based on the logic developed in Sect. 4.

Fig. 2. Example SWRL rules
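A rule of the kind shown in Fig. 2 has, for instance, the following general form (the predicate names here are illustrative rather than quoted from the actual rule set): \(Patient(?p) \wedge hasCareGiver(?p,?c) \wedge isAvailable(?c, true) \wedge isNearBy(?c,?p) \rightarrow notifyForHelp(?c,?p)\), i.e., an available caregiver of a patient who is physically close to that patient is notified for help.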

3 Context-Aware Agents

In our model a multi-agent context-aware system consists of \(n_{Ag}\, (\ge 1)\) individual agents \(A_g=\{1,2,\ldots ,n_{Ag}\}\). Each agent \(i\in A_g\) has a program, consisting of Horn clause rules of the form \(P_1, P_2,\ldots ,P_n \rightarrow ~P\) (derived from OWL 2 RL and SWRL), and a working memory, which contains ground atomic facts (contexts) taken from the ABox representing the initial state of the system. In a rule, the antecedents \(P_1, P_2,\ldots ,P_n\) and the consequent \(P\) are context information. The antecedents of the rule form a complex context, which is a conjunction of \(n\) contexts. In a resource-bounded system, it is quite unrealistic to presume that a single agent can acquire and understand all available contextual information and infer new contexts alone. Thus sharing knowledge among agents is an efficient way to build context-aware systems. In our model, agents share a common ontology and communication mechanism. To model communication between agents, we assume that agents have two special communication primitives \( Ask (i,j,P)\) and \( Tell (i,j,P)\) in their language, where \(i\) and \(j\) are agents and \(P\) is an atomic context not containing an \( Ask \) or a \( Tell \). \( Ask (i, j, P)\) means ‘\(i\) asks \(j\) whether the context \(P\) is the case’ and \( Tell (i, j, P)\) means ‘\(i\) tells \(j\) that context \(P\)’ (\(i \not = j\)). The positions in which the \( Ask \) and \( Tell \) primitives may appear in a rule depend on which agent’s program the rule belongs to. Agent \(i\) may have an \( Ask \) or a \( Tell \) with arguments \((i, j, P)\) in the consequent of a rule, e.g., \(P_1, P_2,\ldots ,P_n \rightarrow ~ Ask (i,j,P)\), whereas agent \(j\) may have an \( Ask \) or a \( Tell \) with arguments \((i, j, P)\) in the antecedent of a rule; e.g., \( Tell (i,j,P) \rightarrow P\) is a well-formed rule (we call it a trust rule) for agent \(j\) that causes it to believe \(i\) when \(i\) informs it that context \(P\) is the case. No other occurrences of \( Ask \) or \( Tell \) are allowed. When a rule has either an \( Ask \) or a \( Tell \) as its consequent, we call it a communication rule. All other rules are known as deduction rules; these include rules with \( Ask \)s and \( Tell \)s in the antecedent as well as rules containing neither an \( Ask \) nor a \( Tell \). Note that OWL 2 is limited to unary and binary predicates and is function-free. Therefore, in the Protégé editor all the arguments of \( Ask \) and \( Tell \) are represented using constant symbols, and these annotated symbols are translated appropriately when designing the target system using the Maude specification.
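For instance, in the system of Sect. 5 the Patient agent \((1)\) may have a communication rule of the form \(EpilepticAlarm(Tracy) \rightarrow Tell(1,2,HasHazardousActivity(Tracy))\), while the Planner agent \((2)\) has the matching trust rule \(Tell(1,2,HasHazardousActivity(Tracy)) \rightarrow HasHazardousActivity(Tracy)\), so that the Planner comes to believe the context once the Patient agent tells it (these two rules are illustrative; the actual rule set is shown in Table 1).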

4 Logic \(\mathcal {L}_\mathcal{{OCRS}}\)

A \( DL \) knowledge base (\( KB \)) has two components: the Terminology Box (\( TBox \)) \(\mathcal {T}\) and the Assertion Box (\( ABox \)) \(\mathcal {A}\). The \( TBox \) introduces the terminology of a domain, while the \( ABox \) contains assertions about individuals in terms of this vocabulary. The \( TBox \) is a finite set of general concept inclusions (\( GCI \)s) and role inclusions. A \( GCI \) is of the form \(C \sqsubseteq D\), where \(C\), \(D\) are \( DL \)-concepts, and a role inclusion is of the form \(R \sqsubseteq S\), where \(R\), \(S\) are \( DL \)-roles. We may use \(C \equiv D\) (concept equivalence) as an abbreviation for the two \( GCI \)s \(C \sqsubseteq D\) and \(D \sqsubseteq C\), and \(R \equiv S\) (role equivalence) as an abbreviation for \(R \sqsubseteq S\) and \(S \sqsubseteq R\). The \( ABox \) is a finite set of concept assertions of the form \(C(a)\) and role assertions of the form \(R(a,b)\).
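In the epilepsy scenario, for instance, the \( TBox \) may contain the \( GCI \) \(CareGiver \sqsubseteq Person\), while the \( ABox \) may contain the concept assertion \(CareGiver(Fiona)\) and the role assertion \(isCareGiverOf(Fiona,Tracy)\) (an illustrative fragment rather than an excerpt from the actual ontology).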

Definition 1 (Interpretation of DL-knowledge bases)

An interpretation of a DL knowledge base is a pair \(\mathcal {I} = \langle \varDelta ^\mathcal {I}, \cdot ^\mathcal {I}\rangle \), where \(\varDelta ^\mathcal {I}\) is a non-empty set (the domain of interpretation) and \(\cdot ^\mathcal {I}\) is a function that maps every concept to a subset of \(\varDelta ^\mathcal {I}\), every role to a subset of \(\varDelta ^\mathcal {I} \times \varDelta ^\mathcal {I}\), and each individual name to an element of the domain \(\varDelta ^\mathcal {I}\).

An interpretation \(\mathcal {I}\) satisfies the concept assertion \(C(a)\), denoted by \(\mathcal {I}\vDash C(a)\), iff \(a^\mathcal {I} \in C^\mathcal {I}\) and it satisfies the role assertion \(R(a, b)\), denoted by \(\mathcal {I} \vDash R(a, b)\), iff \((a^\mathcal {I}, b^\mathcal {I}) \in R^\mathcal {I}\), where \(a\) and \(b\) are individuals.

We now introduce the logic \(\mathcal {L}_\mathcal{{OCRS}}\), which is an extension of the logic developed in [7]. Let us define the internal language of each agent in the system. Let the set of agents be \(A_g = \{1,2,\ldots ,n_{Ag}\}\), \(\mathcal {C}=\{C_1,C_2,\ldots C_n\}\) be a finite set of concepts, \(\mathcal {R}=\{R_1, R_2,\ldots , R_n\}\) be a finite set of roles, and \(\mathcal {A}\) be a finite set of assertions. We also define a set \(\mathcal {Q}=\{Ask(i,j, P),\) \(Tell(i,j,P)\}\), where \(i,j\in A_g\) and \(P\in \mathcal {C}\cup \mathcal {R}\). Note that \(\mathcal {C}\) and \(\mathcal {R}\) are the sets of concepts and roles that appear in \(\mathcal {A}\). Let \(\mathfrak {R}=\{r_1,r_2,\ldots ,r_n\}\) be a finite set of rules of the form \(P_1, P_2,\ldots ,P_n \rightarrow P\), where \(n\ge 0\), \(P_i, P\in \mathcal {C}\cup \mathcal {R}\cup \mathcal {Q}\) for all \(i\in \{1,2,\ldots ,n\}\) and \(P_i\ne P_j\) for all \(i\ne j\). For convenience, we use the notation \(ant(r)\) for the set of antecedents of \(r\) and \(cons(r)\) for the consequent of \(r\), where \(r\in \mathfrak {R}\). Let \(g : \wp (\mathcal {A}) \rightarrow \mathfrak {R}\) be a substitution function that uses a forward-chaining strategy to instantiate the rule-base. We denote by \(\mathcal {G}(\mathfrak {R})\) the set of all ground instances of the rules occurring in \(\mathfrak {R}\), which is obtained using \(g\); thus \(\mathcal {G}(\mathfrak {R})\) is finite. Let \(\bar{r}\in \mathcal {G}(\mathfrak {R})\) be one of the possible instances of a rule \(r\in \mathfrak {R}\). Note that \(C(a)\), \(R(a,b)\), \(Ask(i,j, C(a))\), \(Ask(i,j, R(a,b))\), \(Tell(i,j, C(a))\), and \(Tell(i,j, R(a,b))\) are ground facts, for all \(C\in \mathcal {C}\) and \(R\in \mathcal {R}\). The internal language \(\mathcal {L}\) includes all the ground facts and rules; we denote the set of all formulas by \(\varOmega \), which is finite. In the modal language of \(\mathcal {L}\) we have a belief operator \(B_i\) for all \(i\in A_g\). We assume that there is a bound on communication for each agent \(i\) which limits agent \(i\) to at most \(n_C(i)\in \mathbb {Z}^*\) messages. Each agent has a communication counter, \(cp^{=n}_i\), which starts at \(0~ (cp^{=0}_i)\) and is not allowed to exceed the value \(n_C(i)\). We divide an agent’s memory into two parts: rule memory (the knowledge base) and working memory. The rule memory holds the set of rules, whereas facts are stored in the agent’s working memory. Working memory is divided into static memory (\(S_M(i)\)) and dynamic memory (\(D_M(i)\)). The \(D_M(i)\) of each agent \(i\in A_g\) is bounded in size by \(n_M(i)\in \mathbb {Z}^*\), where one unit of memory corresponds to the ability to store an arbitrary formula. The static part contains the initial information needed to start up the system, e.g., the initial working memory facts, so its size is determined by the number of initial facts. The dynamic part contains newly derived facts as the system evolves. Only formulas stored in \(D_M(i)\) may get overwritten if it is full. Note that, unless otherwise stated, in the rest of the paper we shall assume that memory means \(D_M(i)\). For convenience, we define the following sets: \(CP_i =\{cp^{=n}_i\ |\ n\in \{0,\ldots ,n_C(i)\}\}\), \(CP = \bigcup _{i \in A_g}CP_i\).
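As an illustration of the grounding, if \(\mathfrak {R}\) contains the (illustrative) rule \(r : isCareGiverOf(x,y) \rightarrow CareGiver(x)\) and \(\mathcal {A}\) contains \(isCareGiverOf(Fiona,Tracy)\), then \(g\) yields the ground instance \(\bar{r} : isCareGiverOf(Fiona,Tracy) \rightarrow CareGiver(Fiona)\) in \(\mathcal {G}(\mathfrak {R})\).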

The syntax of \(\mathcal {L}_\mathcal{{OCRS}}\) includes the temporal operators of \(CTL^*\) and is defined inductively as follows:

  • \(\top \) (tautology) and \(start\) (a propositional variable which is only true at the initial moment of time) are well-formed formulas (wff) of \(\mathcal {L}_\mathcal{{OCRS}}\);

  • \(cp^{=n}_i\) (which states that the value of agent \(i\)’s communication counter is \(n\)) is a wff of \(\mathcal {L}_\mathcal{{OCRS}}\) for all \(n\in \{0,\ldots ,n_C(i)\}\) and \(i \in A_g\);

  • \(B_i C(a)\) (agent \(i\) believes \(C(a)\)), \(B_i R(a,b)\) (agent \(i\) believes \(R(a,b)\)), and \(B_i r\) (agent \(i\) believes \(r\)) are wffs of \(\mathcal {L}_\mathcal{{OCRS}}\) for any \(C\in \mathcal {C}, R\in \mathcal {R}\), \(r \in \mathfrak {R}\) and \(i \in A_g\);

  • \(B_k Ask(i,j,C(a))\), \(B_k Ask(i,j,R(a,b))\), \(B_k Tell(i,j,C(a))\), and \(B_k Tell(i,j,R(a,b))\) are wffs of \(\mathcal {L}_\mathcal{{OCRS}}\) for any \(C\in \mathcal {C}, R\in \mathcal {R}\), \(i,j \in A_g\), \(k\in \{i,j\}\), and \(i\ne j\);

  • If \(\varphi \) and \(\psi \) are wffs of \(\mathcal {L}_\mathcal{{OCRS}}\), then so are \(\lnot \varphi \) and \(\varphi \wedge \psi \);

  • If \(\varphi \) and \(\psi \) are wffs of \(\mathcal {L}_\mathcal{{OCRS}}\), then so are \(X\varphi \) (in the next state \(\varphi \)), \(\varphi \, U \psi \) (\(\varphi \) holds until \(\psi \)), \(A\varphi \) (on all paths \(\varphi \)).

Other classical abbreviations for \(\bot \), \(\vee \), \(\rightarrow \) and \(\leftrightarrow \), and the temporal operators \(F\varphi \equiv \top U \varphi \) (at some point in the future \(\varphi \)), \(G\varphi \equiv \lnot F \lnot \varphi \) (at all points in the future \(\varphi \)), and \(E\varphi \equiv \lnot A \lnot \varphi \) (on some path \(\varphi \)) are defined as usual.
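As an illustration, a resource-bounded property in the style of those verified in Sect. 5 can be expressed as the wff \(start \rightarrow AF(B_2 HasHazardousActivity(Tracy) \wedge cp^{=1}_2)\): on every path, agent \(2\) eventually believes \(HasHazardousActivity(Tracy)\) in a state where its communication counter stands at \(1\) (the concept name here is illustrative).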

The semantics of \(\mathcal {L}_\mathcal{{OCRS}}\) is defined by \(\mathcal {L}_\mathcal{{OCRS}}\) transition systems, which are based on \(\omega \)-tree structures. Let \((S,T)\) be a pair where \(S\) is a set and \(T\) is a binary relation on \(S\) that is total, i.e., \(\forall s\in S\cdot \exists s^\prime \in S\cdot sTs^\prime \). A branch of \((S,T)\) is an \(\omega \)-sequence \((s_0,s_1,\ldots )\) such that \(s_0\) is the root and \(s_iTs_{i+1}\) for all \(i\ge 0\). We denote by \(B(S,T)\) the set of all branches of \((S,T)\). For a branch \(\pi \in B(S,T)\), \(\pi _i\) denotes the element \(s_i\) of \(\pi \) and \(\pi _{\le i}\) is the prefix \((s_0,s_1,\ldots ,s_i)\) of \(\pi \). An \(\mathcal {L}_\mathcal{{OCRS}}\) transition system \(\mathbb {M}\) is defined as \(\mathbb {M} = (S, T, V)\) where

  • \((S,T)\) is an \(\omega \)-tree frame

  • \(V : S \times A_g \rightarrow \wp (\varOmega \cup CP)\); we define the belief part of the assignment \(V^B(s,i) = V(s,i) \setminus CP\) and the communication counter part \(V^C(s,i) = V(s,i) \cap CP\). We further define \(V^M(s,i) = \{\alpha | \alpha \in D_M(i)\}\). \(V\) satisfies the following conditions:

    1. \(|V^C(s,i)|=1\) for all \(s \in S\) and \(i \in A_g\).

    2. If \((s,t) \in T\) and \(cp^{=n}_i \in V(s,i)\) and \(cp^{=m}_i \in V(t,i)\), then \(n\le m\).

  • We say that a rule \(r:P_1, P_2,\ldots ,P_n \rightarrow P\) is applicable in a state \(s\) of an agent \(i\) if \(ant(\bar{r})\subseteq V(s,i)\) and \(cons(\bar{r})\notin V(s,i)\). The following conditions on the assignments \(V(s,i)\), for all \(i\in A_g\), and on the transition relation \(T\) hold in all models:

    1. For all \(i\in A_g\), \(s,s^\prime \in S\), and \(r\in \mathfrak {R}\), \(r\in V(s,i)\) iff \(r\in V(s^\prime , i)\). This expresses that an agent’s program does not change.

    2. For all \(s,s^\prime \in S\), \(sTs^\prime \) holds iff for all \(i\in A_g\), \(V(s^\prime ,i) = V(s,i)\cup \{cons(\bar{r})\}\cup \{ Ask (j,i,C(a))\}\cup \{ Tell (j,i,C(a))\} \cup \{ Ask (j,i,R(a,b))\}\cup \{ Tell (j,i,R(a,b))\}\). This expresses that each agent \(i\) fires a single applicable rule instance of a rule \(r\), or updates its state by interacting with other agents; otherwise its state does not change.

The truth of an \(\mathcal {L}_\mathcal{{OCRS}}\) formula at a point \(n\) of a path \(\pi \in B(S,T)\) is defined inductively as follows:

  • \(\mathbb {M},\pi ,n \vDash \top \),

  • \(\mathbb {M},\pi ,n \vDash start\) iff \(n=0\),

  • \(\mathbb {M},\pi ,n \vDash B_i \alpha \) iff \(\alpha \in V(\pi _n,i)\),

  • \(\mathbb {M},\pi ,n \vDash cp^{=m}_i\) iff \(cp^{=m}_i \in V(\pi _n,i)\),

  • \(\mathbb {M},\pi ,n \vDash \lnot \varphi \) iff \(\mathbb {M},\pi ,n \nvDash \varphi \),

  • \(\mathbb {M},\pi ,n \vDash \varphi \wedge \psi \) iff \(\mathbb {M},\pi ,n \vDash \varphi \) and \(\mathbb {M},\pi ,n \vDash \psi \),

  • \(\mathbb {M},\pi ,n \vDash X\varphi \) iff \(\mathbb {M},\pi ,n+1 \vDash \varphi \),

  • \(\mathbb {M},\pi ,n \vDash \varphi U \psi \) iff \(\exists m \ge n\) such that \(\forall k\in [n,m)\) \(\mathbb {M},\pi ,k \vDash \varphi \) and \(\mathbb {M},\pi ,m \vDash \psi \),

  • \(\mathbb {M},\pi ,n \vDash A\varphi \) iff \(\forall \pi '\in B(S,T)\) such that \(\pi '_{\le n}=\pi _{\le n}\), \(\mathbb {M},\pi ',n \vDash \varphi \).

We now describe conditions on the models. The transition relation \(T\) corresponds to the agents executing actions \(\langle act_1,act_2,\ldots ,act_{n_{Ag}}\rangle \), where \(act_i\) is a possible action of an agent \(i\) in a given state \(s\). The set of actions that each agent \(i\) can perform is: \(Rule_{i,\bar{r},\beta }\) (agent \(i\) firing a rule instance \(\bar{r}\), adding \(cons(\bar{r})\) to its working memory and removing \(\beta \)), \(Copy_{i,\alpha ,\beta }\) (agent \(i\) copying \(\alpha \) from another agent’s memory and removing \(\beta \), where \(\alpha \) is of the form \( Ask (j,i,P)\) or \( Tell (j,i,P)\)), and \(Idle_i\) (agent \(i\) does nothing but moves to the next state). Intuitively, \(\beta \) is an arbitrary fact which gets overwritten if it is in the agent’s dynamic memory \(D_M(i)\). If the agent’s memory is full, i.e., \(|V^M(s, i)| = n_M(i)\), then we require that \(\beta \) be in \(V^M(s, i)\). Not all actions are possible in a given state; for example, there may not be any matching rule instances. When the counter value reaches \(n_C(i)\), \(i\) cannot perform any further \(Copy\) actions. Let us denote the set of all possible actions of agent \(i\) in a given state \(s\) by \(T_i(s)\); its definition is given below:

Definition 2 (Available actions)

For every state \(s\) and agent \(i\),

  1. \(Rule_{i,r,\beta }\in T_i(s)\) iff \(r\in V(s, i)\), \(ant(\bar{r})\subseteq V(s, i)\), \(cons(\bar{r})\notin V(s, i)\), \(\beta \in \varOmega \), or if \(|V^M(s, i)| = n_{M}(i)\) then \(\beta \in V^M(s, i)\);

  2. \(Copy_{i,\alpha ,\beta }\in T_i(s)\) iff there exists \(j \not = i\) such that \(\alpha \in V(s,j)\), \(\alpha \notin V(s,i)\), \(cp^{=n}_i \in V(s,i)\) for some \(n < n_C(i)\), \(\alpha \) is of the form \( Ask (j,i,P)\) or \( Tell (j,i,P)\), and \(\beta \) is as before;

  3. \(Idle_i\) is always in \(T_i(s)\).

Definition 3 (Effect of actions)

For each \(i \in A_g\), the result of performing an action \(act_i\) in state \(s\) is defined if \(act_i \in T_i(s)\) and has the following effect on the assignment of formulas to \(i\) in the successor state \(s^\prime \):

  1. if \(act_i\) is \(Rule_{i,r,\beta }\): \(V(s^\prime ,i) = V(s, i)\setminus \{\beta \} \cup \{cons(\bar{r})\}\);

  2. if \(act_i\) is \(Copy_{i,\alpha ,\beta }\) and \(cp^{=n}_i \in V(s,i)\) for some \(n< n_C(i)\): \(V(s^\prime ,i)=V(s,i) \setminus \{\beta ,cp^{=n}_i\}\cup \{ \alpha , cp^{=n+1}_i \}\);

  3. if \(act_i\) is \(Idle_i\): \(V(s^\prime ,i)=V(s,i)\).
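To make these definitions concrete, the following minimal Maude sketch shows how the \(Rule\) and \(Copy\) actions, the memory bound, and the communication counter can be realised as rewrite rules, in the spirit of the encoding of Sect. 5. All operator names (ant1, hazard, tell12) are illustrative placeholders rather than parts of the actual encoding; \(Idle\) steps are left implicit, and the overwriting of \(\beta \) is only modelled for the \(Rule\) action.

mod LOCRS-ACTIONS is
  protecting NAT .
  sorts Fact FactSet Agent Config .
  subsort Fact < FactSet .
  subsort Agent < Config .
  op mt : -> FactSet [ctor] .
  op _;_ : FactSet FactSet -> FactSet [ctor assoc comm id: mt] .
  op none : -> Config [ctor] .
  op __ : Config Config -> Config [ctor assoc comm id: none] .
  *** agent: [ id | dynamic working memory | copy counter | comm. bound n_C | memory bound n_M ]
  op [_|_|_|_|_] : Nat FactSet Nat Nat Nat -> Agent [ctor] .
  ops ant1 hazard tell12 : -> Fact [ctor] . *** tell12 stands for a fact Tell(1,2,P)
  op _in_ : Fact FactSet -> Bool .
  op size : FactSet -> Nat .
  vars F B : Fact . vars WM WM' : FactSet .
  vars I N N' NC NC' NM NM' : Nat .
  eq F in (F ; WM) = true .
  eq F in WM = false [owise] .
  eq size(mt) = 0 .
  eq size(F ; WM) = s size(WM) .
  *** Rule action: fire the ground rule "ant1 -> hazard" while memory is not full
  crl [fire] : [ I | WM | N | NC | NM ] => [ I | hazard ; WM | N | NC | NM ]
     if ant1 in WM /\ not (hazard in WM) /\ size(WM) < NM .
  *** Rule action when memory is full: some stored fact B gets overwritten
  crl [fire-ow] : [ I | B ; WM | N | NC | NM ] => [ I | hazard ; WM | N | NC | NM ]
     if ant1 in (B ; WM) /\ not (hazard in (B ; WM)) /\ size(B ; WM) == NM .
  *** Copy action: agent 2 copies tell12 from agent 1, and its counter increases
  crl [copy] : [ 1 | WM | N | NC | NM ] [ 2 | WM' | N' | NC' | NM' ]
            => [ 1 | WM | N | NC | NM ] [ 2 | tell12 ; WM' | s N' | NC' | NM' ]
     if tell12 in WM /\ not (tell12 in WM') /\ N' < NC' /\ size(WM') < NM' .
endm

Starting from, e.g., the configuration [ 1 | ant1 ; tell12 | 0 | 1 | 3 ] [ 2 | mt | 0 | 1 | 3 ], Maude’s search command can explore all reachable states, which is essentially the state space that the LTL model checker of Sect. 5 traverses.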

Now, the definition of the set of models corresponding to a system of rule-based reasoners is given below:

Definition 4

\(\mathbb {M}(n_M,n_C)\) is the set of models \((S,T,V)\) that satisfy the following conditions:

  1. \(cp^{=0}_i \in V(s_0,i)\), where \(s_0\in S\) is the root of \((S,T)\), \(\forall i \in A_g\);

  2. \(\forall s, s^\prime \in S\), \(sTs^\prime \) iff for some tuple of actions \(\langle act_1,act_2,\ldots ,act_{n_{Ag}}\rangle \), \(act_i \in T_i(s)\) and the assignment in \(s^\prime \) satisfies the effects of \(act_i\), \(\forall i\in A_g\);

  3. \(\forall s\in S\) and tuple of actions \(\langle act_1,act_2,\ldots ,act_{n_{Ag}}\rangle \), if \(act_i \in T_i(s)\), \(\forall i\in A_g\), then \(\exists s^\prime \in S\) s.t. \(sTs^\prime \) and \(s^\prime \) satisfies the effects of \(act_i\), \(\forall i\in A_g\);

  4. The bound on each agent’s memory is set by the following constraint on the mapping \(V\): \(|V^M(s,i)| \le n_M(i)\), \(\forall s \in S\), \(i \in A_g\).

Note that the bound \(n_C(i)\) on each agent \(i\)’s communication ability (no branch contains more than \(n_C(i)\) \(Copy\) actions by agent \(i\)) follows from the fact that \(Copy_i\) is only enabled if \(i\) has performed fewer than \(n_C(i)\) copy actions in the past. Below are some abbreviations which will be used in the axiomatization:

  • \(ByRule_i(P,n)\) \(= \lnot B_i P \wedge cp^{=n}_i\) \( \wedge \bigvee _{r\in \mathfrak {R}\wedge cons(\bar{r})=P} ( B_i r \wedge \bigwedge _{Q \in ant(\bar{r})} B_i Q )\). This formula describes the state before the agent comes to believe formula \(P\) by the \(Rule\) transition; \(n\) is the value of \(i\)’s communication counter, and \(P\) and \(Q\) are ground atomic formulas.

  • \(ByCopy_i(\alpha ,n) = \lnot B_i \alpha \wedge B_j \alpha \wedge cp^{=n-1}_i\), where \(\alpha \) is of the form \( Ask (j,i,P)\) or \( Tell (j,i, P)\), \(i,j\in A_g\) and \(i\ne j\).
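For instance, if agent \(i\)’s program consists of a single rule \(r\) with a ground instance \(\bar{r} : P_1, P_2 \rightarrow P\), then \(ByRule_i(P,n)\) unfolds to \(\lnot B_i P \wedge cp^{=n}_i \wedge B_i r \wedge B_i P_1 \wedge B_i P_2\).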

Now we introduce the axiomatization system.

  A1. All axioms and inference rules of \(CTL^{*}\) [17].

  A2. \(\bigwedge \limits _{\alpha \in D_M(i)} B_i \alpha \rightarrow \lnot B_i \beta \) for all \(D_M(i)\subseteq \varOmega \) such that \(|D_M(i)|=n_M(i)\) and \(\beta \notin D_M(i)\). This axiom says that, in a given state, each agent can store at most \(n_M(i)\) formulas in its memory.

  A3. \(\bigvee \limits _{n=0,\ldots ,n_C(i)} cp^{=n}_i\).

  A4. \(cp^{=n}_i \rightarrow \lnot cp^{=m}_i\) for any \(m\not =n\).

  A5. \(B_i r \wedge \bigwedge _{P \in ant(\bar{r})} B_i P \wedge cp^{=n}_i \wedge \lnot B_i cons(\bar{r})\rightarrow EX (B_i cons(\bar{r}) \wedge cp^{=n}_i)\), \(i \in A_g\). This axiom says that if a rule matches, its consequent belongs to some successor state.

  A6. \( cp^{=n}_i \wedge \lnot B_i \alpha \wedge B_j \alpha \rightarrow EX( B_i \alpha \wedge cp^{=n+1}_i)\), where \(\alpha \) is of the form \( Ask (j,i,P)\) or \( Tell (j,i, P)\), \(i,j \in A_g\), \(j\not =i\), \(n < n_C(i)\). This axiom describes transitions made by \(Copy\), with the communication counter increased.

  A7. \(EX ( B_i \alpha \wedge B_i \beta ) \rightarrow B_i \alpha \vee B_i \beta \), where \(\alpha \) and \(\beta \) are not of the form \( Ask (j,i,P)\) or \( Tell (j,i,P)\). This axiom says that at most one new belief is added in the next state.

  A8. \(EX ( B_i \alpha \wedge cp^{=n}_i) \rightarrow B_i \alpha \vee ByRule_i(\alpha ,n)\vee ByCopy_i(\alpha ,n)\) for any \(\alpha \in \varOmega \). This axiom says that a new belief can only be added by one of the valid reasoning actions.

  A9a. \(start \rightarrow cp^{=0}_i\) for all \(i \in A_g\). At the start state, the agent has not performed any \(Copy\) actions.

  A9b. \(\lnot EX\ start\). \(start\) holds only at the root of the tree.

  A10. \(B_i r\), where \(r\in \mathfrak {R}\) and \(i\in A_g\). This axiom says that agent \(i\) believes its rules.

  A11. \(\lnot B_i r\), where \(r\notin \mathfrak {R}\) and \(i\in A_g\). This axiom says that agent \(i\) only believes its rules.

  A12. \(\varphi \rightarrow EX \varphi \), where \(\varphi \) does not contain \(start\). This axiom describes an \(Idle\) transition by all the agents.

  A13. \(\bigwedge _{i \in A_g} EX (\bigwedge _{\alpha \in \varGamma _i} B_i \alpha \wedge cp^{=n_i}_i) \rightarrow EX \bigwedge _{i \in A_g} (\bigwedge _{\alpha \in \varGamma _i} B_i \alpha \wedge cp^{=n_i}_i)\) for any \(\varGamma _i \subseteq \varOmega \). This axiom says that if each agent \(i\) can separately reach a state where it believes the formulas in \(\varGamma _i\), then all agents together can reach a state where, for each \(i\), agent \(i\) believes the formulas in \(\varGamma _i\).

Let us now define the logic obtained from the above axiomatisation system.

Definition 5

\(\mathbb {L}(n_M,n_C)\) is the logic defined by the axiomatisation A1–A13.

Theorem 1

\(\mathbb {L}(n_M,n_C)\) is sound and complete with respect to \(\mathbb {M}(n_M,n_C)\).

Sketch of Proof. The proof of soundness is standard. The proofs for the axioms and rules included in A1 are given in [17]. Axiom A2 ensures that, in a given state, each agent can store at most \(n_M(i)\) formulas in its memory. Axioms A3 and A4 force the presence of a unique counter for each agent recording the number of \(Copy\) actions it has performed so far; in particular, A3 makes sure that at least one counter is available for any agent and A4 guarantees that only one of them is present. In the following, we provide the proof for A5. The proofs for the other axioms are similar.

Let \(\mathbb {M} = (S, T, V) \in \mathbb {M}(n_M,n_C)\), \(\pi \in B(S, T)\) and \(n\ge 0\). We assume that \(\mathbb {M},\pi , n\vDash B_i r \wedge \bigwedge _{P \in ant(\bar{r})} B_i P \wedge cp^{=m}_i \wedge \lnot B_i cons(\bar{r})\), for some \(r\in \mathfrak {R}\), and \(|V^M(\pi _n,i)|\le n_M(i)\). Then \(P\in V(\pi _n, i)\) for all \(P\in ant(\bar{r})\), and \(cons(\bar{r})\notin V(\pi _n,i)\). This means that the action \(Rule_{i,r,\beta }\) can be performed by \(i\). According to the definition of \(\mathbb {M}(n_M, n_C)\), \(\exists s^\prime \in S\cdot \pi _n T s^\prime \) and \(V(s^\prime ,i) = V(\pi _n, i)\setminus \{\beta \} \cup \{cons(\bar{r})\}\). Let \(\pi ^\prime \) be a branch in \(B(S,T)\) such that \(\pi _{\le n}^\prime = \pi _{\le n}\) and \(\pi _{n+1}^\prime = s^\prime \). Then we have \(\mathbb {M},\pi ^\prime ,n+1 \vDash B_i cons(\bar{r}) \wedge cp^{=m}_i\). Therefore, \(\mathbb {M},\pi ,n \vDash EX(B_i cons(\bar{r}) \wedge cp^{=m}_{i})\).

Completeness can be shown by constructing a tree model for a consistent formula \(\varphi \). This is constructed as in the completeness proof introduced in [17]. Then we use the axioms to show that this model is in \(\mathbb {M}(n_M,n_C)\). Due to space limitations we omit the proof of this result. \(\Box \)

5 Maude Encoding

We build a multi-agent rule-based system whose rules are derived from the ontology of the healthcare epilepsy scenario described in Sect. 2. The system consists of four agents: Patient \((1)\), Planner \((2)\), CareGiver \((3)\), and HealthProfessional \((4)\). The set of rules and the initial working memory facts distributed to the agents are shown in Table 1. For the specification and verification of the system we use the Maude LTL model checker. The choice of LTL is not essential; it is straightforward to encode an \(\mathcal {L}_\mathcal{{OCRS}}\) model for a standard model checker. We use LTL because it is the logic supported by the Maude system used in our case study. We chose the Maude LTL model checker because it can model check systems whose states involve arbitrary algebraic data types, the only assumption being that the set of states reachable from a given initial state is finite. Rule variables can be represented directly in the Maude encoding, without having to generate all ground instances resulting from possible variable substitutions.
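As an example of this direct treatment of rule variables, a Horn clause rule such as \(isCareGiverOf(x,y) \rightarrow CareGiver(x)\) can be written as a conditional rewrite rule whose variables are instantiated by matching against the working memory. The following sketch extends the illustrative module of Sect. 4 (the operator and constant names are again placeholders, not the actual encoding):

mod EPILEPSY-RULES is
  including LOCRS-ACTIONS .
  sort Oid .
  ops Fiona Tracy : -> Oid [ctor] .
  op isCareGiverOf : Oid Oid -> Fact [ctor] .
  op careGiver : Oid -> Fact [ctor] .
  vars X Y : Oid . var WM : FactSet . vars N NC NM : Nat .
  *** X and Y are bound by matching, so no up-front grounding of the rule base is needed;
  *** the size check enforces the memory bound n_M of the CareGiver agent (3)
  crl [cg] : [ 3 | isCareGiverOf(X, Y) ; WM | N | NC | NM ]
          => [ 3 | careGiver(X) ; isCareGiverOf(X, Y) ; WM | N | NC | NM ]
     if not (careGiver(X) in WM) /\ size(isCareGiverOf(X, Y) ; WM) < NM .
endm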

Table 1. Horn-Clause rules for the epileptic patients’ monitoring context-aware system

Due to space limitations we omit the encoding here; however, it is similar to [6], apart from the implementation of the agents’ memory bounds. We verified a number of interesting resource-bounded properties of the system, including the following:

(LTL formula of the first property)

The first property specifies that whenever there is an epileptic alarm for Tracy, agent \(1\) notifies agent \(2\) within \(n\) time steps that “Tracy” has hazardous activity and is located in “DownTown”, while exchanging \(m\) messages, the space requirement for agent \(1\) being at least \(l\) units.

(LTL formula of the second property)

The second property specifies that whenever agent \(2\) is notified that “Tracy” has hazardous activity and is located in “DownTown”, it comes to believe within \(n\) time steps that caregiver Fiona accepts the request, while exchanging \(m\) messages, the space requirement for agent \(2\) being at least \(l\) units.

The first property is verified as true when the values of \(n\), \(m\), and \(l\) are \(3\), \(1\), and \(3\), and the second when they are \(9\), \(3\), and \(2\). However, the properties are verified as false, and the model checker returns counterexamples, when smaller values are assigned to \(n\), \(m\), and \(l\).

6 Conclusions and Future Work

In this paper, we presented a formal logical framework for modelling and verifying context-aware multi-agent systems in which agents reason using ontology-driven first order Horn clause rules. We considered the space requirements of reasoning in addition to time and communication resources. We modelled an ontology-based context-aware system to show how an \(\mathcal {L}_\mathcal{{OCRS}}\) model can be encoded using the Maude LTL model checker and how its resource-bounded properties can be formally verified. In future work, we would like to develop a framework that will allow us to design context-aware systems automatically from a given scenario described in natural language. This requires extracting a specification from the scenario to build the corresponding ontology for the desired system.