
1 Introduction

Intelligent assistive technology (AT) is an umbrella term for artificial intelligence-based machinery that, in general, is able to observe individuals and reason about appropriate, tailored support for them [24]. An AT may have different aims: as an assurance system, a compensation system, or an assessment system [31]. No matter what the assistive goal is, the internal machinery of an intelligent AT should reason about: (1) the client-caregiver interaction; and (2) the context where the AT service is provided. Moreover, the AT system should generate consistent services as outputs of the intelligent system. In contrast to AT provision based on artificial intelligence, public AT service provision is regulated by policies, procedures and approaches, being part of different national or regional health care and welfare systems [39].

In the literature on deductive systems [26] as part of AI, different efforts have been made to provide formal principles for a deductive system (see [2, 7, 12, 16]). In this setting, and inspired by public AT efforts to provide high-quality services, we propose in this paper a novel set of formal principles that AT based on AI should follow to guarantee consistent AT services. We hypothesize that the internal reasoning process of an AT needs to fulfill general principles of consistency and soundness, aiming not to interfere with, contradict, or disregard client and/or caregiver actions.

To this end, this paper has a two-fold goal: (1) to propose a general decision-making mechanism (algorithm) considering information about a client and a caregiver who provides support during activity execution; and (2) to introduce general principles of non-contradiction with which any AT reasoning about observations, goals and actions of individuals must comply. We frame those principles in a client-caregiver-agent interaction in four common AT scenarios, as follows:

  1. S1. Independent activity execution: a client does not need to be assisted during the execution of an activity; an AT is present but takes no action.

  2. S2. Human support: a caregiver assists a client who needs support during activity execution; an AT is present but takes no action.

  3. S3. Agent support: a client is supported by an AT; no caregiver is present during the interaction.

  4. S4. Joint assistance: a client is supported by both a caregiver and an AT.

Formal argumentation theory [3] is used to embed non-monotonic reasoning in an agent, i.e., resembling the kind of assessment reasoning performed by clinicians: (1) gathering data through observations; (2) handling ambiguous and uncertain observational information; (3) generating hypotheses about current functional status; (4) deducing an explanatory outcome; and (5) retracting the explanation under new evidence [18]. In this sense, the proposed argumentation-based algorithm (Algorithm 1) takes different decisions depending on the observations, goals and actions in particular scenarios (S1–4).

Scenarios S1–4 are analyzed from an activity theory perspective [13, 21], investigating these AT contexts as a continuum of support adaptation. We analyze S1–4 as “distances” from what a client can do independently to the activity potential of that client when supported by a caregiver or an AT system. We explore computational versions of the so-called zone of proximal development (ZPD) [38] for each scenario.

We present a basic architecture for an AT system able to identify these assistive scenarios. We implement a prototype of this architecture using projected augmented reality: the AT system supports a client by displaying personalized information. Our AT system captures information about the client and caregiver using 3D cameras; goals and hypothetical actions are embedded in a program built on a multi-agent system platform.

This paper is organized as follows: in Sect. 2, the methods and theories used as the foundation of our proposal are presented. Section 3 introduces a goal-based reflection algorithm as an internal mechanism of an agent. Section 4 introduces a set of general principles that an AT system should fulfill. In Sect. 5, the architecture of an AT system that we developed using projected augmented reality is presented. A discussion and future directions of this investigation are presented in Sect. 6.

2 Theoretical Background

In this section, some concepts of activity theory [22] and formal argumentation theory [3] are introduced. The former is used in this paper as a framework to represent knowledge about an activity; the latter is used to characterize the internal decision-making process of the intelligent assistive system.

2.1 Activity Theory

In this paper, activity theory is used for two purposes: (1) for knowledge representation, structuring information of clients and caregivers following a hierarchical model; and (2) to understand the potential level of activity achievement of a person.

Fig. 1. Hierarchical structure of activity (adapted from [21]). Activities are composed of actions, which are, in turn, composed of operations. These three levels correspond, respectively, to the motive, goals, and conditions, as indicated by bidirectional arrows.

Activity theory describes an activity as a hierarchical structure composed of actions, which are in turn composed of operations, as represented in Fig. 1. Actions are directed at goals; goals are conscious, i.e., a human agent is aware of the goals to attain. Actions can be decomposed into lower-level units of activity called operations. Operations are routine processes providing an adjustment of an action to the ongoing situation; they are oriented toward the conditions under which the agent is trying to attain a goal.

In this paper we use logic programs to capture information about an activity; we denote by P a program and by \(\mathcal {L}_P\) the set of atoms which appear in it. In this regard, an activity model (\(\mathcal {A}\)) corresponds to information characterizing the mental states of an agent framed in a particular activity. \(\mathcal {A}\) can be expressed using propositional logic as its syntactic language.

Definition 1

(Activity model). Let P be a logic program capturing the behavior rules of an activity. An activity model \(\mathcal {A}\) is a tuple of the form \( \langle \textsf {Ax}, \textsf {Go}, \textsf {Op} \rangle \) in which:

  • \( \textsf {Ax} = \{ ax_1, \dots , ax_j\} (j>0)\) is a set of atoms such that \( \textsf {Ax} \subseteq \mathcal {L}_P\). Ax denotes the set of actions in \(\mathcal {A}\).

  • \( \textsf {Go} = \{ g_1, \dots , g_k \} (k>0)\) is a set of atoms such that \( \textsf {Go} \subseteq \mathcal {L}_P\). Go denotes the set of goals in \(\mathcal {A}\).

  • \( \textsf {Op} = \{ o_1, \dots , o_l \} (l>0)\) is a set of atoms such that \( \textsf {Op} \subseteq \mathcal {L}_P\). Op denotes the set of operations in \(\mathcal {A}\).

In our approach, an activity model \(\mathcal {A}\) (Definition 1) may capture information from a client or a caregiver (as in [18]) and/or a software agent-based system (as in [14]). In this paper, we write \(\mathcal {A}_c\), \(\mathcal {A}_g\) and \(\mathcal {A}_a\) for the activity models of a client, a caregiver and an agent, respectively. In terms of activity theory, \(\mathcal {A} = \langle \textsf {Ax}, \textsf {Go}, \textsf {Op} \rangle \) can be seen as a partial description of a complex activity.
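For illustration, the following is a minimal Python sketch of Definition 1; the class design and the atom names (taken from a hypothetical medication-sorting activity) are our assumptions, not part of the formal model:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class ActivityModel:
    """Activity model A = <Ax, Go, Op> (Definition 1)."""
    ax: FrozenSet[str]  # actions
    go: FrozenSet[str]  # goals
    op: FrozenSet[str]  # operations

# Hypothetical client model A_c for a medication-sorting activity.
A_c = ActivityModel(
    ax=frozenset({"open_cabinet", "pick_box", "place_pill"}),
    go=frozenset({"medicines_sorted"}),
    op=frozenset({"reach", "grasp"}),
)
```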

In this paper, activity theory is also used to quantify the potential level of activity achievement, aiming to frame the decision-making process of the intelligent assistive system. Vygotsky [38] proposed to measure the level of development not through the level of current performance, but through the difference (“the distance”) between two performance indicators: (1) an indicator of independent problem solving, and (2) an indicator of problem solving in a situation in which the individual is provided with support from other people [21]. This difference was coined the zone of proximal development (ZPD) and has been used extensively in the social sciences (see [1, 9, 19, 34]) to understand changes in individuals during assisted learning processes.

In order to create a computable version of the concept of the zone of proximal development, we use a function \( \textsf {dist}\) that compares two variables (e.g. observations of an activity) and returns a numerical value \( \alpha \in \mathbb {R} \) representing, in this case, a ZPD difference. For convenience, we rename the scenarios S1–4 described in Sect. 1 as \( ZPD_i\), \( ZPD_h\), \( ZPD_s\) and \( ZPD_{h+s}\), respectively.
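As a minimal sketch, \( \textsf {dist}\) can be instantiated by any similarity measure over sets of atoms; the normalized set difference below is our assumption, since the theory only requires that a numerical \( \alpha \) be returned:

```python
from typing import Set

def dist(observed: Set[str], reference: Set[str]) -> float:
    """ZPD-style distance: the fraction of a reference (e.g. caregiver-given
    goals) not yet reflected in the observations. Returns alpha in [0, 1]."""
    if not reference:
        return 0.0
    return len(reference - observed) / len(reference)

# alpha = 0.5: half of the reference goals are achieved independently.
alpha = dist({"box_opened"}, {"box_opened", "pills_sorted"})
```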

2.2 Formal Argumentation Theory

Generally speaking, a formal argumentation process can be seen as a mechanism consisting of the following steps (see Fig. 2): (1) constructing arguments (in favor of/against a “statement”) from a knowledge base; (2) determining the different conflicts among the arguments; (3) evaluating the acceptability of the different arguments; and (4) concluding, or defining, the justified conclusions. From an artificial intelligence perspective, the important and distinctive characteristics of this process are: (1) its non-monotonic behavior, i.e., the conclusion may change when more knowledge is added, and (2) its traceability, providing explanations at every step of the reasoning process.

Fig. 2. Inference of an argument-based conclusion using a formal argumentation process.

We define the concept of an activity framework, which frames the knowledge that an agent needs to make a decision.

Definition 2

(Activity framework). An activity framework ActF is a tuple of the form \( \langle P, \mathcal {H}_A, \mathcal {G}, \mathcal {O}, \mathcal {A} \rangle \) in which:

  • P is a logic program. \( \mathcal {L}_P\) denotes the set of atoms which appear in P.

  • \(\mathcal {H}_A = \{h_1, \dots , h_i\} \) is a set of atoms such that \(\mathcal {H}_A \subseteq \mathcal {L}_P\). \( \mathcal {H}_A \) denotes the set of hypothetical actions which an agent can perform in a world.

  • \( \mathcal {G} = \{ g_1, \dots , g_j \} \) is a set of atoms such that \( \mathcal {G} \subseteq \mathcal {L}_P\). \( \mathcal {G} \) denotes a set of goals of an agent.

  • \( \mathcal {O} = \{ o_1, \dots , o_k \} \) is a set of atoms such that \( \mathcal {O} \subseteq \mathcal {L}_P\). \( \mathcal {O} \) denotes a set of world observations of an agent.

  • \( \mathcal {A} \) is an activity model of the form: \( \langle \textsf {Ax}, \textsf {Go}, \textsf {Op} \rangle \), following Definition 1.

ActF, according to Definition 2, defines the knowledge space of an assistive agent. In this space, an argument-based process (see Fig. 2) can be performed to obtain sets of explainable support–conclusion structures for the best assistive action to take. These structures can be seen as fragments of an activity [18] (see Fig. 3) and are generated as follows:

Fig. 3. Fragments and sub-fragments of a hierarchical activity.

Definition 3

(Hypothetical fragments). Let \( ActF = \langle P, \mathcal {H}_A, \mathcal {G}, \mathcal {O}, \mathcal {A} \rangle \) be an activity framework. A hypothetical fragment of an activity is of the form \(HF= \langle S, O^{'}, h, \; g \rangle \) such that:

  • \( S \subseteq P,\; O^{'} \subseteq \mathcal {O}, \; h \in \mathcal {H}_A, \; g \in \mathcal {G}\);

  • \( S \cup O^{'} \cup \{h \}\) is consistent;

  • \( g \ne \perp \); and

  • S and \( O^{'}\) are minimal w.r.t. set inclusion.

Let us introduce a function \( \textsf {Supp}(HF) \) which retrieves the set \( \{ S, O^{'}, h\} \) of a given fragment; this set can be seen as the support for concluding a goal g. The next step in the argumentation-based process is to find different types of contradictions among fragments (Definition 3): (1) when two fragments have conclusive evidence about opposed achievement of goals; and (2) when a fragment contradicts the support evidence of another. These two types of relationships among fragments resemble the well-known notions of rebut and undercut in argumentation theory [4, 32].

Definition 4

(Contradictory relationships among fragments). Let \( ActF = \langle P, \mathcal {H}_A, \mathcal {G}, \mathcal {O}, \mathcal {A} \rangle \) be an activity framework and let \( HF_1 = \langle S_1, O^{'}_1, h_1, \; g_1 \rangle \), \( HF_2 = \langle S_2, O^{'}_2, h_2, \; g_2 \rangle \) be two fragments such that \( HF_1, HF_2 \in \mathcal {HF}\). \( HF_1 \) attacks \( HF_2\) if one of the following conditions holds: (1) \( g_2 = \lnot g_1 \); or (2) \( \{g_2\} \cup \textsf {Supp}(HF_1) \vdash \perp \) or \( \{g_1\} \cup \textsf {Supp}(HF_2) \vdash \perp \).
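A sketch of Definitions 3 and 4 in Python; atoms are encoded as strings with negation as a "not_" prefix, and inconsistency is checked at the atom level. Both encodings are simplifying assumptions:

```python
from typing import NamedTuple, FrozenSet

class Fragment(NamedTuple):
    """Hypothetical fragment HF = <S, O', h, g> (Definition 3)."""
    s: FrozenSet[str]  # program rules used (flattened to atoms here)
    o: FrozenSet[str]  # observations used
    h: str             # hypothetical action
    g: str             # concluded goal

def neg(atom: str) -> str:
    return atom[4:] if atom.startswith("not_") else "not_" + atom

def supp(hf: Fragment) -> FrozenSet[str]:
    """Supp(HF) = S union O' union {h}."""
    return hf.s | hf.o | {hf.h}

def attacks(hf1: Fragment, hf2: Fragment) -> bool:
    """Definition 4: rebut on goals, or a goal inconsistent with a support."""
    if hf2.g == neg(hf1.g):  # (1) opposed goal conclusions (rebut)
        return True
    # (2) a goal inconsistent with the other's support (undercut)
    return neg(hf2.g) in supp(hf1) or neg(hf1.g) in supp(hf2)
```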

An argumentation framework is a pair \( \langle Args, att \rangle \) in which Args is a finite set of arguments and \( att \subseteq Args \times Args \). In [17], an argumentation-based activity framework for reasoning about activities was proposed. We reuse this concept in our paper, as follows:

Definition 5

(Activity argumentation framework). Let ActF be an activity framework of the form \( \langle P, \mathcal {H}_A, \mathcal {G}, \mathcal {O}, \mathcal {A} \rangle \); let \(\mathcal {HF}\) be the set of fragments w.r.t. ActF and \( Att_{\mathcal {HF}}\), or simply Att, the set of all the attacks among \( \mathcal {HF}\). An activity argumentation framework AAF with respect to ActF is of the form \( AAF = \langle ActF, \mathcal {HF}, Att \rangle \).

Dung [11] introduced a set of patterns for the selection of arguments called argumentation semantics (SEM). SEM is a formal method to identify conflict outcomes from argumentation frameworks, such as an activity argumentation framework.

Definition 6

Let \( AAF = \langle ActF, \mathcal {HF}, Att \rangle \) be an activity argumentation framework with respect to \( \textit{ActF} = \langle P, \mathcal {H}_A, \mathcal {G}, \mathcal {O}, \mathcal {A} \rangle \). An admissible set of fragments \( S \subseteq \mathcal {HF} \) is: a stable extension if and only if S attacks each argument which does not belong to S; a preferred extension if and only if S is a maximal (w.r.t. inclusion) admissible set of AAF; a complete extension if and only if each argument which is acceptable with respect to S belongs to S; a grounded extension if and only if it is a minimal (w.r.t. inclusion) complete extension; an ideal extension if and only if it is contained in every preferred extension of AAF.

The sets of arguments suggested by SEM are called extensions. We denote by \(SEM(AAF) = \{Ext_1, \dots , Ext_k \}\) the set of k extensions generated by SEM w.r.t. an activity argumentation framework AAF. In this setting, from the perspective of an intelligent agent, what is expected is: (1) no contradictory or conflicting sets of fragments explaining what is happening in the ongoing activity, and (2) sets of fragments defending/supporting a hypothesis about the activity against other fragments. These two notions define two main concepts in Dung’s argumentation semantics: acceptable and admissible arguments.

Definition 7

(1) A fragment \( HF_{A} \in \mathcal {HF}\) is acceptable w.r.t. a set S of fragments iff for each fragment \( HF_{B} \in \mathcal {HF}\): if \( HF_{B} \) attacks \( HF_{A}\), then \( HF_{B} \) is attacked by S. (2) A conflict-free set of fragments S is admissible iff each fragment in S is acceptable w.r.t. S.

Using these notions of fragment admissibility, different argumentation semantics can be drawn from an activity argumentation framework:

Definition 8

Let \( AAF = \langle ActF, \mathcal {HF}, Att \rangle \) be an activity argumentation framework following Definition 5. An admissible set of fragments \(S \subseteq \mathcal {HF}\) is: (1) stable if and only if S attacks each fragment which does not belong to S; (2) preferred if and only if S is a maximal (w.r.t. inclusion) admissible set of AAF; (3) complete if and only if each fragment, which is acceptable with respect to S, belongs to S; and (4) the grounded extension of AAF if and only if S is the minimal (w.r.t. inclusion) complete extension of AAF.
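As an illustration of Definition 8, the grounded extension can be computed by iterating the characteristic function until a fixpoint is reached. A minimal sketch, assuming the attack relation is given as a set of pairs over the Fragment sketch above:

```python
from typing import Iterable, Set, Tuple

def grounded_extension(fragments: Iterable, att: Set[Tuple]) -> Set:
    """Least-fixpoint construction of the grounded extension: repeatedly
    add every fragment all of whose attackers are already attacked by the
    current set (acceptability, Definition 7)."""
    ext: Set = set()
    changed = True
    while changed:
        changed = False
        for f in fragments:
            if f in ext:
                continue
            attackers = {a for (a, b) in att if b == f}
            if all(any((d, a) in att for d in ext) for a in attackers):
                ext.add(f)
                changed = True
    return ext
```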

Conclusions of argument-based reasoning about an activity may be obtained using a skeptical perspective, i.e., accepting only irrefutable conclusions, as follows:

Definition 9

(Justified conclusions). Let P be an extended logic program, \(AF_P = \langle \mathcal {A}rg_P, At(\mathcal {A}rg_P) \rangle \) be the resulting argumentation framework from P, and \(SEM_{Arg}\) be an argumentation semantics. If \(SEM_{Arg}(AF_P) = \{E_1,\dots ,E_n\}\; (n \ge 1)\), then \( \mathsf {Concs}(E_i)= \{\mathsf {Conc}(A) \mid A \in E_i\}\; (1\le i \le n) \) and \( \mathsf {Output}= \bigcap _{i=1\dots n} \mathsf {Concs}(E_i)\).

where \( E_i \) are sets of fragments called extensions. The set of all the extensions generated by \( SEM_{Arg}(AF_P) \) is denoted \( \mathcal {E}\).
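A direct reading of Definition 9 as code: the skeptical output is the intersection of the conclusions across all extensions. The sketch assumes each extension is a set of fragments whose conclusion is the goal field g, as above:

```python
from typing import Iterable, FrozenSet

def justified_conclusions(extensions: Iterable) -> FrozenSet[str]:
    """Output = intersection over Concs(E_i), with Conc(HF) = HF.g."""
    concs = [frozenset(hf.g for hf in ext) for ext in extensions]
    if not concs:  # no extensions: nothing is justified
        return frozenset()
    return frozenset.intersection(*concs)
```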

3 Reflection on Decisions About Human Activity

Reflection, as an internal mechanism of a rational agent to (re)consider the best decision alternative (inferring strategies), has been an important line of research in AI, particularly in practical reasoning (see [20, 37]). In this paper, we do not consider an agent with pro-attitudes as in Bratman’s model [6]; instead, we propose a control-loop algorithm (as in [33, 36]) to design the action selection and its reflection based on an activity model.

Algorithm 1. Goal-based action reflection.

Given a set of hypothetical fragments suggested by an argumentation process, our algorithm selects an agent action that maximizes humans’ goals. This mechanism is summarized in Algorithm 1.

Algorithm 1 prioritizes the activity model of a client over that of an agent and, at the same time, computes a distance between activity variables. In lines 8–15 of Algorithm 1, such a distance is calculated (line 12) over sets of hypothetical fragments. This calculation is based on a similarity function between the current achievement of human goals in the activity model and a reference set of goals (\(\mathsf {Ref}_{\textsf {Go}}\), line 12). The \( \textsf {dist}\) function in line 12 follows the notion of ZPD by measuring, in every computation, the distance between the current development of a person and a reference, which can be given by a caregiver. This approach of comparing current activity execution with a reference has been used in previous work [17, 18].
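Since Algorithm 1 is only described textually here, the following Python sketch reconstructs its control loop from that description: client goals are prioritized, and the ZPD distance of line 12 is computed against \(\mathsf {Ref}_{\textsf {Go}}\). The names and the exact control flow are our assumptions (reusing the dist and Fragment sketches above), not the algorithm’s literal pseudocode:

```python
def goal_based_action_reflection(extensions, client_model, ref_go):
    """Sketch of Algorithm 1: choose the action whose extension brings the
    client's achieved goals closest to the reference Ref_Go (smallest alpha)."""
    best_action, best_alpha = None, float("inf")
    for ext in extensions:
        # Client priority: only goals of the client's activity model count.
        achieved = {hf.g for hf in ext if hf.g in client_model.go}
        alpha = dist(achieved, ref_go)  # ZPD distance (line 12)
        if alpha < best_alpha:
            candidate = {hf.h for hf in ext} - {"do_Nothing"}
            best_action = next(iter(candidate), None)
            best_alpha = alpha
    return best_action, best_alpha
```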

The importance of Algorithm 1 lies in the mechanism for associating a quantification of human activity with the internal action decision of an agent. The algorithm’s output depends entirely on the previously computed extensions. Propositions 1 and 2 present two special cases of agent behavior when Algorithm 1 is used. One is the possibility of a conclusion with no action; the second expresses inconclusive behavior, given that stable semantics may return \( \emptyset \) as output.

Proposition 1

An agent running the goal-based action reflection of Algorithm 1 under a skeptical semantics, grounded or ideal, may reach a conclusive empty decision.

Proposition 2

An agent running the goal-based action reflection of Algorithm 1 under the credulous stable semantics may reach an inconclusive decision.

3.1 Support in Relation to the Zone of Proximal Development Using Formal Argumentation

In this section, based on the common-sense reasoning about activities using argumentation theory, we propose a theory to characterize the following four scenarios in assistive agent-based technology:

I. \(\varvec{ZPD}_{i}\) independent activity execution. This scenario describes an observer agent which takes the decision to do nothing to support a person. More formally, the fragments (Definition 3) generated by the agent are of the form \(HF= \langle S, O^{'}, h^{*}, \; g \rangle \) such that \( h^{*} \in \mathcal {H}_A = \{\emptyset , do\_Nothing\}\). In this setting, all the extensions \( \mathcal {E}\) generated by \( SEM(AF_P) \) during a period of time create an activity structure. In other words, the cumulative effect of generating fragments reconstructs an activity in a bottom-up manner. Moreover, Algorithm 1 returns only values of \( \alpha \), i.e. the current value of a qualifier when the agent does not take any supportive action. This context defines the baseline of a person’s independence in activity execution.

II. \( \varvec{ZPD}_{h}\) activities supported by another person. As in the previous scenario, the role of the software agent is to be an observer. However, the built fragments have the form \(HF= \langle S, O^{*}, h^{*}, \; g \rangle \) such that \( h^{*} \in \mathcal {H}_A = \{\emptyset , do\_Nothing\}\) and \( O^{*} = O^{'} \cup O^{''} \), where \( O^{*}\) is the set of joint observations, from the agent’s perspective, about the supported individual (\( O^{'} \)) and the supporter (\( O^{''}\)). We have that \( O^{'} \subseteq O^{''}\) and \( O^{'},O^{''} \ne \emptyset \). In this scenario, \( O^{''}\) is considered a reference set of observations (\( \textsf {Ref}\), lines 3 and 12 in Algorithm 1). Algorithm 1 returns a value of \( \alpha \) which measures to what extent an individual follows the guidance provided by another person.

When multiple extensions are collected during the period of time in which the individual is supported, a set of activities different from independent activity execution may be re-generated in a bottom-up manner.

III. \( \varvec{ZPD}_{s}\) activities supported by an agent. In this scenario, an assistive agent takes a decision oriented to uphold human interests, priorities and the ability to conduct an activity. This is a straightforward scenario where \( h \in \mathcal {H}_A \ne \{\emptyset , do\_Nothing\}\).

IV. \( \varvec{ZPD}_{h+s}\) caregiver and agent supporting cooperatively. In this scenario, the main challenge from the agent’s perspective is to detect: (1) the actions that an assisting person executes, and (2) observations of both the person assisted and the person who assists. This is similar to \( ZPD_{h}\) but with fragments built from \( \mathcal {H}_A \ne \{\emptyset , do\_Nothing\}\). In this case, the level of \( ZPD_{h+s}\) is given by Algorithm 1 and the set of extensions \( \mathcal {E}\) with aligned goals between the agent and the caregiver.
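Operationally, the four scenarios can be told apart by two pieces of evidence: whether the agent’s hypothetical actions go beyond do_Nothing, and whether a caregiver is observed. A small illustrative dispatch (the boolean encoding is our assumption):

```python
def classify_zpd_scenario(hyp_actions: set, caregiver_observed: bool) -> str:
    """Map an assistive context to ZPD_i / ZPD_h / ZPD_s / ZPD_h+s (I-IV)."""
    agent_acts = bool(hyp_actions - {"do_Nothing"})
    if not agent_acts:
        return "ZPD_h" if caregiver_observed else "ZPD_i"  # II / I
    return "ZPD_h+s" if caregiver_observed else "ZPD_s"    # IV / III
```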

4 Principles for Providing Consistent Assistive Services

In this section, we propose a set of general principles that AT based on deductive systems should follow to guarantee consistency in their outputs.

4.1 Activity-Oriented Principles

Based on the preceding analysis of the different ZPD scenarios, we propose in the following a set of principles that need to be fulfilled to provide consistent assistive services.

Proposition 3

Let \(\mathcal {A}^{*}\) be the set of all possible activity models; let \( R \subseteq \mathcal {A}_j \) be a set of fragments from an activity model; let \( h_j \in \mathcal {H}_a\) and \( g_j \in \mathcal {G}_a \) be an agent’s action and goal, respectively; and let \( E_k \subseteq \mathcal {E} \) be an extension of hypothetical fragments. The following holds:

$$\not \exists \langle R,h,g \rangle \in E_k \text { such that } \langle R,h,g \rangle \notin \mathcal {A}^{*}$$

Proposition 3 establishes that there is no hypothetical fragment that can be built that does not belong to the set of all the activity models. This proposition defines a principle of closure, i.e. that an AT system should not generate outputs (e.g. AT services) that are not contained in the main set of activities.

Proposition 4

Let \(\mathcal {A}^{*}\) be the set of all possible activity models, and let \( \mathcal {A} = \langle \textsf {Ax}, \textsf {Go}, \textsf {Op} \rangle \) be an activity model with \( \mathcal {A} \subseteq \mathcal {A}^{*}\). The following holds: there exists no \(ax \in \textsf {Ax}\), \( g \in \textsf {Go}\) or \( o \in \textsf {Op} \) such that \( ax, g, o \notin \mathcal {A}^{*}\).

Proposition 4 seems straightforward, but it establishes that only elements framed in an activity model can be seen as actions, goals or operations. Outside an activity, those elements, individually, have no influence on the decision-making of an argument-based assistive system. Proposition 4 has a social science background: activity theory defines an activity by its motive, and an activity necessarily builds on the hierarchy of actions and operations; roughly, no action, goal or operation exists outside an activity. These elements of an activity cannot be considered separately or independently [21]. In this sense, Proposition 4 establishes the same principle, defining together with Proposition 3 basic conditions of activity knowledge closure.

Postulate 1

Let \( \mathcal {O}_{\textsf {Go}} \) and \( \mathcal {O}_{\textsf {Ax}} \) be sets of observations about human goals (Go) and actions (Ax) framed in an activity, captured by an agent using an activity model \( \mathcal {A} \). Let \( \mathcal {G} \) and \( \mathcal {H}_A \) be the agent’s goals and its hypothetical actions. In order to provide non-conflicting assistance, two properties have to be fulfilled:

  • PROP1: \( \mathcal {O}_{\textsf {Go}} \cap \mathcal {G} \ne \emptyset \)

  • PROP2: \( \mathcal {O}_{\textsf {Ax}} \cap \mathcal {H}_A \ne \emptyset \).

Postulate 1 can be seen as a self-evident rule that any intelligent assistive system should follow. PROP1 and PROP2 provide coherence between human and agent goals and actions. These two properties may constitute a first attempt to establish consistency principles for agent-based assistance; developing them further is future work in our research.
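Postulate 1 translates directly into two set-intersection checks; a minimal sketch, assuming set representations of the observations and the agent’s model:

```python
def non_conflicting_assistance(obs_goals: set, obs_actions: set,
                               agent_goals: set, agent_actions: set) -> bool:
    """Postulate 1: PROP1 requires overlapping human/agent goals,
    PROP2 requires overlapping human actions and hypothetical actions."""
    prop1 = bool(obs_goals & agent_goals)      # O_Go intersect G   != {}
    prop2 = bool(obs_actions & agent_actions)  # O_Ax intersect H_A != {}
    return prop1 and prop2
```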

Fig. 4. Smart medicine cabinet using argument-based reasoning and an augmented reality projection. (I) Gesture recognition using three Kinect cameras: one for client body capture, another for the assisting person’s gesture recognition, and a last one (Kinect sensor 2) on top of the cabinet to recognize text on medicine boxes; (II) Google API for text recognition; (III) argument-based reasoning; (IV) goal-based action reflection to consider the human side; (V) database containing doses and timing of pill intake.

5 Implementation

The scenario selected for implementing a demonstrator of the formal results describes the situation where an older adult performs the activity of distributing medication into a medicine cabinet. This activity is supported by an intelligent system, and augmented reality technology is used to mediate the information provided by the system (see Fig. 4).

Fig. 5. Overview of the prototype architecture.

The prototype architecture consists of five main parts (see Fig. 5): (1) gesture recognition: obtaining observations of individuals using Kinect cameras; (2) text recognition: using another Kinect camera with the Google text recognition API (https://cloud.google.com/vision); (3) argument-based reasoning: the main agent-based mechanism of common-sense reasoning; (4) goal-based action reflection generating augmented reality feedback: a module to generate support indications as projections in the smart environment; and (5) a database of medicine doses used to obtain appropriate messages.

We use three 3D cameras to capture: (1) observations of an individual who needs help in a physical activity; (2) observations of the smart environment, including a supporting person; and (3) information about hand gestures during medicine manipulation. A central computer was connected to the cameras, processing the information in real time and analyzing gestures of individuals as observations for the agent. The JaCaMo agent platform was used to build the agent. The argumentation process was implemented using an argumentation library previously developed (see [16]). The agent updates/triggers its plan every time a pre-defined gesture is identified by a 3D camera. Those pre-defined gestures were defined and trained based on data from three older adults and two medical experts.
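To summarize how the five parts interact, the following schematic Python loop mocks one reaction cycle of the prototype; the real system uses JaCaMo plans and Kinect callbacks, so every interface below (compute_extensions, project, and so on) is a placeholder we introduce for illustration, not the actual API:

```python
def on_gesture(gesture, framework, ref_go, project):
    """One reaction cycle: a recognized gesture becomes an observation,
    triggers argument-based reasoning, and the chosen action is rendered
    as an augmented-reality projection."""
    framework.observations.add(gesture)             # (1)-(2) perception
    extensions = framework.compute_extensions()     # (3) argumentation (SEM)
    action, _alpha = goal_based_action_reflection(  # (4) reflection (Algorithm 1)
        extensions, framework.client_model, ref_go)
    if action is not None:
        project(action)                             # (5) AR feedback / DB message
```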

6 Discussion and Conclusions

Our main contribution in this paper is a formal understanding of the interplay among an assistive agent-based software, a person to be assisted and a caregiver.

Argumentation-based systems have become influential in artificial intelligence, particularly in multi-agent systems design (see [8] for a systematic review). Argumentation theory can be seen as a process that provides common sense to the decision-making of a deductive system. Common-sense reasoning about an activity implies a non-monotonic process in which the output may change when more knowledge is added. In the context of this paper, the contrary of non-monotonic behavior is, for example, a stubborn system that provides support when an individual does not need it, even under a direct negative response from the user. In this paper we argue that non-monotonic reasoning may be used as the main decision-making mechanism of intelligent assistive systems. In fact, in the ambient-assisted living literature, few authors have explored this approach (see [18, 25, 27, 28, 30]).

We propose an algorithm to integrate client information (the activity model, Definition 1) into the final decision-making process of an agent. This mechanism, captured in Algorithm 1, resembles a process of “reflection”, which in humans is a re-consideration of actions and goals given some parameters. Our reflection mechanism can be seen as an “action-filtering” process with the human in the loop. We also analyze different outputs of Algorithm 1 considering two groups of argumentation semantics (Propositions 1 and 2).

We propose different properties that software agents should follow if their goals are linked to human goals. We highlight the relevance of Postulate 1, which we understand as a primary rule for an intelligent assistive system. The relevance and impact of these properties covers not only agents based on formal argumentation theory but also other approaches, such as those based on the Belief-Desire-Intention model [5].

Our proposed principles are a starting point for evaluating assistive technology systems. This is a first step toward establishing general properties that such systems should follow. We are aware that several principles can be added, and we aim to continue this line of research as future work.

We are also interested in the analysis of activity dynamics, extending our formal results. In activity theory, the hierarchical structure is dynamic; there are transformations among the internal levels of the hierarchy triggered by demands and prerequisites in the environment [23]. We aim to investigate transformations in the activity: for example, when the ZPD “increases”, i.e. a person can achieve more activities with the help of a caregiver or an assistive technology system, the activity hierarchy changes. From a computational point of view, such a change implies a modification at the information structure level, which may define scenarios where consistency cannot be assured. In this sense, part of our future work will focus on analyzing activity dynamics, leveraging the current “static” research on activities, e.g. in [15, 17, 18, 29].