1 Introduction

Patient-reported acceptability of health technology has gained increasing attention in the health care context. E-health systems are becoming a central technology of health care organization [1]. Among E-health systems, Embodied Conversational Agents (ECAs) can improve the quality of human-machine interaction, which is of paramount importance for promoting the use of E-health systems by patients [2, 3]. ECAs combine verbal, facial, and gestural expressions in order to conduct a face-to-face interview. ECAs have a positive effect on the user’s perception of a computer-based interaction task, well known as the “persona effect” [4]. This effect has mainly been evaluated in healthy subjects; the extent to which patients in a health care context find ECAs acceptable, and interaction with ECAs satisfying, remains under-evaluated [5]. Thus, the originality of this study is to evaluate the acceptability of an ECA performing a clinical structured interview in comparison with the same clinical structured interview presented in written form on a tablet screen.

2 Method

2.1 Population

Outpatients were recruited by psychiatrists in the Sleep Clinic of Bordeaux University Hospital from November 2014 to June 2015 in a consecutive sample design. All participants provided written informed consent, and the study was approved by the local ethics committee. The study was classified as a clinical trial by the US National Institutes of Health (ClinicalTrials.gov identifier: NCT02544295, date of registration: September 3, 2015). This project was supported by the grant EQUIPEX PHENOVIRT ANR-10-EQPX-12-01.

2.2 The Computerized Clinical Interviews

A clinical structured interview script was designed as a sequence of questions based on Diagnostic and Statistical Manual of Mental Disorders-5 criteria, with the aim of diagnosing Major Depressive Disorder. The fluency of the questions was optimized through an iterative process. For each question, patients had to respond yes or no. The script of the computerized clinical interview was implemented in two different digital tools: one as an Embodied Conversational Agent (ECA), the other in written form on a tablet screen. The ECA was adapted from previously developed software designed to self-conduct interactive face-to-face clinical interviews [2]. The ECA face-to-face interview can be seen at http://www.sanpsy.univ-bordeauxsegalen.fr/Papers/IVA_Additional_Material.html. The tablet screen was designed to self-administer the same set of questions as the ECA.
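The design above — one fixed question sequence administered identically by two delivery channels — can be sketched as follows. This is a minimal illustration, not the study's actual software; the question texts are hypothetical stand-ins for the DSM-5 items, which are not reproduced here.

```python
# Minimal sketch of a structured yes/no interview script administered
# through an interchangeable delivery channel (ECA speech or tablet screen).
# Question texts are hypothetical; the real script followed DSM-5 criteria.
QUESTIONS = [
    "During the past two weeks, have you felt sad or depressed most of the day?",
    "During the past two weeks, have you lost interest in most activities?",
]

def run_interview(answer_fn):
    """Ask each scripted question in sequence and record a yes/no answer.

    `answer_fn` abstracts the delivery channel; both tools in the study
    administered the identical question sequence.
    """
    responses = {}
    for question in QUESTIONS:
        answer = answer_fn(question)
        if answer not in ("yes", "no"):
            raise ValueError(f"Expected 'yes' or 'no', got {answer!r}")
        responses[question] = answer
    return responses

# Example: a scripted respondent answering "no" to every question.
print(run_interview(lambda q: "no"))
```

Keeping the script independent of the delivery channel is what allows the two tools to differ only in presentation, which is the contrast the study evaluates.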

2.3 Evaluation of the Acceptability

Patients completed the two computerized clinical interviews in randomized order, with a few minutes between them. After the structured interview by the ECA and on the tablet screen, the acceptability of each digital tool was evaluated by the patient with the French version of the Acceptability E-Scale (AES) [5]. The French version of the AES explores two factors that refer to the Technology Acceptance Model (TAM) [5]: items 3, 4, and 6 of the AES evaluate “satisfaction”, and items 1, 2, and 5 evaluate “usability” [5].
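The item-to-factor mapping above translates directly into a scoring routine. The sketch below is illustrative only: the item ratings are hypothetical values, and the rating range of the AES items should be checked against the scale's source [5].

```python
# Hedged sketch of AES sub-score computation from the mapping stated above:
# items 3, 4, 6 -> "satisfaction"; items 1, 2, 5 -> "usability".
# The item ratings in the example are hypothetical, for illustration only.
SATISFACTION_ITEMS = (3, 4, 6)
USABILITY_ITEMS = (1, 2, 5)

def aes_scores(ratings):
    """`ratings` maps AES item number (1-6) to the patient's rating."""
    satisfaction = sum(ratings[i] for i in SATISFACTION_ITEMS)
    usability = sum(ratings[i] for i in USABILITY_ITEMS)
    return {
        "satisfaction": satisfaction,
        "usability": usability,
        "total": satisfaction + usability,
    }

example = {1: 4, 2: 5, 3: 3, 4: 4, 5: 5, 6: 4}  # hypothetical ratings
print(aes_scores(example))  # → {'satisfaction': 11, 'usability': 14, 'total': 25}
```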

3 Data Analysis

The outcome variables for the AES were the Acceptability total score, the Usability sub-score, and the Satisfaction sub-score.

The Acceptability total score was analyzed with a two-way ANOVA with the repeated factor “digital tool” (Embodied Conversational Agent vs. Tablet) and the between-subjects factor “order” (order ECA-Tab vs. order Tab-ECA). The Usability and Satisfaction sub-scores were analyzed with a three-way ANOVA with the repeated factors “Acceptability sub-score” (Usability vs. Satisfaction) and “digital tool” (Embodied Conversational Agent vs. Tablet), and the between-subjects factor “order” (order ECA-Tab vs. order Tab-ECA). The alpha criterion was set at P = .05. Statistica® (StatSoft Inc., 2010) was used.
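Because both factors in the two-way design have exactly two levels, each 1-df effect can be checked with an equivalent t-test on per-patient difference scores (F = t² for 1-df effects); the interaction correspondence is exact, while the pooled one-sample test for the tool main effect is a close approximation in balanced designs. The sketch below uses synthetic data purely to illustrate this equivalence, not the study's actual data or software (the study used Statistica®).

```python
# Illustration of the 2x2 mixed design via equivalent t-tests on
# per-patient difference scores (ECA minus Tablet). Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # hypothetical patients per order group
diff_eca_first = rng.normal(1.0, 2.0, n)  # ECA - Tablet, order ECA-Tab
diff_tab_first = rng.normal(2.5, 2.0, n)  # ECA - Tablet, order Tab-ECA

# "digital tool" x "order" interaction: do the difference scores
# depend on the order group? (independent-samples t-test; F = t**2)
t_inter, p_inter = stats.ttest_ind(diff_eca_first, diff_tab_first)
print(f"tool x order interaction: F = {t_inter**2:.2f}, p = {p_inter:.4f}")

# "digital tool" main effect (balanced groups, approximate): is the
# mean ECA-minus-Tablet difference zero overall?
all_diff = np.concatenate([diff_eca_first, diff_tab_first])
t_tool, p_tool = stats.ttest_1samp(all_diff, 0.0)
print(f"digital tool main effect: F = {t_tool**2:.2f}, p = {p_tool:.4f}")
```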

4 Results

4.1 Population

Out of 209 patients, data from 178 were available for analysis (102 females (57.3%); mean age = 46.5 ± 12.9 years; mean educational level = 13.3 ± 3.1 years).

4.2 Acceptability Total Score

The main effect “digital tool” reaches significance for the Acceptability total score [F(1,176) = 5.228, P < .05]. The main effect “order” does not reach significance [F(1,176) = 0.487, NS]. The factor “digital tool” significantly interacts with the factor “order” [F(1,176) = 13.944, P < .001].

4.3 Usability-Satisfaction Sub-scores

The main effect “digital tool” reaches significance for the Acceptability sub-scores [F(1,176) = 5.228, P < .05]. The main effect “Acceptability sub-score” reaches significance [F(1,176) = 151.50, P < .001]. The main effect “order” does not reach significance [F(1,176) = 0.487, NS]. The factor “digital tool” significantly interacts with the factor “order” [F(1,176) = 13.944, P < .001]. The factor “digital tool” also significantly interacts with the factors “order” and “Acceptability sub-score” [F(1,176) = 8.463, P < .01], with an absence of decrease in the Satisfaction score for the ECA when it is presented after the tablet (Fig. 1). Usability scores remain stable regardless of digital tool and order.

Fig. 1. Usability and Satisfaction sub-scores on the AES for the ECA and the tablet as a function of the order of presentation (Embodied Conversational Agent first: order ECA-Tab; Embodied Conversational Agent second: order Tab-ECA).

5 Discussion

This study shows, in a health care context, that patients who complete the same clinical structured interview script implemented in two different digital tools rate the overall acceptability of the ECA higher than that of the tablet. This higher acceptability reflects higher satisfaction rather than higher usability. However, this result is modulated by the order of presentation: the effect is driven by a decrease in satisfaction when the tablet is completed after the ECA. In other words, the repeated clinical structured interview is perceived as less acceptable when the tablet screen is used to repeat the interview, and patients report higher satisfaction when they repeat the clinical interview with the ECA than with the tablet.

This study confirms that an ECA increases patient-reported acceptability of health technology in a health care context [4, 6]. ECAs could thus be widely used to collect medical information in order to optimize health care organization.