Abstract
When designing an artificial tutor, the question arises: should we opt for a virtual or a physically embodied conversational agent? With this work we contribute to the ongoing debate of whether, when and how virtual agents or robots provide more benefits to the user, and report an experimental study on linguistic alignment processes in HCI in the context of second language acquisition. In our study (n = 130 non-native speakers) we explored the influence of design characteristics, investigating the effects of embodiment (virtual agent vs. robot vs. speech-based interaction) and system voice (text-to-speech vs. pre-recorded speech) on participants’ perception of the system, their motivation, their lexical and syntactical alignment during interaction, and their learning effect after the interaction. The variation of system characteristics had no influence on the evaluation of the system or participants’ alignment behavior.
Keywords
- Linguistic alignment
- Second language acquisition
- Virtual agent
- Robot
- Speech output
- Nonverbal behavior
- Embodiment
- Experimental study
1 Introduction
When designing an artificial tutor, numerous decisions have to be made regarding system characteristics. One of the most influential decisions pertains to the question of whether to employ and develop a virtual or physically embodied conversational agent. Virtual agents are comparably cheap and more flexible, but robots are seen to provide an even richer interactive experience, because they can manipulate their environment and actually get in physical contact with users [1]. New approaches opt for migrating both types of artificial entities into one entity represented differently (e.g. as robot or as screen agent) depending on where the user is located [2]. However, it is unclear whether there is a preference for one embodiment form over the other depending on the specific task the user intends to complete with the help of the system. Results of previous research are somewhat inconclusive (cf. [3] for an extensive review). While quite a number of studies showed that a robot is more persuasive, receives more attention and is perceived more positively than a virtual agent, there is a general lack of studies examining different behavioral outcomes, particularly with regard to linguistic behavior. One important future application field for virtual agents and robots is tutoring systems. Artificial tutors could be especially helpful in assisting with second language acquisition, because they could help overcome inhibition effects which can occur in human-human interaction due to native speakers linguistically aligning “downward” to non-natives and simplifying their language use. Because people tend to align more strongly to computers than to humans [4], this mechanism might lead to enhanced learning outcomes when computers exhibit a high standard in the language to learn, so that learners are prompted to align upward. Thus, we hypothesize that processes of linguistic alignment in HCI can be exploited to help second language acquisition.
However, system characteristics such as embodiment might influence alignment processes. Moreover, alignment processes supposedly depend on the learners’ comprehension of the speech output of the artificial tutor. Hence, it is important to investigate whether current text-to-speech (tts) software is of sufficient quality not to inhibit alignment and hence also learning processes. Although using prerecorded natural speech would eliminate the comprehension deficits of tts systems, it makes it more effortful to add study units later on. Thus, we explore whether linguistic alignment can be used in the context of second language acquisition to support learning, and whether and, if so, how system design characteristics such as embodiment and quality of speech output influence evaluation and learning outcome.
1.1 Effects of Differently Embodied Artificial Entities
Virtual agent or robot? This is an essential design decision developers have to make when designing new embodied conversational agents. Both embodiment types provide unique interaction possibilities, but also come with certain restrictions. In a sense, virtual agents are more flexible than robots in that we can easily change a virtual agent’s appearance. Hence the appearance can be matched to users’ preferences, to the needs of special target groups or to the corporate design of the developing company. Virtual agents can appear on different devices including smartphones. Moreover, they have unlimited degrees of freedom and can perform actions that are not possible in real life. In contrast, robots have limited degrees of freedom, and their design cannot be changed easily. Most of the available products are quite stationary and thus can only be used at home or at work. A big advantage of robots is that they are “tangible artifacts” [1], which can be touched, and are able to manipulate their environment by means of physical contact with objects as well as with human interaction partners. Studies comparing robots and virtual agents have led to inconsistent results. A majority of findings suggests that robots are superior to virtual representations with regard to the perceived social presence of the entity [5], the evaluation of the entity as entertaining or enjoyable [1, 6], and trustworthy [6]. Furthermore, robots have been demonstrated to be more persuasive [7], elicit more attention [8], and increase users’ task performance [9]. On the contrary, other results point to the superiority of a virtual representation when the outcome variable is information disclosed to the entity [7]. In fact, there seems to be an interaction effect of embodiment and task. Regarding the evaluation of task attractiveness, Hoffmann and Krämer [10] demonstrated that a robot was better evaluated in a task-related scenario, while a virtual representation was favored for a conversation.
Finally, a study by Bartneck [9] yielded no differences between a robot and a screen animation with regard to how entertaining the interaction was evaluated to be. Also, Kennedy et al. [8] observed no difference in children’s learning increase when interacting with a real NAO robot or an animation of the robot on a screen. Only one study investigated participants’ linguistic behavior in the context of differently embodied agents [11]. It was found that verbosity and complexity of linguistic utterances did not differ between a virtual agent and a robot, but participants used more interactional features of language towards the robot, such as directly addressing it by its name. The interplay of embodiment and linguistic alignment has not been investigated so far.
1.2 Linguistic Alignment in HHI, HCI and in Context of Second Language Acquisition
Empirical evidence in human-human interaction (HHI) research showed that interaction partners align linguistically in conversations on different levels, for instance, regarding accent or dialect [12], speech rhythm [13], lexical choices and semantics [14] as well as syntax [15]. Quite a number of these effects also occur in interactions with artificial entities. Similarly to HHI, users align to computers, for instance, with regard to prosody, lexis, and syntax (for an overview cf. [16]). Studies with virtual agents showed the same tendencies: in the interaction with a virtual tutor users aligned to lay language or medical jargon [17] and to dialect or standard language [18]. However, studies suggest that alignment in HHI and HCI is similar but not the same, since people tend to show stronger alignment with computers [4], presumably to compensate for the computers’ weaker communicative abilities. However, although initial beliefs about their artificial interaction partner are taken into account by human users, recent comparative work showed that “when social cues and presence as created by a virtual human come into play, automatic social reactions appear to override initial beliefs in shaping lexical alignment” [19]. An open question is whether the physical embodiment of the artificial interlocutor strengthens this effect of blurring boundaries or not. This would be especially important to know when designing artificial language tutors for SLA. Native speakers often adapt to non-natives in order to foster mutual understanding and successful communication, sometimes with the negative outcome of interfering with successful SLA on a native-speaker level. Using artificial tutors could help to overcome this bias. Since users align more strongly to computers in order to ensure communicative success, there is a potential to exploit these alignment processes for SLA. A first study with native and non-native speakers showed that both groups aligned lexically to a virtual tutor.
However, alignment was weaker for non-natives [20] due to a substantial lack of fluency. For instance, if people are not able to conjugate a verb or have trouble pronouncing words correctly, they tend to choose easier vocabulary [21]. Hence, participants might not be able to reproduce all linguistic nuances. This might also be due to the speech output quality of the agent, since tts systems do not exhibit perfect pronunciation. Still, alignment is seen as core to language acquisition and thus also to SLA [22], and the tendency of non-natives to align to technology in a learning setting could be exploited for SLA. Admittedly, system characteristics have to be taken into account and their potential inhibiting effects need to be explored – especially in the case of speech output quality.
1.3 Research Questions and Hypotheses
In this work we explore the potential of artificial tutors to avoid inhibition effects and exploit linguistic alignment processes in HCI for SLA. In particular, we examine whether an artificial tutor’s embodiment (virtual agent vs. robot vs. speech-based interaction) influences participants’ evaluation of the tutor, their lexical and syntactical alignment during interaction and their learning effect after the interaction (RQ1). Since previous work showed that robots can elicit more positive evaluations than virtual agents [5–9], we propose that the robot will be rated most positively, followed by the virtual agent and the solely language-based tutor (H1). In contrast to classic language learning software like DVDs or online platforms, most virtual agents and robots do not use prerecorded natural speech, but tts software, which could affect listening comprehension and thereby alignment. Thus, quality of speech output is also varied in our study (RQ2). Moreover, we want to know whether alignment in dialog results in better performance in a post-interaction language test (RQ3).
2 Method
2.1 Experimental Design and Independent Variables
In order to determine which system characteristics people prefer in their interaction with an artificial tutor, we chose a 3 × 2 between-subjects design with speech output and embodiment as independent variables. We used three types of embodiment of the artificial tutor. Participants either interacted exclusively language-based (and saw only a blue screen with the text “language learning system”), or they interacted with a virtual version of the Nao robot or the physically present Nao robot. Secondly, we varied the artificial tutor’s speech output. Participants were either confronted with speech output generated by tts software or with prerecorded natural speech (ns). Since Nao’s tts system is installed on the physical Nao itself and thus is not available for the virtual Nao, we generated wave files by recording the tts speech output. Natural speech was recorded after generating the tts sound files. The speaker was instructed to speak similarly, i.e. to imitate intonation and speed (sound examples can be found in the supplementary material). In order to avoid different perceptions of presence due to sound quality (rather than type of speech output), we also used the sound files for the participants interacting with the physical Nao.
2.2 Participants and Procedure
One hundred and thirty volunteers (74 female, 56 male) aged between 15 and 53 years (M = 26.6; SD = 6.87) participated in this study. Seventeen participants had previously interacted with a robot and 26 with a virtual agent. Participants came from more than 40 different countries, spoke more than 25 different native languages and exhibited different levels of German language skills (with a minimum of an intermediate level). Participants were recruited on campus or in German classes at the local adult education center. The study was approved by the local ethics committee. Upon arrival, participants read and signed an informed consent form. They completed two language tests: a test on grammar and reading and listening comprehension, and a so-called C-Test (www.c-test.de), a cloze test which also assesses language skills with regard to different dimensions. Based on their test results and their country of origin and first language, respectively, participants were distributed equally across conditions where possible and were invited for a second appointment. At the second appointment participants were instructed about the different tasks to be solved with the artificial tutor. Each task was again explained by the tutor during the interaction (cf. Fig. 1). Participants were also given a folder with detailed instructions in case they did not understand the tutor. Participants completed five tasks: (1) introducing themselves, (2) describing a picture in detail, (3) playing a guessing game, (4) playing a search game, and (5) again describing a picture. The order of tasks was the same for all participants. The first two tasks were used to make participants comfortable with speaking aloud to the system. The two structured games (guessing game and search game) were used to analyze alignment processes. We repeated the task of describing a picture to give participants another opportunity to speak quite freely at the end of the learning session.
This was done to create a more believable training environment for the participants. After the interaction, participants completed a second C-Test as a measure of learning outcome and a questionnaire asking for their experiences and assessment of the interaction. Finally, they were debriefed, reimbursed (€10) and thanked for participation.
2.3 Dependent Variables: Self-report
Perception of the Artificial Tutor.
For the person perception of the artificial tutor, we used the Godspeed Questionnaire [23], a semantic differential with 25 bi-polar items which are rated on a 5-point scale. We used the four subscales Anthropomorphism (attribution of a human form, characteristics, or behavior to nonhuman things; 5 items, e.g. fake-natural, machinelike-humanlike; Cronbach’s α = .889), Animacy (perception of lifelikeness; 5 items, e.g. dead-alive; stagnant-lively; Cronbach’s α = .880), Liking (5 items, e.g. dislike-like, unfriendly-friendly; Cronbach’s α = .844), and Perceived Intelligence (5 items, e.g. incompetent-competent; Cronbach’s α = .789).
Social Presence.
We assessed participants’ sense of co-presence with the Nowak and Biocca Presence Scale [24], which contains 12 items on the concept of “perceived other’s co-presence” (Cronbach’s α = .716) and 6 items on “self-reported co-presence” (Cronbach’s α = .716), both rated on a 5-point Likert scale.
General Evaluation of the Interaction.
The general evaluation of the interaction was assessed by eight items asking for the participants’ sense of control during the interaction, their enjoyment of the interaction, and whether they would like to use a system like this for other tasks (rated on a 5-point Likert scale; Cronbach’s α = .793).
Speech Output.
Additionally, we asked participants, on a 5-point Likert scale from “very mechanical” to “very humanlike”, how humanlike they perceived the speech output to be.
2.4 Dependent Variables: Linguistic Alignment
In order to analyze linguistic alignment with the artificial tutor, participants played two structured games (guessing game and search game) in which the tutor and the participant took turns in constructing sentences.
Guessing Game.
The first structured game was a dialog-based game adapted from Branigan et al. [15]. In the original game participants took turns in describing a card and trying to find this card in a set of cards. The description was originally one sentence. In our adaptation of the game participants took turns in guessing the two persons and their interaction on so-called interaction cards (cf. Fig. 1) by asking only yes-or-no questions, similarly to the “Who am I” guessing game (e.g. “Is the person on the left side female?”; “Is the person on the right side old?”, “Is the interaction between the two friendly?”). Questions are asked in a structured manner: first guess who is on the left, then who is on the right and lastly, find out the interaction between the two. By this we created more opportunities to vary lexical and syntactic choices within one round of the guessing game. There were two rounds of guessing in which the system first guessed the participant’s card and then the participant guessed the system’s card. Between the two rounds, the system changed its linguistic choices (e.g. lexical choices (mustache vs. beard); usage of different prepositions, verbs, adjectives or active and passive sentences). Participants’ verbal utterances were analyzed with regard to their lexical choices. A ratio was calculated for alignment (usage of the same lexical choice (e.g. mustache)/occurrences of the concept (e.g. number of linguistic expressions referring to a beard)).
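As a rough sketch, the lexical alignment ratio just described could be computed as follows; the function name and the example utterances are illustrative assumptions, not materials or transcripts from the study.

```python
# Sketch: uses of the system's primed term divided by all references
# to the same underlying concept (hypothetical helper, not study code).
def lexical_alignment_ratio(utterances, primed_term, concept_terms):
    """Ratio of aligned word choices to all mentions of the concept."""
    mentions = sum(u.split().count(t) for u in utterances for t in concept_terms)
    aligned = sum(u.split().count(primed_term) for u in utterances)
    return aligned / mentions if mentions else 0.0

# Example: the system primed "mustache"; "beard" refers to the same concept.
utterances = [
    "does the person on the left have a mustache",
    "does the person on the right have a beard",
    "is the mustache grey",
]
ratio = lexical_alignment_ratio(utterances, "mustache", ["mustache", "beard"])
# two aligned uses out of three concept mentions -> ratio = 2/3
```

One such ratio would be computed per varied linguistic feature, yielding the seven alignment ratios analyzed in Sect. 3.2.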
Search Game.
The second structured game was also a dialog-based game in which the participant and the tutor took turns in describing picture cards to one another. For this game we used the original experimental setup of Branigan et al. [15] in order to study syntactic alignment. The cards displayed two characters (e.g. policeman and cowboy), a verb (e.g. to give), and an object (e.g. balloon, cf. Fig. 2). Participants had two sets of cards (reading cards and search cards). The task was to take a card from the first card set (reading cards) and to form a sentence based on the characters, verb and object displayed on the card (e.g. the balloon was given to the policeman by the cowboy). The interaction partner’s task was to search in their set of “search cards” for this exact card. This means that the system’s search cards were identical to the participant’s reading cards and vice versa. The system began the interaction and built a sentence. The participant had to find the card and put it away, and in turn had to take a card from the “reading” set and form a sentence so that the tutor could find the card in its (imagined) search card pool and put it away. In total, the system read out 15 cards, thereby forming 15 sentences in three “blocks”. The first block, i.e. the first five sentences, was formed in passive voice, the second five sentences used a prepositional phrase and the last five the accusative. A ratio was calculated for syntactical alignment (usage of the same construction/5 sentences). Since previous research showed that alignment can occur with a delay [25], we also examined whether participants aligned to previously heard syntactic choices, i.e. whether in the second block they aligned to the first block and in the third block to the second block, respectively.
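Assuming each of the participant’s 15 sentences has been hand-coded as one of the three target constructions, the per-block ratio and the delayed-alignment check could be sketched like this; the coding labels and the example sequence are illustrative assumptions.

```python
# Sketch of the per-block syntactic alignment ratio (hypothetical coding).
SYSTEM_BLOCKS = ["passive", "prepositional", "accusative"]  # 5 sentences each

def block_alignment_ratios(participant_forms, system_blocks=SYSTEM_BLOCKS, lag=0):
    """Share of each block's five sentences matching the system's construction.
    lag=1 compares a block to the construction the system used one block
    earlier, capturing the delayed alignment described above."""
    ratios = []
    for b in range(len(system_blocks)):
        if b - lag < 0:
            continue  # no earlier block to align to
        target = system_blocks[b - lag]
        block = participant_forms[b * 5:(b + 1) * 5]
        ratios.append(sum(form == target for form in block) / 5)
    return ratios

# Hypothetical coded sequence of one participant's 15 sentences:
forms = (["passive"] * 3 + ["accusative"] * 2      # block 1
         + ["prepositional"] * 4 + ["accusative"]  # block 2
         + ["accusative"] * 5)                     # block 3
ratios = block_alignment_ratios(forms)  # [0.6, 0.8, 1.0]
```

With `lag=1` the same data yield the delayed ratios for blocks two and three only, since the first block has no predecessor.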
2.5 Dependent Variables: Learning Outcome
As described in Sect. 2.2, participants exhibited different levels of German language skills, which were assessed by two language tests: first a standard test on grammar and reading and listening comprehension, and second a so-called C-Test (www.c-test.de), a cloze test which also assesses language skills with regard to different dimensions. The C-Test was completed again after the interaction with the tutoring system. To explore whether the interaction had a positive effect on participants’ language skills, we analyzed the results of the C-Tests before and after the interaction. The C-Test has been used previously for assessing language skills and also improvement in language skills [26]. It usually comprises five short pieces of self-contained text (ca. 80 words or four to five sentences) in which single words are “damaged”. The first sentence is undamaged. Beginning with the second word in the second sentence, there are 20 damaged words alternating with undamaged words. In order to reconstruct the sentences, participants have to activate their language fluency. Text pieces were taken from reading exams at an academic language level. Tests are scored with correct-incorrect answers. Participants could reach 100 points at most.
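The scoring scheme described above (five texts, 20 damaged words each, one point per correct restoration, 100 points maximum) can be sketched as follows; the data structures and example responses are illustrative assumptions, not the actual test material.

```python
# Sketch of C-Test scoring: one point for each correctly restored word
# across five texts with 20 damaged words each (max 5 * 20 = 100 points).
def score_c_test(responses, solutions):
    """Count exact matches between restored words and the solution key."""
    points = 0
    for resp_words, sol_words in zip(responses, solutions):
        points += sum(r == s for r, s in zip(resp_words, sol_words))
    return points

# Hypothetical participant: all words correct in four texts,
# half correct in the fifth.
solutions = [["wort%d" % i for i in range(20)] for _ in range(5)]
responses = [list(text) for text in solutions[:4]] + [solutions[4][:10] + ["?"] * 10]
score = score_c_test(responses, solutions)  # 4 * 20 + 10 = 90 points
```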
3 Results
Data were analyzed with ANOVAs and correlation analyses using IBM SPSS Statistics 21. Additionally, we estimated Bayes factors using Bayesian Information Criteria [17], comparing the fit of the data under the null hypothesis and the alternative hypothesis using R and the package BayesFactor by Richard D. Morey.
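The BIC-based Bayes factor comparison just described rests on the approximation BF01 ≈ exp((BIC_alternative − BIC_null)/2) (Wagenmakers, 2007). A minimal sketch, with made-up BIC values rather than the study’s data or the BayesFactor package’s actual implementation:

```python
import math

# BIC-based approximation of the Bayes factor in favor of the null model:
# BF01 ~ exp((BIC_alternative - BIC_null) / 2)
def bf_null_over_alternative(bic_null, bic_alternative):
    """How much more likely the data are under the null model than
    under the alternative, given the two models' BIC values."""
    return math.exp((bic_alternative - bic_null) / 2.0)

# Illustrative numbers: if the alternative model's BIC is 4 points higher
# than the null's, the data are about 7.4 times more likely under the null.
bf = bf_null_over_alternative(bic_null=1000.0, bic_alternative=1004.0)
```

A BF01 in this range would be read, as in the Results below, as the data being several times more likely under a model without the experimental factors.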
3.1 Participants’ Self-reported Experiences
First, the speech output conditions did not differ regarding how humanlike the voice was perceived (ns: M = 2.80, SD = .70; tts: M = 2.72, SD = .75). In order to examine whether embodiment or speech output affects the evaluation of the tutor or the interaction, we conducted ANOVAs with both these factors as independent variables and the dependent variables general evaluation, perceived other’s co-presence, self-reported co-presence, likability, perceived intelligence, anthropomorphism, and animacy. There were no significant differences between the groups, nor did we find significant interaction effects (no support for H1, cf. Table 1). An estimated Bayes factor (null/alternative) suggested that the data were between 2.3 and 13.1 times more likely to occur under a model without an effect of embodiment or speech output than under a model with these factors.
3.2 Participants’ Linguistic Alignment
Guessing Game.
With the guessing game we examined participants’ lexical and syntactical alignment during the interaction. Therefore, the system’s utterances between the two rounds varied in lexical choices when describing the features of the displayed characters (age (old vs. advanced in years), gender (male/female vs. a man/a woman), facial hair (mustache vs. beard)). Moreover, the system used different verbs (has vs. wears), adjectives (friendly vs. kind) and syntactical constructions (person on the left side vs. the left person; active vs. passive). As described above, a ratio was calculated for alignment (usage of the same lexical/syntactical choice (e.g. lexical choice mustache)/occurrences of the concept (e.g. number of expressions referring to a beard)). To examine whether embodiment or speech output affects participants’ linguistic alignment, we conducted ANOVAs with both factors as independent variables and the seven ratios for linguistic alignment as dependent variables. There were no significant differences between the groups, nor did we find significant interaction effects (cf. Table 2). An estimated Bayes factor (null/alternative) suggested that the data were between 1.7 and 9.4 times more likely to occur under a model without an effect of embodiment or speech output than under a model with these factors.
Search Game.
The search game focused on syntactical alignment. Across all 15 sentences, participants most often used the accusative (M = 6.61, SD = 5.08), followed by prepositional phrases (M = 3.49, SD = 3.91) and passive voice (M = 3.01, SD = 3.73). In order to examine whether embodiment or speech output affects participants’ syntactical alignment, we conducted ANOVAs with both these factors as independent variables and the alignment ratio as dependent variable. There were no significant differences between groups, nor did we find significant interaction effects. Since studies have shown that alignment can occur with a delay [25], we also analyzed whether participants aligned to the previous blocks. Again, there were no significant differences between the experimental groups with regard to embodiment and speech output, nor interaction effects (cf. Table 3). An estimated Bayes factor (null/alternative) suggested that the data were between 2.4 and 11.7 times more likely to occur under a model without an effect of embodiment or speech output than under a model with these factors.
3.3 Language Skills and Learning Effect
To explore whether the interaction had a positive effect on participants’ language skills, we analyzed the results of the C-Tests before and after the interaction. Thus, we conducted split-plot ANOVAs with the group factors embodiment and speech output and repeated measures for the C-Test. Two main effects emerged. First, participants’ C-Test scores were worse after the interaction (M = 51.90, SD = 17.09) than at their first appointment, at which their language proficiency level was assessed (M = 55.77, SD = 19.43; F(1, 124) = 29.97; p < .001, ηp² = .195). Moreover, the system’s embodiment influenced participants’ C-Test scores after the interaction (F(2, 124) = 6.24; p = .003, ηp² = .091). The descriptive data suggest that participants interacting with a robot had lower test results than those interacting with a virtual agent or only language-based (cf. Table 4). The factor speech output showed no effect, nor did we find interaction effects. One goal of this study was to explore the potential of artificial tutors to exploit linguistic alignment processes in HCI for SLA. Thus, we correlated participants’ alignment ratios with their C-Test results after the interaction, but did not find a significant correlation.
4 Discussion
With this work we contribute to the ongoing debate of whether virtual agents or robots provide more benefits to the user. Previous work predominantly found that robots were more persuasive, entertaining, enjoyable, and trustworthy. Moreover, they elicit more attention and increase users’ task performance (cf. [3] for an overview). However, there is a lack of research on behavioral effects, particularly with regard to linguistic behavior. Moreover, some studies found interaction effects of type of embodiment and type of task, showing that robots were preferred for (physical) tasks and virtual agents for conversations [10]. We conducted an experimental study on linguistic alignment processes in HCI in the context of SLA and varied the artificial tutor’s embodiment (virtual agent vs. robot vs. speech-based interaction). Moreover, we argued that tts systems might be problematic in SLA since they do not always pronounce words correctly, and therefore we varied whether participants heard tts or prerecorded natural speech. We found that neither embodiment nor quality of speech output influenced participants’ perception of the system or their lexical and syntactical alignment during interaction. There were no differences in perceived human-likeness of the speech output. Regarding language skills, participants performed worse in the language test after the interaction compared to the first appointment, at which participants’ language skills were assessed in order to distribute them equally across conditions. We did not find the hoped-for positive influence of the system on language skills. This might be due to the different workload of the two appointments. At the initial appointment participants only completed the language tests. At the second appointment they first interacted with the system for about 50 min and then completed the language test.
Since participants had to concentrate on the tutoring system and interact with it by speaking German, this meant constant cognitive effort. Hence, participants were probably less concentrated and more exhausted in the posttest than in the pretest. However, there was one effect regarding embodiment: participants interacting with a robot (regardless of quality of speech output) performed worse in the posttest. Although there were no evaluation differences, we observed that participants were more excited to meet the physical Nao than the virtual or speech-based system. Some of them explored the Nao or even took a picture. Supposedly, at least some participants paid less attention to the actual tasks ahead and concentrated on the robot itself. In sum, the variation of system characteristics had barely any influence on the evaluation of the system (H1) or participants’ alignment behavior – neither for embodiment (RQ1) nor for quality of speech output (RQ2). In this study we kept the appearance of the system consistent between the virtually and physically embodied conditions by using the virtual version of the actual Nao robot. It could be that the recorded speech would be evaluated differently when matched with a human form such as a humanlike virtual agent. It is, however, striking that prerecorded speech and tts also did not differ in perceived human-likeness in the language-based only conditions, where there is no possible match or mismatch with the appearance of the tutor.
There are several implications relevant for designers of artificial tutors. First, at least in the domain of language learning with predominantly conversation-based tasks, the type of embodiment – or more precisely, embodiment itself in whatever form – did not result in more positive evaluation effects or different linguistic behavior. Hence, developers can opt for the more flexible and inexpensive virtual agent or a solely speech-based system. Second, tts systems are of sufficient quality to be used for SLA purposes. Astonishingly, the tts voice was perceived as only marginally less humanlike than the actual human voice. Since the usage of prerecorded speech is more expensive and harder to implement and to change later on (e.g. to extend the learning system with new learning situations/games etc.), it is good news that tts systems are perceived equally positively.
Moreover, we wanted to know whether alignment in dialog results in better performance in the post-interaction language test (RQ3). We found that although participants aligned to the artificial tutor in all conditions, comparably to previous studies [17, 19, 20], this did not significantly contribute to the post-interaction test performance. Perhaps participants would need more learning sessions to benefit from the system and to transfer the alignment into learning progress. Moreover, the descriptive analysis of the seven alignment ratios showed that participants aligned to differing degrees. For instance, they aligned more when referring to gender than to facial hair. Alignment was also generally lower for passive constructions, presumably because these are rarely used in everyday conversation. Moreover, we observed that participants with lower initial language skills had trouble producing sentences. This may have confounded the process of alignment, which is at least in part an unconscious process based on priming [14–16]. If words or constructions are not known, they cannot be easily activated by primes and thereby elicit alignment. Hence, linguistic alignment in SLA might only be effective for very advanced language learners. Future work should explore whether repeated tutoring sessions accumulate into a learning effect that might be moderated by alignment during the interaction sessions. Moreover, more distinct groups of participants regarding their initial language skills might give insight into the question of whether alignment contributes effectively to SLA only for advanced learners.
References
1. Wainer, J., Feil-Seifer, D.J., Shell, D., et al.: Embodiment and human-robot interaction: a task-based perspective. In: 16th IEEE International Conference on Robot and Human Interactive Communication, pp. 872–877. IEEE, Piscataway (2007)
2. Aylett, R., Kriegel, M., Wallace, I., et al.: Memory and the design of migrating virtual agents. In: Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems, pp. 1311–1312. International Foundation for Autonomous Agents and Multiagent Systems (2013)
3. Li, J.: The benefit of being physically present: a survey of experimental works comparing copresent robots, telepresent robots and virtual agents. Int. J. Hum. Comput. Stud. 77, 23–37 (2015). doi:10.1016/j.ijhcs.2015.01.001
4. Branigan, H.P., Pickering, M.J., Pearson, J., et al.: Beliefs about mental states in lexical and syntactic alignment: evidence from human-computer dialogs. In: Proceedings of the 17th CUNY Conference on Human Sentence Processing (2004)
5. Fasola, J., Mataric, M.: A socially assistive robot exercise coach for the elderly. J. Hum. Robot Interact. 2(2), 3–32 (2013)
6. Kidd, C.D., Breazeal, C.L.: Effect of a robot on user perceptions. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), pp. 3559–3564. IEEE, Piscataway (2004)
7. Kiesler, S.B., Powers, A., Fussell, S.R., et al.: Anthropomorphic interactions with a robot and robot-like agent. Soc. Cogn. 26(2), 169–181 (2008). doi:10.1521/soco.2008.26.2.169
8. Kennedy, J., Baxter, P., Belpaeme, T.: Comparing robot embodiments in a guided discovery learning interaction with children. Int. J. Soc. Robot. 7(2), 293–308 (2015). doi:10.1007/s12369-014-0277-4
9. Bartneck, C.: Interacting with an embodied emotional character. In: Proceedings of the International Conference on Designing Pleasurable Products and Interfaces, pp. 55–60. ACM Press, New York (2003)
10. Hoffmann, L., Krämer, N.C.: Investigating the effects of physical and virtual embodiment in task-oriented and conversational contexts. Int. J. Hum. Comput. Stud. 71(7–8), 763–774 (2013). doi:10.1016/j.ijhcs.2013.04.007
11. Fischer, K., Lohan, K.S., Foth, K.: Levels of embodiment: linguistic analyses of factors influencing HRI. In: Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012), pp. 463–470 (2012)
12. Giles, H.: Accent mobility: a model and some data. Anthropol. Linguist. 15, 87–105 (1973)
13. Reed, B.S.: Speech rhythm across turn transitions in cross-cultural talk-in-interaction. J. Pragmat. 42(4), 1037–1059 (2010). doi:10.1016/j.pragma.2009.09.002
14. Brennan, S.E., Clark, H.H.: Conceptual pacts and lexical choice in conversation. J. Exp. Psychol. Learn. Memory Cogn. 22, 1482–1493 (1996)
15. Branigan, H.P., Pickering, M.J., Cleland, A.A.: Syntactic co-ordination in dialogue. Cognition 75(2), B13–B25 (2000)
16. Branigan, H.P., Pickering, M.J., Pearson, J., et al.: Linguistic alignment between people and computers. J. Pragmat. 42(9), 2355–2368 (2010). doi:10.1016/j.pragma.2009.12.012
17. Rosenthal-von der Pütten, A.M., Wiering, L., Krämer, N.C., et al.: Great minds think alike. Experimental study on lexical alignment in human-agent interaction. i-com 12(1), 32–38 (2013). doi:10.1524/icom.2013.0005
18. Kühne, V., Rosenthal-von der Pütten, A.M., Krämer, N.C.: Using linguistic alignment to enhance learning experience with pedagogical agents: the special case of dialect. In: Aylett, R., Krenn, B., Pelachaud, C., Shimodaira, H. (eds.) IVA 2013. LNCS, vol. 8108, pp. 149–158. Springer, Heidelberg (2013)
19. Bergmann, K., Branigan, H.P., Kopp, S.: Exploring the alignment space – lexical and gestural alignment with real and virtual humans. Front. ICT 2, 7 (2015). doi:10.3389/fict.2015.00007
20. Wunderlich, H.: Talking like a machine?! Linguistic alignment of native-speakers and non-native speakers in interaction with a virtual agent. Bachelor thesis, University of Duisburg-Essen (2012)
21. Costa, A., Pickering, M.J., Sorace, A.: Alignment in second language dialogue. Lang. Cogn. Proc. 23(4), 528–556 (2008). doi:10.1080/01690960801920545
22. Atkinson, D., Churchill, E., Nishino, T., et al.: Alignment and interaction in a sociocognitive approach to second language acquisition. Mod. Lang. J. 91(2), 169–188 (2007). doi:10.1111/j.1540-4781.2007.00539.x
23. Bartneck, C., Kulić, D., Croft, E., et al.: Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1(1), 71–81 (2009). doi:10.1007/s12369-008-0001-3
24. Nowak, K.L., Biocca, F.: The effect of agency and anthropomorphism on users’ sense of telepresence, copresence, and social presence in virtual environments. Presence Teleop. Virtual 12(5), 481–494 (2003). doi:10.1162/105474603322761289
25. Reuter, M.: Linguistic alignment with virtual agents. Bachelor thesis, University of Duisburg-Essen (2011)
26. Baur, R.S., Meder, G.: C-Tests zur Ermittlung der globalen Sprachfähigkeit im Deutschen und einer Muttersprache bei ausländischen Schülern in der Bundesrepublik Deutschland. In: Grotjahn, R. (ed.) Der C-Test. Theoretische Grundlagen und praktische Anwendungen. Brockmeyer, Bochum (1994)
27. Wagenmakers, E.J.: A practical solution to the pervasive problems of p values. Psychon. Bull. Rev. 14(5), 779–804 (2007)
Acknowledgements
The noALIEN (Using linguistic alignment in German language promotion for immigrants on the basis of human-technology interaction) project was funded by the German Federal Ministry of Education and Research.
© 2016 Springer International Publishing AG
Rosenthal-von der Pütten, A.M., Straßmann, C., Krämer, N.C. (2016). Robots or Agents – Neither Helps You More or Less During Second Language Acquisition. In: Traum, D., Swartout, W., Khooshabeh, P., Kopp, S., Scherer, S., Leuski, A. (eds) Intelligent Virtual Agents. IVA 2016. Lecture Notes in Computer Science(), vol 10011. Springer, Cham. https://doi.org/10.1007/978-3-319-47665-0_23
Print ISBN: 978-3-319-47664-3
Online ISBN: 978-3-319-47665-0